CN106658079A - Customized expression image generation method and device - Google Patents
- Publication number
- CN106658079A CN106658079A CN201710007418.9A CN201710007418A CN106658079A CN 106658079 A CN106658079 A CN 106658079A CN 201710007418 A CN201710007418 A CN 201710007418A CN 106658079 A CN106658079 A CN 106658079A
- Authority
- CN
- China
- Prior art keywords
- image
- video
- user
- social networking
- networking application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/42224—Touch pad or touch panel provided on the remote control
- H04N21/2541—Rights management
- H04N21/4318—Generation of visual interfaces for content selection or interaction by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
- H04N21/4333—Processing operations in response to a pause request
- H04N21/4884—Data services for displaying subtitles
Abstract
The invention relates to a customized expression image generation method and device. The method comprises: acquiring a video screenshot command generated in a video application; capturing an image of the video playing in the video application according to the screenshot command, to obtain a to-be-processed image; invoking an image processing plug-in built into the video application to process the to-be-processed image and generate an expression image; and pushing the expression image to a social application server according to the user's social application identifier, where the user's social application identifier corresponds to the video application. The customized expression image generation method and device provided by the invention improve the efficiency with which expression images are generated.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to a method and device for the customized generation of expression images.
Background art
With the continuous development of Internet technology, the information transmitted in social applications is no longer limited to traditional text. Expression language, for example, is another kind of information transmitted in social applications: it conveys information through the expressions portrayed in images.
At present, a social application generally provides expression language through an emoji pack containing various expression images. A user can express his or her mood of the moment by selecting an expression image from the pack, and thereby convey certain information to the other party through that image. However, the number of expression images in a pack is often limited and their content is fixed, which is insufficient to meet users' individual needs. Users may therefore wish to generate expression images of their own, so as to assemble their own expression packs.
One method provided by the prior art is as follows: the user first obtains a piece of video and captures the actually needed image from it, then opens a third-party image processing tool (such as Photoshop) to process the captured image, and finally forwards the customized expression image to the social application by saving and uploading it.
Although the above prior art allows a user to generate a customized expression image, the operation is excessively cumbersome, so the problem of low expression image generation efficiency remains.
Summary of the invention
Embodiments of the present invention provide a method and device for the customized generation of expression images, which can improve the efficiency with which expression images are generated.
A customized expression image generation method comprises: acquiring a video screenshot instruction generated in a video application; capturing an image of the video playing in the video application according to the video screenshot instruction, to obtain a to-be-processed image; invoking an image processing plug-in embedded in the video application to process the to-be-processed image and generate an expression image; and pushing the expression image to a social application server according to the user's social application identifier, where the user's social application identifier corresponds to the video application.
A customized expression image generation device comprises: an instruction acquisition module, configured to acquire a video screenshot instruction generated in a video application; a video screenshot module, configured to capture an image of the video playing in the video application according to the video screenshot instruction, to obtain a to-be-processed image; an image processing module, configured to invoke an image processing plug-in embedded in the video application to process the to-be-processed image and generate an expression image; and an image pushing module, configured to push the expression image to a social application server according to the user's social application identifier, where the user's social application identifier corresponds to the video application.
Compared with the prior art, the invention has the following advantages:
An image is captured from the playing video according to the video screenshot instruction obtained in the video application, and the image processing plug-in embedded in the video application is invoked to process the captured to-be-processed image and generate an expression image. The expression image is then pushed to the social application server according to the social application identifier of the user that corresponds to the video application, so that the corresponding expression image can subsequently be retrieved from the social application server through the user's social application identifier.
The user neither needs to exit the video application nor to download an additional third-party image processing tool in order to complete the above sequence of operations, which is simple and fast; moreover, the customized expression image can be pushed directly to the social application server through the user's social application identifier, thereby improving the efficiency of expression image generation.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the embodiments of the present invention.
Fig. 1 is a schematic diagram of the implementation environment involved in embodiments of the present invention;
Fig. 2 is a block diagram of a terminal according to an exemplary embodiment;
Fig. 3 is a flow chart of a customized expression image generation method according to an exemplary embodiment;
Fig. 4 is a flow chart of one embodiment of the step, in the embodiment corresponding to Fig. 3, of acquiring the video screenshot instruction generated in the video application;
Fig. 5 is a flow chart of one embodiment of the step, in the embodiment corresponding to Fig. 3, of capturing the video playing in the video application according to the video screenshot instruction to obtain the to-be-processed image;
Fig. 6 is a flow chart of another customized expression image generation method according to an exemplary embodiment;
Fig. 7 is a flow chart of yet another customized expression image generation method according to an exemplary embodiment;
Fig. 8 is a schematic diagram of a concrete implementation of the customized expression image generation method in one application scenario;
Fig. 9 is a block diagram of a customized expression image generation device according to an exemplary embodiment;
Fig. 10 is a block diagram of one embodiment of the instruction acquisition module in the embodiment corresponding to Fig. 9;
Fig. 11 is a block diagram of one embodiment of the video screenshot module in the embodiment corresponding to Fig. 9;
Fig. 12 is a block diagram of another customized expression image generation device according to an exemplary embodiment;
Fig. 13 is a block diagram of yet another customized expression image generation device according to an exemplary embodiment.
The above drawings show specific embodiments of the present invention, which are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the inventive concept in any way, but rather to illustrate the concept of the invention to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
As mentioned above, the method provided by the prior art requires the captured image to be saved first and then imported into a third-party image processing tool for processing, which is poor in both timeliness and convenience. Moreover, third-party image processing tools are generally highly specialized tools with complex functions; they are relatively difficult for users to master, which impairs the user's customization experience.
In addition, after image processing is finished, the customized expression image cannot be pushed directly to the social application server; it must first be saved and uploaded before it can be forwarded, which inevitably makes the transmission of expression images inefficient.
It follows that the method provided by the prior art still suffers from low expression image generation efficiency.
Therefore, in order to improve the generation efficiency of expression images, a customized expression image generation method is proposed.
Fig. 1 shows the implementation environment involved in this customized expression image generation method. The implementation environment includes a terminal 100 and a social server 200.
The terminal 100 may be a smartphone, a smart TV, a tablet computer, a palmtop computer, a notebook computer, or any other electronic device capable of running a social application. The social server 200 is the server corresponding to the social application running on the terminal 100.
In a concrete implementation, the terminal 100 performs the customized generation of expression images within the video application, and can push the customized expression image directly to the social application server 200 through the user's social application identifier, so that the corresponding expression image can subsequently be retrieved from the social application server 200 through that identifier.
Referring to Fig. 2, Fig. 2 is a block diagram of a terminal according to an exemplary embodiment. It should be noted that the terminal 100 is merely an example adapted to the present invention and must not be taken as imposing any limitation on the scope of use of the present invention. Nor can the terminal 100 be construed as needing to rely on, or necessarily having, one or more of the components of the exemplary terminal 100 illustrated in Fig. 2.
As shown in Fig. 2, the terminal 100 includes a memory 101, a storage controller 103, one or more processors 105 (only one is illustrated), a peripheral interface 107, a radio-frequency module 109, a positioning module 111, a camera module 113, an audio module 115, a touch screen 117, and a key module 119. These components communicate with one another via one or more communication buses/signal lines 121.
It can be appreciated that the structure shown in Fig. 2 is merely illustrative; the terminal 100 may include more or fewer components than shown in Fig. 2, or components different from those shown in Fig. 2. Each component shown in Fig. 2 may be implemented in hardware, software, or a combination thereof.
The memory 101 may be used to store software programs and modules, such as the program instructions and modules corresponding to the customized expression image generation method and device in the exemplary embodiments of the present invention. The processor 105 performs various functions and data processing by running the program instructions stored in the memory 101, thereby implementing the above customized expression image generation method.
The memory 101, as the carrier of these stored resources, may be a random-access storage medium such as high-speed random access memory, or a non-volatile memory such as one or more magnetic storage devices, flash memory, or other solid-state memory. Storage may be transient or permanent.
The peripheral interface 107 may include at least one wired or wireless network interface, at least one serial-parallel conversion interface, at least one input/output interface, at least one USB interface, and so on, for coupling various external input/output devices to the memory 101 and the processor 105, so as to enable communication with those external devices.
The radio-frequency module 109 is used to transmit and receive electromagnetic waves, converting between electromagnetic waves and electric signals, so as to communicate with other devices over a communication network. The communication network includes cellular telephone networks, wireless local area networks, or metropolitan area networks, and may use various communication standards, protocols, and technologies.
The positioning module 111 is used to obtain the current geographical position of the terminal 100. Examples of the positioning module 111 include, but are not limited to, the Global Positioning System (GPS) and positioning technologies based on wireless local area networks or mobile communication networks.
The camera module 113 belongs to the camera and is used to capture pictures or video. The captured pictures or video can be stored in the memory 101, or sent to a host computer via the radio-frequency module 109.
The audio module 115 provides the user with an audio interface, which may include one or more microphone interfaces, one or more speaker interfaces, and one or more earphone interfaces. Audio data is exchanged with other devices through the audio interface; audio data may also be stored in the memory 101 or sent via the radio-frequency module 109.
The touch screen 117 provides an input/output interface between the terminal 100 and the user. Specifically, the user may perform input operations through the touch screen 117, such as gestures like clicking, touching, and sliding, to which the electronic device responds. The terminal 100 in turn displays output to the user through the touch screen 117, as text, pictures, or video, in any single form or combination.
The key module 119 includes at least one key, providing an interface through which the user can give input to the terminal 100; the user can cause the terminal 100 to perform different functions by pressing different keys. For example, a volume key allows the user to adjust the volume of the sound played by the terminal 100.
Referring to Fig. 3, in one exemplary embodiment a customized expression image generation method is applied to the terminal 100 of the implementation environment shown in Fig. 1. This customized expression image generation method may be performed by the terminal 100 and may include the following steps:
Step 310: acquire a video screenshot instruction generated in a video application.
In order to allow a user to capture an admired image at will while watching a film or episode, in this embodiment a video screenshot instruction is generated in the video application used for video playback. The video screenshot instruction at least indicates whether the terminal user needs to perform an image capture operation on the video.
For example, an image capture entrance can be set up in the video application. When the user needs to perform an image capture operation on the video, the user triggers the relevant operation at the image capture entrance, which causes the terminal to detect the video screenshot instruction generated in response to that operation, and thereby learn that the user needs to perform an image capture operation on the video.
The image capture entrance may be a preset shortcut command, or a virtual screenshot button preset on the playback interface of the video application, and so on. Correspondingly, the operation the user triggers at the image capture entrance may be tapping the keyboard shortcut corresponding to the shortcut command, or clicking the virtual screenshot button via the mouse or touch screen, and so on.
Of course, in some application scenarios the terminal also needs to know the number of frames to capture. The video screenshot instruction can then be generated according to the number of frames selected by the user, in which case the video screenshot instruction also indicates the number of frames the terminal should capture from the video.
After the video screenshot instruction is generated, the terminal obtains it and then prepares to perform the video image capture operation according to the instruction.
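The trigger-and-instruction flow described above can be sketched as follows. This is a minimal illustration in Python; the `ScreenshotCommand` structure, its field names, and the `on_capture_trigger` handler are assumptions made for illustration, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ScreenshotCommand:
    """Hypothetical video screenshot instruction generated inside the video app."""
    capture: bool = True         # the user requested an image capture
    frame_count: int = 1         # number of frames to capture (1 = still image)
    with_subtitles: bool = True  # whether captured frames keep the subtitle layer

def on_capture_trigger(frame_count: int = 1, with_subtitles: bool = True) -> ScreenshotCommand:
    """Build the screenshot instruction in response to the user's trigger
    (a shortcut key press or a tap on the virtual screenshot button)."""
    if frame_count < 1:
        raise ValueError("frame_count must be at least 1")
    return ScreenshotCommand(True, frame_count, with_subtitles)
```

The terminal would then hand such an instruction object to the capture step of step 330.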
Step 330: capture an image of the video playing in the video application according to the video screenshot instruction, to obtain a to-be-processed image.
In order to make various expression images for the social application's expression pack, a video must also be acquired, so that the image the user actually needs can be captured from it.
The video may be obtained from a video file pre-stored in the terminal's local storage space, downloaded from a video resource server over the Internet, or captured in real time with the terminal's camera.
In this embodiment, the video is acquired within the video application. For example, when the user watches a film or episode through the video application, the film or episode being watched is regarded as the video from which the image is to be captured.
After obtaining the video from which the image is to be captured, the terminal can perform the image capture operation on that video according to the acquired video screenshot instruction.
Further, the video screenshot instruction can indicate not only whether the terminal needs to capture an image from the video, but also the number of frames to capture, and whether the captured image needs to carry subtitles; that is, the video screenshot instruction reflects the user's actual screenshot requirements.
Correspondingly, the to-be-processed image obtained by capturing according to the video screenshot instruction meets the user's actual screenshot requirements, which is conducive to subsequently producing expression images that meet the user's individual needs.
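As a rough sketch of this capture step, the following Python fragment slices the requested number of frames out of a decoded frame sequence, starting at the paused position. The list-of-frames representation and the `capture_frames` helper are illustrative assumptions, not the patent's implementation.

```python
def capture_frames(video_frames, current_index, frame_count):
    """Capture `frame_count` frames from the playing video, starting at the
    paused (current) frame. One captured frame is a still image; several
    captured frames together form a dynamic image."""
    if not 0 <= current_index < len(video_frames):
        raise IndexError("paused position lies outside the video")
    end = min(current_index + frame_count, len(video_frames))
    return video_frames[current_index:end]
```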
Step 350: invoke the image processing plug-in embedded in the video application to process the to-be-processed image and generate an expression image.
In order to avoid using a third-party image processing tool, in this embodiment an image processing plug-in is embedded in the video application, so that the user can conveniently create customized expression images, which in turn helps improve the efficiency of expression image generation.
Further, the image processing plug-in may be displayed in the video application at all times in the form of a toolbar, or may pop up as a toolbar only when the user chooses to perform image processing.
More preferably, the image processing plug-in pops up as a toolbar when the user chooses to perform image processing, for example displayed maximized at the front of the interface, thereby improving the user's customization experience. Meanwhile, the playback interface of the video is reduced, for example minimized to the lower-left corner of the interface, so that it can be restored once the expression image has been pushed, thereby improving the user's viewing experience.
The image processing performed on the to-be-processed image by the image processing plug-in includes, but is not limited to: adding preset text, compositing with preset pictures, preset face replacement, and so on. Preset face replacement uses face recognition technology: the face in the to-be-processed image is first recognized, and the recognized face is then replaced with the preset face.
Further, because a to-be-processed image may be a still image (such as a single picture) or a dynamic image (such as a short video), image processing is carried out on still images and dynamic images respectively.
Specifically, when a still image consists of a single frame, image processing is performed on that frame only. When a dynamic image consists of multiple frames, image processing is performed on the frames in turn, according to their frame positions.
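The still-versus-dynamic distinction can be illustrated with a toy plug-in in Python. The caption-overlay edit below stands in for the preset-text, compositing, and face-replacement operations; all names and the dict-based frame representation are illustrative assumptions.

```python
def add_caption(frame, caption):
    """Placeholder edit: record the caption on a copy of the frame."""
    edited = dict(frame)
    edited["caption"] = caption
    return edited

def plugin_process(frames, caption):
    """Toy image processing plug-in: a still image (one frame) is edited once;
    a dynamic image (several frames) is edited frame by frame, in frame order."""
    return [add_caption(frame, caption) for frame in frames]
```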
Step 370: push the expression image to the social application server according to the user's social application identifier, where the user's social application identifier corresponds to the video application.
In this embodiment, the expression image can be pushed directly, without first saving and uploading it and then forwarding it.
Specifically, the social server is the server corresponding to the social application running on the terminal, and it stores the social application identifiers of a large number of users. The social server binds the video application to the user's social application identifier in advance, and through this binding the correspondence between the video application and the user's social application identifier is established on the terminal.
Through the correspondence between the user's social application identifier and the video application, the expression image customized in the video application can be pushed directly to the social server, so that the corresponding expression image can subsequently be retrieved from the social application server through the user's social application identifier.
Further, the terminal can guide the user to obtain a corresponding video application identifier by registering and logging in. Accordingly, it is then the user's social application identifier that corresponds, on the terminal, to the user's video application identifier, which enables the terminal to push expression images according to the different social application identifiers corresponding to different video application identifiers.
For example, suppose the video application holds user A's video application identifier A1 and user B's video application identifier B1; user A's social application identifier A2 corresponds to video application identifier A1, and user B's social application identifier B2 corresponds to video application identifier B1. An expression image that user A customizes through the video application can then be pushed to the social server only according to the social application identifier A2 that corresponds to video application identifier A1.
It should be noted that both the correspondence between the user's social application identifier and the video application, and the correspondence between the user's social application identifier and the video application identifier, are stored in the video application's configuration file, from which they are extracted when the expression image is pushed.
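Using the A1/A2 and B1/B2 pairs from the example above, the binding table and the push step could be sketched like this. The in-memory dict standing in for the configuration file, and the dict standing in for the social server, are hypothetical simplifications for illustration.

```python
# Correspondences as they might be stored in the video application's
# configuration file: video application ID -> social application ID.
BINDINGS = {"A1": "A2", "B1": "B2"}

def push_expression_image(video_app_id, image, server):
    """Push an expression image directly to the social-server entry of the
    social application ID bound to the given video application ID."""
    social_id = BINDINGS.get(video_app_id)
    if social_id is None:
        raise LookupError("no social application ID is bound to this video application ID")
    server.setdefault(social_id, []).append(image)
    return social_id
```

A push for an unbound identifier fails, mirroring the rule that user A's image can only be pushed under A2.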
Through the above process, customized generation of expression images within the video application is realized: the user neither needs to exit the video application nor to download an additional third-party image processing tool in order to complete the series of operations for customized expression image generation. This is simple and convenient, and improves both the user's customization experience and the user's viewing experience.
In addition, a bridge is built between the video application and the social application: the video application can push the user's customized expression image directly to the social application server. Timeliness and convenience are both greatly improved, effectively raising the transmission efficiency of expression images and thereby further improving the efficiency of expression image generation.
Referring to Fig. 4, in one exemplary embodiment, step 310 may include the following steps:
Step 311: pause the video playing in the video application in response to the user's trigger operation.
For example, a screenshot icon is provided on the playback interface of the video application. When the user needs to perform an image capture operation on the video playing in the playback interface, the user can click the screenshot icon on the playback interface via the mouse or touch screen; this click is regarded as the operation triggered by the user.
Correspondingly, by responding to the user's trigger operation, the terminal pauses the playing video, so that the image capture operation can be performed on it.
Step 313: generate image content selection information, and through it prompt the user to select the content of the to-be-processed image.
After the video playing in the playback interface of the video application has been paused, the terminal further asks the user for information relevant to the image capture, for example the number of frames to capture, or whether the captured image needs to carry subtitles.
Specifically, the terminal generates the image content selection information, for example by popping up a dialog box that prompts the user to select the image content to be processed; the image content to be processed includes but is not limited to a single frame, multiple frames, with subtitles, or without subtitles.
Step 315: generate a video capture instruction according to the selection of the user.
After the user completes the selection according to the image content selection information, the video capture instruction can be generated from the user's selection; that is, the video capture instruction contains the image content selected by the user.
For example, if the image content selected by the user is multiple frames, the video capture instruction instructs the terminal that the captured image is a dynamic image (such as a short video clip); if the image content selected by the user is without subtitles, the video capture instruction instructs the terminal that the captured image is an image with the subtitles masked.
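As an illustration of what such an instruction might carry, the following minimal sketch models it as a small record of the user's selections. The class name and field names are our own assumptions for illustration, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class VideoCaptureInstruction:
    frame_count: int      # 1 => a single still frame; >1 => a dynamic image
    keep_subtitles: bool  # False => subtitles are masked during capture

    @property
    def is_dynamic(self) -> bool:
        # Multiple frames are captured as a dynamic image (a short clip).
        return self.frame_count > 1

# A user who selected "multiple frames, without subtitles":
instruction = VideoCaptureInstruction(frame_count=24, keep_subtitles=False)
```

The terminal would consult such a record when deciding whether to capture one frame or a clip, and whether to mask subtitles.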
Through the above process, the generated video capture instruction reflects the user's actual screenshot needs, which facilitates subsequently producing an expression image that meets the user's individualized requirements.
Referring to Fig. 5, in one exemplary embodiment, step 330 may comprise the following steps:
Step 331: take the current frame position of the video playing in the video application as the start frame position, and determine the end frame position according to the frame count indicated in the video capture instruction.
It should be understood that, after the video playing in the playback interface of the video application is paused, the frame position corresponding to the video image remaining in the playback interface is regarded as the current frame position of the video.
As stated above, the video capture instruction can not only instruct the terminal whether image capture needs to be performed on the video, but can also indicate to the terminal the number of frames to capture and whether the captured image needs to carry subtitles; that is, the video capture instruction reflects the user's actual screenshot needs.
On this basis, in this embodiment, the start frame position and the end frame position can be determined from the current frame position according to the frame count indicated in the video capture instruction.
Specifically, the start frame position is the current frame position, and the end frame position is the sum of the start frame position and the frame count.
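The frame-position arithmetic of step 331 can be sketched as follows (the function name is illustrative):

```python
def frame_range(current_frame: int, frame_count: int) -> tuple:
    """The paused frame is the start position; the end position is the
    sum of the start frame position and the requested frame count."""
    start_frame = current_frame
    end_frame = current_frame + frame_count
    return start_frame, end_frame

# A video paused at frame 1200, with a 24-frame capture requested:
start, end = frame_range(1200, 24)
```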
Step 333: capture the image to be processed from the video according to the start frame position and the end frame position.
It is noted that, as stated above, the video capture instruction can also indicate whether the image capture performed by the terminal on the video needs to carry subtitles. On this basis, when performing image capture according to the start frame position and the end frame position, the terminal also performs the associated subtitle-masking operation as indicated by the video capture instruction, so that the image to be processed that is finally captured is consistent with the user's actual screenshot needs.
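A minimal sketch of step 333, using plain Python dictionaries to stand in for decoded video frames; the frame representation and the `mask_subtitles` helper are illustrative assumptions rather than anything specified by the patent:

```python
def mask_subtitles(frame: dict) -> dict:
    # Illustrative: remove the subtitle overlay from a decoded frame.
    return {**frame, "subtitle": None}

def capture(frames: list, start: int, end: int, keep_subtitles: bool) -> list:
    """Slice the frames in [start, end) and, if the capture instruction
    indicates no subtitles, mask the subtitles in every captured frame."""
    clip = frames[start:end]
    if not keep_subtitles:
        clip = [mask_subtitles(f) for f in clip]
    return clip

video = [{"index": i, "subtitle": "line"} for i in range(10)]
clip = capture(video, start=2, end=5, keep_subtitles=False)
```

Note that the source frames are left untouched; only the captured copies are masked.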
Referring to Fig. 6, in one exemplary embodiment, before step 350, the method described above may further comprise the following steps:
Step 410: generate image processing selection information, and prompt the user, via the image processing selection information, to choose whether to perform image processing immediately.
It can be understood that, after the terminal completes the image capture operation on the video, the user can, according to actual needs, either perform image processing on the captured image immediately or defer processing of the captured image.
On this basis, before performing image processing on the image to be processed, the terminal further queries the user whether image processing needs to be performed immediately.
Specifically, the terminal generates the image processing selection information, for example by popping up a selection dialog box that prompts the user whether to perform image processing immediately. If the user chooses to perform image processing immediately, the method proceeds to step 350 and the image to be processed is processed immediately.
Conversely, if the user does not choose to perform image processing immediately, the method proceeds to step 430 and processing of the image is deferred.
Step 430: when the user does not choose to perform image processing immediately, continue playing the video in the video application, and save the image to be processed to a preset storage space, so that the image to be processed in the preset storage space can be processed later.
Through the above process, the adaptability of customized expression image generation is improved and its application scenarios are extended: it is applicable not only to immediate processing of the image to be processed, but equally to deferred processing of the image to be processed.
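The immediate-versus-deferred branch of steps 350 and 430 can be sketched as below; the file-naming scheme and storage layout are our own assumptions for illustration:

```python
import os
import tempfile
import time

def handle_captured_image(data: bytes, process, process_now: bool, storage_dir: str):
    """If the user chose immediate processing (step 350), process now;
    otherwise (step 430) persist the capture to the preset storage space."""
    if process_now:
        return process(data)
    os.makedirs(storage_dir, exist_ok=True)
    path = os.path.join(storage_dir, "pending_%d.raw" % time.time_ns())
    with open(path, "wb") as f:
        f.write(data)
    return path

storage = tempfile.mkdtemp()
saved_path = handle_captured_image(b"frame-bytes", process=lambda d: d,
                                   process_now=False, storage_dir=storage)
```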
Further, in one exemplary embodiment, the method described above may further comprise the following steps:
When an image extraction instruction is detected to have been triggered in the preset storage space, extract the image to be processed from the preset storage space, so as to perform image processing on the extracted image.
The image extraction instruction is used to instruct the terminal that the user needs to perform image processing on a deferred image.
Therefore, when an image extraction instruction is triggered in the preset storage space, the terminal knows that the user needs to perform image processing on a deferred image. Correspondingly, the terminal extracts the image to be processed from the preset storage space and performs image processing on it.
For example, while the image to be processed is stored in the preset storage space, the terminal links the preset storage space to a folder, and the image to be processed is linked to a file in that folder. When the user clicks a file in the folder via mouse or touch screen, the image extraction instruction is triggered, and the image to be processed corresponding to that file is then extracted from the preset storage space according to the image extraction instruction.
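The deferred-extraction path can be sketched in the same terms; here the trigger is simply a function call standing in for the user's click on a file in the linked folder, and the file name is illustrative:

```python
import os
import tempfile

def extract_pending(storage_dir: str, filename: str) -> bytes:
    """Triggered when the user clicks a file in the folder linked to the
    preset storage space: load the corresponding deferred image."""
    with open(os.path.join(storage_dir, filename), "rb") as f:
        return f.read()

storage = tempfile.mkdtemp()
with open(os.path.join(storage, "pending_1.raw"), "wb") as f:
    f.write(b"deferred-frame")
data = extract_pending(storage, "pending_1.raw")
```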
Referring to Fig. 7, in one exemplary embodiment, before step 370, the method described above may further comprise the following steps:
Step 510: search for the social networking application identifier of the user having a correspondence with the video application; if the social networking application identifier of the user having a correspondence with the video application is not found, initiate a social networking application binding request to the social server.
As stated above, in order to push the expression image directly to the social server, the social server needs to bind the video application and the user's social networking application identifier in advance; through this binding, the correspondence between the video application and the user's social networking application identifier is established in the terminal.
On this basis, before pushing the expression image, the terminal searches for the social networking application identifier of the user having a correspondence with the video application, so as to judge whether the correspondence between the video application and the user's social networking application identifier has been established in the terminal, and then determine whether the expression image can be pushed directly.
If the social networking application identifier of the user having a correspondence with the video application is found, the method proceeds to step 370 and the expression image is pushed directly to the social server.
Conversely, if the social networking application identifier of the user having a correspondence with the video application is not found, the user is guided to bind the video application with their social networking application identifier; that is, a social networking application binding request is initiated to the social server using the user's social networking application identifier. The social networking application binding request carries at least the user's social networking application identifier.
Step 530: the social server responds to the social networking application binding request, and the correspondence between the video application and the user's social networking application identifier is established.
After receiving the social networking application binding request, the social server extracts the user's social networking application identifier from the request and verifies it, i.e. confirms whether a corresponding social networking application identifier exists among the stored identifiers of its many users.
If it is confirmed to exist, the social server responds to the social networking application binding request, so that the correspondence between the video application and the user's social networking application identifier is established in the terminal, which facilitates the direct push of the expression image.
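The server-side verification and binding of step 530 can be sketched as follows. All names are illustrative, and a real social server would also authenticate the request:

```python
def bind_social_application(known_ids: set, bindings: dict,
                            video_app_id: str, social_id: str) -> bool:
    """Verify the submitted identifier exists among the stored user
    identifiers; if so, record the binding and report success."""
    if social_id not in known_ids:
        return False  # verification failed: no such user identifier
    bindings[video_app_id] = social_id
    return True

bindings = {}
ok = bind_social_application({"user_42"}, bindings, "video_app_7", "user_42")
```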
Fig. 8 is a schematic diagram of a specific implementation of a method of customized expression image generation in an application scenario. The method of customized expression image generation according to the exemplary embodiments of the invention is described below with reference to the specific application scenario shown in Fig. 8.
The user clicks the screenshot icon provided in the video application by performing step 601, so that the terminal responds to the user's trigger operation by performing step 602 and pauses the video playing in the video application; the terminal further obtains the user's video application identifier through steps 602 to 603, for subsequently establishing, in the terminal, the correspondence between the user's video application identifier and social networking application identifier.
After the user's video application identifier is obtained, image capture is performed on the video according to the video capture instruction generated in the video application, and the captured image and the user's video application identifier are stored by performing step 604.
After storage is completed, the user is asked, by performing step 605, whether to process the captured image immediately. If not, the paused video in the video application continues to play by performing step 606; otherwise, the image processing plug-in embedded in the video application is called, by performing step 607, to perform image processing on the captured image and generate the expression image.
After the expression image is generated, the user is asked, by performing step 608, whether to push the expression image to the social server. If not, the paused video in the video application continues to play by performing step 606; otherwise, the correspondence between the user's video application identifier and social networking application identifier is established by performing steps 609 to 610, and once the correspondence exists, the expression image is pushed to the social server according to the user's social networking application identifier by performing step 611.
After receiving the expression image, the social server can store the expression image in the expression pack corresponding to the user's social networking application identifier. When the user subsequently uses the social networking application, the expression pack corresponding to their social networking application identifier can be retrieved from the social server, and an expression image can be selected from the retrieved expression pack to express the user's mood at that moment and thereby convey certain information to the other party.
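The Fig. 8 walkthrough above can be condensed into a single control-flow sketch; every name below is illustrative, and each comment maps to a step number in the figure:

```python
def run_flow(choices: dict, capture, process, bind, push):
    """Condensed control flow of the Fig. 8 application scenario."""
    image = capture()               # 601-604: pause, identify, capture, store
    if not choices["process_now"]:  # 605: process immediately?
        return None                 # 606: resume playback, defer processing
    emoji = process(image)          # 607: embedded plug-in generates the image
    if not choices["push"]:         # 608: push to the social server?
        return emoji                # 606: resume playback, keep locally
    bind()                          # 609-610: establish the correspondence
    push(emoji)                     # 611: push by social application identifier
    return emoji

pushed = []
result = run_flow({"process_now": True, "push": True},
                  capture=lambda: "frames",
                  process=lambda img: "emoji(%s)" % img,
                  bind=lambda: None,
                  push=pushed.append)
```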
In the embodiments of the invention, efficient customized generation of expression images is achieved.
The following are apparatus embodiments of the invention, which can be used to perform the method of customized expression image generation according to the invention. For details not disclosed in the apparatus embodiments of the invention, please refer to the method embodiments of the customized expression image generation method according to the invention.
Referring to Fig. 9, in one exemplary embodiment, an apparatus 700 for customized expression image generation includes but is not limited to: an instruction acquisition module 710, a video capture module 730, an image processing module 750 and an image pushing module 770.
The instruction acquisition module 710 is configured to obtain a video capture instruction generated in the video application.
The video capture module 730 is configured to perform image capture on the video playing in the video application according to the video capture instruction, to obtain an image to be processed.
The image processing module 750 is configured to call the image processing plug-in embedded in the video application to perform image processing on the image to be processed and generate an expression image.
The image pushing module 770 is configured to push the expression image to the social networking application server according to the user's social networking application identifier, the user's social networking application identifier having a correspondence with the video application.
Referring to Fig. 10, in one exemplary embodiment, the instruction acquisition module 710 includes but is not limited to: an operation response unit 711, an information generation unit 713 and an instruction generation unit 715.
The operation response unit 711 is configured to respond to a trigger operation of the user by pausing the video playing in the video application.
The information generation unit 713 is configured to generate image content selection information, and prompt the user, via the image content selection information, to select the image content to be processed.
The instruction generation unit 715 is configured to generate the video capture instruction according to the selection of the user.
Referring to Fig. 11, in one exemplary embodiment, the video capture module 730 includes but is not limited to: a frame position determination unit 731 and an image capture unit 733.
The frame position determination unit 731 is configured to take the current frame position of the video playing in the video application as the start frame position, and determine the end frame position according to the frame count indicated in the video capture instruction.
The image capture unit 733 is configured to capture the image to be processed from the video according to the start frame position and the end frame position.
Referring to Fig. 12, in one exemplary embodiment, the apparatus 700 described above further includes but is not limited to: an information generation module 810 and an image storage module 830.
The information generation module 810 is configured to generate image processing selection information, and prompt the user, via the image processing selection information, to choose whether to perform image processing immediately.
The image storage module 830 is configured to, when the user does not choose to perform image processing immediately, continue playing the video in the video application and save the image to be processed to a preset storage space, so that the image to be processed in the preset storage space can be processed later.
Further, in one exemplary embodiment, the apparatus 700 described above further includes but is not limited to: an image extraction module.
The image extraction module is configured to, when an image extraction instruction is detected to have been triggered in the preset storage space, extract the image to be processed from the preset storage space, so as to perform image processing on the extracted image.
Referring to Fig. 13, in one exemplary embodiment, the apparatus 700 described above further includes but is not limited to: a search module 910 and a binding module 930.
The search module 910 is configured to search for the social networking application identifier of the user having a correspondence with the video application, and, if the social networking application identifier of the user having a correspondence with the video application is not found, initiate a social networking application binding request to the social server.
The binding module 930 is configured to respond, via the social server, to the social networking application binding request, and establish the correspondence between the video application and the user's social networking application identifier.
It should be noted that the apparatus for customized expression image generation provided in the above embodiments is described, when generating a customized expression image, only in terms of the division of the above functional modules. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the apparatus for customized expression image generation can be divided into different functional modules to complete all or part of the functions described above.
In addition, the apparatus embodiments provided above and the method embodiments of the customized expression image generation method belong to the same concept; the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.
The above are merely preferred exemplary embodiments of the invention and are not intended to limit the embodiments of the invention. A person of ordinary skill in the art can very easily make corresponding adaptations or modifications according to the main idea and spirit of the invention; therefore, the protection scope of the invention shall be subject to the protection scope defined by the claims.
Claims (12)
1. A method of customized expression image generation, characterized by comprising:
obtaining a video capture instruction generated in a video application;
performing image capture on a video playing in the video application according to the video capture instruction, to obtain an image to be processed;
calling an image processing plug-in embedded in the video application to perform image processing on the image to be processed and generate an expression image;
pushing the expression image to a social networking application server according to a social networking application identifier of a user, the social networking application identifier of the user having a correspondence with the video application.
2. The method of claim 1, characterized in that the step of obtaining the video capture instruction generated in the video application comprises:
responding to a trigger operation of the user by pausing the video playing in the video application;
generating image content selection information, and prompting the user, via the image content selection information, to select the image content to be processed;
generating the video capture instruction according to the selection of the user.
3. The method of claim 1, characterized in that the step of performing image capture on the video playing in the video application according to the video capture instruction, to obtain the image to be processed, comprises:
taking the current frame position of the video playing in the video application as a start frame position, and determining an end frame position according to a frame count indicated in the video capture instruction;
capturing the image to be processed from the video according to the start frame position and the end frame position.
4. The method of claim 1, characterized in that, before the step of calling the image processing plug-in embedded in the video application to perform image processing on the image to be processed and generate the expression image, the method further comprises:
generating image processing selection information, and prompting the user, via the image processing selection information, to choose whether to perform image processing immediately;
when the user does not choose to perform image processing immediately, continuing to play the video in the video application, and saving the image to be processed to a preset storage space, so that the image to be processed in the preset storage space is processed later.
5. The method of claim 4, characterized in that the method further comprises:
when an image extraction instruction is detected to have been triggered in the preset storage space, extracting the image to be processed from the preset storage space, so as to perform image processing on the extracted image.
6. The method of any one of claims 1 to 5, characterized in that, before the step of pushing the expression image to the social networking application server according to the social networking application identifier of the user, the method further comprises:
searching for the social networking application identifier of the user having a correspondence with the video application, and, if the social networking application identifier of the user having a correspondence with the video application is not found, initiating a social networking application binding request to the social server;
responding, by the social server, to the social networking application binding request, and establishing the correspondence between the video application and the social networking application identifier of the user.
7. An apparatus for customized expression image generation, characterized by comprising:
an instruction acquisition module, configured to obtain a video capture instruction generated in a video application;
a video capture module, configured to perform image capture on a video playing in the video application according to the video capture instruction, to obtain an image to be processed;
an image processing module, configured to call an image processing plug-in embedded in the video application to perform image processing on the image to be processed and generate an expression image;
an image pushing module, configured to push the expression image to a social networking application server according to a social networking application identifier of a user, the social networking application identifier of the user having a correspondence with the video application.
8. The apparatus of claim 7, characterized in that the instruction acquisition module comprises:
an operation response unit, configured to respond to a trigger operation of the user by pausing the video playing in the video application;
an information generation unit, configured to generate image content selection information, and prompt the user, via the image content selection information, to select the image content to be processed;
an instruction generation unit, configured to generate the video capture instruction according to the selection of the user.
9. The apparatus of claim 7, characterized in that the video capture module comprises:
a frame position determination unit, configured to take the current frame position of the video playing in the video application as a start frame position, and determine an end frame position according to a frame count indicated in the video capture instruction;
an image capture unit, configured to capture the image to be processed from the video according to the start frame position and the end frame position.
10. The apparatus of claim 7, characterized in that the apparatus further comprises:
an information generation module, configured to generate image processing selection information, and prompt the user, via the image processing selection information, to choose whether to perform image processing immediately;
an image storage module, configured to, when the user does not choose to perform image processing immediately, continue playing the video in the video application and save the image to be processed to a preset storage space, so that the image to be processed in the preset storage space is processed later.
11. The apparatus of claim 10, characterized in that the apparatus further comprises:
an image extraction module, configured to, when an image extraction instruction is detected to have been triggered in the preset storage space, extract the image to be processed from the preset storage space, so as to perform image processing on the extracted image.
12. The apparatus of any one of claims 7 to 11, characterized in that the apparatus further comprises:
a search module, configured to search for the social networking application identifier of the user having a correspondence with the video application, and, if the social networking application identifier of the user having a correspondence with the video application is not found, initiate a social networking application binding request to the social server;
a binding module, configured to respond, via the social server, to the social networking application binding request, and establish the correspondence between the video application and the social networking application identifier of the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710007418.9A CN106658079B (en) | 2017-01-05 | 2017-01-05 | The customized method and device for generating facial expression image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710007418.9A CN106658079B (en) | 2017-01-05 | 2017-01-05 | The customized method and device for generating facial expression image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106658079A true CN106658079A (en) | 2017-05-10 |
CN106658079B CN106658079B (en) | 2019-04-30 |
Family
ID=58843254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710007418.9A Active CN106658079B (en) | 2017-01-05 | 2017-01-05 | The customized method and device for generating facial expression image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106658079B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108200463A (en) * | 2018-01-19 | 2018-06-22 | 上海哔哩哔哩科技有限公司 | The generation system of the generation method of barrage expression packet, server and barrage expression packet |
CN108596114A (en) * | 2018-04-27 | 2018-09-28 | 佛山市日日圣科技有限公司 | A kind of expression generation method and device |
CN108712323A (en) * | 2018-05-02 | 2018-10-26 | 广州市百果园网络科技有限公司 | Voice transmitting method, system, computer storage media and computer equipment |
CN109120866A (en) * | 2018-09-27 | 2019-01-01 | 腾讯科技(深圳)有限公司 | Dynamic expression generation method, device, computer readable storage medium and computer equipment |
CN109472849A (en) * | 2017-09-07 | 2019-03-15 | 腾讯科技(深圳)有限公司 | Method, apparatus, terminal device and the storage medium of image in processing application |
CN110049377A (en) * | 2019-03-12 | 2019-07-23 | 北京奇艺世纪科技有限公司 | Expression packet generation method, device, electronic equipment and computer readable storage medium |
CN110149549A (en) * | 2019-02-26 | 2019-08-20 | 腾讯科技(深圳)有限公司 | The display methods and device of information |
WO2020010974A1 (en) * | 2018-07-12 | 2020-01-16 | 腾讯科技(深圳)有限公司 | Image processing method and device, computer readable medium and electronic device |
CN111507143A (en) * | 2019-01-31 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
CN111984173A (en) * | 2020-07-17 | 2020-11-24 | 维沃移动通信有限公司 | Expression package generation method and device |
WO2020238320A1 (en) * | 2019-05-27 | 2020-12-03 | 北京字节跳动网络技术有限公司 | Method and device for generating emoticon |
CN113032339A (en) * | 2019-12-09 | 2021-06-25 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113345054A (en) * | 2021-05-28 | 2021-09-03 | 上海哔哩哔哩科技有限公司 | Virtual image decorating method, detection method and device |
CN113568551A (en) * | 2021-07-26 | 2021-10-29 | 北京达佳互联信息技术有限公司 | Picture saving method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090010485A1 (en) * | 2007-07-03 | 2009-01-08 | Duncan Lamb | Video communication system and method |
CN101527690A (en) * | 2009-04-13 | 2009-09-09 | 腾讯科技(北京)有限公司 | Method for intercepting dynamic image, system and device thereof |
CN105828167A (en) * | 2016-03-04 | 2016-08-03 | 乐视网信息技术(北京)股份有限公司 | Screen-shot sharing method and device |
2017
- 2017-01-05 CN CN201710007418.9A patent/CN106658079B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090010485A1 (en) * | 2007-07-03 | 2009-01-08 | Duncan Lamb | Video communication system and method |
CN101527690A (en) * | 2009-04-13 | 2009-09-09 | 腾讯科技(北京)有限公司 | Method for intercepting dynamic image, system and device thereof |
CN105828167A (en) * | 2016-03-04 | 2016-08-03 | 乐视网信息技术(北京)股份有限公司 | Screen-shot sharing method and device |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472849A (en) * | 2017-09-07 | 2019-03-15 | 腾讯科技(深圳)有限公司 | Method, apparatus, terminal device and the storage medium of image in processing application |
CN109472849B (en) * | 2017-09-07 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Method, device, terminal equipment and storage medium for processing image in application |
CN108200463B (en) * | 2018-01-19 | 2020-11-03 | 上海哔哩哔哩科技有限公司 | Bullet screen expression package generation method, server and bullet screen expression package generation system |
CN108200463A (en) * | 2018-01-19 | 2018-06-22 | 上海哔哩哔哩科技有限公司 | The generation system of the generation method of barrage expression packet, server and barrage expression packet |
CN108596114A (en) * | 2018-04-27 | 2018-09-28 | 佛山市日日圣科技有限公司 | A kind of expression generation method and device |
CN108712323A (en) * | 2018-05-02 | 2018-10-26 | 广州市百果园网络科技有限公司 | Voice transmitting method, system, computer storage media and computer equipment |
US11282182B2 (en) | 2018-07-12 | 2022-03-22 | Tencent Technology (Shenzhen) Company Limited | Image processing method and apparatus, computer-readable medium, and electronic device |
WO2020010974A1 (en) * | 2018-07-12 | 2020-01-16 | 腾讯科技(深圳)有限公司 | Image processing method and device, computer readable medium and electronic device |
CN109120866B (en) * | 2018-09-27 | 2020-04-03 | 腾讯科技(深圳)有限公司 | Dynamic expression generation method and device, computer readable storage medium and computer equipment |
US11645804B2 (en) | 2018-09-27 | 2023-05-09 | Tencent Technology (Shenzhen) Company Limited | Dynamic emoticon-generating method, computer-readable storage medium and computer device |
WO2020063319A1 (en) * | 2018-09-27 | 2020-04-02 | 腾讯科技(深圳)有限公司 | Dynamic emoticon-generating method, computer-readable storage medium and computer device |
CN109120866A (en) * | 2018-09-27 | 2019-01-01 | 腾讯科技(深圳)有限公司 | Dynamic expression generation method, device, computer readable storage medium and computer equipment |
CN111507143A (en) * | 2019-01-31 | 2020-08-07 | 北京字节跳动网络技术有限公司 | Expression image effect generation method and device and electronic equipment |
CN110149549A (en) * | 2019-02-26 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Information display method and device |
CN110149549B (en) * | 2019-02-26 | 2022-09-13 | 腾讯科技(深圳)有限公司 | Information display method and device |
CN110049377A (en) * | 2019-03-12 | 2019-07-23 | 北京奇艺世纪科技有限公司 | Expression packet generation method, device, electronic equipment and computer readable storage medium |
CN110049377B (en) * | 2019-03-12 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Expression package generation method and device, electronic equipment and computer readable storage medium |
US11023716B2 (en) | 2019-05-27 | 2021-06-01 | Beijing Bytedance Network Technology Co., Ltd. | Method and device for generating stickers |
WO2020238320A1 (en) * | 2019-05-27 | 2020-12-03 | 北京字节跳动网络技术有限公司 | Method and device for generating emoticon |
CN113032339A (en) * | 2019-12-09 | 2021-06-25 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113032339B (en) * | 2019-12-09 | 2023-10-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN111984173A (en) * | 2020-07-17 | 2020-11-24 | 维沃移动通信有限公司 | Expression package generation method and device |
CN111984173B (en) * | 2020-07-17 | 2022-03-25 | 维沃移动通信有限公司 | Expression package generation method and device |
CN113345054A (en) * | 2021-05-28 | 2021-09-03 | 上海哔哩哔哩科技有限公司 | Virtual image decorating method, detection method and device |
CN113568551A (en) * | 2021-07-26 | 2021-10-29 | 北京达佳互联信息技术有限公司 | Picture saving method and device |
Also Published As
Publication number | Publication date |
---|---|
CN106658079B (en) | 2019-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106658079A (en) | Customized expression image generation method and device | |
US11030987B2 (en) | Method for selecting background music and capturing video, device, terminal apparatus, and medium | |
CN105391965B (en) | Multi-camera-based video recording method and device | |
CN105376496A (en) | Photographing method and device | |
CN103648048B (en) | Intelligent television video resource searching method and system | |
CN106101743B (en) | Panoramic video recognition method and device | |
CN106570100A (en) | Information search method and device | |
CN106331178B (en) | Information sharing method and mobile terminal | |
CN105892868A (en) | Screen capture method and screen capture device | |
CN109286836B (en) | Multimedia data processing method and device, intelligent terminal and storage medium | |
CN105550251A (en) | Picture play method and device | |
CN104038560A (en) | Remote assistance method between mobile terminals, client side, electronic device and system | |
CN109379623A (en) | Video content generation method, device, computer equipment and storage medium | |
CN107547934A (en) | Information transferring method and device based on video | |
CN110162652A (en) | Picture display method and device, and terminal device | |
CN107040808A (en) | Processing method and apparatus for bullet screen images in video playback | |
CN106162364A (en) | Intelligent television system input method and device, terminal auxiliary input method and device | |
CN109151565A (en) | Voice playing method and apparatus, electronic device and storage medium | |
CN106406111A (en) | Intelligent household electrical appliance operation method based on AR technology and image recognition technology | |
CN106886540A (en) | Data search method and device, and device for data search | |
CN105653195B (en) | Screenshot method and mobile terminal | |
CN111601012B (en) | Image processing method and device and electronic equipment | |
CN106030535A (en) | Application program switching method, apparatus and electronic terminal | |
CN104537049B (en) | Picture browsing method and device | |
CN109842820A (en) | Bullet screen data input method and device, mobile terminal and readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||