CN106358087B - Expression packet generation method and device - Google Patents
- Publication number: CN106358087B (application CN201610931186.1A)
- Authority
- CN
- China
- Prior art keywords
- interested
- segment
- subset
- segments
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/4728—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
Abstract
The disclosure relates to an expression packet (sticker pack) generation method and device. The method includes: detecting a video specified by a user to obtain segments of interest in the video; determining a subset of segments of interest from those segments, the subset containing at least one segment of interest; and generating an expression packet according to the subset. The disclosure makes it possible to generate a personalized expression packet from a video provided by the user, so that making an expression packet is simple and quick, improving the flexibility and fun of expression packets.
Description
Technical field
This disclosure relates to the field of computer application technology, and in particular to an expression packet generation method and device.
Background technique
Expression packets are widely used in social applications, for example communication software such as WeChat and QQ. An expression packet (i.e., an expression library) usually refers to a group of expressions; a good expression packet can greatly increase the flexibility and fun of information exchange between users, and among many users a "sticker war" at the slightest disagreement is also very common.

At present, the expressions users employ come from third-party producers, who mostly design and draw them with drawing tools, animation tools and the like. The production process is complex and time-consuming, and the results lack individuality. How to let users easily make personalized expression packets from videos they provide themselves is a problem to be solved.
Summary of the invention
To overcome the problems in the related art, the disclosure provides an expression packet generation method and device. The technical scheme is as follows.
According to the first aspect of the embodiments of the present disclosure, a kind of expression packet generation method is provided, comprising:
detecting a video specified by a user, and obtaining segments of interest in the video;
determining a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest; and
generating an expression packet according to the subset of segments of interest.
The technical scheme provided by the embodiments of the disclosure may have the following beneficial effects: by detecting a video specified by the user, obtaining the segments of interest in the video, determining from them a subset of segments of interest containing at least one segment of interest, and finally generating an expression packet according to that subset (that is, using the segments of interest in the subset as the expressions of the expression packet corresponding to the video), a personalized expression packet is generated from a video the user provides. Making an expression packet becomes simple and quick, and the flexibility and fun of expression packets are improved.
Further, the segments of interest include at least one of face video segments, action video segments and event video segments.

Further, the segments of interest contain audio data.
Further, before determining the subset of segments of interest from the segments of interest, the method also includes:
screening the segments of interest according to their quality; and/or
cropping the segments of interest, or applying geometric, luminance or color transformations to them.
Screening the segments of interest by quality improves the quality of the expressions included in the expression packet.
Further, determining the subset of segments of interest from the segments of interest includes:
selecting, according to a condition parameter chosen by the user, the segments of interest that satisfy the condition parameter; and
forming the segments of interest that satisfy the condition parameter into the subset of segments of interest, the condition parameter including at least one of expression, mood and action.
Further, determining the subset of segments of interest from the segments of interest includes:
permuting and combining the segments of interest to obtain multiple subsets each containing N segments of interest;
computing, for each subset, the sum and/or product of the similarities between its N segments of interest; and
determining the subset with the smallest similarity sum or smallest similarity product among the multiple subsets as the subset of segments of interest.
Further, determining the subset of segments of interest from the segments of interest includes:
clustering the segments of interest to obtain at least one class of segments of interest, where the similarities between segments of interest within each class fall in the same range, or where the segments of interest in each class correspond to the same condition parameter, the condition parameter including at least one of expression, mood and action; and
selecting the highest-quality segment of interest in each class to form the subset of segments of interest.
Further, before the expression packet is generated according to the subset of segments of interest, the method also includes applying at least one of the following to the segments of interest in the subset:
a stylization filter or cartoonization filter, exaggerated-expression processing, speed-up processing or reverse processing.
Such processing can increase the fun of the expression packet.
Further, after the expression packet is generated according to the subset of segments of interest, the method also includes:
in response to a user operation, post-processing the expressions in the expression packet and storing the post-processed expression packet, the post-processing including modifying or deleting expressions.
Letting the user take part in the production of the expression packet improves the user experience, and the expression packet the user needs can be output and processed according to the user's choices.
According to a second aspect of the embodiments of the disclosure, an expression packet generating device is provided, comprising:
an obtaining module configured to detect a video specified by a user and obtain segments of interest in the video;
a determining module configured to determine a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest; and
a generation module configured to generate an expression packet according to the subset of segments of interest.
The device provided by the embodiments of the disclosure may have the following beneficial effects: by detecting a video specified by the user, obtaining the segments of interest in the video, determining from them a subset of segments of interest containing at least one segment of interest, and finally generating an expression packet according to that subset (that is, using the segments of interest in the subset as the expressions of the expression packet corresponding to the video), a personalized expression packet is generated from a video the user provides. Making an expression packet is simple and quick, and the flexibility and fun of expression packets are improved.
Further, the segments of interest include at least one of face video segments, action video segments and event video segments.

Further, the segments of interest contain audio data.
Further, the device also includes:
a first processing module configured to, before the determining module determines the subset of segments of interest from the segments of interest, screen the segments of interest according to their quality; and/or crop the segments of interest or apply geometric, luminance or color transformations to them.
Screening the segments of interest by quality improves the quality of the expressions included in the expression packet.
Further, the determining module includes:
a first selection submodule configured to select from the segments of interest, according to a condition parameter chosen by the user, the segments that satisfy the condition parameter; and
a combination submodule configured to form the segments of interest that satisfy the condition parameter into the subset of segments of interest, the condition parameter including at least one of expression, mood and action.
Further, the determining module includes:
a computation submodule configured to permute and combine the segments of interest to obtain multiple subsets each containing N segments of interest, and to compute for each subset the sum and/or product of the similarities between its N segments of interest; and
a determination submodule configured to determine the subset with the smallest similarity sum or smallest similarity product among the multiple subsets as the subset of segments of interest.
Further, the determining module includes:
a clustering submodule configured to cluster the segments of interest to obtain at least one class of segments of interest, where the similarities between segments of interest within each class fall in the same range, or where the segments of interest in each class correspond to the same condition parameter, the condition parameter including at least one of expression, mood and action; and
a second selection submodule configured to select the highest-quality segment of interest in each class to form the subset of segments of interest.
Further, the device also includes:
a second processing module configured to, before the generation module generates the expression packet according to the subset of segments of interest, apply at least one of the following to the segments of interest in the subset: a stylization filter or cartoonization filter, exaggerated-expression processing, speed-up processing or reverse processing.
Such processing can increase the fun of the expression packet.
Further, the device also includes:
a third processing module configured to, after the generation module generates the expression packet according to the subset of segments of interest, post-process the expressions in the expression packet in response to a user operation and store the post-processed expression packet, the post-processing including modifying or deleting expressions.
Letting the user take part in the production of the expression packet improves the user experience, and the expression packet the user needs can be output and processed according to the user's choices.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the disclosure.
Detailed description of the invention
The drawings here are incorporated into and form part of this specification; they show embodiments consistent with the disclosure and, together with the specification, serve to explain its principles.
Fig. 1 is a kind of flow chart of expression packet generation method shown according to an exemplary embodiment.
Fig. 2 is a kind of flow chart of expression packet generation method shown according to an exemplary embodiment.
Fig. 3 is a kind of block diagram of expression packet generating means shown according to an exemplary embodiment.
Fig. 4 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Fig. 5 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Fig. 6 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Fig. 7 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Fig. 8 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Fig. 9 is a kind of block diagram of the expression packet generating means shown according to another exemplary embodiment.
Figure 10 is a kind of block diagram of expression packet generating means shown according to an exemplary embodiment.
The above drawings show specific embodiments of the disclosure, described in more detail below. The drawings and accompanying text are not intended to limit the scope of the disclosed concept in any way, but to illustrate the concept of the disclosure to those skilled in the art by reference to specific embodiments.
Specific embodiment
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. In the following description, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
The expression packet generation methods provided by the embodiments of the disclosure can be implemented by a terminal device on which communication software is installed; the terminal device may be a smartphone, a tablet computer, an e-book reader, a portable computer, or the like.
Fig. 1 is a flow chart of an expression packet generation method according to an exemplary embodiment. In this embodiment the method is described as applied to a terminal device. The method may include the following steps.
In step S11, the video specified by the user is detected to obtain the segments of interest in the video.

A segment of interest includes at least one of a face video segment, an action video segment and an event video segment. An action video segment is a segment of a person's limb movement; an event video segment is a segment constituting an event, such as a shot being taken. A video segment is a run of video frames, described by a start frame number, a frame count and an end frame number.
Further, a segment of interest contains audio data, which can serve as a condition for recognizing expression, mood and the like. It may also contain sensor data: data measured by sensors on the terminal device while the user shot the video with it, such as the user's motion data during shooting, which can serve as a condition for recognizing actions and the like.
In step S12, a subset of segments of interest is determined from the segments of interest; the subset contains at least one segment of interest.
Further, before step S12, the method may also include: screening the segments of interest according to their quality, for example by whether the images are blurry, whether the face is complete, or whether the video segment is occluded; or cropping the segments of interest or applying geometric, luminance or color transformations to them; or first screening the segments of interest by quality and then cropping or transforming the screened segments. Cropping may, for example, follow the region occupied by the face or the whole human body.

Screening the segments of interest by quality improves the quality of the expressions included in the expression packet.
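The screening just described can be sketched as a filter over per-segment quality measurements. The field names and threshold below are illustrative assumptions; in practice a sharpness score might come from, for example, the variance of an image Laplacian:

```python
def screen_segments(segments, blur_threshold=100.0):
    """Keep only segments that pass the quality checks named in the text:
    not blurry, face complete, not occluded."""
    return [
        seg for seg in segments
        if seg["sharpness"] >= blur_threshold   # higher = less blurry
        and seg["face_complete"]
        and not seg["occluded"]
    ]
```

A blurry segment or one with an incomplete face is simply dropped before the subset is determined.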
For example, the subset of segments of interest can be determined from the segments of interest in the following ways.

One: according to a condition parameter chosen by the user, select from the segments of interest the segments that satisfy the condition parameter, and form the satisfying segments into the subset of segments of interest; the condition parameter includes at least one of expression, mood and action, and may be a default or user-specified. For example, if the condition parameter is expression, expression detection is performed on the segments of interest and the segments containing expressions (such as crying, dejection, laughing, smiling, anger or surprise) are selected to form the subset. If the condition parameter is mood, mood detection is performed on the segments of interest, optionally combined with sound and body-movement detection, and the segments expressing moods (such as joy, anger, sorrow or happiness) are selected to form the subset; the more intense the behavior shown in body movement, the stronger the mood: joy may show as dancing with excitement, anger as gnashing of teeth, sorrow as loss of appetite, and grief as visible pain, all bodily reactions to mood. If the condition parameter is action, such as dancing, high-jumping, running, spinning or hurdling, action detection is performed on the segments of interest and the segments containing actions are selected to form the subset. Any combination of the above three may also be used, depending on the condition parameters.
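The first way (selection by a user-chosen condition parameter) reduces to label matching once expression, mood and action detectors have tagged each segment. A sketch under that assumption, with hypothetical label names:

```python
def select_by_conditions(segments, conditions):
    """Select segments of interest whose detected labels satisfy the chosen
    condition parameters, e.g. conditions={"expression": {"smile", "laugh"}}.

    A segment matches if, for every chosen parameter, its label for that
    parameter is one of the wanted values.
    """
    subset = []
    for seg in segments:
        if all(seg.get(param) in wanted for param, wanted in conditions.items()):
            subset.append(seg)
    return subset
```

Passing two parameters at once, e.g. `{"expression": {"cry"}, "action": {"run"}}`, implements the "any combination of the three" case from the text.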
Two: permute and combine the segments of interest to obtain multiple subsets each containing N segments of interest, then compute for each subset the sum and/or product of the similarities between its N segments of interest, and finally determine the subset with the smallest similarity sum or smallest similarity product among the multiple subsets as the subset of segments of interest.

Similarity here can be measured with features such as color, gray level, histogram, motion, or combinations of these, or can be based on the results of condition-parameter selection, e.g., defined over faces, expressions or scenes. Each video segment in the subset should be representative enough, in other words, sufficiently different from the others. N is a preset positive integer or a positive integer specified by the user; the user may specify the number N of segments of interest in the subset while specifying the number of expressions in the packet, or N may be the number of segments of interest detected in step S11 in the video the user specified.
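The second way can be sketched by enumerating every N-element combination of segments and scoring it by the sum of its pairwise similarities; the lowest-scoring combination is the most mutually different, which is what the text asks for (the similarity-product variant is analogous). The feature vectors and the normalized-dot-product similarity are illustrative assumptions; the text only requires some similarity over color, gray-level, histogram or motion features:

```python
from itertools import combinations

def similarity(a, b):
    """Toy similarity: normalized dot product of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def most_diverse_subset(features, n):
    """Return the indices of the n segments whose pairwise similarity
    sum is smallest, i.e. the most mutually different subset."""
    best, best_score = None, float("inf")
    for combo in combinations(range(len(features)), n):
        score = sum(similarity(features[i], features[j])
                    for i, j in combinations(combo, 2))
        if score < best_score:
            best, best_score = combo, score
    return best
```

Exhaustive enumeration evaluates C(n, N) subsets, so this brute-force sketch only suits small segment counts.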
Three: cluster the segments of interest to obtain at least one class of segments of interest, where the similarities between segments of interest within each class fall in the same range. Clustering by similarity can use, for example, the k-means algorithm: given a cluster count k and a database of n data objects, it outputs k clusters satisfying a minimum-variance criterion; the algorithm takes k as input and partitions the n data objects into k clusters such that the similarity between objects in the same cluster is high and the similarity between objects in different clusters is low. Alternatively, the segments of interest in each class correspond to the same condition parameter, the condition parameter including at least one of expression, mood and action; that is, the clustering is done by condition parameter, for example yielding three classes of segments of interest after clustering: expression, mood and action. Finally, the highest-quality segment of interest in each class is selected to form the subset of segments of interest.
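When the clustering is done by condition parameter (the alternative the text gives alongside k-means), the final selection reduces to grouping segments by class label and keeping the highest-quality one per class. A sketch under that assumption, with hypothetical quality scores:

```python
from collections import defaultdict

def best_per_class(segments):
    """Group segments of interest by class label (e.g. expression, mood,
    action) and keep the highest-quality segment from each class."""
    classes = defaultdict(list)
    for seg in segments:
        classes[seg["label"]].append(seg)
    return [max(group, key=lambda s: s["quality"]) for group in classes.values()]
```

With similarity-based clustering instead, the same selection step would run over the cluster assignments produced by k-means.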
In step S13, an expression packet is generated according to the subset of segments of interest.

The generated expression packet may consist of image frames, video clips, or both; selecting image frames from a video clip can be done with existing techniques and is not described again here.
Further, before the expression packet is generated according to the subset of segments of interest, the method also includes applying at least one of the following to the segments of interest in the subset: a stylization filter or cartoonization filter, exaggerated-expression processing, speed-up processing or reverse processing; text may also be added. Such processing can increase the fun of the expression packet.
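Of the processing options listed, the temporal ones are easy to sketch at the frame level (stylization, cartoonization and exaggerated-expression filters need real image processing and are omitted). Treating a segment as a list of frames, and reading the translated "reprocessing" as reverse playback, which is an assumption about the original wording:

```python
def speed_up(frames, factor=2):
    """Accelerate playback by keeping every `factor`-th frame."""
    return frames[::factor]

def reverse(frames):
    """Reverse playback by reversing the frame order."""
    return frames[::-1]
```

Either transform (or both composed) can be applied to each segment in the subset before the expression packet is assembled.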
Further, after the expression packet is generated according to the subset of segments of interest, the method may also include: in response to a user operation, post-processing the expressions in the expression packet and storing the post-processed expression packet, the post-processing including modifying or deleting expressions.

After the expression packet is generated, the user can process it, for example by deleting or modifying expressions, and hand the result back to the terminal device, which stores it as the expressions in the expression packet. Letting the user take part in the production of the expression packet improves the user experience, and the expression packet the user needs can be output and processed according to the user's choices.
In conclusion expression packet generation method provided in this embodiment, the video specified by detecting user, obtain video
In segment interested, determine subset of segments interested from segment interested, include at least one in subset of segments interested
A segment interested finally generates expression packet according to subset of segments interested, i.e., by interested in subset of segments interested
Expression in the corresponding expression packet of the Duan Zuowei video generates personalized expression according to the video that user provides to realize
Packet, the process for making expression packet is simple and quick, improves the flexibility and interest of expression packet.
An exemplary embodiment is used below to describe the technical solution of the method embodiment shown in Fig. 1 in detail.
Fig. 2 is a flow chart of an expression packet generation method according to an exemplary embodiment. In this embodiment the method is described as applied to a terminal device. The method may include the following steps.
In step S21, the video specified by the user is detected to obtain the segments of interest in the video.

A segment of interest includes at least one of a face video segment, an action video segment and an event video segment.
In step S22, the segments of interest are screened according to their quality.

In step S23, the screened segments of interest are cropped, or geometric, luminance or color transformations are applied to them.
In step S24, a subset of segments of interest is determined from the segments of interest; the subset contains at least one segment of interest.
For example, the following ways can be used.

One: according to a condition parameter chosen by the user, select from the segments of interest the segments that satisfy the condition parameter, and form them into the subset of segments of interest; the condition parameter includes at least one of expression, mood and action.

Two: permute and combine the segments of interest to obtain multiple subsets each containing N segments of interest, then compute for each subset the sum and/or product of the similarities between its N segments of interest, and finally determine the subset with the smallest similarity sum or smallest similarity product among the multiple subsets as the subset of segments of interest.

Three: cluster the segments of interest to obtain at least one class of segments of interest, where the similarities between segments of interest within each class fall in the same range, or where the segments of interest in each class correspond to the same condition parameter, the condition parameter including at least one of expression, mood and action; that is, cluster by condition parameter, for example yielding three classes of segments of interest after clustering: expression, mood and action. Finally, select the highest-quality segment of interest in each class to form the subset of segments of interest.
In step S25, at least one of the following is applied to the subset of segments of interest: a stylization filter or cartoonization filter, exaggerated-expression processing, speed-up processing or reverse processing; text may also be added.

This can further increase the fun of the expression packet.
In step S26, an expression packet is generated according to the processed subset of segments of interest.
In step S27, in response to a user operation, the expressions in the expression packet are post-processed, and the post-processed expression packet is stored; the post-processing includes modifying or deleting expressions.
In conclusion expression packet generation method provided in this embodiment, realizes the video generation provided according to user
Property expression packet, make expression packet process it is simple and quick, improve the flexibility and interest of expression packet.
The following are device embodiments of the disclosure, which can be used to execute the method embodiments of the disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the disclosure.
Fig. 3 is a block diagram of an expression packet generating device according to an exemplary embodiment. The device can be implemented as part or all of a terminal device through software, hardware, or a combination of both. Referring to Fig. 3, the device includes an obtaining module 11, a determining module 12 and a generation module 13.

The obtaining module 11 is configured to detect the video specified by the user and obtain the segments of interest in the video.

The determining module 12 is configured to determine a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest.

The generation module 13 is configured to generate an expression packet according to the subset of segments of interest.
A segment of interest includes at least one of a face video segment, an action video segment and an event video segment.

Further, a segment of interest contains audio data.
In conclusion device provided in this embodiment, the video specified by detecting user obtain interested in video
Segment determines subset of segments interested from segment interested, includes at least one interested in subset of segments interested
Section finally generates expression packet according to subset of segments interested, i.e., using the segment interested in subset of segments interested as the view
Frequently the expression in corresponding expression packet generates personalized expression packet according to the video that user provides to realize, makes table
The process of feelings packet is simple and quick, improves the flexibility and interest of expression packet.
Fig. 4 is a block diagram of an expression packet generating device according to another exemplary embodiment. On the basis of the device shown in Fig. 3, the device of this embodiment further includes a first processing module 14 configured to, before the determining module 12 determines the subset of segments of interest from the segments of interest, screen the segments of interest according to their quality, and/or crop the segments of interest or apply geometric, luminance or color transformations to them.

Screening the segments of interest by quality through the first processing module 14 improves the quality of the expressions included in the expression packet.
Fig. 5 is a block diagram of an expression packet generating device according to another exemplary embodiment. Building on the device shown in Fig. 3, the determining module 12 of this embodiment includes a first selection submodule 121 and a combining submodule 122. The first selection submodule 121 is configured to select, from the segments of interest, the segments that satisfy a condition parameter selected by the user; the combining submodule 122 is configured to form those qualifying segments into the subset of segments of interest. The condition parameter includes at least one of expression, mood, and action.
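Selection by a user-chosen condition parameter might look like the following sketch. The per-segment tags are assumed to come from some upstream analysis step, which the patent does not define; the dictionary layout is invented here.

```python
# Sketch of selection by condition parameter (expression, mood, or action).
# Tags per segment are assumed inputs from upstream analysis.

def select_by_condition(segments, condition):
    """Return the segments whose tags satisfy every key of the condition."""
    return [s for s in segments
            if all(s.get("tags", {}).get(k) == v for k, v in condition.items())]

segments = [
    {"id": 1, "tags": {"expression": "smile", "mood": "happy"}},
    {"id": 2, "tags": {"expression": "frown", "mood": "sad"}},
    {"id": 3, "tags": {"expression": "smile", "mood": "calm"}},
]
subset = select_by_condition(segments, {"expression": "smile"})
```

With the condition `{"expression": "smile"}`, segments 1 and 3 form the subset of segments of interest.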
Fig. 6 is a block diagram of an expression packet generating device according to another exemplary embodiment. Building on the device shown in Fig. 3, the determining module 12 of this embodiment includes a computing submodule 123 and a determining submodule 124. The computing submodule 123 is configured to perform permutation and combination on the segments of interest to obtain multiple subsets each containing N segments of interest, and to calculate the similarity sum and/or similarity product among the N segments of interest in each subset. The determining submodule 124 is configured to determine, among the multiple subsets, the subset with the smallest similarity sum or smallest similarity product as the subset of segments of interest.
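The subset search described above can be sketched by brute-force enumeration: generate every subset of N segments and keep the one whose pairwise similarity sum is smallest, i.e. the most diverse subset. The similarity function here is a toy stand-in; the patent does not specify how similarity is computed.

```python
# Enumerate all N-element subsets and keep the one with the smallest
# pairwise similarity sum (the similarity-product variant is analogous).

from itertools import combinations

def most_diverse_subset(segments, n, similarity):
    best, best_score = None, float("inf")
    for subset in combinations(segments, n):
        score = sum(similarity(a, b) for a, b in combinations(subset, 2))
        if score < best_score:
            best, best_score = list(subset), score
    return best

# Toy labels per segment; similarity is 1.0 when labels match, else 0.0.
segs = ["smile", "smile", "frown", "wave"]
chosen = most_diverse_subset(segs, 2, lambda a, b: 1.0 if a == b else 0.0)
```

Note the enumeration is exponential in the number of segments, so a real implementation would likely bound N or use a greedy approximation.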
Fig. 7 is a block diagram of an expression packet generating device according to another exemplary embodiment. Building on the device shown in Fig. 3, the determining module 12 of this embodiment includes a clustering submodule 125 and a second selection submodule 126. The clustering submodule 125 is configured to cluster the segments of interest to obtain at least one class of segments of interest, wherein the similarities among the segments of interest within each class are in the same range; alternatively, each class of segments of interest corresponds to the same condition parameter, the condition parameter including at least one of expression, mood, and action. The second selection submodule 126 is configured to select the highest-quality segment of interest in each class to form the subset of segments of interest.
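The cluster-then-pick variant can be sketched with the simplest possible clustering rule: one cluster per condition tag. Real clustering would group by similarity range; the tags and quality scores below are assumed inputs invented for this sketch.

```python
# Sketch: group segments into classes (here, by identical condition tag),
# then take the highest-quality segment from each class.

from collections import defaultdict

def cluster_and_pick(segments):
    clusters = defaultdict(list)
    for seg in segments:
        clusters[seg["tag"]].append(seg)            # one cluster per tag
    return [max(group, key=lambda s: s["quality"])  # best of each cluster
            for group in clusters.values()]

segments = [
    {"id": 1, "tag": "smile", "quality": 0.6},
    {"id": 2, "tag": "smile", "quality": 0.9},
    {"id": 3, "tag": "frown", "quality": 0.8},
]
subset = cluster_and_pick(segments)
```

The subset keeps one representative per class: segment 2 (the better smile) and segment 3.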
Fig. 8 is a block diagram of an expression packet generating device according to another exemplary embodiment. Building on the device of any of the above embodiments, the device of this embodiment further includes a second processing module 15, configured, before the generating module 13 generates the expression packet according to the subset of segments of interest, to perform at least one of the following on the subset: stylization filtering or cartoonization filtering, facial-expression exaggeration, acceleration, or repetition. This processing by the second processing module 15 increases the interest of the expression packet.
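Two of the listed effects, acceleration and repetition, are easy to illustrate on a segment modeled as a frame list. Real stylization or cartoonization filters would operate on pixel data and are beyond this sketch; the frame-list model is an assumption.

```python
# Toy illustration of acceleration (keep every k-th frame) and repetition
# (loop the segment) on a segment modeled as a list of frames.

def accelerate(frames, factor=2):
    """Speed up playback by keeping one frame in every `factor`."""
    return frames[::factor]

def repeat(frames, times=2):
    """Loop the segment `times` times."""
    return frames * times

seg = ["f0", "f1", "f2", "f3"]
fast = accelerate(seg)
looped = repeat(fast)
```

Acceleration halves the segment to two frames, and repetition then loops it, a common pattern for short expressive GIF-style clips.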
Fig. 9 is a block diagram of an expression packet generating device according to another exemplary embodiment. Building on the device of any of the above embodiments, the device of this embodiment further includes a third processing module 16, configured, after the generating module generates the expression packet according to the subset of segments of interest, to post-process the expressions in the expression packet in response to a user operation and to store the post-processed expression packet; the post-processing includes modifying or deleting an expression.
In this embodiment, the user participates in the making of the expression packet, which improves the user experience; the expression packet the user needs can also be output according to the user's selections and processing.
With respect to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the corresponding method and will not be elaborated here.
Fig. 10 is a block diagram of an expression packet generating device according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
Referring to Fig. 10, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions, so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components; for example, a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, pictures, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the device 800 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors to provide status assessments of various aspects of the device 800. For example, the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components (for example, the display and keypad of the device 800); it can also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, executable by the processor 820 of the device 800 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
With such a non-transitory computer-readable storage medium, when the instructions in the storage medium are executed by the processor of the device 800, the device 800 is enabled to perform an expression packet generation method.
Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (13)
1. An expression packet generation method, comprising:
detecting a video specified by a user, and obtaining segments of interest in the video;
determining a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest; and
generating an expression packet according to the subset of segments of interest;
wherein determining the subset of segments of interest from the segments of interest comprises:
selecting, from the segments of interest, the segments of interest that satisfy a condition parameter selected by the user; and
forming the segments of interest that satisfy the condition parameter into the subset of segments of interest, the condition parameter comprising at least one of expression, mood, and action;
or comprises:
performing permutation and combination on the segments of interest to obtain multiple subsets each containing N segments of interest;
calculating the similarity sum and/or similarity product among the N segments of interest in each subset; and
determining the subset, among the multiple subsets, with the smallest similarity sum or smallest similarity product as the subset of segments of interest;
or comprises:
clustering the segments of interest to obtain at least one class of segments of interest, wherein the similarities among the segments of interest within each class are in the same range, or each class of segments of interest corresponds to the same condition parameter, the condition parameter comprising at least one of expression, mood, and action; and
selecting the highest-quality segment of interest in each class to form the subset of segments of interest.
2. The method according to claim 1, wherein the segments of interest comprise at least one of face video segments, action video segments, and event video segments.
3. The method according to claim 2, wherein the segments of interest contain audio data.
4. The method according to claim 1, further comprising, before determining the subset of segments of interest from the segments of interest:
screening the segments of interest according to the quality of the segments of interest; and/or
cropping the segments of interest, or applying geometric, luminance, or color transformations to them.
5. The method according to any one of claims 1 to 4, further comprising, before generating the expression packet according to the subset of segments of interest, performing at least one of the following on the segments of interest in the subset:
stylization filtering or cartoonization filtering, facial-expression exaggeration, acceleration, or repetition.
6. The method according to any one of claims 1 to 4, further comprising, after generating the expression packet according to the subset of segments of interest:
post-processing the expressions in the expression packet in response to a user operation, and storing the post-processed expression packet, the post-processing comprising modifying or deleting an expression.
7. An expression packet generating device, comprising:
an obtaining module, configured to detect a video specified by a user and obtain segments of interest in the video;
a determining module, configured to determine a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest; and
a generating module, configured to generate an expression packet according to the subset of segments of interest;
wherein the determining module comprises:
a first selection submodule, configured to select, from the segments of interest, the segments of interest that satisfy a condition parameter selected by the user; and
a combining submodule, configured to form the segments of interest that satisfy the condition parameter into the subset of segments of interest, the condition parameter comprising at least one of expression, mood, and action;
or the determining module comprises:
a computing submodule, configured to perform permutation and combination on the segments of interest to obtain multiple subsets each containing N segments of interest, and to calculate the similarity sum and/or similarity product among the N segments of interest in each subset; and
a determining submodule, configured to determine the subset, among the multiple subsets, with the smallest similarity sum or smallest similarity product as the subset of segments of interest;
or the determining module comprises:
a clustering submodule, configured to cluster the segments of interest to obtain at least one class of segments of interest, wherein the similarities among the segments of interest within each class are in the same range, or each class of segments of interest corresponds to the same condition parameter, the condition parameter comprising at least one of expression, mood, and action; and
a second selection submodule, configured to select the highest-quality segment of interest in each class to form the subset of segments of interest.
8. The device according to claim 7, wherein the segments of interest comprise at least one of face video segments, action video segments, and event video segments.
9. The device according to claim 8, wherein the segments of interest contain audio data.
10. The device according to claim 7, further comprising:
a first processing module, configured, before the determining module determines the subset of segments of interest from the segments of interest, to screen the segments of interest according to the quality of the segments of interest; and/or
to crop the segments of interest, or apply geometric, luminance, or color transformations to them.
11. The device according to any one of claims 7 to 10, further comprising:
a second processing module, configured, before the generating module generates the expression packet according to the subset of segments of interest, to perform at least one of the following on the segments of interest in the subset:
stylization filtering or cartoonization filtering, facial-expression exaggeration, acceleration, or repetition.
12. The device according to any one of claims 7 to 10, further comprising:
a third processing module, configured, after the generating module generates the expression packet according to the subset of segments of interest, to post-process the expressions in the expression packet in response to a user operation and to store the post-processed expression packet, the post-processing comprising modifying or deleting an expression.
13. An expression packet generating device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a video specified by a user, and obtain segments of interest in the video;
determine a subset of segments of interest from the segments of interest, the subset containing at least one segment of interest; and
generate an expression packet according to the subset of segments of interest;
wherein determining the subset of segments of interest from the segments of interest comprises:
selecting, from the segments of interest, the segments of interest that satisfy a condition parameter selected by the user; and
forming the segments of interest that satisfy the condition parameter into the subset of segments of interest, the condition parameter comprising at least one of expression, mood, and action;
or comprises:
performing permutation and combination on the segments of interest to obtain multiple subsets each containing N segments of interest;
calculating the similarity sum and/or similarity product among the N segments of interest in each subset; and
determining the subset, among the multiple subsets, with the smallest similarity sum or smallest similarity product as the subset of segments of interest;
or comprises:
clustering the segments of interest to obtain at least one class of segments of interest, wherein the similarities among the segments of interest within each class are in the same range, or each class of segments of interest corresponds to the same condition parameter, the condition parameter comprising at least one of expression, mood, and action; and
selecting the highest-quality segment of interest in each class to form the subset of segments of interest.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610931186.1A CN106358087B (en) | 2016-10-31 | 2016-10-31 | Expression packet generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106358087A CN106358087A (en) | 2017-01-25 |
CN106358087B true CN106358087B (en) | 2019-04-26 |
Family
ID=57863992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610931186.1A Active CN106358087B (en) | 2016-10-31 | 2016-10-31 | Expression packet generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106358087B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106951856A (en) * | 2017-03-16 | 2017-07-14 | 腾讯科技(深圳)有限公司 | Bag extracting method of expressing one's feelings and device |
CN107240143A (en) * | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Bag generation method of expressing one's feelings and device |
CN108320316B (en) * | 2018-02-11 | 2022-03-04 | 秦皇岛中科鸿合信息科技有限公司 | Personalized facial expression package manufacturing system and method |
CN108596114A (en) * | 2018-04-27 | 2018-09-28 | 佛山市日日圣科技有限公司 | A kind of expression generation method and device |
CN108846881B (en) * | 2018-05-29 | 2023-05-12 | 珠海格力电器股份有限公司 | Expression image generation method and device |
CN110049377B (en) * | 2019-03-12 | 2021-06-22 | 北京奇艺世纪科技有限公司 | Expression package generation method and device, electronic equipment and computer readable storage medium |
CN109982109B (en) * | 2019-04-03 | 2021-08-03 | 睿魔智能科技(深圳)有限公司 | Short video generation method and device, server and storage medium |
CN111530087B (en) * | 2020-04-17 | 2021-12-21 | 完美世界(重庆)互动科技有限公司 | Method and device for generating real-time expression package in game |
CN111753131A (en) * | 2020-06-28 | 2020-10-09 | 北京百度网讯科技有限公司 | Expression package generation method and device, electronic device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101179471A (en) * | 2007-05-31 | 2008-05-14 | 腾讯科技(深圳)有限公司 | Method and apparatus for implementing user personalized dynamic expression picture with characters |
CN101252550A (en) * | 2008-03-31 | 2008-08-27 | 腾讯科技(深圳)有限公司 | User-defined information management apparatus, method and system |
CN101527690A (en) * | 2009-04-13 | 2009-09-09 | 腾讯科技(北京)有限公司 | Method for intercepting dynamic image, system and device thereof |
CN104750387A (en) * | 2015-03-24 | 2015-07-01 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104917666A (en) * | 2014-03-13 | 2015-09-16 | 腾讯科技(深圳)有限公司 | Method of making personalized dynamic expression and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140153900A1 (en) * | 2012-12-05 | 2014-06-05 | Samsung Electronics Co., Ltd. | Video processing apparatus and method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |