CN106358087A - Method and device for generating expression package - Google Patents

Method and device for generating expression package

Info

Publication number
CN106358087A
CN106358087A (application CN201610931186.1A; granted as CN106358087B)
Authority
CN
China
Prior art keywords
segments
interest
segment
interesting
subset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610931186.1A
Other languages
Chinese (zh)
Other versions
CN106358087B (en)
Inventor
丁大勇
刘浩
张波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610931186.1A priority Critical patent/CN106358087B/en
Publication of CN106358087A publication Critical patent/CN106358087A/en
Application granted granted Critical
Publication of CN106358087B publication Critical patent/CN106358087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a method and device for generating an expression package. The method comprises the following steps: detecting a user-specified video to acquire segments of interest from the video; determining a subset of the segments of interest, the subset including at least one segment of interest; and generating the expression package according to the subset. With the method and device, a customized expression package can be generated from a video provided by the user; the production process is simple and quick, and the flexibility and appeal of the expression package are improved.

Description

Expression package generation method and device
Technical Field
The disclosure relates to the technical field of computer application, in particular to an expression package generation method and device.
Background
Emoticons are widely used in all kinds of social applications, such as WeChat, QQ, YiXin and other communication software. An expression package (that is, an expression library) generally refers to a group of expressions. A good expression package can greatly increase the flexibility and interest of information interaction between users, so much so that exchanging emoticons has become very common among users.
At present, the expressions used by users mostly come from third-party producers, and most are designed and drawn with drawing tools, animation tools, and the like; the production process is complex and time-consuming and lacks personalization. How to let users conveniently produce personalized expression packages from videos they provide is a problem to be solved.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides an emoticon generation method and apparatus. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an expression package generating method, including:
detecting a video specified by a user, and acquiring segments of interest in the video;
determining a subset of segments of interest from the segments of interest, wherein the subset comprises at least one segment of interest;
and generating an expression package according to the subset of segments of interest.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: a video specified by a user is detected, segments of interest in the video are acquired, a subset of the segments of interest is determined (the subset comprising at least one segment of interest), and an expression package is finally generated from the subset; that is, the segments of interest in the subset serve as the expressions of the package corresponding to the video. A personalized expression package is thus generated from a video provided by the user; the production process is simple and quick, and the flexibility and interest of the expression package are improved.
Further, the interesting segments comprise at least one of a human face video segment, an action video segment and an event video segment.
Further, audio data is included in the segment of interest.
Further, before the subset of segments of interest is determined, the method further includes:
screening the segments of interest according to their quality; and/or,
performing clipping, geometric transformation, brightness transformation or color transformation on the segments of interest.
Screening the segments of interest by quality improves the quality of the expressions contained in the expression package.
Further, the determining a subset of the segments of interest from the segments of interest includes:
selecting an interested segment meeting the condition parameters from the interested segments according to the condition parameters selected by the user;
and composing the interest segments meeting the condition parameters into the interest segment subset, wherein the condition parameters comprise at least one of expressions, emotions and actions.
Further, the determining a subset of the segments of interest from the segments of interest includes:
arranging and combining the interesting segments to obtain a plurality of subsets containing N interesting segments;
calculating the sum or product of the similarity between N interesting segments in each subset;
and determining the subset with the minimum sum of the similarities or the minimum product of the similarities in the plurality of subsets as the interest segment subset.
Further, the determining a subset of the segments of interest from the segments of interest includes:
clustering the interesting segments to obtain at least one type of interesting segments, wherein the similarity between the interesting segments in each type of interesting segments is in the same range; or the condition parameters corresponding to each type of interesting segments are the same, and the condition parameters comprise at least one of expressions, emotions and actions;
and selecting the segment of interest with the best quality in each type of segment of interest to form the segment of interest subset.
Further, before generating the expression package according to the interest segment subset, the method further includes:
performing at least one of the following processes on the segments of interest in the subset of segments of interest:
stylized or cartoonized filters, expression exaggeration, acceleration or repetition.
The interest of the expression package can be increased through this processing.
Further, after generating the expression package according to the interest segment subset, the method further includes:
and responding to user operation, performing post-processing on the expressions in the expression package, and storing the post-processed expression package, wherein the post-processing comprises modifying the expressions or deleting the expressions.
The user can participate in the production process of the expression package, so that the user experience is improved, and the expression package required by the user can be output according to the selection and processing of the user.
According to a second aspect of the embodiments of the present disclosure, there is provided an emoticon generation apparatus, including:
the acquisition module is configured to detect a video designated by a user and acquire an interesting segment in the video;
a determining module configured to determine a segment subset of interest from the segments of interest, the segment subset of interest including at least one segment of interest;
a generating module configured to generate an expression package according to the interest segment subset.
The apparatus provided by the embodiments of the disclosure can have the following beneficial effects: a video specified by a user is detected, segments of interest in the video are acquired, a subset of the segments of interest is determined (the subset comprising at least one segment of interest), and an expression package is finally generated from the subset; that is, the segments of interest in the subset serve as the expressions of the package corresponding to the video. A personalized expression package is thus generated from a video provided by the user; the production process is simple and quick, and the flexibility and interest of the expression package are improved.
Further, the interesting segments comprise at least one of a human face video segment, an action video segment and an event video segment.
Further, audio data is included in the segment of interest.
Further, the apparatus further includes:
a first processing module configured to, before the determining module determines the subset of segments of interest, screen the segments of interest according to their quality; and/or,
perform clipping, geometric transformation, brightness transformation or color transformation on the segments of interest.
Screening the segments of interest by quality improves the quality of the expressions contained in the expression package.
Further, the determining module includes:
a first selection submodule configured to select, from the segments of interest, segments of interest that meet a condition parameter selected by a user according to the condition parameter;
a combining sub-module configured to combine the segments of interest that meet the condition parameters into the segment of interest subset, the condition parameters including at least one of expressions, emotions, and actions.
Further, the determining module includes:
the calculation sub-module is configured to perform permutation and combination on the interesting segments to obtain a plurality of subsets comprising N interesting segments, and calculate the sum or product of the similarity between the N interesting segments in each subset;
a determining submodule configured to determine a subset of the plurality of subsets in which the sum of the similarities is minimum or the product of the similarities is minimum as the segment subset of interest.
Further, the determining module includes:
the clustering sub-module is configured to cluster the interesting segments to obtain at least one type of interesting segments, wherein the similarity between the interesting segments in each type of interesting segments is in the same range; or the condition parameters corresponding to each type of interesting segments are the same, and the condition parameters comprise at least one of expressions, emotions and actions;
and the second selection submodule is configured to select the segment of interest with the best quality in each type of segment of interest to form the segment of interest subset.
Further, the apparatus further includes:
a second processing module configured to, before the generating module generates an expression package according to the interest segment subset, perform at least one of the following processes on the interest segments in the interest segment subset:
stylized or cartoonized filters, expression exaggeration, acceleration or repetition.
The interest of the expression package can be increased through this processing.
Further, the apparatus further includes:
a third processing module configured to, after the generating module generates the expression package according to the subset of segments of interest, respond to a user operation by post-processing the expressions in the package and storing the post-processed package, the post-processing including modifying or deleting expressions.
The user can participate in the production process of the expression package, so that the user experience is improved, and the expression package required by the user can be output according to the selection and processing of the user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method of generating an emoticon according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of emoticon generation according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating an emoticon generating apparatus according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 5 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 6 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 7 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 8 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 9 is a block diagram illustrating an emoticon generating apparatus according to another exemplary embodiment.
Fig. 10 is a block diagram illustrating an emoticon generating apparatus according to an exemplary embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The method for generating the expression package provided by each embodiment of the present disclosure may be implemented by a terminal device installed with communication software, where the terminal device may be a smart phone, a tablet computer, an electronic book reader, a portable computer, and the like.
Fig. 1 is a flowchart illustrating an expression package generation method according to an exemplary embodiment; in this embodiment, the method is described as applied to a terminal device. The method can comprise the following steps.
In step S11, a video specified by the user is detected, and a segment of interest in the video is acquired.
Here, the segments of interest include at least one of face video segments, action video segments, and event video segments. An action video segment is a segment of human body movement, and an event video segment is a segment obtained by recording an event. A video segment is a run of video frames, described by a start frame number, a number of frames, and an end frame number.
Further, a segment of interest may include audio data, which can be used as a cue for recognizing expressions, emotions, and the like. It may also include sensing data, i.e., data measured by sensors on the terminal device while the user shoots the video (for example, motion data), which can be used as a cue for recognizing actions and the like.
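To make the segment representation concrete, here is a minimal sketch (illustrative only, not part of the patent text) of a segment-of-interest record in Python; the field names and the optional audio/sensor payloads are assumptions:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class Segment:
    """A segment of interest: a run of video frames plus optional cues."""
    start_frame: int                         # start frame number
    frame_count: int                         # number of frames in the segment
    audio: Optional[bytes] = None            # audio data, a cue for expression/emotion
    sensor_data: Optional[List[Any]] = None  # e.g. motion data recorded while shooting

    @property
    def end_frame(self) -> int:
        """End frame number, implied by start frame and frame count."""
        return self.start_frame + self.frame_count - 1
```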
In step S12, a segment-of-interest subset is determined from the segments-of-interest, the segment-of-interest subset including at least one segment-of-interest.
Further, before step S12, the method may further include:
the segments of interest are filtered according to their quality, for example, whether the image is blurred, whether the face is complete, whether the video segment is occluded, etc., or are cropped, geometrically transformed, brightly transformed, or color transformed. Or after the interesting sections are screened according to the quality of the interesting sections, the screened interesting sections are subjected to cutting, geometric transformation, brightness transformation or color transformation. The clipping may be performed according to the area of the face or the entire area occupied by the body, for example.
The quality of the expressions contained in the expression package can be improved by screening the interesting segments according to the quality of the interesting segments.
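As an illustrative sketch of this quality screening (the patent does not prescribe an algorithm), blur and face completeness could be checked per frame with OpenCV; the Laplacian-variance threshold and the Haar cascade choice are assumptions:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_sharp(frame_bgr, threshold=100.0) -> bool:
    """Blur heuristic: variance of the Laplacian; low variance means blur."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold

def has_face(frame_bgr) -> bool:
    """Face-completeness proxy: at least one detectable frontal face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return len(face_cascade.detectMultiScale(gray, 1.1, 5)) > 0
```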
For example, the following method can be used to determine the interest segment subset from the interest segments:
the method comprises the steps of firstly, selecting interesting segments meeting conditional parameters from interesting segments according to the conditional parameters selected by a user, and forming the interesting segments meeting the conditional parameters into interesting segment subsets, wherein the conditional parameters comprise at least one of expressions, emotions and actions. The condition parameter may be preset or may be specified by a user, for example, the condition parameter is an expression, the expression detection is performed on the segment of interest, the segment of interest belonging to the expression is selected, and the sub-set of the segment of interest is composed of expressions such as crying, depression, laugh, smile, anger, surprise and the like. For example, the condition parameter is emotion, emotion detection is performed on the segment of interest, and the segment of interest belonging to emotion can be selected by combining sound and body motion detection during emotion detection, such as: the segments of happiness, anger, sadness, happiness and the like form an interested segment subset, and the stronger the behavior is expressed on the body action, the stronger the emotion of the behavior, for example, the happiness is chorea, the anger is biting and cutting teeth, the worry is unsettled with tea and rice, the sadness is pain, and the like are reactions of the emotion on the body action. For example, the condition parameter is an action, such as dancing, high jump, running, rotating, hurdling, etc., the action of the segment of interest is detected, and the segments selected as belonging to the action constitute the segment subset of interest. Or, any combination of the three above may be adopted, and the specific case may be determined according to the condition parameters.
In a second approach, the segments of interest are combined to obtain a plurality of subsets each containing N segments of interest; the sum (or product) of the pairwise similarities between the N segments in each subset is then computed; finally, the subset with the minimum similarity sum (or minimum similarity product) among the candidate subsets is determined as the subset of segments of interest.
The similarity may be measured with features such as color, gray scale, histograms, and motion, or combinations thereof, or may be defined based on the result of condition-parameter selection (face, expression, scene, and so on); each video segment in the subset should be sufficiently representative, that is, sufficiently different from the others. Here N is a positive integer specified by the user: the number N of segments in the subset (equal to the number of expressions in the package) may be given when the video is specified, or after the segments of interest have been detected in step S11. A brute-force sketch of this approach follows.
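An illustrative sketch under assumptions (a pairwise `similarity` callable, here a hue-histogram correlation; exhaustive enumeration is exponential, so it is only practical for small candidate sets):

```python
from itertools import combinations

import cv2

def hist_similarity(frame_a, frame_b) -> float:
    """One assumed similarity measure: correlation of 32-bin hue histograms."""
    def hue_hist(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        return cv2.normalize(h, h).flatten()
    return float(cv2.compareHist(hue_hist(frame_a), hue_hist(frame_b),
                                 cv2.HISTCMP_CORREL))

def pick_most_diverse(segments, n, similarity):
    """Enumerate every n-element subset and return the one minimizing the
    sum of pairwise similarities, i.e. the most diverse subset."""
    best, best_score = None, float("inf")
    for subset in combinations(segments, n):
        score = sum(similarity(a, b) for a, b in combinations(subset, 2))
        if score < best_score:
            best, best_score = list(subset), score
    return best
```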
In a third approach, the segments of interest are clustered to obtain at least one class of segments, where the similarities between segments within each class fall in the same range. For example, clustering by similarity can use the k-means algorithm: given a cluster count k and a set of n data objects, the algorithm partitions the n objects into k clusters satisfying a minimum-variance criterion, so that objects within the same cluster are highly similar while objects in different clusters are dissimilar. Alternatively, the segments in each class share the same condition parameter, the condition parameter including at least one of expression, emotion, and action; that is, clustering is performed according to condition parameters, yielding classes of expression, emotion, and action segments. Finally, the best-quality segment in each class is selected to form the subset of segments of interest, as sketched below.
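A sketch of the clustering approach using scikit-learn's k-means; the per-segment feature vectors and quality scores are assumed inputs, not specified by the patent:

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_one_per_cluster(features: np.ndarray,
                         quality: np.ndarray, k: int) -> list:
    """Cluster (n, d) segment descriptors into k classes and keep the
    index of the highest-quality segment in each class."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    chosen = []
    for c in range(k):
        members = np.flatnonzero(labels == c)  # segment indices in class c
        if members.size:
            chosen.append(int(members[np.argmax(quality[members])]))
    return chosen
```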
In step S13, an expression package is generated from the subset of segments of interest.
The generated expression package may consist of image frames, video clips, or a combination of the two; selecting an image frame from a video clip can be implemented with existing techniques and is not described again here.
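As one illustrative way to materialize an expression (the patent leaves the output format open), a segment's frames could be written as a looping GIF; the imageio v2 API and the frame rate are assumptions, and duration semantics vary across imageio versions:

```python
import imageio

def segment_to_gif(frames, path, fps=10):
    """Write a list of RGB numpy frames as a looping GIF expression.

    Assumes the legacy imageio v2 API, where duration is per-frame
    seconds; an expression could equally be a single image frame or a
    short video clip.
    """
    imageio.mimsave(path, frames, duration=1.0 / fps, loop=0)
```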
Further, before the expression package is generated from the subset of segments of interest, the method further includes processing the segments in the subset with at least one of the following: a stylized or cartoon filter, expression exaggeration, acceleration, or repetition; text may also be added. This processing can increase the interest of the expression package.
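Acceleration and repetition, at least, have simple frame-level readings; a sketch under the frame-list assumption used above (the factor and repeat count are illustrative defaults):

```python
def accelerate(frames, factor=2):
    """Speed a segment up by keeping every `factor`-th frame."""
    return frames[::factor]

def repeat(frames, times=3):
    """Loop a segment to emphasize an action or expression."""
    return list(frames) * times
```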
Further, after generating the expression package according to the interest segment subset, the method may further include:
and responding to the user operation, performing post-processing on the expressions in the expression package, and storing the post-processed expression package, wherein the post-processing comprises modifying the expressions or deleting the expressions.
After the expression package is generated from the subset of segments of interest, the user can process the package, for example by deleting or modifying expressions; the processed result is returned to the terminal device and stored as the expressions of the package. Having the user participate in the production process improves the user experience, and the expression package the user actually wants can be output according to the user's selections and processing.
In summary, the expression package generation method provided by this embodiment detects a video specified by a user, acquires the segments of interest in the video, determines a subset of those segments (the subset comprising at least one segment of interest), and finally generates an expression package from the subset; that is, the segments of interest in the subset serve as the expressions of the package corresponding to the video. A personalized expression package is thus generated from a video provided by the user; the production process is simple and quick, and the flexibility and interest of the expression package are improved.
The following describes in detail the technical solution of the embodiment of the method shown in fig. 1, using an exemplary embodiment.
Fig. 2 is a flowchart illustrating an expression package generation method according to an exemplary embodiment; in this embodiment, the method is likewise described as applied to a terminal device. The method can comprise the following steps.
In step S21, a video specified by the user is detected, and a segment of interest in the video is acquired.
Wherein the interesting segments comprise at least one of face video segments, action video segments and event video segments.
In step S22, the segment of interest is screened according to its quality.
In step S23, the filtered interesting sections are subjected to clipping, geometric transformation, luminance transformation or color transformation.
In step S24, a segment-of-interest subset is determined from the segments-of-interest, the segment-of-interest subset including at least one segment-of-interest.
For example, the following may be employed:
the method comprises the steps of firstly, selecting interesting segments meeting conditional parameters from interesting segments according to the conditional parameters selected by a user, and forming the interesting segments meeting the conditional parameters into interesting segment subsets, wherein the conditional parameters comprise at least one of expressions, emotions and actions.
Secondly, the interesting segments are arranged and combined to obtain a plurality of subsets containing N interesting segments, then the sum of the similarity or the product of the similarity between the N interesting segments in each subset is calculated, and finally the subset with the minimum sum of the similarity or the minimum product of the similarity in the plurality of subsets is determined as the interesting segment subset.
And thirdly, clustering the interesting segments to obtain at least one type of interesting segments, wherein the similarity between the interesting segments in each type of interesting segments is in the same range, or the condition parameters corresponding to each type of interesting segments are the same, the condition parameters comprise at least one of expression, emotion and action, namely clustering is carried out according to the condition parameters, and the three types of interesting segments of expression, emotion and action are obtained after clustering. And finally, selecting the segment of interest with the best quality in each type of segment of interest to form a segment of interest subset.
In step S25, at least one of the following processes is performed on the subset of segments of interest: a stylized or cartoon filter, expression exaggeration, acceleration, or repetition; text may also be added.
This processing can further increase the interest of the expression package.
In step S26, an expression package is generated from the subset of the segments of interest on which the above-described processing has been performed.
In step S27, in response to a user operation, post-processing is performed on the emoticons in the emoticon package, and the post-processed emoticons are stored, the post-processing including modifying or deleting the emoticons.
In summary, the expression package generation method provided by this embodiment generates a personalized expression package from a video provided by the user; the production process is simple and quick, and the flexibility and interest of the expression package are improved.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 3 is a block diagram illustrating an emoticon generating apparatus according to an exemplary embodiment. The expression package generating device can be realized by software, hardware or a combination of the software and the hardware to become part or all of the terminal equipment. Referring to fig. 3, the apparatus includes: an acquisition module 11, a determination module 12 and a generation module 13. Wherein,
the acquisition module 11 is configured to detect a video specified by a user, and acquire a segment of interest in the video;
the determining module 12 is configured to determine a segment subset of interest from the segments of interest, the segment subset of interest including at least one segment of interest;
the generation module 13 is configured to generate an expression package from the subset of segments of interest.
Wherein the interesting segments comprise at least one of face video segments, action video segments and event video segments.
Further, audio data is included in the segment of interest.
In summary, the apparatus provided by this embodiment detects a video specified by a user, acquires the segments of interest in the video, determines a subset of those segments (the subset comprising at least one segment of interest), and finally generates an expression package from the subset; that is, the segments of interest in the subset serve as the expressions of the package corresponding to the video. A personalized expression package is thus generated from a video provided by the user; the production process is simple and quick, and the flexibility and interest of the expression package are improved.
Fig. 4 is a block diagram of an expression package generation apparatus according to another exemplary embodiment, where the apparatus of this embodiment further includes, on the basis of the apparatus shown in fig. 3: a first processing module 14, wherein the first processing module 14 is configured to filter the segments of interest according to the quality of the segments of interest before the determining module 12 determines the subset of the segments of interest from the segments of interest; and/or, clipping, geometric transformation, luminance transformation, or color transformation is performed on the segment of interest.
The first processing module 14 can filter the interesting sections according to the quality of the interesting sections, so that the quality of the expressions included in the expression package can be improved.
Fig. 5 is a block diagram of an expression package generation apparatus according to another exemplary embodiment, in the apparatus of this embodiment, on the basis of the apparatus shown in fig. 3, the determination module 12 further includes a first selection submodule 121 and a combination submodule 122, the first selection submodule 121 is configured to select an interesting segment meeting a condition parameter from the interesting segments according to the condition parameter selected by the user, the combination submodule 122 is configured to combine the interesting segments meeting the condition parameter into an interesting segment subset, and the condition parameter includes at least one of an expression, an emotion and an action.
Fig. 6 is a block diagram of an expression package generating apparatus according to another exemplary embodiment, in the apparatus of this embodiment, on the basis of the apparatus shown in fig. 3, the determining module 12 further includes a calculating submodule 123 and a determining submodule 124, the calculating submodule 123 is configured to perform permutation and combination on the segments of interest, obtain a plurality of subsets including N segments of interest, calculate the sum of similarities or the similarity product between the N segments of interest in each subset, and the determining submodule 124 is configured to determine the subset with the smallest sum of similarities or the smallest similarity product among the plurality of subsets as the segment subset of interest.
Fig. 7 is a block diagram of an expression package generating apparatus according to another exemplary embodiment, in the apparatus of this embodiment, on the basis of the apparatus shown in fig. 3, the determining module 12 further includes a clustering submodule 125 and a second selecting submodule 126, the clustering submodule 125 is configured to cluster the segments of interest to obtain at least one type of segments of interest, where the similarity between the segments of interest in each type of segments of interest is in the same range; or the condition parameters corresponding to each type of interest segments are the same, and the condition parameters comprise at least one of expressions, emotions and actions. The second selection submodule 126 is configured to select the segment of interest with the best quality in each type of segment of interest to constitute the segment of interest subset.
Fig. 8 is a block diagram of an expression package generation apparatus according to another exemplary embodiment. On the basis of the apparatus shown in any one of the above embodiments, the apparatus of this embodiment further includes a second processing module 15 configured to perform at least one of the following processes on the subset of segments of interest before the generating module 13 generates the expression package: stylized or cartoon filters, expression exaggeration, acceleration, or repetition. This processing by the second processing module 15 can increase the interest of the expression package.
Fig. 9 is a block diagram of an expression package generation apparatus according to another exemplary embodiment. On the basis of the apparatus shown in any one of the above embodiments, the apparatus of this embodiment further includes a third processing module 16 configured to, after the generating module generates the expression package according to the subset of segments of interest, respond to a user operation by post-processing the expressions in the package and storing the post-processed package, the post-processing including modifying or deleting expressions.
In the embodiment, the user experience is improved by the fact that the user participates in the production process of the expression package, and the expression package required by the user can be output according to the selection and the processing of the user.
With regard to the apparatuses in the above embodiments, the manner in which the respective modules perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 10 is a block diagram illustrating an emoticon generating apparatus according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein that, when executed by a processor of the device 800, enable the device 800 to perform an emoticon generation method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (19)

1. An expression package generation method, comprising:
detecting a video appointed by a user, and acquiring an interesting segment in the video;
determining an interesting segment subset from the interesting segments, wherein the interesting segment subset comprises at least one interesting segment;
and generating an expression package according to the interested segment subset.
2. The method of claim 1, wherein the segments of interest comprise at least one of face video segments, action video segments, and event video segments.
3. The method of claim 2, wherein the segment of interest includes audio data.
4. The method of claim 1, wherein prior to determining the subset of segments of interest from the segments of interest, further comprising:
screening the segments of interest according to the quality of the segments of interest; and/or,
and performing clipping, geometric transformation, brightness transformation or color transformation on the interested segment.
5. The method of claim 1, wherein said determining a subset of segments of interest from said segments of interest comprises:
selecting an interested segment meeting the condition parameters from the interested segments according to the condition parameters selected by the user;
and composing the interest segments meeting the condition parameters into the interest segment subset, wherein the condition parameters comprise at least one of expressions, emotions and actions.
6. The method of claim 1, wherein said determining a subset of segments of interest from said segments of interest comprises:
arranging and combining the interesting segments to obtain a plurality of subsets containing N interesting segments;
calculating the sum or product of the similarity between N interesting segments in each subset;
and determining the subset with the minimum sum of the similarities or the minimum product of the similarities in the plurality of subsets as the interest segment subset.
7. The method of claim 1, wherein said determining a subset of segments of interest from said segments of interest comprises:
clustering the interesting segments to obtain at least one type of interesting segments, wherein the similarity between the interesting segments in each type of interesting segments is in the same range, or the condition parameters corresponding to each type of interesting segments are the same, and the condition parameters comprise at least one of expressions, emotions and actions;
and selecting the segment of interest with the best quality in each type of segment of interest to form the segment of interest subset.
8. The method of any of claims 1-7, wherein prior to generating an expression package from the subset of segments of interest, further comprising:
performing at least one of the following processes on the segments of interest in the subset of segments of interest:
stylized or cartoonized filters, expression exaggeration, acceleration or repetition.
9. The method of any one of claims 1-7, wherein after generating the expression package according to the subset of segments of interest, further comprising:
and responding to user operation, performing post-processing on the expressions in the expression package, and storing the post-processed expression package, wherein the post-processing comprises modifying the expressions or deleting the expressions.
10. An expression package generation apparatus, comprising:
the acquisition module is configured to detect a video designated by a user and acquire an interesting segment in the video;
a determining module configured to determine a segment subset of interest from the segments of interest, the segment subset of interest including at least one segment of interest;
a generating module configured to generate an expression package according to the interest segment subset.
11. The apparatus of claim 10, wherein the segments of interest comprise at least one of face video segments, action video segments, and event video segments.
12. The apparatus of claim 11, wherein audio data is included in the segment of interest.
13. The apparatus of claim 10, further comprising:
a first processing module configured to screen the segments of interest according to the quality of the segments of interest before the determining module determines the subset of the segments of interest from the segments of interest; and/or,
and performing clipping, geometric transformation, brightness transformation or color transformation on the interested segment.
14. The apparatus of claim 10, wherein the determining module comprises:
a first selection submodule configured to select, from the segments of interest, segments of interest that meet a condition parameter selected by a user according to the condition parameter;
a combining sub-module configured to combine the segments of interest that meet the condition parameters into the segment of interest subset, the condition parameters including at least one of expressions, emotions, and actions.
15. The apparatus of claim 10, wherein the determining module comprises:
the calculation sub-module is configured to perform permutation and combination on the interesting segments to obtain a plurality of subsets comprising N interesting segments, and calculate the sum or product of the similarity between the N interesting segments in each subset;
a determining submodule configured to determine a subset of the plurality of subsets in which the sum of the similarities is minimum or the product of the similarities is minimum as the segment subset of interest.
16. The apparatus of claim 10, wherein the determining module comprises:
the clustering sub-module is configured to cluster the interesting segments to obtain at least one type of interesting segments, wherein the similarity between the interesting segments in each type of interesting segments is in the same range; or the condition parameters corresponding to each type of interesting segments are the same, and the condition parameters comprise at least one of expressions, emotions and actions;
and the second selection submodule is configured to select the segment of interest with the best quality in each type of segment of interest to form the segment of interest subset.
17. The apparatus of any one of claims 10-16, further comprising:
a second processing module configured to, before the generating module generates an expression package according to the interest segment subset, perform at least one of the following processes on the interest segments in the interest segment subset:
stylized or cartoonized filters, expression exaggeration, acceleration or repetition.
18. The apparatus of any one of claims 10-16, further comprising:
and the third processing module is configured to respond to user operation after the generation module generates the expression package according to the interesting segment subset, perform post-processing on the expressions in the expression package, and store the post-processed expression package, wherein the post-processing includes modifying the expressions or deleting the expressions.
19. An expression package generation apparatus, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to:
detecting a video appointed by a user, and acquiring an interesting segment in the video;
determining an interesting segment subset from the interesting segments, wherein the interesting segment subset comprises at least one interesting segment;
and generating an expression package according to the interested segment subset.
CN201610931186.1A 2016-10-31 2016-10-31 Expression packet generation method and device Active CN106358087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610931186.1A CN106358087B (en) 2016-10-31 2016-10-31 Expression packet generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610931186.1A CN106358087B (en) 2016-10-31 2016-10-31 Expression packet generation method and device

Publications (2)

Publication Number Publication Date
CN106358087A true CN106358087A (en) 2017-01-25
CN106358087B CN106358087B (en) 2019-04-26

Family

ID=57863992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610931186.1A Active CN106358087B (en) 2016-10-31 2016-10-31 Expression packet generation method and device

Country Status (1)

Country Link
CN (1) CN106358087B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951856A * 2017-03-16 2017-07-14 腾讯科技(深圳)有限公司 Expression package extraction method and device
CN107240143A * 2017-05-09 2017-10-10 北京小米移动软件有限公司 Expression package generation method and device
CN108320316A * 2018-02-11 2018-07-24 秦皇岛中科鸿合信息科技有限公司 Personalized expression package production system and method
CN108596114A * 2018-04-27 2018-09-28 佛山市日日圣科技有限公司 Expression generation method and device
CN108846881A (en) * 2018-05-29 2018-11-20 珠海格力电器股份有限公司 Expression image generation method and device
CN109982109A * 2019-04-03 2019-07-05 睿魔智能科技(深圳)有限公司 Short video generation method and device, server and storage medium
CN110049377A (en) * 2019-03-12 2019-07-23 北京奇艺世纪科技有限公司 Expression packet generation method, device, electronic equipment and computer readable storage medium
CN111530087A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression package in game
WO2022000991A1 (en) * 2020-06-28 2022-01-06 北京百度网讯科技有限公司 Expression package generation method and device, electronic device, and medium
US12051142B2 (en) 2020-06-28 2024-07-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Meme package generation method, electronic device, and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179471A (en) * 2007-05-31 2008-05-14 腾讯科技(深圳)有限公司 Method and apparatus for implementing user personalized dynamic expression picture with characters
CN101252550A (en) * 2008-03-31 2008-08-27 腾讯科技(深圳)有限公司 User-defined information management apparatus, method and system
CN101527690A (en) * 2009-04-13 2009-09-09 腾讯科技(北京)有限公司 Method for intercepting dynamic image, system and device thereof
US20140153900A1 (en) * 2012-12-05 2014-06-05 Samsung Electronics Co., Ltd. Video processing apparatus and method
CN104750387A (en) * 2015-03-24 2015-07-01 联想(北京)有限公司 Information processing method and electronic equipment
CN104917666A (en) * 2014-03-13 2015-09-16 腾讯科技(深圳)有限公司 Method of making personalized dynamic expression and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179471A (en) * 2007-05-31 2008-05-14 腾讯科技(深圳)有限公司 Method and apparatus for implementing user personalized dynamic expression picture with characters
CN101252550A (en) * 2008-03-31 2008-08-27 腾讯科技(深圳)有限公司 User-defined information management apparatus, method and system
CN101527690A (en) * 2009-04-13 2009-09-09 腾讯科技(北京)有限公司 Method for intercepting dynamic image, system and device thereof
US20140153900A1 (en) * 2012-12-05 2014-06-05 Samsung Electronics Co., Ltd. Video processing apparatus and method
CN104917666A (en) * 2014-03-13 2015-09-16 腾讯科技(深圳)有限公司 Method of making personalized dynamic expression and device
CN104750387A (en) * 2015-03-24 2015-07-01 联想(北京)有限公司 Information processing method and electronic equipment

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951856A * 2017-03-16 2017-07-14 腾讯科技(深圳)有限公司 Expression package extraction method and device
CN107240143A * 2017-05-09 2017-10-10 北京小米移动软件有限公司 Expression package generation method and device
CN108320316B (en) * 2018-02-11 2022-03-04 秦皇岛中科鸿合信息科技有限公司 Personalized facial expression package manufacturing system and method
CN108320316A * 2018-02-11 2018-07-24 秦皇岛中科鸿合信息科技有限公司 Personalized expression package production system and method
CN108596114A * 2018-04-27 2018-09-28 佛山市日日圣科技有限公司 Expression generation method and device
CN108846881A (en) * 2018-05-29 2018-11-20 珠海格力电器股份有限公司 Expression image generation method and device
CN108846881B (en) * 2018-05-29 2023-05-12 珠海格力电器股份有限公司 Expression image generation method and device
CN110049377A (en) * 2019-03-12 2019-07-23 北京奇艺世纪科技有限公司 Expression packet generation method, device, electronic equipment and computer readable storage medium
CN110049377B (en) * 2019-03-12 2021-06-22 北京奇艺世纪科技有限公司 Expression package generation method and device, electronic equipment and computer readable storage medium
CN109982109B (en) * 2019-04-03 2021-08-03 睿魔智能科技(深圳)有限公司 Short video generation method and device, server and storage medium
CN109982109A * 2019-04-03 2019-07-05 睿魔智能科技(深圳)有限公司 Short video generation method and device, server and storage medium
CN111530087B (en) * 2020-04-17 2021-12-21 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression package in game
CN111530087A (en) * 2020-04-17 2020-08-14 完美世界(重庆)互动科技有限公司 Method and device for generating real-time expression package in game
WO2022000991A1 (en) * 2020-06-28 2022-01-06 北京百度网讯科技有限公司 Expression package generation method and device, electronic device, and medium
US12051142B2 (en) 2020-06-28 2024-07-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Meme package generation method, electronic device, and medium

Also Published As

Publication number Publication date
CN106358087B (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN106358087B (en) Expression packet generation method and device
CN110517185B (en) Image processing method, device, electronic equipment and storage medium
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
CN107463643B (en) Barrage data display method and device and storage medium
CN107341509B (en) Convolutional neural network training method and device and readable storage medium
CN105208284B (en) Shoot based reminding method and device
CN107025421B (en) Fingerprint identification method and device
CN105678266A (en) Method and device for combining photo albums of human faces
CN113676671B (en) Video editing method, device, electronic equipment and storage medium
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
CN113032627A (en) Video classification method and device, storage medium and terminal equipment
CN111526287A (en) Image shooting method, image shooting device, electronic equipment, server, image shooting system and storage medium
CN110019897B (en) Method and device for displaying picture
CN112347911A (en) Method and device for adding special effects of fingernails, electronic equipment and storage medium
CN109981624B (en) Intrusion detection method, device and storage medium
CN105260743A (en) Pattern classification method and device
US11600300B2 (en) Method and device for generating dynamic image
CN107992894B (en) Image recognition method, image recognition device and computer-readable storage medium
CN106447747B (en) Image processing method and device
CN112130719B (en) Page display method, device and system, electronic equipment and storage medium
CN113515251A (en) Content processing method and device, electronic equipment and storage medium
CN105551047A (en) Picture content detecting method and device
CN105469107B (en) Image classification method and device
CN105635573B (en) Camera visual angle regulating method and device
CN107229707A (en) Search for the method and device of image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant