CN107885855A - Dynamic cartoon generation method and system based on intelligent terminal - Google Patents
Dynamic cartoon generation method and system based on intelligent terminal
- Publication number
- CN107885855A CN107885855A CN201711132850.7A CN201711132850A CN107885855A CN 107885855 A CN107885855 A CN 107885855A CN 201711132850 A CN201711132850 A CN 201711132850A CN 107885855 A CN107885855 A CN 107885855A
- Authority
- CN
- China
- Prior art keywords
- scene
- animation
- information
- different
- intelligent terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The disclosure belongs to the field of cartoon design and in particular relates to a dynamic cartoon generation method and system based on an intelligent terminal. The method includes: sending a file request containing the identification information of one or more preset animation files to a predetermined server, and receiving the one or more preset animation files returned by the predetermined server in response to the file request; obtaining multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files; and parsing and judging the scene categories of the scenes described by the multiple pieces of scene information, and automatically matching different audio resource data corresponding to the different scene categories. The disclosure can improve the efficiency of audio data processing when producing and generating an animation.
Description
Technical field
This disclosure relates to the technical field of cartoon design, and more particularly to a dynamic cartoon generation method and a dynamic cartoon generation system based on an intelligent terminal.
Background technology
With social progress and the development of science and technology, people's economic level has risen and cultural life has gradually become richer; as a result, animation has become increasingly popular in the lives of young people.
At present, users increasingly wish to participate actively in the creation of animations. However, the animation creation process is cumbersome and demands much of the creator, so the efficiency of animation production is currently rather low. There is therefore an urgent need to improve on these problems.
The content of the invention
The purpose of the disclosure is to provide a dynamic cartoon generation method and a dynamic cartoon generation system based on an intelligent terminal, thereby overcoming, at least to some extent, one or more of the problems described above.
According to a first aspect of the embodiments of the present disclosure, there is provided a dynamic cartoon generation method based on an intelligent terminal, the method comprising:
Sending a file request containing the identification information of one or more preset animation files to a predetermined server, and receiving the one or more preset animation files returned by the predetermined server in response to the file request;
Obtaining multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files;
Parsing and judging the scene categories of the scenes described by the multiple pieces of scene information, and automatically matching different audio resource data corresponding to the different scene categories.
In an embodiment of the disclosure, the method further comprises:
Before sending the file request containing the identification information of one or more preset animation files to the predetermined server, obtaining the user's historical web browsing information via the intelligent terminal;
Determining the animation type preferred by the user from the historical browsing information, and querying a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information;
Wherein the mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types.
In an embodiment of the disclosure, each piece of scene information includes the background picture of the corresponding scene, and the step of parsing and judging the scene categories of the scenes described by the multiple pieces of scene information comprises:
Judging and determining the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
In an embodiment of the disclosure, the step of automatically matching different audio resource data corresponding to the different scene categories comprises:
Looking up a preset relation table between the different scene categories and their corresponding audio, and determining the different audio resource data automatically matched to the different scene categories.
In an embodiment of the disclosure, the method further comprises:
After the audio resource data have been automatically matched, adjusting the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a dynamic cartoon generation system based on an intelligent terminal, the system comprising:
A data transceiving module, configured to send a file request containing the identification information of one or more preset animation files to a predetermined server, and to receive the one or more preset animation files returned by the predetermined server in response to the file request;
A data processing module, configured to obtain multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files; and
A data matching module, configured to parse and judge the scene categories of the scenes described by the multiple pieces of scene information, and to automatically match different audio resource data corresponding to the different scene categories.
In an embodiment of the disclosure, the system further comprises:
An information obtaining module, configured to obtain the user's historical web browsing information via the intelligent terminal before the file request containing the identification information of one or more preset animation files is sent to the predetermined server; and
An information determination module, configured to determine the animation type preferred by the user from the historical browsing information and to query a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information, wherein the mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types.
In an embodiment of the disclosure, each piece of scene information includes the background picture of the corresponding scene, and the data matching module is specifically configured to judge and determine the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
In an embodiment of the disclosure, the data matching module is specifically configured to look up a preset relation table between the different scene categories and their corresponding audio, and to determine the different audio resource data automatically matched to the different scene categories.
In an embodiment of the disclosure, the system further comprises:
A parameter adjustment module, configured to adjust, after the audio resource data have been automatically matched, the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
The technical solution provided by the embodiments of the disclosure can bring the following beneficial effects:
In the embodiments of the disclosure, a file request containing the identification information of one or more preset animation files is sent to a predetermined server, and the one or more preset animation files returned by the predetermined server in response to the file request are received; multiple pieces of scene information and audio resource data of the animation to be generated are obtained from the one or more preset animation files; the scene categories of the scenes described by the multiple pieces of scene information are parsed and judged, and different audio resource data corresponding to the different scene categories are automatically matched. In this way, audio resource data can be automatically matched by scene category while the animation is being made, which improves the efficiency of audio data processing when producing and generating an animation, improves the production efficiency of the whole animation to some extent, and saves labor cost.
Brief description of the drawings
Fig. 1 shows a flow chart of the dynamic cartoon generation method based on an intelligent terminal in an exemplary embodiment of the disclosure;
Fig. 2 shows a schematic diagram of the dynamic cartoon generation system based on an intelligent terminal in an exemplary embodiment of the disclosure.
Embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in a variety of forms and should not be understood as limited to the examples set forth herein; rather, these embodiments are provided so that the disclosure will be more thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in one or more embodiments in any suitable manner.
In addition, the accompanying drawings are merely schematic illustrations of the disclosure and are not necessarily drawn to scale. Identical reference numerals in the figures denote identical or similar parts, so repeated description of them is omitted. Some of the block diagrams shown in the figures are functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different processor devices and/or microcontroller devices.
This example embodiment provides a dynamic cartoon generation method based on an intelligent terminal; the method can be applied in part or in whole to a smartphone, an iPad, and the like. As shown in Fig. 1, the method may comprise the following steps S101 to S103:
Step S101: Send a file request containing the identification information of one or more preset animation files to a predetermined server, and receive the one or more preset animation files returned by the predetermined server in response to the file request.
Step S102: Obtain multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files.
Step S103: Parse and judge the scene categories of the scenes described by the multiple pieces of scene information, and automatically match different audio resource data corresponding to the different scene categories.
In this embodiment, audio resource data can be automatically matched by scene category while the animation is being made, which improves the efficiency of audio data processing when producing and generating an animation, improves the production efficiency of the whole animation to some extent, and saves labor cost.
Specifically, in step S101, a file request containing the identification information of one or more preset animation files is sent to the predetermined server, and the one or more preset animation files returned by the predetermined server in response to the file request are received.
For example, when an animation is to be made, the one or more preset animation files it requires can be stored in advance on a predetermined server, such as a remote animation processing server. When the animation is made on the terminal, only the identification information of the required preset animation files (such as unique IDs) needs to be carried in a file request and sent to the predetermined server.
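The exchange in step S101 can be sketched as follows. This is a minimal illustration, not the patented implementation: the request payload shape and the in-memory "server" store are assumptions, since the patent only specifies that the request carries the unique IDs of the needed preset animation files.

```python
# Sketch of the step S101 file-request exchange (hypothetical payload shape).

PRESET_FILES = {  # stands in for the remote animation processing server
    "anim-001": b"<preset animation file 001>",
    "anim-002": b"<preset animation file 002>",
}

def request_preset_files(file_ids):
    """Build a file request carrying the unique IDs of the required preset
    animation files and return the files the server sends back."""
    request = {"type": "file_request", "ids": list(file_ids)}
    # Server side: look up each requested ID and return the matching files.
    return {fid: PRESET_FILES[fid] for fid in request["ids"] if fid in PRESET_FILES}

files = request_preset_files(["anim-001", "anim-002"])
print(sorted(files))  # ['anim-001', 'anim-002']
```

In a real deployment the lookup would of course be an HTTP or socket round trip to the animation processing server rather than a dictionary access.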
Further, in order to increase user participation and make animation production more personalized, in an embodiment of the disclosure the method may also comprise the following steps:
Step A: Before the file request containing the identification information of one or more preset animation files is sent to the predetermined server, obtain the user's historical web browsing information via the intelligent terminal. For example, the historical browsing information may be animation information the user has browsed in the past, such as animation pictures or video information.
Step B: Determine the animation type preferred by the user from the historical browsing information, and query a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information. The mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types and can be established in advance. The animation types can be user-defined, for example comic story, military, or historical story. Determining the preferred animation type from the historical browsing information can specifically be done as follows: obtain all the animation information the user has browsed within a certain period of time (such as one month), count the number or frequency of occurrences of the animation information of each type, and take the type whose animation information occurs most often as the animation type preferred by the user.
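Steps A and B amount to a frequency count over the browsing history followed by a table lookup. A minimal sketch, in which the type labels and the contents of the mapping table are invented for illustration:

```python
from collections import Counter

# Hypothetical pre-established mapping table: animation type -> preset-file IDs.
ID_MAPPING = {
    "comic-story": ["anim-001", "anim-007"],
    "military": ["anim-003"],
    "history-story": ["anim-005"],
}

def preferred_animation_type(browsed_types):
    """browsed_types: the animation type of each item the user browsed within
    the chosen window (e.g. the last month). The most frequent type wins."""
    return Counter(browsed_types).most_common(1)[0][0]

history = ["military", "comic-story", "comic-story", "history-story", "comic-story"]
preferred = preferred_animation_type(history)
print(preferred, ID_MAPPING[preferred])  # comic-story ['anim-001', 'anim-007']
```

The IDs obtained from the table are what step S101 then carries in the file request.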
In step S102, multiple pieces of scene information and audio resource data of the animation to be generated are obtained from the one or more preset animation files.
For example, the multiple pieces of scene information can include different background pictures, such as static or dynamic images. A preset animation file can consist, in order, of a file header, a file body, and a file tail: the file header provides the description information of the animation to be generated, the file body stores a scene index table and the scene description information of all the scenes, and the file tail contains a resource index table and all the resource data. The file header, file body, and file tail of the preset animation file can thus be parsed to obtain the multiple pieces of scene information and the audio resource data of the animation to be generated.
In step S103, the scene categories of the scenes described by the multiple pieces of scene information are parsed and judged, and different audio resource data corresponding to the different scene categories are automatically matched.
For example, in an embodiment of the disclosure, each piece of scene information may include the background picture of the corresponding scene; accordingly, the step of parsing and judging the scene categories of the scenes described by the multiple pieces of scene information comprises: judging and determining the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
Specifically, image recognition can be performed on the background picture, or the different scene categories can be determined from the scene description information in the file body; the scene categories can include, among others, the buildings, grassland, and forest in the virtual picture.
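A toy stand-in for the classification step, shown only to make the data flow concrete — it guesses the category from the background picture's file name, whereas the described method would run image recognition on the picture itself or consult the scene description information in the file body:

```python
def classify_scene(scene_info):
    """Illustrative scene classifier: match the background picture's file
    name against known category keywords (building, grassland, forest)."""
    name = scene_info["background"].lower()
    for category in ("building", "grassland", "forest"):
        if category in name:
            return category
    return "unknown"

print(classify_scene({"background": "Forest_night.png"}))  # forest
```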
Further, in an embodiment of the disclosure, the step of automatically matching different audio resource data corresponding to the different scene categories can comprise: looking up a preset relation table between the different scene categories and their corresponding audio, and determining the different audio resource data automatically matched to the different scene categories. The relation table can be set up in advance and subsequently updated or changed either automatically or manually. Different scene categories correspond to different audio; for example, the respective audio differs between building, grassland, and forest scenes.
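The relation-table lookup is essentially a dictionary access. A minimal sketch, with invented table contents and file names:

```python
# Hypothetical preset relation table between scene categories and audio;
# as described above, it could later be updated automatically or manually.
SCENE_AUDIO = {
    "building": "city_ambience.ogg",
    "grassland": "wind_and_birds.ogg",
    "forest": "rustling_leaves.ogg",
}

def match_audio(scene_categories):
    """Automatically match each judged scene category to its audio resource;
    categories absent from the table map to None."""
    return {cat: SCENE_AUDIO.get(cat) for cat in scene_categories}

matched = match_audio(["forest", "building"])
print(matched["forest"])  # rustling_leaves.ogg
```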
On the basis of the above embodiments, in other embodiments of the disclosure the method can also comprise the following step: after the audio resource data have been automatically matched, adjust the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
Specifically, once the audio resource data corresponding to each scene have been determined, the playback parameters of the audio resource data are fine-tuned according to the different local features of the background picture in that scene, for example by increasing or decreasing the volume or applying optimizing sound effects. Even within a single virtual scene, specific local details can differ: on a grassland, for instance, the sound an object (such as a character) hears from far away differs from the sound heard nearby, and sound heard indoors differs from sound heard outdoors. The audio data therefore need further precise optimization within a single scene, so that the sound in the virtual scene better matches the real physical effects of nature and the fidelity of the animation is improved.
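A crude model of this fine-tuning step: attenuate volume with the distance between listener and source, and raise reverberation for indoor regions. The attenuation formula and all constants are illustrative assumptions, not values from the patent.

```python
def adjust_playback(base_volume, distance, indoors):
    """Adjust playback parameters from local scene features: farther sources
    play quieter, and indoor regions get more reverberation than outdoor ones.
    (Constants are purely illustrative.)"""
    volume = base_volume / (1.0 + 0.1 * distance)  # simple distance falloff
    reverb = 0.6 if indoors else 0.2
    return {"volume": round(volume, 3), "reverb": reverb}

print(adjust_playback(1.0, distance=5.0, indoors=True))
# {'volume': 0.667, 'reverb': 0.6}
```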
In summary, in this embodiment audio resource data can be automatically matched by scene category while the animation is being made, which improves the efficiency of audio data processing when producing and generating an animation, improves the production efficiency of the whole animation to some extent, and saves labor cost; the audio data can also be optimized to improve the fidelity of the produced animation.
It should be noted that although the steps of the method in the disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all of the steps shown must be performed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be merged into one, and/or one step may be decomposed into multiple steps; moreover, these steps may be performed synchronously or asynchronously, for example in multiple modules, processes, or threads.
With reference to Fig. 2, an embodiment of the disclosure also provides a dynamic cartoon generation system based on an intelligent terminal. The system 100 can include a data transceiving module 101, a data processing module 102, and a data matching module 103, wherein:
The data transceiving module 101 is configured to send a file request containing the identification information of one or more preset animation files to a predetermined server, and to receive the one or more preset animation files returned by the predetermined server in response to the file request;
The data processing module 102 is configured to obtain multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files;
The data matching module 103 is configured to parse and judge the scene categories of the scenes described by the multiple pieces of scene information, and to automatically match different audio resource data corresponding to the different scene categories.
In an embodiment of the disclosure, the system can also include an information obtaining module and an information determination module (not shown). The information obtaining module is configured to obtain the user's historical web browsing information via the intelligent terminal before the file request containing the identification information of one or more preset animation files is sent to the predetermined server. The information determination module is configured to determine the animation type preferred by the user from the historical browsing information and to query a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information, wherein the mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types.
In an embodiment of the disclosure, each piece of scene information includes the background picture of the corresponding scene, and the data matching module 103 is specifically configured to judge and determine the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
In an embodiment of the disclosure, the data matching module 103 is specifically configured to look up a preset relation table between the different scene categories and their corresponding audio, and to determine the different audio resource data automatically matched to the different scene categories.
In an embodiment of the disclosure, the system can also include a parameter adjustment module (not shown), configured to adjust, after the audio resource data have been automatically matched, the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
It should be noted that, for details of the above system embodiments, reference may be made to the detailed description of the foregoing method embodiments, which is not repeated here.
The functional modules in the above embodiments of the disclosure may be integrated to form a single independent part, may each exist separately, or two or more of them may be integrated to form an independent part. If the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the disclosure, in essence, or the part that contributes over the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be an intelligent terminal, a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium can include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
In short, other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any modifications, uses, or adaptations of the disclosure that follow its general principles and include common general knowledge or conventional techniques in the art not disclosed by the disclosure. The description and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the appended claims.
Claims (10)
1. A dynamic cartoon generation method based on an intelligent terminal, characterized in that the method comprises:
sending a file request containing the identification information of one or more preset animation files to a predetermined server, and receiving the one or more preset animation files returned by the predetermined server in response to the file request;
obtaining multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files; and
parsing and judging the scene categories of the scenes described by the multiple pieces of scene information, and automatically matching different audio resource data corresponding to the different scene categories.
2. The dynamic cartoon generation method based on an intelligent terminal according to claim 1, characterized in that the method further comprises:
before the file request containing the identification information of one or more preset animation files is sent to the predetermined server, obtaining the user's historical web browsing information via the intelligent terminal; and
determining the animation type preferred by the user from the historical browsing information, and querying a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information;
wherein the mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types.
3. The dynamic cartoon generation method based on an intelligent terminal according to claim 2, characterized in that each piece of scene information includes the background picture of the corresponding scene, and the step of parsing and judging the scene categories of the scenes described by the multiple pieces of scene information comprises:
judging and determining the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
4. The dynamic cartoon generation method based on an intelligent terminal according to claim 3, characterized in that the step of automatically matching different audio resource data corresponding to the different scene categories comprises:
looking up a preset relation table between the different scene categories and their corresponding audio, and determining the different audio resource data automatically matched to the different scene categories.
5. The dynamic cartoon generation method based on an intelligent terminal according to claim 4, characterized in that the method further comprises:
after the audio resource data have been automatically matched, adjusting the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
6. A dynamic cartoon generation system based on an intelligent terminal, characterized in that the system comprises:
a data transceiving module, configured to send a file request containing the identification information of one or more preset animation files to a predetermined server, and to receive the one or more preset animation files returned by the predetermined server in response to the file request;
a data processing module, configured to obtain multiple pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files; and
a data matching module, configured to parse and judge the scene categories of the scenes described by the multiple pieces of scene information, and to automatically match different audio resource data corresponding to the different scene categories.
7. The dynamic cartoon generation system based on an intelligent terminal according to claim 6, characterized in that the system further comprises:
an information obtaining module, configured to obtain the user's historical web browsing information via the intelligent terminal before the file request containing the identification information of one or more preset animation files is sent to the predetermined server; and
an information determination module, configured to determine the animation type preferred by the user from the historical browsing information and to query a pre-stored mapping table according to the determined animation type to obtain the corresponding identification information, wherein the mapping table stores the identification information of the one or more preset animation files corresponding to each of the different animation types.
8. The dynamic cartoon generation system based on an intelligent terminal according to claim 7, characterized in that each piece of scene information includes the background picture of the corresponding scene, and the data matching module is specifically configured to judge and determine the scene category of each scene according to the background picture of the scene described by each parsed piece of scene information.
9. The dynamic cartoon generation system based on an intelligent terminal according to claim 8, characterized in that the data matching module is specifically configured to look up a preset relation table between the different scene categories and their corresponding audio, and to determine the different audio resource data automatically matched to the different scene categories.
10. The dynamic cartoon generation system based on an intelligent terminal according to claim 9, characterized in that the system further comprises:
a parameter adjustment module, configured to adjust, after the audio resource data have been automatically matched, the playback parameters of the corresponding audio resource data according to the different information characterized by at least part of the picture region in the background picture corresponding to each scene.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711132850.7A CN107885855B (en) | 2017-11-15 | 2017-11-15 | Dynamic cartoon generation method and system based on intelligent terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107885855A true CN107885855A (en) | 2018-04-06 |
CN107885855B CN107885855B (en) | 2021-07-13 |
Family
ID=61777467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711132850.7A Active CN107885855B (en) | 2017-11-15 | 2017-11-15 | Dynamic cartoon generation method and system based on intelligent terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107885855B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101138244A (en) * | 2005-01-07 | 2008-03-05 | 韩国电子通信研究院 | Apparatus and method for providing adaptive broadcast service using classification schemes for usage environment description |
CN102314917A (en) * | 2010-07-01 | 2012-01-11 | 北京中星微电子有限公司 | Method and device for playing video and audio files |
CN103309670A (en) * | 2013-06-20 | 2013-09-18 | 亿览在线网络技术(北京)有限公司 | Implementation method and implementation device for skin of music player |
CN103402121A (en) * | 2013-06-07 | 2013-11-20 | 深圳创维数字技术股份有限公司 | Method, equipment and system for adjusting sound effect |
CN103986980A (en) * | 2014-05-30 | 2014-08-13 | 中国传媒大学 | Hypermedia editing and producing method and system |
CN104394331A (en) * | 2014-12-05 | 2015-03-04 | 厦门美图之家科技有限公司 | Video processing method for adding matching sound effect in video picture |
CN104618445A (en) * | 2014-12-30 | 2015-05-13 | 北京奇虎科技有限公司 | Method and device for arranging files based on cloud storage space |
CN105069104A (en) * | 2015-05-22 | 2015-11-18 | 福建中科亚创通讯科技有限责任公司 | Dynamic cartoon generation method and system |
CN105488044A (en) * | 2014-09-16 | 2016-04-13 | 华为技术有限公司 | Data processing method and device |
CN105872790A (en) * | 2015-12-02 | 2016-08-17 | 乐视网信息技术(北京)股份有限公司 | Method and system for recommending audio/video program |
CN106095387A (en) * | 2016-06-16 | 2016-11-09 | 广东欧珀移动通信有限公司 | The audio method to set up of a kind of terminal and terminal |
CN107169430A (en) * | 2017-05-02 | 2017-09-15 | 哈尔滨工业大学深圳研究生院 | Reading environment audio strengthening system and method based on image procossing semantic analysis |
Non-Patent Citations (3)
Title |
---|
ORTEGA R A et al.: "The use of digital facial animation to present anesthesia history", Bulletin of Anesthesia History *
S. CHU et al.: "Environmental Sound Recognition With Time–Frequency Audio Features", IEEE Transactions on Audio, Speech, and Language Processing *
VICLEE108: "Android视频开发基础(一)" [Android Video Development Basics (Part 1)], HTTPS://BLOG.CSDN.NET/GOODLIXUEYONG/ARTICLE/DETAILS/ *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259181A (en) * | 2018-12-03 | 2020-06-09 | 连尚(新昌)网络科技有限公司 | Method and equipment for displaying information and providing information |
CN111259181B (en) * | 2018-12-03 | 2024-04-12 | 连尚(新昌)网络科技有限公司 | Method and device for displaying information and providing information |
CN109994000A (en) * | 2019-03-28 | 2019-07-09 | 掌阅科技股份有限公司 | A kind of reading partner method, electronic equipment and computer storage medium |
CN109994000B (en) * | 2019-03-28 | 2021-10-19 | 掌阅科技股份有限公司 | Reading accompanying method, electronic equipment and computer storage medium |
CN111951357A (en) * | 2020-08-11 | 2020-11-17 | 深圳市前海手绘科技文化有限公司 | Application method of sound material in hand-drawn animation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109308357B (en) | Method, device and equipment for obtaining answer information | |
US11153430B2 (en) | Information presentation method and device | |
CN106126524B (en) | Information pushing method and device | |
CN107885855A (en) | Dynamic caricature generation method and system based on intelligent terminal | |
WO2013148724A1 (en) | Content customization | |
CN105339933A (en) | News results through query expansion | |
CN109002338A (en) | Page rendering, page finishing information processing method and device | |
US9639633B2 (en) | Providing information services related to multimodal inputs | |
US11468883B2 (en) | Messaging system with trend analysis of content | |
CN112131472A (en) | Information recommendation method and device, electronic equipment and storage medium | |
CN109800036A (en) | Information flow page display method, system, calculates equipment and storage medium at device | |
CN105205695A (en) | Internet-based advertisement interactive system and method | |
CN103546623A (en) | Method, device and equipment for sending voice information and text description information thereof | |
CN106919703A (en) | Film information searching method and device | |
CN105989018B (en) | Label generation method and label generation device | |
CN104038637B (en) | Ringtone playing method and device and mobile terminal | |
CN110990632B (en) | Video processing method and device | |
CN104077320B (en) | method and device for generating information to be issued | |
CN104244112B (en) | A kind of multi-media processing method, device and server | |
CN101499178A (en) | System and method for optimizing natural language descriptions of objects in a virtual environment | |
CN116738250A (en) | Prompt text expansion method, device, electronic equipment and storage medium | |
CN107992493A (en) | The method that chat topic is found based on two people or more people | |
CN107402994B (en) | Method and device for classifying multi-group hierarchical division | |
CN106294779B (en) | Personal brand label generation method and system | |
KR101628956B1 (en) | Method of generating emotion expressing text and apparatus thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||