CN107885855B - Dynamic cartoon generation method and system based on intelligent terminal - Google Patents


Info

Publication number
CN107885855B
CN107885855B (application CN201711132850.7A)
Authority
CN
China
Prior art keywords
scene
information
preset
animation
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711132850.7A
Other languages
Chinese (zh)
Other versions
CN107885855A (en)
Inventor
李家志
林潇南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou Zhangyitong Information Technology Co ltd
Original Assignee
Fuzhou Zhangyitong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou Zhangyitong Information Technology Co ltd filed Critical Fuzhou Zhangyitong Information Technology Co ltd
Priority to CN201711132850.7A
Publication of CN107885855A
Application granted
Publication of CN107885855B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/435: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/61: Scene description

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure belongs to the field of animation design and relates in particular to a dynamic cartoon generation method and system based on an intelligent terminal. The method includes the following steps: sending a file request containing identification information of one or more preset animation files to a preset server, and receiving the one or more preset animation files returned by the preset server in response to the file request; obtaining a plurality of pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files; and analyzing the scene information to determine the scene category of each described scene, then automatically matching the corresponding audio resource data according to the different scene categories. The method and system can improve the efficiency of audio data processing during animation production.

Description

Dynamic cartoon generation method and system based on intelligent terminal
Technical Field
The disclosure relates to the technical field of animation design, in particular to a dynamic cartoon generating method and a dynamic cartoon generating system based on an intelligent terminal.
Background
With social progress and technological development, people's economic level has risen and their cultural life has grown richer, so cartoons play an ever larger part in the lives of young people.
At present, users increasingly want to take an active part in creating animations of all kinds. The animation creation process, however, is complicated and demands much of its creators, so the efficiency of animation creation remains low; these problems urgently need improvement.
Disclosure of Invention
The present disclosure is directed to a dynamic cartoon generating method and a dynamic cartoon generating system based on an intelligent terminal, so as to overcome one or more of the above problems at least to some extent.
According to a first aspect of the embodiments of the present disclosure, a method for generating a dynamic cartoon based on an intelligent terminal is provided, where the method includes:
sending a file request containing identification information of one or more preset animation files to a preset server, and receiving one or more preset animation files returned by the preset server in response to the file request;
obtaining a plurality of scene information and audio resource data of the animation to be generated according to the one or more preset animation files;
and analyzing and judging the scene types of the scenes described by the scene information, and automatically matching corresponding different audio resource data according to different scene types.
In an embodiment of the present disclosure, the method further includes:
before a file request containing identification information of one or more preset animation files is sent to a preset server, acquiring historical network browsing information of a user based on the intelligent terminal;
determining the preferred cartoon type of the user based on the historical network browsing information, and inquiring a prestored mapping table according to the determined cartoon type to obtain corresponding identification information;
the mapping table stores identification information of one or more preset animation files corresponding to different animation types.
In an embodiment of the present disclosure, each of the scene information includes a background picture of a corresponding scene; the step of analyzing and judging the scene category of the scene described by the plurality of scene information comprises the following steps:
and determining the corresponding scene category of each scene according to the background picture of the scene described by each scene information obtained by analysis.
In an embodiment of the present disclosure, the step of automatically matching corresponding different audio resource data according to different scene categories includes:
and according to a preset relation table between different scene categories and corresponding audios, searching and determining different audio resource data corresponding to different scene categories to be automatically matched.
In an embodiment of the present disclosure, the method further includes:
after the audio resource data are automatically matched, the playing parameters of the corresponding audio resource data are adjusted according to different information represented by at least part of picture areas in the background picture corresponding to each scene.
According to a second aspect of the embodiments of the present disclosure, there is provided a dynamic cartoon generating system based on an intelligent terminal, the system including:
the data transceiving module is used for sending a file request containing identification information of one or more preset animation files to a preset server and receiving one or more preset animation files returned by the preset server in response to the file request;
the data processing module is used for obtaining a plurality of scene information and audio resource data of the animation to be generated according to the one or more preset animation files;
and the data matching module is used for analyzing and judging the scene types of the scenes described by the scene information and automatically matching corresponding different audio resource data according to different scene types.
In an embodiment of the present disclosure, the system further includes:
the information acquisition module is used for acquiring historical network browsing information of a user based on the intelligent terminal before sending a file request containing identification information of one or more preset cartoon files to a preset server;
the information determining module is used for determining the preferred cartoon type of the user based on the historical network browsing information and inquiring a prestored mapping table according to the determined cartoon type to obtain the corresponding identification information; the mapping table stores identification information of one or more preset animation files corresponding to different animation types.
In an embodiment of the present disclosure, each of the scene information includes a background picture of a corresponding scene; the data matching module is specifically configured to determine a scene type of each corresponding scene according to the judgment of the background picture of the scene described by each piece of scene information obtained through analysis.
In an embodiment of the present disclosure, the data matching module is specifically configured to search and determine different audio resource data corresponding to different scene categories according to a preset relationship table between the different scene categories and corresponding audios, so as to automatically match the different audio resource data corresponding to the different scene categories.
In an embodiment of the present disclosure, the system further includes:
and the parameter adjusting module is used for adjusting the playing parameters of the corresponding audio resource data according to different information represented by at least part of picture areas in the background picture corresponding to each scene after the audio resource data are automatically matched.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, a file request containing identification information of one or more preset animation files is sent to a preset server, and the one or more preset animation files returned by the preset server in response to the file request are received; a plurality of pieces of scene information and audio resource data of the animation to be generated are obtained from the one or more preset animation files; and the scene categories of the scenes described by the scene information are analyzed and determined, with corresponding different audio resource data automatically matched according to the different scene categories. Audio resource data can thus be matched automatically by scene category while the animation is produced, which improves the efficiency of audio data processing during animation production, raises the production efficiency of the whole animation to a certain degree, and saves labor cost.
Drawings
Fig. 1 shows a flowchart of a method for generating a dynamic cartoon based on an intelligent terminal in an exemplary embodiment of the present disclosure;
fig. 2 shows a schematic diagram of a dynamic cartoon generation system based on an intelligent terminal according to an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different processor devices and/or microcontroller devices.
The present exemplary embodiment provides a method for generating a dynamic cartoon based on an intelligent terminal; the method may be applied, in whole or in part, to a smartphone, an iPad, and the like. As shown in fig. 1, the method may include the following steps S101 to S103:
step S101: the method comprises the steps of sending a file request containing identification information of one or more preset animation files to a preset server, and receiving one or more preset animation files returned by the preset server in response to the file request.
Step S102: and obtaining a plurality of scene information and audio resource data of the animation to be generated according to the one or more preset animation files.
Step S103: and analyzing and judging the scene types of the scenes described by the scene information, and automatically matching corresponding different audio resource data according to different scene types.
In this embodiment, audio resource data can be automatically matched according to scene category while the animation is produced, which improves the efficiency of audio data processing during animation production, raises the production efficiency of the whole animation to a certain extent, and saves labor cost.
Specifically, in step S101, a file request including identification information of one or more preset animation files is sent to a preset server, and the one or more preset animation files returned by the preset server in response to the file request are received.
For example, when an animation is produced, the one or more preset animation files it requires can be stored in advance on a preset server, such as a remote animation processing server. When the animation is produced on the terminal, only the identification information (such as a unique ID) of the required preset animation files needs to be carried in a file request sent to the preset server.
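The file request can be sketched as follows; the request body layout, the field name `preset_animation_ids`, and the server endpoint are illustrative assumptions, since the disclosure does not fix a wire format:

```python
import json

def build_file_request(file_ids):
    """Build the body of a file request carrying the identification
    information (unique IDs) of the required preset animation files."""
    return json.dumps({"preset_animation_ids": list(file_ids)})

body = build_file_request(["anim-001", "anim-007"])
# The request body would then be sent to the preset server, e.g.
# (hypothetical endpoint):
#   urllib.request.urlopen("https://animation-server.example/files",
#                          data=body.encode("utf-8"))
```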
Further, in order to improve user engagement and make the animation production more personalized, in an embodiment of the present disclosure, the method may further include the following steps:
step A, before a file request containing identification information of one or more preset animation files is sent to a preset server, historical network browsing information of a user is obtained based on the intelligent terminal. Illustratively, the historical network browsing information may be animation information historically browsed by the user, such as animation pictures, videos, and the like.
Step B, determining the user's preferred cartoon type based on the historical network browsing information, and querying a prestored mapping table with the determined cartoon type to obtain the corresponding identification information; the mapping table stores the identification information of one or more preset animation files for each animation type. The mapping table may be established in advance, and the animation types may be customized, such as cartoon story, military, historical story, and the like. Determining the user's preferred animation type from the historical web browsing information may specifically be done as follows: obtain all the animation information the user browsed within a certain period (such as one month), count the number or frequency of occurrences of each type of animation information, and select the type with the highest count or frequency as the user's preferred animation type.
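Steps A and B can be sketched as follows; the mapping-table contents, type names, and file IDs are illustrative assumptions:

```python
from collections import Counter

# Hypothetical prestored mapping table: animation type -> identification
# information of the preset animation files for that type.
MAPPING_TABLE = {
    "cartoon story": ["anim-101", "anim-102"],
    "military": ["anim-201"],
    "historical story": ["anim-301", "anim-302"],
}

def preferred_type(browsed_types):
    """Return the animation type occurring most often in the user's
    historical browsing information."""
    return Counter(browsed_types).most_common(1)[0][0]

def lookup_file_ids(animation_type):
    """Query the prestored mapping table for that type's file IDs."""
    return MAPPING_TABLE.get(animation_type, [])

history = ["military", "cartoon story", "military", "historical story"]
ids = lookup_file_ids(preferred_type(history))
```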
In step S102, a plurality of scene information and audio resource data of the animation to be generated are obtained according to the one or more preset animation files.
For example, the plurality of pieces of scene information may include different background pictures, either static or dynamic. A preset animation file may consist, in order, of a file header, a file body, and a file tail: the file header provides description information of the animation to be generated, the file body stores a scene index table and the scene description information of all scenes, and the file tail contains a resource index table and all resource data. The file header, file body, and file tail of the preset animation file can therefore be parsed to obtain the plurality of pieces of scene information and the audio resource data of the animation to be generated.
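A minimal parsing sketch, assuming a JSON serialization of the header/body/tail layout; the disclosure does not specify the on-disk encoding, so the key names and the `audio-` ID prefix are assumptions:

```python
import json

def parse_preset_file(raw):
    """Parse a preset animation file into scene information and
    audio resource data."""
    doc = json.loads(raw)
    body = doc["body"]   # scene index table + scene description information
    tail = doc["tail"]   # resource index table + all resource data
    scenes = [body["scenes"][i] for i in body["scene_index"]]
    audio = {rid: tail["resources"][rid]
             for rid in tail["resource_index"] if rid.startswith("audio")}
    return scenes, audio

raw = json.dumps({
    "header": {"title": "demo animation"},
    "body": {"scene_index": [0, 1],
             "scenes": [{"background": "grassland.png"},
                        {"background": "building.png"}]},
    "tail": {"resource_index": ["audio-wind", "img-1"],
             "resources": {"audio-wind": "wind.mp3", "img-1": "bg.png"}},
})
scenes, audio = parse_preset_file(raw)
```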
In step S103, the scene categories of the scenes described by the plurality of pieces of scene information are analyzed and determined, and corresponding different audio resource data are automatically matched according to different scene categories.
For example, in an embodiment of the present disclosure, each of the scene information may include a background picture of the corresponding scene; correspondingly, the step of analyzing and determining the scene category of the scene described by the plurality of scene information may include: and determining the corresponding scene category of each scene according to the background picture of the scene described by each scene information obtained by analysis.
Specifically, image recognition may be performed on the background picture, or different scene categories may be determined based on scene description information in the file body, where the scene categories may include buildings, grasslands, forests, and the like in the virtual picture.
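A hedged sketch of the category judgment: here the category is inferred from the background picture's file name as a stand-in for real image recognition, and the keyword table is an illustrative assumption:

```python
# Keyword -> scene category; a stand-in for image recognition or for
# reading the scene description information in the file body.
KEYWORDS = {"building": "building", "grass": "grassland", "forest": "forest"}

def scene_category(background_name):
    """Judge the scene category of a scene from its background picture."""
    name = background_name.lower()
    for key, category in KEYWORDS.items():
        if key in name:
            return category
    return "unknown"
```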
Further, in an embodiment of the present disclosure, the step of automatically matching corresponding different audio resource data according to different scene categories may include: looking up a preset relation table between scene categories and their corresponding audio, and thereby determining the different audio resource data to be automatically matched to each scene category. The relation table can be preset and later updated or modified, either automatically or manually. The audio differs between scene categories: a building scene, a grassland scene, and a forest scene each have their own audio.
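The relation-table lookup can be sketched as follows; the table contents and audio file names are illustrative assumptions:

```python
# Hypothetical preset relation table: scene category -> audio resource.
RELATION_TABLE = {
    "building": "city-ambience.mp3",
    "grassland": "wind-and-insects.mp3",
    "forest": "birdsong.mp3",
}

def match_audio(scene_categories):
    """Automatically match each scene category to its audio resource
    data by searching the preset relation table."""
    return {cat: RELATION_TABLE[cat]
            for cat in scene_categories if cat in RELATION_TABLE}

matched = match_audio(["grassland", "forest"])
```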
On the basis of the above embodiments, in some other embodiments of the present disclosure, the method may further include the following steps: after the audio resource data are automatically matched, the playing parameters of the corresponding audio resource data are adjusted according to different information represented by at least part of picture areas in the background picture corresponding to each scene.
Specifically, after the audio resource data for each scene is determined, the playing parameters of that audio are adjusted more finely according to local features of the scene's background picture, for example raising or lowering the volume or applying sound-optimization effects. Even within one virtual scene the local details differ: on a grassland, a sound heard far from an object (such as a character) differs from the same sound heard nearby, and in a building the sound heard indoors differs from that heard outdoors. The audio data therefore needs further precise optimization within each scene, so that sound in the virtual scene better matches the physical behavior of sound in the natural world and the fidelity of the animation improves.
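A hedged sketch of such fine adjustment: volume falls off with the character's distance from the sound source and is damped indoors. The attenuation model and the damping factor are illustrative assumptions, not taken from the disclosure:

```python
def adjust_volume(base_volume, distance, indoors=False):
    """Adjust an audio playing parameter (volume) for local scene detail:
    scale down with distance from the source and damp the sound indoors."""
    volume = base_volume / (1.0 + distance)  # simple inverse falloff
    if indoors:
        volume *= 0.5  # illustrative indoor damping factor
    return round(volume, 3)

near = adjust_volume(1.0, 0.0)            # character next to the source
far = adjust_volume(1.0, 3.0)             # character far away
inside = adjust_volume(1.0, 1.0, True)    # same distance, heard indoors
```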
In summary, in this embodiment audio resource data can be automatically matched according to scene category while the animation is produced, which improves the efficiency of audio data processing during animation production, raises the production efficiency of the whole animation to a certain extent, and saves labor cost; at the same time the audio data can be optimized to improve the fidelity of the produced animation.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc. Additionally, it will also be readily appreciated that the steps may be performed synchronously or asynchronously, e.g., among multiple modules/processes/threads.
Referring to fig. 2, an embodiment of the present disclosure further provides a dynamic cartoon generating system based on an intelligent terminal, where the system 100 may include a data transceiver module 101, a data processing module 102, and a data matching module 103; wherein:
the data transceiver module 101 is configured to send a file request including identification information of one or more preset animation files to a preset server, and receive one or more preset animation files returned by the preset server in response to the file request;
the data processing module 102 is configured to obtain a plurality of pieces of scene information and audio resource data of the animation to be generated according to the one or more preset animation files;
the data matching module 103 is configured to analyze and determine scene types of scenes described by the multiple pieces of scene information, and automatically match corresponding different audio resource data according to different scene types.
In the embodiment of the present disclosure, the system may further include an information obtaining module and an information determining module (not shown). The information acquisition module is used for acquiring historical network browsing information of a user based on the intelligent terminal before sending a file request containing identification information of one or more preset animation files to a preset server; the information determining module is used for determining the preferred cartoon type of the user based on the historical network browsing information and inquiring a prestored mapping table according to the determined cartoon type to obtain the corresponding identification information; the mapping table stores identification information of one or more preset animation files corresponding to different animation types.
In an embodiment of the present disclosure, each of the scene information includes a background picture of a corresponding scene; the data matching module 103 is specifically configured to determine a scene category of each corresponding scene according to the background picture of the scene described by each piece of scene information obtained through analysis.
In an embodiment of the present disclosure, the data matching module 103 is specifically configured to search and determine different audio resource data corresponding to different scene categories to be automatically matched according to a preset relationship table between different scene categories and corresponding audios.
In an embodiment of the present disclosure, the system may further include a parameter adjustment module (not shown) configured to, after the audio resource data is automatically matched, adjust a playing parameter of the corresponding audio resource data according to different information represented by at least part of the picture area in the background picture corresponding to each scene.
It should be noted that, for the above system embodiment, reference may be made to the detailed description of the foregoing method embodiment, and details are not described here again.
The functional modules in the above embodiments of the present disclosure may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a smart terminal, a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium may include: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A dynamic cartoon generation method based on an intelligent terminal is characterized by comprising the following steps:
sending a file request containing identification information of one or more preset animation files to a preset server, and receiving one or more preset animation files returned by the preset server in response to the file request;
obtaining a plurality of scene information and audio resource data of the animation to be generated according to the one or more preset animation files;
the plurality of scene information includes different background pictures;
the preset animation file sequentially comprises a file header, a file body and a file tail; the file header is used for providing description information of the animation to be generated, the file body stores a scene index table and scene description information of all scenes, and the file tail comprises a resource index table and all resource data;
analyzing the file header, the file body and the file tail of the preset animation file to obtain a plurality of pieces of scene information and audio resource data of the animation to be generated;
and analyzing and judging the scene types of the scenes described by the scene information, and automatically matching corresponding different audio resource data according to different scene types.
2. The intelligent terminal-based dynamic cartoon generation method according to claim 1, characterized in that the method further comprises:
before a file request containing identification information of one or more preset animation files is sent to a preset server, acquiring historical network browsing information of a user based on the intelligent terminal;
determining the preferred cartoon type of the user based on the historical network browsing information, and inquiring a prestored mapping table according to the determined cartoon type to obtain corresponding identification information;
the mapping table stores identification information of one or more preset animation files corresponding to different animation types.
3. The intelligent terminal-based dynamic cartoon generation method according to claim 2, wherein the plurality of scene information each include a corresponding background picture; the step of analyzing and judging the scene category of the scene described by the plurality of scene information comprises the following steps:
and determining the corresponding scene category of each scene according to the background picture of the scene described by each scene information obtained by analysis.
4. The intelligent terminal based dynamic cartoon generation method of claim 3, wherein the step of automatically matching corresponding different audio resource data according to different scene categories comprises:
and according to a preset relation table between different scene categories and corresponding audios, searching and determining different audio resource data corresponding to different scene categories to be automatically matched.
5. The intelligent terminal based dynamic cartoon generation method according to claim 4, characterized in that the method further comprises:
after the audio resource data are automatically matched, the playing parameters of the corresponding audio resource data are adjusted according to different information represented by at least part of picture areas in the background picture corresponding to each scene.
6. A dynamic cartoon generation system based on an intelligent terminal, characterized by comprising:
a data transceiving module, configured to send a file request containing identification information of one or more preset animation files to a preset server, and to receive the one or more preset animation files returned by the preset server in response to the file request;
wherein each preset animation file comprises, in order, a file header, a file body and a file tail; the file header provides description information of the animation to be generated, the file body stores a scene index table and the scene description information of all scenes, and the file tail comprises a resource index table and all resource data;
a data processing module, configured to obtain a plurality of pieces of scene information and audio resource data of the animation to be generated from the one or more preset animation files, the plurality of pieces of scene information including different background pictures;
specifically, to parse the file header, the file body and the file tail of each preset animation file to obtain the plurality of pieces of scene information and the audio resource data of the animation to be generated; and
a data matching module, configured to analyze and judge the scene category of the scene described by each piece of scene information, and to automatically match corresponding different audio resource data according to the different scene categories.
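The header/body/tail layout recited in claim 6 can be sketched as a three-section container. The patent does not specify the on-disk encoding, so this sketch assumes each section is a length-prefixed (4-byte big-endian) JSON blob; the field names (`title`, `scenes`, `resources`, etc.) are likewise hypothetical:

```python
import json
import struct

def parse_preset_file(data: bytes):
    """Parse the assumed header/body/tail layout of a preset animation
    file: three consecutive sections, each a 4-byte big-endian length
    followed by that many bytes of JSON."""
    sections = []
    offset = 0
    for _ in range(3):  # file header, file body, file tail
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        sections.append(json.loads(data[offset:offset + length]))
        offset += length
    header, body, tail = sections
    return header, body, tail

def pack_preset_file(header, body, tail) -> bytes:
    """Inverse helper: build a demo file in the same assumed layout."""
    out = b""
    for section in (header, body, tail):
        blob = json.dumps(section).encode("utf-8")
        out += struct.pack(">I", len(blob)) + blob
    return out
```

Under this layout the data processing module's "parsing" step is a single sequential read: description information from the header, the scene index table and scene descriptions from the body, and the resource index table plus resource data from the tail.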
7. The intelligent terminal based dynamic cartoon generation system of claim 6, characterized in that the system further comprises:
an information acquisition module, configured to acquire the user's historical network browsing information on the intelligent terminal before the file request containing identification information of one or more preset animation files is sent to the preset server; and
an information determination module, configured to determine the user's preferred animation type based on the historical network browsing information, and to query a prestored mapping table according to the determined animation type to obtain the corresponding identification information, wherein the mapping table stores the identification information of the one or more preset animation files corresponding to different animation types.
8. The intelligent terminal based dynamic cartoon generation system of claim 7, wherein each piece of scene information includes a corresponding background picture, and the data matching module is specifically configured to determine the scene category of each scene by judging the background picture of the scene described by each piece of scene information obtained through parsing.
9. The intelligent terminal based dynamic cartoon generation system of claim 8, wherein the data matching module is specifically configured to search a preset relation table between the different scene categories and their corresponding audio, and thereby to determine the different audio resource data corresponding to the different scene categories.
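Claim 8 leaves open how a scene category is judged from a background picture. A toy heuristic on the picture's mean colour illustrates the idea; the categories and thresholds below are invented for illustration and are not taught by the patent:

```python
def classify_scene(avg_rgb):
    """Toy scene-category judgment from a background picture's mean
    (R, G, B) colour. Blue-dominant pictures are read as sky or
    (when dark) night, green-dominant as forest, otherwise indoor."""
    r, g, b = avg_rgb
    if b > r and b > g:
        return "night" if (r + g + b) / 3 < 80 else "sky"
    if g > r and g > b:
        return "forest"
    return "indoor"
```

A production system would more likely use an image classifier, but the module's contract is the same: background picture in, scene category out, which the data matching module then feeds into the relation-table lookup of claim 9.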
10. The intelligent terminal based dynamic cartoon generation system of claim 9, characterized in that the system further comprises:
a parameter adjusting module, configured to adjust, after the audio resource data are automatically matched, the playing parameters of the corresponding audio resource data according to the information represented by at least part of the picture area in the background picture corresponding to each scene.
CN201711132850.7A 2017-11-15 2017-11-15 Dynamic cartoon generation method and system based on intelligent terminal Active CN107885855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711132850.7A CN107885855B (en) 2017-11-15 2017-11-15 Dynamic cartoon generation method and system based on intelligent terminal

Publications (2)

Publication Number Publication Date
CN107885855A CN107885855A (en) 2018-04-06
CN107885855B true CN107885855B (en) 2021-07-13

Family

ID=61777467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711132850.7A Active CN107885855B (en) 2017-11-15 2017-11-15 Dynamic cartoon generation method and system based on intelligent terminal

Country Status (1)

Country Link
CN (1) CN107885855B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259181B (en) * 2018-12-03 2024-04-12 连尚(新昌)网络科技有限公司 Method and device for displaying information and providing information
CN109994000B (en) * 2019-03-28 2021-10-19 掌阅科技股份有限公司 Reading accompanying method, electronic equipment and computer storage medium
CN111951357A (en) * 2020-08-11 2020-11-17 深圳市前海手绘科技文化有限公司 Application method of sound material in hand-drawn animation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314917A (en) * 2010-07-01 2012-01-11 北京中星微电子有限公司 Method and device for playing video and audio files
CN103402121A (en) * 2013-06-07 2013-11-20 深圳创维数字技术股份有限公司 Method, equipment and system for adjusting sound effect
CN104394331A (en) * 2014-12-05 2015-03-04 厦门美图之家科技有限公司 Video processing method for adding matching sound effect in video picture
CN104618445A (en) * 2014-12-30 2015-05-13 北京奇虎科技有限公司 Method and device for arranging files based on cloud storage space
CN105488044A (en) * 2014-09-16 2016-04-13 华为技术有限公司 Data processing method and device
CN105872790A (en) * 2015-12-02 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and system for recommending audio/video program
CN107169430A (en) * 2017-05-02 2017-09-15 哈尔滨工业大学深圳研究生院 Reading environment audio strengthening system and method based on image procossing semantic analysis

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100742613B1 (en) * 2005-01-07 2007-07-25 한국전자통신연구원 Apparatus and Method for Providing Adaptive Broadcast Service using Classification Schemes for Usage Environment Description
CN103309670B (en) * 2013-06-20 2018-05-15 亿览在线网络技术(北京)有限公司 The implementation method and device of a kind of skin of music player
CN103986980B (en) * 2014-05-30 2017-06-13 中国传媒大学 A kind of hypermedia editing method and system
CN105069104B (en) * 2015-05-22 2018-10-23 福建中科亚创动漫科技股份有限公司 A kind of generation method and system of dynamic caricature
CN106095387B (en) * 2016-06-16 2019-06-25 Oppo广东移动通信有限公司 A kind of the audio setting method and terminal of terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Android Video Development Basics (Part 1); viclee108; 《https://blog.csdn.net/goodlixueyong/article/details/》; 20170316; p. 1 *
The use of digital facial animation to present anesthesia history; Ortega R A et al.; 《Bulletin of Anesthesia History》; 20170131; pp. 4-6 *


Similar Documents

Publication Publication Date Title
CN107832437B (en) Audio/video pushing method, device, equipment and storage medium
CN106982381B (en) Home page recommendation processing method and device
CN107885855B (en) Dynamic cartoon generation method and system based on intelligent terminal
CN107645686A (en) Information processing method, device, terminal device and storage medium
JP2018026085A (en) Music recommendation method and music recommendation device
US10885107B2 (en) Music recommendation method and apparatus
JP2016524768A (en) Multimedia resource recommendation method, apparatus, program, and recording medium
CN105718566B (en) Intelligent music recommendation system
CN106951527B (en) Song recommendation method and device
CN106302471B (en) Method and device for recommending virtual gift
WO2022247894A1 (en) Service configuration method and apparatus for live broadcast room, and device and medium
WO2019233361A1 (en) Method and device for adjusting volume of music
CN104091596A (en) Music identifying method, system and device
CN110855487B (en) Network user similarity management method, device and storage medium
CN105426392B (en) Collaborative filtering recommendation method and system
CN112118472A (en) Method and apparatus for playing multimedia
CN113688310A (en) Content recommendation method, device, equipment and storage medium
CN112861963A (en) Method, device and storage medium for training entity feature extraction model
CN110263318B (en) Entity name processing method and device, computer readable medium and electronic equipment
CN110147481B (en) Media content pushing method and device and storage medium
CN100483404C (en) Method of searching for media objects
CN110764731A (en) Multimedia file playing control method, intelligent terminal and server
CN108108345B (en) Method and apparatus for determining news topic
CN114363660B (en) Video collection determining method and device, electronic equipment and storage medium
CN117272056A (en) Object feature construction method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant