CN112040322A - Video specification making method

Video specification making method

Info

Publication number
CN112040322A
Authority
CN
China
Prior art keywords
animation
video
website
json
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010841884.9A
Other languages
Chinese (zh)
Inventor
陈仁江
周金岩
刘佳慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yifa Network Technology Dalian Co ltd
Original Assignee
Yifa Network Technology Dalian Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yifa Network Technology Dalian Co ltd
Priority to CN202010841884.9A
Publication of CN112040322A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440236 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to the technical field of video production, in particular to a method for producing a video specification, which comprises the following steps: capturing screenshots of a software or website operation flow, compressing the screenshots into a ZIP file, importing the ZIP file into a video editor, making a continuous animation with each screenshot as one frame following the operation flow, and adding mouse movement, a click action and a description text for the current frame to each frame to create a silent JSON animation; parsing the JSON animation into a series of animation nodes and displaying the animation nodes continuously in the web page through the HTML5 canvas technology; dubbing the video description; and creating a specification management website, in whose background database the JSON animation files, description texts, audio files and their correspondences are stored, with an HTML5 player integrated into the specification management website. The method can rapidly produce a dubbed video specification, reduce the production cost of the video specification, allow its content to be updated simply and conveniently, and make it convenient for a user to read from a specified position.

Description

Video specification making method
Technical Field
The invention relates to the technical field of video production, in particular to a method for producing a video specification.
Background
With the continuing maturation and development of software and internet technology, software and websites are becoming increasingly complex. Although the designers of software and websites strive to improve the compatibility of their products, the conflict between complex functionality and the ease of use that users require cannot be fully resolved. Almost all software products and websites are equipped with text tutorials, but their readability is low, and differences in how well individual users understand them leave text manuals existing largely in name only, with few actual readers.
At present, existing video manuals are costly to produce, cumbersome to maintain and hard to search, so many software and internet enterprises have had to give up using video documentation. Only large, capital-rich businesses with high profit margins produce video specifications, and then only for internal training purposes. Even so, these video specifications are expensive to update and have long production cycles, making them difficult to keep in step with software or website updates, so even when used for training the results are unsatisfactory. Moreover, mainstream video websites still use streaming media, which is bulky and occupies a large amount of storage space, a waste from the perspective of both storage cost and environmental protection. The characteristics of streaming media files also make it difficult to begin loading quickly from a specified time position, which wastes readers' time and wears down their patience.
Disclosure of Invention
In order to solve the above problems, the invention discloses a method for making a video specification which can quickly produce a dubbed video specification, reduce the production cost of the video specification, allow its content to be updated simply and conveniently, and make it easy for a user to read from a specified position.
In order to achieve the above technical purpose and technical effects, the invention is realized by the following technical scheme:
a method of making a video specification comprising the steps of:
capturing screenshots of the software or website operation flow, compressing the screenshots into a ZIP file, importing the ZIP file into a video editor, making a continuous animation with each screenshot as one frame following the operation flow, and adding mouse movement, a click action and a description text for the current frame to each frame to create a silent JSON animation;
parsing the JSON animation into a series of animation nodes, and displaying the animation nodes continuously in the web page through the HTML5 canvas technology;
matching each frame with a description text, and converting the description text content into a corresponding audio file using a speech generator;
and creating a specification management website, wherein the JSON animation files, description texts, audio files and their correspondences are stored in a background database of the website, and an HTML5 player is integrated in the specification management website for playing the video specification.
Further, adding the mouse movement, click action and current frame description text specifically comprises: continuously changing the coordinates of a mouse image in an HTML5 canvas with a JavaScript script over a specified time interval to simulate the mouse's movement track; realizing a mouse click by inserting a highlight flash and a mouse-click sound into the HTML5 canvas at the end of the mouse movement; and displaying the explanatory text in the canvas in a fixed color and font.
The invention has the beneficial effects that:
the video specification is simple and convenient to manufacture, and the manufacturing cost of the video specification is reduced;
a user can search for the content mentioned in the dubbing of the specification through the keywords, accurately position the corresponding animation frame in the video according to the search result, and simultaneously find the audio file corresponding to the frame of animation, when clicking the search result, the animation starts to play from the frame, and simultaneously plays the audio file corresponding to the frame;
if the software and the website function are updated, only the corresponding page and the corresponding audio script in the video need to be updated, and the whole audio does not need to be reproduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in describing the embodiments are briefly introduced below. The drawings described below clearly represent only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a video specification production flow diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. The described embodiments are clearly only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, the invention discloses a method for making a video specification, which comprises the following steps:
Capturing screenshots of the software or website operation flow, compressing the screenshots into a ZIP file, and importing the ZIP file into a video editor; then making a continuous animation with each screenshot as one frame following the software or website operation flow, and adding mouse movement, a click action and a description text for the current frame to each frame, quickly creating a silent JSON animation.
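The patent does not prescribe a concrete schema for the silent JSON animation. Purely as an illustration, one frame of such an animation could be laid out as in the JavaScript sketch below; all field names are assumptions introduced for this sketch, not terms defined by the invention.

    // Hypothetical layout of one frame node in the silent JSON animation
    // (field names are illustrative assumptions, not defined by the patent).
    // A full animation would then be something like { frames: [frameNode, ...] }.
    const frameNode = {
      frameIndex: 3,
      screenshot: "frames/step-03.png",                   // screenshot captured from the operation flow
      durationMs: 4000,                                   // how long this frame is displayed
      mouse: {
        path: [{ x: 120, y: 80 }, { x: 480, y: 260 }],    // start and end of the simulated mouse track
        click: { x: 480, y: 260, highlight: true }        // click position with highlight flash
      },
      caption: "Click the Save button to store the configuration."   // current frame description text
    };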
the specific steps of adding mouse movement, click action and current frame description text are as follows: coordinates of the mouse image in an HTML5 canvas are continuously changed by a JavaScript script within a specified time interval to simulate a moving track of the mouse image, a mouse click action is realized by inserting highlight flicker and mouse click audio in the HTML5 canvas at the moving end of the mouse, and explanatory text is displayed in the canvas according to fixed colors and text.
The JSON animation is parsed into a series of animation nodes, which are then continuously displayed in the web page by the HTML5 canvas technique.
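Under the same assumptions, the playback side could parse the JSON animation into an array of frame nodes and draw each node in turn on the HTML5 canvas, reusing the animateMouse helper sketched above; loadImage and the fixed caption colour and font are likewise illustrative.

    // Parse the JSON animation into frame nodes and display them continuously on
    // the canvas. startIndex allows playback to begin at an arbitrary frame.
    async function playJsonAnimation(canvas, jsonText, startIndex = 0) {
      const ctx = canvas.getContext("2d");
      const nodes = JSON.parse(jsonText).frames;             // series of animation nodes
      const cursorImg = await loadImage("cursor.png");       // cursor bitmap; file name is an assumption
      for (let i = startIndex; i < nodes.length; i++) {
        const node = nodes[i];
        const shot = await loadImage(node.screenshot);
        const drawBase = () => {
          ctx.drawImage(shot, 0, 0, canvas.width, canvas.height);
          ctx.fillStyle = "#ffcc00";                         // fixed caption colour
          ctx.font = "20px sans-serif";                      // fixed caption font
          ctx.fillText(node.caption, 20, canvas.height - 30);
        };
        drawBase();
        if (node.mouse) {
          const path = node.mouse.path;
          await new Promise(done =>
            animateMouse(ctx, cursorImg, drawBase, path[0], path[path.length - 1], node.durationMs, done));
        } else {
          await new Promise(done => setTimeout(done, node.durationMs));
        }
      }
    }

    function loadImage(src) {
      return new Promise((resolve, reject) => {
        const img = new Image();
        img.onload = () => resolve(img);
        img.onerror = reject;
        img.src = src;
      });
    }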
Dubbing the video specification: each frame is matched with a description text, and the description text content is converted into a corresponding audio file using a speech generator.
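The patent does not name a particular speech generator. As an assumption only, the sketch below posts each frame's description text to a hypothetical HTTP text-to-speech service and saves the returned audio next to the frame index; the endpoint URL, voice parameter and response format are invented for illustration, and any speech generator with a comparable interface could be substituted.

    // Dub each frame: convert its description text to an audio file via a
    // hypothetical TTS service and record the frame-to-audio correspondence.
    import { writeFile } from "node:fs/promises";

    const TTS_ENDPOINT = "https://tts.example.com/synthesize";   // hypothetical service URL

    async function dubFrames(frames) {
      const correspondences = [];
      for (const frame of frames) {
        const res = await fetch(TTS_ENDPOINT, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ text: frame.caption, voice: "zh-CN-standard" })   // voice name is an assumption
        });
        const audioFile = `audio/frame-${frame.frameIndex}.mp3`;
        await writeFile(audioFile, Buffer.from(await res.arrayBuffer()));
        correspondences.push({ frameIndex: frame.frameIndex, caption: frame.caption, audioFile });
      }
      return correspondences;   // later stored in the specification management website's database
    }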
Creating a specification management website, and storing the JSON animations, description texts, audio files and their correspondences in the website's background database. The specification management website is used for displaying the created specifications, and an HTML5 player integrated into the website is used for playing them.
In use, after the video specification file is published to a website capable of playing it, a user can search the specification management website by keyword for content mentioned in the specification's dubbing, accurately locate the corresponding animation frame in the video from the search result, and at the same time find the audio file corresponding to that frame. When the search result is clicked, the animation starts playing from that frame and the corresponding audio file plays with it, so unneeded video content is skipped and the user's time is saved.
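A sketch of this reading-side behaviour, assuming the management website exposes the stored correspondences (frame index, description text, audio file) to the page as a plain array, and reusing playJsonAnimation from the earlier sketch:

    // Search the stored description texts by keyword and start playback from the
    // matching frame together with its dubbed audio file.
    function searchSpecification(correspondences, keyword) {
      return correspondences.filter(rec => rec.caption.includes(keyword));
    }

    function playFromSearchHit(canvas, jsonText, hit) {
      new Audio(hit.audioFile).play();                        // audio corresponding to the hit frame
      playJsonAnimation(canvas, jsonText, hit.frameIndex);    // resume the animation at that frame
    }

    // Example: jump straight to the frame whose dubbing mentions "Save".
    // const hits = searchSpecification(correspondences, "Save");
    // if (hits.length > 0) playFromSearchHit(canvas, jsonText, hits[0]);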
The video specification consists of a sequence of pictures and the audio description text corresponding to each page, and each page can be regarded as one animation frame. When users report that a software or website function has been updated, the video producer only needs to update the animation frame screenshots of the changed part and the corresponding audio scripts, and save the updated correspondences to the database again, without re-producing the whole audio.
The invention fills a gap in the fast editing of video files, improves the efficiency of video modification and greatly reduces video editing cost.
The video specification file occupies far less storage space than a conventional video file of the same resolution, saving storage cost.
Compared with a traditional streaming video file, damage to an individual element does not prevent the rest of the file from playing.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art should understand that they can make various changes, modifications, additions and substitutions within the spirit and scope of the present invention.

Claims (2)

1. A method for making a video specification, comprising the steps of:
capturing screenshots of the software or website operation flow, compressing the screenshots into a ZIP file, importing the ZIP file into a video editor, making a continuous animation with each screenshot as one frame following the operation flow, and adding mouse movement, a click action and a description text for the current frame to each frame to create a silent JSON animation;
parsing the JSON animation into a series of animation nodes, and displaying the animation nodes continuously in the web page through the HTML5 canvas technology;
matching each frame with a description text, and converting the description text content into a corresponding audio file using a speech generator;
and creating a specification management website, wherein the JSON animation files, description texts, audio files and their correspondences are stored in a background database of the website, and an HTML5 player is integrated in the specification management website for playing the video specification.
2. The method of claim 1, wherein adding the mouse movement, click action and current frame description text comprises: continuously changing the coordinates of a mouse image in an HTML5 canvas with a JavaScript script over a specified time interval to simulate the mouse's movement track; realizing a mouse click by inserting a highlight flash and a mouse-click sound into the HTML5 canvas at the end of the mouse movement; and displaying the explanatory text in the HTML5 canvas in a fixed color and font.
CN202010841884.9A 2020-08-20 2020-08-20 Video specification making method Pending CN112040322A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841884.9A CN112040322A (en) 2020-08-20 2020-08-20 Video specification making method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010841884.9A CN112040322A (en) 2020-08-20 2020-08-20 Video specification making method

Publications (1)

Publication Number Publication Date
CN112040322A true CN112040322A (en) 2020-12-04

Family

ID=73578553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841884.9A Pending CN112040322A (en) 2020-08-20 2020-08-20 Video specification making method

Country Status (1)

Country Link
CN (1) CN112040322A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150286739A1 (en) * 2012-11-06 2015-10-08 Layabox Inc. Html5-protocol-based webpage presentation method and device
CN105045823A (en) * 2015-06-26 2015-11-11 上海卓易科技股份有限公司 Method and device for generating demo file of mobile terminal based on screen capture
CN109285207A (en) * 2018-09-20 2019-01-29 深圳市牛鼎丰科技有限公司 Animation method, device, computer equipment and storage medium
CN111538851A (en) * 2020-04-16 2020-08-14 北京捷通华声科技股份有限公司 Method, system, device and storage medium for automatically generating demonstration video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408261A (en) * 2021-08-10 2021-09-17 广东新瑞智安科技有限公司 Method and system for generating job requisition
WO2023223671A1 (en) * 2022-05-17 2023-11-23 株式会社Nttドコモ Video manual generation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201204