CN104392633B - Explanation control method for power system simulation training - Google Patents


Info

Publication number
CN104392633B
CN104392633B · CN201410645722.2A · CN201410645722A
Authority
CN
China
Prior art keywords
voice
subtitles
virtual camera
power transformation
dimensional object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410645722.2A
Other languages
Chinese (zh)
Other versions
CN104392633A (en)
Inventor
杨军强
乔焕伟
闫佳文
杨选怀
郭小燕
王伟
刘哲
李大鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Beijing Kedong Electric Power Control System Co Ltd
Training Center of State Grid Hebei Electric Power Co Ltd
Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Beijing Kedong Electric Power Control System Co Ltd
Training Center of State Grid Hebei Electric Power Co Ltd
Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Beijing Kedong Electric Power Control System Co Ltd, Training Center of State Grid Hebei Electric Power Co Ltd, Datong Power Supply Co of State Grid Shanxi Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201410645722.2A priority Critical patent/CN104392633B/en
Publication of CN104392633A publication Critical patent/CN104392633A/en
Application granted granted Critical
Publication of CN104392633B publication Critical patent/CN104392633B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an explanation control method for power system simulation training, which comprises the following steps: S1, establishing the voice, corresponding subtitles and voice control animation curve required for equipment explanation, and determining the power transformation equipment angle corresponding to each voice segment and the part three-dimensional objects to be highlighted; S2, when the power transformation overhaul simulation training system is initialized, extracting the initial color of each part three-dimensional object to be highlighted, and reading in the recorded voice and corresponding subtitles; and S3, associating the voice and corresponding subtitles with the voice control animation curve, playing the curve, performing voice and subtitle playing and switching, displaying the corresponding power transformation equipment angle according to the voice being played, and highlighting the related part three-dimensional objects. This process achieves synchronized control of the three-dimensional graphics, voice and subtitles of the power transformation equipment, and can well meet training requirements and achieve the expected training effect.

Description

Explanation control method for power system simulation training
Technical Field
The invention relates to an explanation control method for power system simulation training, and belongs to the technical field of power system simulation.
Background
With the development of electric power utilities in China, high-voltage and high-capacity power transformation equipment is put into operation in succession, and the application of advanced technology and equipment puts higher requirements on the quality of operation and maintenance personnel. In order to cultivate qualified operation and maintenance personnel, the power transformation maintenance simulation training system is developed.
At present, in a power transformation overhaul simulation training system, an instructor mainly explains the relevant knowledge to trainees using power transformation equipment simulated on a computer. When the number of trainees is large, many instructors are needed, and the demands on them are high. Since cultivating a group of high-quality instructors requires a great deal of time and money, the urgent present-day need for qualified operation and maintenance personnel cannot be well met.
To solve the above problems, a common practice is to replace the instructor with computer software: the explanation content is recorded into the software in advance, and the software delivers the explanation. Explaining the relevant knowledge to trainees with computer software requires the cooperation of voice, subtitles and the power transformation equipment model, and the three must stay synchronized. Moreover, when the whole equipment model is shown on screen, trainees cannot easily grasp exactly what is being explained. In addition, to make the simulation realistic and the equipment easier to understand, existing power transformation equipment models mainly use three-dimensional graphics, but synchronized control of the equipment's three-dimensional graphics, voice and subtitles has not been addressed, so the training requirements cannot be well met and the expected training effect cannot be achieved.
Disclosure of Invention
The invention aims to provide an explanation control method for power system simulation training.
In order to achieve the purpose, the invention adopts the following technical scheme:
an explanation control method for power system simulation training comprises the following steps:
S1, establishing the voice, corresponding subtitles and voice control animation curve required for equipment explanation, and determining the power transformation equipment angle corresponding to each voice segment and the part three-dimensional objects to be highlighted;
S2, when the power transformation overhaul simulation training system is initialized, extracting the initial color of each part three-dimensional object to be highlighted, and reading in the recorded voice and corresponding subtitles;
and S3, associating the voice and corresponding subtitles with the voice control animation curve, playing the curve, performing voice and subtitle playing and switching, displaying the corresponding power transformation equipment angle according to the voice being played, and highlighting the related part three-dimensional objects.
Preferably, in step S1, the step of establishing the voice and the corresponding subtitle required for the device explanation includes the following steps:
firstly, the device explanation text is decomposed into several short passages, each stored separately as a subtitle display unit; then each passage is dubbed at normal reading speed and each dubbing is stored separately; finally, the text information and corresponding voice information are combined to build the voice segment and subtitle configuration file.
Preferably, when the device explanation text is decomposed, the full text is split at each period (full stop).
Preferably, the voice control animation curve establishes a correspondence between time and a control parameter; the control parameter is associated with each voice segment and its subtitles, and as time passes the control parameter triggers the corresponding voice and subtitles one after another.
Preferably, determining the power transformation equipment angle corresponding to each voice segment and the part three-dimensional object to be highlighted comprises the following steps:
S11, establishing a virtual camera that can present, by changing its viewpoint position and direction, every angle of the power transformation equipment involved in the device explanation text;
S12, for the content of each voice segment, establishing the corresponding viewpoint position and direction of the virtual camera, and determining the part three-dimensional object to be highlighted;
and S13, storing in an array the viewpoint position and direction of the virtual camera corresponding to each voice segment, the part three-dimensional object to be highlighted and that object's initial color.
Preferably, in step S3, the voice and corresponding subtitles are associated with the voice control animation curve by binding a specific parameter value of the control parameter to the trigger point of each voice segment and its subtitles; when the control parameter reaches that value, the trigger point fires and the corresponding voice and subtitles start to play. As the parameter value increases over time, the trigger points fire one by one in the storage order of the voice and subtitles, playing each voice segment and displaying its subtitles.
Preferably, in step S3, displaying the corresponding power transformation equipment angle according to the played voice means that the viewpoint position and direction of the virtual camera move continuously as the voice plays, so that different angles of the power transformation equipment are shown on screen. When the (m-1)-th voice segment is played, the corresponding viewpoint position and direction of the virtual camera are extracted from the array; the direction and distance the virtual camera must move to display the equipment angle for the current voice are then obtained from the vector Ia1 of the camera's current point in three-dimensional space and the vector Ib1 of the stored viewpoint position and direction;
where the movement direction is the direction of the vector Ic1 = Ib1 - Ia1, the movement distance is the modulus of Ic1, and m is a positive integer.
Preferably, in step S3, highlighting the part three-dimensional object related to the played voice appears as a gradually flickering color change, as follows:
first, an interpolation factor between the initial color a of the part three-dimensional object (extracted when the power transformation overhaul simulation training system is initialized) and the final displayed color b is defined, varying from 0 to 1; color a is returned when the factor is 0 and color b when it is 1. A Sin sine-wave function is then set up and its return value is used as the interpolation factor; as the sine function varies with time, the factor varies between 0 and 1, the color of the part three-dimensional object shifts between color a and color b, and the gradual flickering effect is produced.
Preferably, after the voice playing is finished, the three-dimensional object of the part related to the voice returns to the initial color.
According to the explanation control method for power system simulation training, the explanation text of a piece of equipment is decomposed into several short passages, each stored separately as a subtitle display unit; each passage is dubbed, and the dubbing and subtitles are associated with the voice control animation curve, which drives the playing and switching of voice and subtitles throughout the explanation. The whole process therefore needs no human intervention, which improves triggering accuracy and reduces the probability of errors.
Drawings
FIG. 1 is a flow chart of an explanation control method for power system simulation training according to the present invention;
FIG. 2 is a schematic structural diagram of a voice segment and a subtitle configuration file according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a voice-controlled animation curve according to an embodiment of the present invention;
FIG. 4 is a flowchart of reading the voice and subtitle from the voice segment and subtitle configuration file according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a motion principle of an object in three-dimensional space according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
As shown in fig. 1, the explanation control method for power system simulation training provided by the present invention is described in detail, taking the explanation of power transformation equipment as an example. First, the device explanation text is segmented and each segment is dubbed, forming the voice segments and the subtitle configuration file; the power transformation equipment angle corresponding to each voice segment and the part three-dimensional objects to be highlighted are then determined segment by segment, and the voice control animation curve is established. When the power transformation overhaul simulation training system is initialized, the initial color of each part three-dimensional object to be highlighted is recorded, and all voices and corresponding subtitles are read in through the voice segment and subtitle configuration file. The voice and corresponding subtitles are associated with the voice control animation curve. When the voice explanation starts, the system plays the established curve; driven by the curve, the recorded voice segments are triggered one by one and the corresponding subtitles displayed, the preset virtual camera moves automatically to the viewpoint position and direction of each voice segment to display the corresponding equipment angle, and the part three-dimensional object explained by the voice content is highlighted, explaining the structure, operating principle and so on of the power transformation equipment.
When explaining other power system equipment, the virtual camera need only be moved automatically to the viewpoint position and direction corresponding to each voice segment to display the corresponding equipment angle. The process is described in detail below.
S1, establishing the voice, corresponding subtitles and voice control animation curve required for equipment explanation, and determining, segment by segment, the power transformation equipment angle corresponding to each voice segment and the part three-dimensional objects to be highlighted.
When a piece of equipment explanation text is obtained, the full text is first decomposed into several short passages, splitting at each period, and each passage is stored separately as a subtitle display unit; each passage is then dubbed at normal reading speed and each dubbing stored separately; finally, the text information and corresponding voice information are combined to build the voice segment and subtitle configuration file. As shown in fig. 2, each row of the configuration file represents the correspondence between one voice segment and its subtitle, with the voice storage path and subtitle separated by the "&_" separator. Each stored voice file can be found through its voice storage path.
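The segmentation and configuration-file layout described above can be sketched as follows (a minimal Python illustration; the file name, paths and function names are hypothetical, and only the one-line-per-segment layout and the "&_" separator come from the text):

```python
# Sketch: split the explanation text into subtitle units at each period
# and write one "<voice path>&_<subtitle>" line per segment.
# Segment file names are hypothetical placeholders.

def split_text(full_text):
    """Split the explanation text into subtitle units at each period."""
    parts = [p.strip() for p in full_text.replace("。", ".").split(".")]
    return [p for p in parts if p]

def write_config(segments, path="voice_subtitle.cfg"):
    """Each line: voice storage path, '&_' separator, subtitle text."""
    with open(path, "w", encoding="utf-8") as f:
        for i, subtitle in enumerate(segments, start=1):
            f.write(f"voice/segment_{i:03d}.wav&_{subtitle}\n")

segments = split_text("The transformer has a tank. The bushing carries current.")
write_config(segments)
```

Each dubbed recording would be saved under the path written into its line, so the player can locate it later from the configuration file alone.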
The power transformation overhaul simulation training system is developed in the Unity development environment, and the animation curve is built with Unity's animation editor: by inserting key frames for a given parameter, any curve can be created in the editor and saved as an animation. When the animation plays, the y-axis value changes along the curve as the x-axis (time) elapses. In the embodiment provided by the invention, the voice control animation curve is built on this animation curve, with a control parameter named Controller as the y-axis; its value changes along the curve as the x time axis advances, and as time passes the Controller parameter triggers the corresponding voice and subtitles one after another. The number of animation key frames for the Controller parameter is determined by the number of entries in the voice segment and subtitle configuration file: to control the playback of n voice segments, n+2 key frames are needed. The 1st frame is the start frame, the (n+2)-th frame is the end frame, and the middle n key frames are the trigger frames of the n voice segments. No manual intervention is needed, which improves triggering accuracy and reduces the probability of errors.
As shown in fig. 3, the 1st frame is created as the start frame at time 0.00 seconds, with a Controller value less than 0. The 2nd frame is created at a suitable position after 0.0 seconds, with the Controller value set to 0, serving as the trigger point at which the 1st voice segment and its subtitles start playing; in the embodiment provided by the invention, the time reserved before this position is the time needed to initialize the power transformation overhaul simulation training system, read in the recorded voice and corresponding subtitles, and associate them with the voice control animation curve. The 3rd frame is created at a time-axis position later than the length of the 1st voice segment, with the Controller value set to 1, as the trigger point for the 2nd voice segment and its subtitles; the time at this position is the recording time of the 1st voice segment plus the pause between passages during recording. By analogy, the (n+1)-th frame is created at a position later than the length of the (n-1)-th voice segment, with the Controller value set to n-1, as the trigger point of the n-th voice segment and subtitles; and the (n+2)-th frame is created at a suitable position later than the length of the n-th voice segment, with the Controller value set to n, as the end frame.
Taking any m-th frame as reference, the Controller value changes linearly between the (m-1)-th and m-th frames and between the m-th and (m+1)-th frames; this linear change ensures that the voice between any two frames plays at a constant rate, and each Controller value corresponds to the voice and subtitle information at one time position. One device explanation text corresponds to one voice control animation curve, which controls the playing and switching of the voice segments and corresponding subtitles through the Controller parameter.
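The Controller-driven trigger logic above can be sketched in plain Python (in the patent this curve is authored in Unity's animation editor; the key-frame times below are hypothetical, while the n+2 key-frame layout, the linear interpolation between frames and the "trigger when Controller reaches k" rule come from the text):

```python
# Sketch: n speech segments need n+2 keyframes (start frame with value < 0,
# n trigger frames with values 0..n-1, end frame with value n). The
# Controller value is interpolated linearly between keyframes.

def controller_value(t, keyframes):
    """Linearly interpolate the Controller value at time t.
    keyframes: sorted list of (time, value) pairs."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keyframes[-1][1]

def fired_triggers(t, keyframes, n):
    """Trigger point k has fired once Controller >= k (k = 0 .. n-1)."""
    c = controller_value(t, keyframes)
    return [k for k in range(n) if c >= k]

# Two speech segments -> 4 keyframes: start (<0), trigger 0, trigger 1, end.
kf = [(0.0, -1), (0.5, 0), (4.0, 1), (7.0, 2)]
```

At t = 0.5 s the Controller reaches 0 and the first segment fires; by t = 4.0 s it reaches 1 and the second segment fires as well, mirroring the one-by-one triggering the curve provides.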
Meanwhile, the power transformation equipment angle corresponding to each voice segment and the part three-dimensional objects to be highlighted are determined segment by segment. In the embodiment provided by the invention, a virtual camera is set up in the power transformation overhaul simulation training system; according to the content of each voice segment, different angles of the power transformation equipment are presented by changing the viewpoint position and direction of the virtual camera, and the part three-dimensional object that the voice mainly explains is highlighted by a color change. Determining the equipment angle and highlighted part for each voice segment specifically comprises the following steps:
and S11, establishing a virtual camera which can play all angles of the power transformation equipment related to the explanation characters of the equipment according to the change of the visual angle position and direction.
And S12, establishing the visual angle position and direction of the virtual camera corresponding to the content of each speech explanation, and determining the part three-dimensional object needing to be highlighted.
In the embodiment provided by the invention, each voice segment corresponds to one viewpoint position and direction of the virtual camera, namely the point in the camera's three-dimensional coordinate system at which the camera displays a particular angle of the power transformation equipment. According to the content of each voice segment, the equipment angle to be shown on screen is determined, and the corresponding three-dimensional coordinate position is stored in the array DirectionandGlitterGRP as the viewpoint position and direction of the virtual camera, for later use when explaining the power transformation equipment.
In addition, for the content of each voice segment, the related part three-dimensional object is determined and stored in the array DirectionandGlitterGRP, to be used for highlighting; this makes the explanation targeted and, when the equipment is later explained, improves its realism and efficiency.
S13, the viewpoint position and direction of the virtual camera corresponding to each voice segment, the part three-dimensional object to be highlighted and that object's initial color are stored in the array DirectionandGlitterGRP.
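The per-segment record kept in the DirectionandGlitterGRP array can be sketched as a simple data structure (field names and the sample values are hypothetical; the patent only specifies that each entry holds the camera viewpoint position and direction, the part object to highlight and its initial color):

```python
# Sketch: one record per voice segment, collected into a list that plays
# the role of the DirectionandGlitterGRP array.

from dataclasses import dataclass

@dataclass
class SegmentView:
    camera_pos: tuple      # viewpoint position (x, y, z)
    camera_dir: tuple      # viewpoint direction (x, y, z)
    part_object: str       # name of the part three-dimensional object
    initial_color: tuple   # initial RGB color of that part

direction_and_glitter_grp = [
    SegmentView((5.0, 2.0, -3.0), (0.0, 0.0, 1.0), "bushing", (0.6, 0.6, 0.6)),
    SegmentView((1.0, 4.0, 0.0), (0.0, -1.0, 0.0), "tank_lid", (0.5, 0.5, 0.5)),
]
```

During playback, the record for segment m-1 is looked up by index, giving the camera target and the object whose color will be animated and later restored.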
S2, when the power transformation overhaul simulation training system is initialized, the initial color of each part three-dimensional object to be highlighted is extracted, and the recorded voice and corresponding subtitles are read in.
When the system is initialized, the stored initial color of each part three-dimensional object to be highlighted is extracted from the array DirectionandGlitterGRP, and is used to restore the object to its initial state after the corresponding voice segment has been played.
In addition, when the power transformation overhaul simulation training system is initialized, the recorded voice and corresponding subtitles must also be read in, so that they can later be associated with the voice control animation curve and played along with it. As shown in fig. 4, at initialization the external voice segment and subtitle configuration file is opened as a data stream and its contents are read line by line. Each row of data is split at the "&_" separator, stored in a temporary array str, and added to a temporary container arrlist. When the contents of the configuration file have been read completely, the data stream is closed and the contents of arrlist are assigned to the dataArray container.
A voice storage path is extracted from each element of the dataArray container, the voice information is found through that path, and it is stored in the audioArray container.
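The initialization read-in described above can be sketched as follows (a minimal Python illustration; file handling is simplified to an in-memory list of lines, and the function name is hypothetical, while the line-by-line reading, the "&_" split into a temporary array and the dataArray collection come from the text):

```python
# Sketch: read the configuration line by line, split each line on "&_"
# into a temporary array, and collect the results into the containers the
# patent calls dataArray (path/subtitle pairs) and the voice-path list
# used to fill audioArray.

def read_config(lines):
    """Return (data_array, voice_paths)."""
    data_array = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        str_tmp = line.split("&_", 1)   # temporary array "str" in the patent
        data_array.append(str_tmp)
    voice_paths = [entry[0] for entry in data_array]
    return data_array, voice_paths

lines = ["voice/seg_001.wav&_The tank houses the core.",
         "voice/seg_002.wav&_The bushing carries current."]
data_array, voice_paths = read_config(lines)
```

In the real system each path in `voice_paths` would be resolved to its audio clip and stored in the audioArray container for playback.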
S3, the voice and corresponding subtitles are associated with the voice control animation curve; the curve is played, voice and subtitle playing and switching are performed, the corresponding power transformation equipment angle is displayed according to the voice being played, and the related part three-dimensional objects are highlighted.
After the recorded voice and corresponding subtitles are read in, they are associated with the voice control animation curve through its Controller parameter. When a device explanation text of the power transformation overhaul simulation training system is decomposed into n voice segments, n+2 animation key frames are created for the Controller parameter, each corresponding to one Controller value. The 1st frame is the start frame, the (n+2)-th frame is the end frame, and the middle n key frames are the trigger frames of the n voice segments, each corresponding to the trigger point of one voice segment and its subtitles. The Controller parameter increases gradually over time; when the m-th key frame (a trigger frame) is reached, the (m-1)-th voice segment and (m-1)-th subtitle are extracted from the audioArray and dataArray containers respectively and played. At this moment the Controller parameter reaches the specific value m-2, which in the embodiment of the invention is an integer greater than or equal to zero. Meanwhile, the viewpoint position and direction of the virtual camera corresponding to the (m-1)-th voice segment and the part three-dimensional object to be highlighted are extracted from the array DirectionandGlitterGRP; the coordinate and direction transformation of the virtual camera begins, so that the camera moves through three-dimensional space as the voice plays, and the highlighting of the designated part three-dimensional object begins at the same time.
As shown in fig. 5, the principle of moving an object from point a to point b in three-dimensional space is as follows:
In three-dimensional space, the vector of point a is Ia and the vector of point b is Ib. To move the object from point a to point b, the vector Ic from a to b is needed, obtained by subtracting Ia from Ib: Ic = Ib - Ia.
The direction of the vector Ic is the direction of the object motion, and the modulus is the distance of the object motion.
To make the simulation realistic and the power transformation equipment easier to understand, current equipment models mainly use three-dimensional graphics, and the viewpoint position and direction of the virtual camera move continuously as the recorded voice plays, so that different angles of the three-dimensional model are shown on screen. When the (m-1)-th voice segment is played, the corresponding viewpoint position and direction of the virtual camera are extracted from the array DirectionandGlitterGRP; then, from the vector Ia1 of the camera's current point a1 and the vector Ib1 of the target point b1 (the stored viewpoint position and direction) in three-dimensional space, the direction and distance the camera must move to display the equipment angle for the current voice are obtained. The direction is that of the vector Ic1 = Ib1 - Ia1, and the modulus of Ic1 is the distance to move. As the camera moves toward the target point, the modulus of Ic1 decreases gradually; when it is no greater than 0.005, the camera is considered to have reached the designated position and stops moving, and the equipment angle seen by the camera at target point b1 is displayed on screen. The voice, the subtitles and the displayed equipment angle are thus kept consistent, which improves the realism of the simulation and gives a fuller understanding of the power transformation equipment.
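The camera movement just described can be sketched numerically (the per-step distance is hypothetical, since Unity would advance the camera once per rendered frame; the update rule Ic1 = Ib1 - Ia1 and the 0.005 stopping threshold come from the text):

```python
# Sketch: step the camera toward target b1 along Ic1 = Ib1 - Ia1 and stop
# once the modulus of Ic1 is no greater than the threshold.

import math

def move_camera(pos, target, step=0.05, threshold=0.005):
    """Advance pos toward target until within threshold; return final pos."""
    while True:
        ic = [b - a for a, b in zip(pos, target)]    # Ic1 = Ib1 - Ia1
        dist = math.sqrt(sum(c * c for c in ic))     # modulus = distance left
        if dist <= threshold:
            return target                            # snap to the viewpoint
        s = min(step, dist)                          # never overshoot
        pos = [a + c / dist * s for a, c in zip(pos, ic)]

final = move_camera([0.0, 0.0, 0.0], [1.0, 2.0, 2.0])
```

Because the remaining distance shrinks by a fixed step each iteration, the loop always terminates, and the camera ends exactly at the stored viewpoint.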
Besides, when the (m-1) th voice is played, the information of the part three-dimensional object (target three-dimensional object) which is corresponding to the (m-1) th voice and needs to be highlighted is extracted from the array DirectionandGlitterGRP, and in the embodiment provided by the invention, the highlighting effect of the target three-dimensional object shows that the color presents a gradually-changed flickering effect. The method specifically comprises the following steps:
First, an interpolation parameter t, varying from 0 to 1, is set between the initial color a of the target three-dimensional object (extracted when the power transformation overhaul simulation training system is initialized) and the final display color b: when t is 0 the color is a, and when t is 1 the color is b. A Sin sine wave function is then set up and its return value is used as the interpolation parameter t. As the sine wave varies with time, the color of the target three-dimensional object shifts between color a and color b, producing a gradual flickering effect, so that the part three-dimensional object related to the voice stands out during the explanation, concentrates the trainee's attention, and improves the explanation effect. After the voice segment finishes playing, the part three-dimensional object related to that voice is restored to its initial color.
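The sine-driven interpolation can be sketched as below. The function name, the RGB-tuple color representation, and the `freq_hz` parameter are illustrative assumptions; the text only specifies that a sine function drives an interpolation parameter t between colors a and b.

```python
import math

def flicker_color(color_a, color_b, t_seconds, freq_hz=1.0):
    """Blend between initial color a and highlight color b over time.

    The sine value is remapped from [-1, 1] to [0, 1] so it can serve as
    the interpolation parameter t: t = 0 returns a, t = 1 returns b, and
    as time advances the color oscillates between the two, i.e. flickers.
    """
    t = (math.sin(2 * math.pi * freq_hz * t_seconds) + 1) / 2
    return tuple(a + (b - a) * t for a, b in zip(color_a, color_b))
```

Evaluating `flicker_color` each frame with the elapsed playback time yields the gradual color flicker; restoring the object afterwards is just a matter of reassigning `color_a`.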
As time passes, the value of the Controller parameter increases steadily, successively reaching the trigger frames, and the voice segments and their subtitles are extracted one by one from the audio array container and the dataArray container in storage order and played. As the voice and subtitles play, the virtual camera gradually rotates to show the corresponding view of the power transformation equipment, and the related part three-dimensional objects are highlighted, which improves the realism and focus of the simulation and gives a fuller understanding of the equipment.
When the power transformation overhaul simulation training system begins explaining a piece of equipment, the voice-control animation curve starts to play. When the Controller parameter value is 0, the trigger frame fires the first voice segment of the corresponding equipment explanation text and its subtitle: the first voice segment and the first subtitle are extracted from the audio array container and the dataArray container respectively and played. At the same time, the viewing position and direction of the virtual camera corresponding to the first segment, and the part three-dimensional object to be highlighted, are extracted from the array DirectionandGlitterGRP. The coordinate and direction transformation of the virtual camera then begins, moving the camera through three-dimensional space as the voice plays, and the highlighting of the designated part three-dimensional object begins as well; the equipment angle relevant to the voice is displayed on screen with the related part object highlighted. As time passes, the Controller parameter value increases and triggers each voice segment and its subtitle in turn until everything in the audio array container and the dataArray container has been played. The Controller parameter value is then n, the voice-control animation curve reaches its end frame, and the explanation finishes: the virtual camera returns to its initial position, the initial angle of the power transformation equipment is displayed, and all part three-dimensional objects are restored to their initial colors.
Each equipment explanation text corresponds to one voice-control animation curve, and that curve controls the playing and switching of the multiple voice segments and their subtitles through the Controller parameter. No human intervention is needed during the whole playback, which improves triggering accuracy and reduces the probability of errors.
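The keyframe-trigger behavior of the voice-control animation curve can be sketched as a small state machine. The class name, and the use of plain Python lists in place of the audio array and dataArray containers, are illustrative assumptions; the mapping "Controller value i fires segment i, value n marks the end frame" follows the description above.

```python
class VoiceControlCurve:
    """Map a monotonically increasing Controller value to segment triggers.

    Keyframe i fires voice/subtitle segment i the first time the Controller
    value reaches i (0 fires the first segment; n marks the end frame).
    """
    def __init__(self, voices, subtitles):
        assert len(voices) == len(subtitles)
        self.voices, self.subtitles = voices, subtitles
        self.next_keyframe = 0              # index of the next segment to fire

    def update(self, controller_value):
        """Fire every not-yet-played segment whose keyframe has been reached."""
        fired = []
        while (self.next_keyframe < len(self.voices)
               and controller_value >= self.next_keyframe):
            fired.append((self.voices[self.next_keyframe],
                          self.subtitles[self.next_keyframe]))
            self.next_keyframe += 1
        return fired

    def finished(self, controller_value):
        """True once the Controller value reaches the end frame n."""
        return controller_value >= len(self.voices)
```

Because the curve fires each keyframe exactly once in storage order, no operator input is needed to keep voice, subtitles, camera moves, and highlighting in step.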
In summary, in this explanation control method for power system simulation training, the explanation text of a piece of equipment is decomposed into a number of short passages, each stored separately as the unit for subtitle display. Each passage is dubbed, and the dubbing and subtitle of each passage are associated with a voice-control animation curve, which handles the playing and switching of voice and subtitles during the equipment explanation; the whole process needs no human intervention, improving triggering accuracy and reducing the probability of errors. In addition, the synchronized control of the three-dimensional graphics, voice, and subtitles of the power transformation equipment, together with the highlighting of the related part three-dimensional objects, improves the realism of the simulation, meets the training requirements well, and achieves the expected training effect.
The explanation control method for power system simulation training provided by the invention has been described in detail above. Any obvious modification made by one skilled in the art without departing from the true spirit of the invention would constitute an infringement of the patent rights of the present invention, and corresponding legal responsibility shall be borne.

Claims (8)

1. An explanation control method for power system simulation training is characterized by comprising the following steps:
s1, establishing the voice, the corresponding subtitles, and the voice-control animation curve required for the equipment explanation, and determining the power transformation equipment angle corresponding to each voice segment and the part three-dimensional object to be highlighted; when the voice-control animation curve is constructed, one control parameter serves as the y axis and time as the x axis, the control parameter continuously triggering the corresponding voice and subtitles as time elapses, and the number of animation key frames created being determined by the number of information entries in the voice-segment and subtitle configuration file;
s2, extracting the initial color of the three-dimensional object of the part needing to be highlighted when the power transformation overhaul simulation training system is initialized, and reading the recorded voice and the corresponding subtitles;
s3, associating the voice and the corresponding subtitles with the voice control animation curve, playing the voice control animation curve, performing playing and switching processing on the voice and the subtitles, displaying the corresponding transformation equipment angle according to the voice playing content, and highlighting the related part three-dimensional object;
the highlighting of the part three-dimensional object presents a gradually changing flicker of color, realized by the following steps: first, an interpolation parameter varying from 0 to 1 is set between the initial color a of the part three-dimensional object, extracted during initialization of the power transformation overhaul simulation training system, and the final display color b; the color a is returned when the interpolation parameter is 0, and the color b is returned when it is 1; then a Sin sine wave function is set, and its return value is taken as the interpolation parameter; as the Sin sine wave function changes with time, the interpolation parameter varies between 0 and 1, the color of the part three-dimensional object changes between color a and color b, and a gradually changing flicker effect is presented;
the corresponding angle of the power transformation equipment is displayed according to the voice being played: as the voice plays, the viewing position and direction of the virtual camera move continuously, and different angles of the power transformation equipment are displayed on a screen; when the (m-1)th voice segment is played, the viewing position and direction of the virtual camera corresponding to the (m-1)th segment are extracted from the array, and the direction and distance the virtual camera must move to display the equipment angle corresponding to the current voice segment are obtained from the vector Ia1 of the camera's current point in three-dimensional space and the vector Ib1 of the viewing position and direction corresponding to the (m-1)th segment; the movement direction is that of the vector Ic1, where Ic1 = Ib1 - Ia1, the movement distance is the modulus of the vector Ic1, and m is a positive integer.
2. The explanation control method as claimed in claim 1, wherein in step S1, the step of creating the voice and the corresponding subtitle required for the explanation of the device comprises the steps of:
firstly, decomposing the device explanation text into a plurality of small sections of text, and respectively storing each small section of text as a unit for displaying subtitles; then dubbing each small segment of characters according to the normal reading speed, and respectively storing the dubbing of each small segment of characters; and finally, combining the text information and the corresponding voice information to establish a voice section and a subtitle configuration file.
3. The interpretation control method according to claim 2, wherein:
when the device explanation text is decomposed, the full text is split using the period as the break point.
4. The interpretation control method according to claim 1, wherein:
the voice control animation curve establishes a corresponding relation between time and control parameters, the control parameters are associated with each section of voice and corresponding subtitles, and the control parameters continuously trigger the corresponding voice and subtitles along with the lapse of time.
5. The explanation control method as claimed in claim 1, characterized in that the step of determining the angle of the power transformation equipment corresponding to each voice segment and the three-dimensional object of the part to be highlighted comprises the steps of:
s11, establishing a virtual camera which can play all angles of the power transformation equipment related to the explanation characters of the equipment according to the change of the position and the direction of the visual angle;
s12, aiming at the content of each speech explanation, establishing the visual angle position and direction of the corresponding virtual camera, and determining the three-dimensional object of the part needing to be highlighted; the position and the direction of the visual angle of the virtual camera are a certain point in a three-dimensional coordinate where the virtual camera is located when the virtual camera displays a certain angle of the power transformation equipment; moving the visual angle position and direction of the virtual camera along with the recorded voice playing, and displaying different angles of the three-dimensional graph of the power transformation equipment on a screen;
and S13, storing the visual angle position and direction of the virtual camera corresponding to each voice, the three-dimensional object of the part to be highlighted and the initial color of the three-dimensional object of the part to be highlighted in an array.
6. The interpretation control method according to claim 1, wherein:
in step S3, associating the voice and the corresponding subtitles with the voice control animation curve is performed by associating a specific parameter value of the control parameter with each segment of voice and the trigger point of the corresponding subtitles, and when the control parameter reaches the specific parameter value, triggering the trigger point of the corresponding voice and subtitles, and the corresponding voice and subtitles start to play; and gradually increasing the parameter value of the control parameter along with the time, triggering the triggering points one by one according to the storage sequence of the voice and the subtitles, playing the voice and displaying the corresponding subtitles.
7. The interpretation control method according to claim 1, wherein:
as the virtual camera moves toward the target point, the modulus of Ic1 gradually decreases; when its value is not greater than 0.005, the virtual camera is considered to have reached the designated position, and the angle of the power transformation equipment viewed by the virtual camera at the target point is displayed on the screen.
8. The interpretation control method according to claim 1, wherein:
and after the voice playing is finished, restoring the three-dimensional object of the part related to the voice to the initial color.
CN201410645722.2A 2014-11-12 2014-11-12 Explanation control method for power system simulation training Active CN104392633B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410645722.2A CN104392633B (en) 2014-11-12 2014-11-12 Explanation control method for power system simulation training


Publications (2)

Publication Number Publication Date
CN104392633A CN104392633A (en) 2015-03-04
CN104392633B true CN104392633B (en) 2020-08-25

Family

ID=52610526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410645722.2A Active CN104392633B (en) 2014-11-12 2014-11-12 Explanation control method for power system simulation training

Country Status (1)

Country Link
CN (1) CN104392633B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371797A (en) * 2016-08-31 2017-02-01 腾讯科技(深圳)有限公司 Method and device for configuring sound effect
CN106504607A (en) * 2016-11-11 广西电网有限责任公司电力科学研究院 A power transformation equipment overhaul simulation training method
US11211053B2 (en) 2019-05-23 2021-12-28 International Business Machines Corporation Systems and methods for automated generation of subtitles
CN110968705B (en) * 2019-12-04 2023-07-18 敦煌研究院 Navigation method, navigation device, navigation apparatus, navigation system, and storage medium
CN112652039A (en) * 2020-12-23 2021-04-13 上海米哈游天命科技有限公司 Animation segmentation data acquisition method, segmentation method, device, equipment and medium
CN117765839B (en) * 2023-12-25 2024-07-16 广东保伦电子股份有限公司 Indoor intelligent navigation method, device and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN2348443Y (en) * 1998-12-17 1999-11-10 卢贵东 Heart-lung palpation and ausculation text display speech explaining computer simulator
WO2008113250A1 (en) * 2007-03-21 2008-09-25 Yuming Lin An apparatus and method for displaying an animation menu
CN103945140A (en) * 2013-01-17 2014-07-23 联想(北京)有限公司 Method and system for generating video captions

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US5938447A (en) * 1993-09-24 1999-08-17 Readspeak, Inc. Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work
US5741136A (en) * 1993-09-24 1998-04-21 Readspeak, Inc. Audio-visual work with a series of visual word symbols coordinated with oral word utterances
KR100686085B1 (en) * 1999-03-22 2007-02-23 엘지전자 주식회사 Video apparatus having study function and control method of the same
RU2546546C2 (en) * 2008-12-01 2015-04-10 Аймакс Корпорейшн Methods and systems for presenting three-dimensional motion pictures with content adaptive information
CN101770701A (en) * 2008-12-30 2010-07-07 北京新学堂网络科技有限公司 Movie comic book manufacturing method for foreign language learning
CN101859442A (en) * 2009-04-11 2010-10-13 孙炜 Transcription method for patent documents
CN101908232B (en) * 2010-07-30 2012-09-12 重庆埃默科技有限责任公司 Interactive scene simulation system and scene virtual simulation method
CN102157081A (en) * 2011-03-31 2011-08-17 北京唐风汉语教育科技有限公司 Multi-media teaching device based on synchronous text teaching content display
CN103559214B (en) * 2013-10-11 2017-02-08 中国农业大学 Method and device for automatically generating video



Similar Documents

Publication Publication Date Title
CN104392633B (en) Explanation control method for power system simulation training
CN109062884A (en) A control method for interactive micro-lectures and an interactive micro-lecture
CN101520889A (en) Method for panoramically displaying articles at multiple angels with multiple static images and device for collecting static images
KR101656167B1 (en) Method, apparatus, device, program and recording medium for displaying an animation
CN102915755A (en) Method for extracting moving objects on time axis based on video display
CN106945433A (en) Nanometer touch-control blackboard and interactive intelligent blackboard
US20180143741A1 (en) Intelligent graphical feature generation for user content
CN103306510B (en) Control system of liftable stage
CN102651180A (en) OSG-based (open scene graph-based) electric-electronic virtual experiment simulation system
CN106341552A (en) Method and apparatus for playing teaching videos on mobile terminal
CN104699878A (en) Course arrangement and training method of analog simulation training
CN106657850A (en) Lesson content recording method and system
Khacharem et al. The expertise reversal effect for sequential presentation in dynamic soccer visualizations
CN114299777A (en) Virtual reality industrial simulation training system
US20130187927A1 (en) Method and System for Automated Production of Audiovisual Animations
Peng Application of Micro-lecture in Computer Teaching
CN104485027A (en) Courseware display method and device
CN203547178U (en) Control system of liftable stage
CN116245986A (en) Virtual sign language digital person driving method and device
Chen et al. Research on augmented reality system for childhood education reading
CN115379278A (en) XR technology-based immersive micro-class recording method and system
CN104506919A (en) Method and system for synchronizing display content and display screen movement
Lima et al. Innovation in learning–the use of avatar for sign language
KR101601744B1 (en) Method for Learning Vocabulary using Animated Contents and System of That
Jiménez et al. Tablet pc and head mounted display for live closed captioning in education

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant