CN117255235A - Method and device for general control connection of offline narrative - Google Patents
- Publication number
- CN117255235A (application CN202311511702.1A)
- Authority
- CN
- China
- Prior art keywords
- narrative
- information
- content
- line
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8545—Content authoring for generating interactive applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The embodiment of the invention provides a method and a device for the general control connection of an offline narrative. The method comprises: acquiring first narrative content from multiple input sources, setting at least one narrative space according to the first narrative content, and letting the user set the first narrative content at a narrative end through a multi-line space controller; setting at least one first narrative element according to the first narrative content; the multi-line space controller interacting with the first narrative element to obtain second narrative content; setting at least one second narrative element according to the second narrative content, acquiring first pixel information from the video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information; and the multi-line space controller, according to the second narrative content, controlling the second narrative element to output the second pixel information into the corresponding narrative space during the working period. The method enables users to obtain different narrative experiences at the same moment and under the same operation.
Description
Technical Field
The present invention relates to the field of off-line narrative systems, and in particular to a method and apparatus for general control connection of off-line narratives.
Background
As society develops and living standards rise, passive viewing and simple interactive entertainment no longer satisfy people's needs. People strongly desire well-crafted, emotionally resonant story settings, especially complex interactive narratives played among multiple participants or built around a shared objective.
Most electronic games today are played on a computer, with the user or player operating on the computer side. For example, many video games with rich imagery, and interactive titles built on rendering engines (e.g., Unreal Engine), are operated with a mouse or a handful of controllers, and a conventional narrative presented on a two-dimensional screen cannot immerse the user in a space. Although existing VR games do immerse players in an audiovisual world created by technical means, there is a jarring drop when the player returns to reality. Moreover, wearable devices impose a certain physical constraint, so the player can never fully immerse the mind in a specific environment. Existing escape rooms, immersive 5D cinemas, interactive exhibitions and experiential travel projects are all driven by collections of independent unit controllers, so users cannot obtain different narrative experiences under the same operation at the same moment.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a device for the general control connection of an offline narrative, which enable a user to obtain different narrative experiences at the same moment and under the same operation.
To achieve the above object, an embodiment of the present invention provides a method for the general control connection of an offline narrative, the method comprising: acquiring first narrative content from multiple input sources, setting at least one narrative space according to the first narrative content, setting a user narrative end, and controlling the user narrative end to set the first narrative content through a multi-line space controller;
setting at least one first narrative element according to the first narrative content, and controlling the first narrative element through the multi-line space controller to output the corresponding first narrative content in real time;
the multi-line space controller interacting with the first narrative element to obtain second narrative content, wherein the second narrative content is the complete narrative content within a fixed time period;
setting at least one second narrative element according to the second narrative content, acquiring first pixel information from the video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information;
and the multi-line space controller, according to the second narrative content, controlling the second narrative element to output the second pixel information into the corresponding narrative space during the working period.
Optionally, the second narrative content is at least one of video information, special-effect information, simulation information, guiding information, multi-line display information, narrative wave source information and narrative source information;
the narrative wave source information is the wave-source position information of the second narrative content, used to determine the spatial structure of the second narrative content.
Optionally, the multi-line display information includes an optical path information file for displaying the light source according to the pixel information.
Optionally, rearranging and converting the first pixel information according to the set narrative rule to obtain the second pixel information comprises:
rearranging the pixel points, pixel vectors and pixel colors of the first pixel information according to the set narrative rule to obtain the second pixel information, so that the two-dimensional image raster of the first pixel information is converted into a three-dimensional image.
Optionally, the method further comprises:
setting a narrative index for the first narrative element and/or the second narrative element;
connecting the narrative index, the narrative rules, the first narrative element and/or the second narrative element, the narrative space and the second narrative content through a narrative chain;
activating the first narrative element and/or the second narrative element by invoking the narrative index in the narrative chain;
and distributing the second narrative content to the corresponding first narrative element and/or second narrative element, the multi-line space controller scheduling and controlling the first narrative element and/or the second narrative element in real time to display the effects.
The application also proposes a device for the general control connection of an offline narrative, comprising:
the system comprises an acquisition module, a multi-line space controller and a storage module, wherein the acquisition module is used for acquiring first narrative contents of a multi-path input source, setting at least one narrative space according to the first narrative contents, setting a user narrative end, and controlling the user narrative end to set the first narrative contents through the multi-line space controller;
the first processing module is used for setting at least one first narrative element according to the first narrative content, and controlling the first narrative element to output corresponding first narrative content in real time through the multi-line space controller;
the second processing module is used for enabling the multi-line space controller to interact with the first narrative element to obtain second narrative content, and the second narrative content is complete narrative content under a fixed period of time;
The third processing module is used for setting at least one second narrative element according to the second narrative content, acquiring first pixel information in video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information;
and a fourth processing module, configured to control, by the multi-line space controller, the second narrative element to output the second pixel information to a corresponding narrative space during a working period according to the second narrative content.
Optionally, the second narrative content is at least one of video information, special-effect information, simulation information, guiding information, multi-line display information, narrative wave source information and narrative source information;
the narrative wave source information is the wave-source position information of the second narrative content, used to determine the spatial structure of the second narrative content.
Optionally, the multi-line display information includes an optical path information file for displaying the light source according to the pixel information.
Optionally, rearranging and converting the first pixel information according to the set narrative rule to obtain the second pixel information comprises:
rearranging the pixel points, pixel vectors and pixel colors of the first pixel information according to the set narrative rule to obtain the second pixel information, so that the two-dimensional image raster of the first pixel information is converted into a three-dimensional image.
Optionally, the apparatus further comprises:
a fifth processing module for setting a narrative index for the first narrative element and/or the second narrative element;
connecting the narrative index, the narrative rules, the first narrative element and/or the second narrative element, the narrative space and the second narrative content through a narrative chain;
activating the first narrative element and/or the second narrative element by invoking the narrative index in the narrative chain;
and distributing the second narrative content to the corresponding first narrative element and/or second narrative element, the multi-line space controller scheduling and controlling the first narrative element and/or the second narrative element in real time to display the effects.
The method for the general control connection of an offline narrative of the present invention comprises: acquiring first narrative content from multiple input sources, setting at least one narrative space according to the first narrative content, setting a user narrative end, and controlling the user narrative end to set the first narrative content through a multi-line space controller; setting at least one first narrative element according to the first narrative content, and controlling the first narrative element through the multi-line space controller to output the corresponding first narrative content in real time; the multi-line space controller interacting with the first narrative element to obtain second narrative content, the second narrative content being the complete narrative content within a fixed time period; setting at least one second narrative element according to the second narrative content, acquiring first pixel information from the video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information; and the multi-line space controller, according to the second narrative content, controlling the second narrative element to output the second pixel information into the corresponding narrative space during the working period. By taking general control of multiple input sources, the invention turns the traditional single-type planar narrative into a truly three-dimensional spatial narrative, so that a user can obtain different narrative experiences at the same moment and under the same operation.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain but not limit the embodiments of the invention. In the drawings:
FIG. 1 is a flow diagram of a method of off-line narrative general control connections of the present invention;
FIG. 2 is a schematic representation of a specific embodiment of a method of off-line narrative general control connection of the present invention;
FIG. 3 is a schematic illustration of a narrative chain of the multi-line space control system of the present invention controlling the production of a narrative space;
FIG. 4 is a schematic diagram of the generation of the optical path information file by the specification rule unit and the conversion unit of the present invention;
fig. 5 is a schematic communication diagram of the narrative element of the present invention and a multi-line space control system.
Description of the reference numerals
A101 - special effect unit;
A102 - narrative source;
A103 - simulation unit;
A104 - guiding unit;
A105 - control terminal;
A106 - issuing unit;
A107 - directional route unit;
A108 - auxiliary module;
A109 - power module;
B101 - first control system;
B102 - second control system;
C101 - first display end;
C102 - second display end;
C103 - third display end.
Detailed Description
The following describes the detailed implementation of the embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
FIG. 1 is a flow chart of the method of off-line narrative general control connection of the present invention. As shown in FIG. 1, the method comprises:
step S100 is to obtain a first narrative content of a multi-path input source, set at least one narrative space according to the first narrative content, set a user narrative end, and control the user narrative end to set the first narrative content through a multi-line space controller.
For example, a narrative end is provided with a plurality of input sources, each input source corresponding to one type of narrative content; the narrative end schedules the narrative content and outputs the opened narrative content information to a multi-line space controller (also referred to as a multi-line space control system). The narrative end acquires narrative content, distributes it, schedules the narrative accessories, transmits the narrative information to the multi-line space control system, and sends the generated narrative content branch-line information to the different lines. The narrative end assigns the different narrative spaces contained in the different types of narrative content to each narrative index, yielding the narrative index members (a special effect unit A101, a guiding unit A104, the multi-line display ends, a narrative source A102, and a simulation unit A103). The narrative source A102 concerns the position and size of the narrative wave source and the spatial distance between the wave source and the narrative source A102; the rule-making unit plans and simulates the pointing route of the narrative source A102 according to the narrative content and obtains the spatial structure data of the narrative space through the narrative wave source. The rule-making unit also determines the position, brightness, intensity and color of the light sources produced at the multi-line display ends; that is, it formulates the display rule of the multi-line display ends by fixing the light-source generation rule. The number of multi-line display ends is not limited to the number used here as an example.
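As a rough sketch of the dispatch just described, a narrative end routing each type of first narrative content to its narrative-index member, the following Python fragment illustrates one possible shape. The `NarrativeEnd` class and its routing table are assumptions for illustration only; the patent does not disclose an implementation.

```python
# Illustrative sketch (assumed design, not the patent's implementation):
# a narrative end that routes typed narrative content to index members.
class NarrativeEnd:
    def __init__(self):
        # Maps a content type to the narrative-index member handling it.
        self.index_members = {}

    def register(self, content_type, member_name):
        self.index_members[content_type] = member_name

    def dispatch(self, narrative_content):
        """Route each piece of content to its registered index member."""
        routed = {}
        for content_type, payload in narrative_content.items():
            member = self.index_members.get(content_type)
            if member is not None:
                routed.setdefault(member, []).append(payload)
        return routed

end = NarrativeEnd()
end.register("special_effect", "A101")    # special effect unit
end.register("narrative_source", "A102")  # narrative (wave) source
end.register("simulation", "A103")        # simulation unit
end.register("guiding", "A104")           # guiding unit

routed = end.dispatch({"special_effect": "fog", "guiding": "cue-1"})
# routed now groups the payloads by index member.
```

Unregistered content types are silently dropped here; a real narrative end would presumably queue or reject them.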
The multi-line display ends of the present invention include a first display end C101, a second display end C102 and a third display end C103. A multi-line display end comprises at least one display end and may comprise several. The multi-line space control system can also control several multi-line display ends within the same time period, each of which may in turn have several display ends.
Step S200: set at least one first narrative element according to the first narrative content, and control the first narrative element through the multi-line space controller to output the corresponding first narrative content in real time. For example, the narrative end may hold several pieces of narrative content; each piece can open a narrative space, and each narrative space corresponds to several narrative objects, narrative rules, narrative wave sources, optical path information files and narrative effects. Each narrative space has a narrative chain; the narrative chain binds a narrative index, and the narrative index members transmit the narrative information to be produced into the narrative space.
The narrative wave source outputs an initial narrative wave source according to the second narrative content to obtain the position information of a second narrative wave source, and the spatial structure of the narrative space is determined from the position information of the second narrative wave source. Based on the spatial distance between the initial and second narrative wave sources and the narrative source A102, the rule-making unit continuously plans and simulates the pointing route of the narrative source A102 according to the second narrative content, obtaining the spatial structure data of the narrative space through the initial and second narrative wave sources. The rule-making unit sets the initial position of the initial narrative wave source, calculates the generation position of the second narrative wave source to obtain the spatial structure of the narrative space, and fixes the generation positions of both wave sources, their frequencies, and the intensity of the second narrative wave.
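The spatial-structure computation described above is stated only in terms of wave-source positions and their distances from the narrative source A102, so the sketch below simply derives a distance and a unit direction vector per wave source. The geometry, coordinates and function name are assumptions for illustration.

```python
import math

def spatial_structure(narrative_source, wave_sources):
    """Compute distance and unit direction from the narrative source A102
    to each narrative wave source (illustrative geometry only; the patent
    does not specify the planning rule)."""
    sx, sy, sz = narrative_source
    structure = []
    for (wx, wy, wz) in wave_sources:
        dx, dy, dz = wx - sx, wy - sy, wz - sz
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        direction = (dx / dist, dy / dist, dz / dist) if dist else (0.0, 0.0, 0.0)
        structure.append({"distance": dist, "direction": direction})
    return structure

# Assumed layout: initial wave source at (3, 4, 0), second at (0, 0, 5),
# narrative source A102 at the origin.
info = spatial_structure((0, 0, 0), [(3, 4, 0), (0, 0, 5)])
```

A real system would feed such distances and directions into the rule-making unit's route planning; here they are only returned as data.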
Step S300: the multi-line space controller interacts with the first narrative element to obtain second narrative content, the second narrative content being the complete narrative content within a fixed time period. According to a specific embodiment, the second narrative content is at least one of video information, special-effect information, simulation information, guiding information, multi-line display information, narrative wave source information and narrative source A102 information; the narrative wave source information is the wave-source position information of the second narrative content, used to determine its spatial structure. The multi-line display information includes an optical path information file for displaying the light sources according to the pixel information.
Step S400: set at least one second narrative element according to the second narrative content, acquire the first pixel information from the video information in the second narrative content, and rearrange and convert the first pixel information according to the set narrative rule to obtain second pixel information. Specifically, this comprises rearranging the pixel points, pixel vectors and pixel colors of the first pixel information according to the set narrative rule to obtain the second pixel information, so that the two-dimensional image raster of the first pixel information is converted into a three-dimensional image.
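The "set narrative rule" that turns the two-dimensional raster into a three-dimensional image is not disclosed, so the sketch below uses an assumed stand-in rule (depth derived from pixel brightness) purely to illustrate the rearrangement of pixel points, vectors and colors:

```python
def rearrange_pixels(raster, rule):
    """Convert a 2D raster (rows of (r, g, b) pixels) into 3D points by
    applying a narrative rule that assigns each pixel a depth.
    The rule passed in below is an illustrative stand-in for the patent's
    unspecified 'set narrative rule'."""
    points = []
    for y, row in enumerate(raster):
        for x, (r, g, b) in enumerate(row):
            z = rule(r, g, b)  # depth assigned by the narrative rule
            points.append((x, y, z, (r, g, b)))
    return points

# Assumed rule: brighter pixels sit closer to the viewer (larger z).
brightness_rule = lambda r, g, b: (r + g + b) / 3.0

raster = [[(255, 255, 255), (0, 0, 0)]]  # one row, two pixels
points = rearrange_pixels(raster, brightness_rule)
```

Any other mapping (e.g. depth from a separate channel, or per-region rules) would fit the same interface, which is the point of keeping the rule as a parameter.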
Step S500: the multi-line space controller, according to the second narrative content, controls the second narrative element to output the second pixel information into the corresponding narrative space during the working period. The method further comprises: setting a narrative index for the first narrative element and/or the second narrative element; connecting the narrative index, the narrative rules, the first narrative element and/or the second narrative element, the narrative space and the second narrative content through a narrative chain; activating the first narrative element and/or the second narrative element by invoking the narrative index in the narrative chain; and distributing the second narrative content to the corresponding first narrative element and/or second narrative element, the multi-line space controller scheduling and controlling the first narrative element and/or the second narrative element in real time to display the effects.
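One way to picture the narrative chain of this step, an index bound to rules, elements, space and content and activated by invoking the index, is the following sketch. The whole data structure is an assumption, since the patent only names the parts that the chain links together.

```python
class NarrativeChain:
    """Illustrative narrative chain: links a narrative index to its rule,
    narrative elements, narrative space and second narrative content
    (assumed structure, not the patent's implementation)."""
    def __init__(self):
        self.entries = {}

    def bind(self, index, rule, elements, space, content):
        self.entries[index] = {
            "rule": rule, "elements": elements,
            "space": space, "content": content, "active": False,
        }

    def activate(self, index):
        """Activate the bound elements by invoking the narrative index,
        then distribute the second narrative content to each element."""
        entry = self.entries[index]
        entry["active"] = True
        return {element: entry["content"] for element in entry["elements"]}

chain = NarrativeChain()
chain.bind("scene-1", rule="rule-A", elements=["A101", "A103"],
           space="space-1", content="second narrative content")
distribution = chain.activate("scene-1")
```

The multi-line space controller would then schedule each element in `distribution` in real time; that scheduling loop is omitted here.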
Specifically, the multi-line space control system covers multi-line loading of special effects (spatial, directional, random and inserted special effects), multi-line loading of audio, multi-line loading of spatial sound-field effects (controlling sound-wave avoidance routes, sound-wave emission routes and sound-source generation positions), and the multi-line release and realization of special effects. In the invention, the multi-line space control system also distributes, synchronously and in real time, the narrative content to be output at the narrative end to the corresponding lines: the special effects of the narrative content to the special effect unit A101, the guidance of the narrative content to the guiding unit A104, the multi-line display of the narrative content to the multi-line display ends, the sound-wave route of the narrative content to the narrative source A102, and the produced narrative content to the simulation unit A103.
FIG. 2 is a schematic diagram of a specific embodiment of the method of general control connection of an offline narrative. As shown in FIG. 2, the method mainly comprises three flow modules. The first flow module: the user obtains the data information of the narrative end and verifies whether the narrative end is bound to a connected multi-line space control system; if so, the required narrative chain is started. The data parameters are transmitted to the portable output terminal and the control terminal A105. The portable output terminal and the control terminal A105 can control the multi-line space control system individually or jointly, and both need to schedule the optical path information file to control the multi-line display ends. The control terminal A105 is the operation module of the multi-line space control system; the scheduled optical path information file is a PSAF file, which is a file generated according to the rules of the present invention. PSAF files (and other file types) may be generated according to different rule formulations.
According to the optical path information file, the input video and audio are processed by the rule-making unit and the conversion unit to obtain several kinds of separated information (step S104), including video information, audio information and video-derived information. Video-derived information refers to effects derived from the picture, for example releasing fragrance or generating fog when cold wind blows through the video scene. By identifying the video-derived information, the derived narrative elements in the narrative space are controlled and the derived content is output into the narrative space, so that the narrative objects experience it differently. The processing and identification of the video-derived information uses depth-recognition techniques and database comparison analysis. Based on the initial narrative content selected by the user, the multi-line space control system schedules the input video and audio for the second narrative content within a time period. The video and audio of the second narrative content carry all the audio-visual information over that period; the narrative source A102 and the multi-line display ends of the corresponding narrative elements (the first narrative element and/or the second narrative element) are controlled by passing the video, audio and video-derived information through the multi-line space control system along the time-period timeline. The audio and video information is transmitted to the directional route unit A107. The rule-making unit also processes and acquires the audio and video information in real time along the time-period timeline. Step S103 captures video frames from the video to obtain pixel information.
The pixels are then arranged according to the pixel information and the set rule: the conversion unit converts the pixel information into pixels scheduled by the set rule, yielding the set pixel information. The audio information is transmitted to the directional route unit A107, which specifies the transmission route of the narrative wave source and sends the route information to the narrative source A102. Step S105 converts the set pixel information into an optical path information file that the identification unit can recognize; the multi-line display end, i.e. the light-source generator, identifies the optical path information file to produce different light sources. According to the optical path information file, the control terminal A105 controls the spatial narrative content at the multi-line display ends, the narrative capability of the simulation unit A103, the pointing of the narrative wave source of the narrative source A102, the narrative special effects of the special effect unit A101, and the narrative objects of the guiding unit A104.
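The PSAF optical path file format itself is not disclosed, so the following sketch serializes set pixel information into a hypothetical line-oriented record (one light source per line) merely to illustrate the conversion of step S105; the header string and field layout are invented for the example.

```python
def write_light_path_file(pixels):
    """Serialize set pixel information into a light-path record that a
    display end could identify. The real PSAF format is not disclosed in
    the patent; this line-oriented layout ('x y z r g b' per light source,
    plus an assumed header) is purely illustrative."""
    lines = ["PSAF-SKETCH v0"]  # assumed header, not the real format
    for (x, y, z, (r, g, b)) in pixels:
        lines.append(f"{x} {y} {z} {r} {g} {b}")
    return "\n".join(lines)

# One 3D light source produced from rearranged pixel information.
record = write_light_path_file([(0, 0, 1.5, (255, 0, 0))])
```

A display end would parse each record back into a light-source position and color; the identification unit's recognition logic is outside this sketch.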
The timeline of the present application operates in the narrative space to provide the second narrative content to the narrative objects. The method further comprises: determining the input narrative wave source information according to the communication information, where the narrative wave source information refers to the position information of the second narrative wave source obtained by the initial narrative wave source output in the second narrative content; the spatial structure of the narrative space is determined from the position information of the second narrative wave source. Based on the spatial distance between the initial and second narrative wave sources and the narrative source A102, the rule-making unit continuously plans and simulates the pointing route of the narrative source A102 according to the second narrative content, obtaining the spatial structure data of the narrative space through the initial and second narrative wave sources; the rule-making unit sets the initial position of the initial narrative wave source and calculates the generation position of the second narrative wave source to obtain the spatial structure of the narrative space. For example, multi-line information elements are obtained according to the narrative rules; they comprise several general control scheduling files (optical path information files), simulation information files, special-effect information files, guiding information files, narrative source A102 information files and the like. The multi-line space control system controls the narrative elements according to these multi-line information files, and the narrative space displays the second narrative content of the narrative elements along the time-period timeline.
In the invention, each unit is connected and scheduled in a general control connection mode, and temporal consistency is then maintained in a specific timing mode. The benefit of maintaining temporal consistency is that each unit can be used flexibly, invalid communication between units is reduced, and the forward circulation between units is stabilized. Each unit has an independent control system and an independent mode of operation.
A narrative chain with story connection is started according to a user demand, where the narrative chain refers to the unit master control system performing master control connection on randomly allocated units at specific times according to a specific timing mode and realizing the work of those units. In step S106, a narrative content is selected for the user demander from the plurality of narrative contents at the narrative end. The narrative space of the adapted narrative content is opened after verification, and the narrative end opens the narrative space by determining the narrative content.
The user binds the guiding unit A104 and the special effect unit A101 through the narrative end. The data information of the guiding unit A104 is identified to obtain the serial numbers of the guiding unit A104 and the special effect unit A101. Each narrative space is provided with a narrative chain; that is, a narrative chain with story connection is started according to a user demand, where the narrative chain refers to the unit master control system performing master control connection on randomly allocated units at specific times according to a specific timing mode and realizing the work of those units. The narrative chain binds the narrative index, the narrative index element transmits the narrative information to be produced to the narrative space, and the multi-line information elements are obtained according to the narrative rules. The multi-line information element comprises a master control scheduling file and a plurality of optical path information files, simulation information files, special effect information files, guidance information files, narrative source information files, and the like. The multi-line space control system controls the narrative index elements according to the multi-line information files, and the narrative space displays the content of the narrative index elements. The multi-line control system identifies the master control scheduling file, which embodies the rules formulated by the rule making unit; the rules can be flexibly changed, and the multi-line space control system identifies the rules and then applies them to schedule the generated files, so that the second narrative content is displayed and output in the narrative space.
The master control scheduling file communicates with the narrative elements through the multi-line space control system and determines the working modes of the narrative elements on the time period timeline; it is therefore the file used by the multi-line space control system to schedule and control the performance of the narrative elements according to the time period timeline. Each narrative element reads its corresponding optical path information files, simulation information files, special effect information files, guidance information files, narrative source information files, and the like according to the time period timeline.
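The period-timeline dispatch described above can be sketched as follows. This is a minimal illustration only: the mapping of time periods to per-element information files and all file names are assumptions, not disclosed by the patent.

```python
# Hypothetical sketch of the master scheduling file's role: it maps each
# time period on the timeline to the information files that each narrative
# element should read during that period. All names are illustrative.

def build_schedule(periods):
    """periods: list of (start, end, {element: file}) tuples, any order."""
    return sorted(periods, key=lambda p: p[0])

def files_at(schedule, t):
    """Return the element->file mapping active at time t, or {} if none."""
    for start, end, mapping in schedule:
        if start <= t < end:
            return mapping
    return {}

schedule = build_schedule([
    (0, 10, {"effects": "fx_01.dat", "display": "path_01.dat"}),
    (10, 20, {"effects": "fx_02.dat", "guide": "guide_01.dat"}),
])
```

A control loop would then call `files_at` at the current timeline position and hand each element its file, which matches the description that each element "reads corresponding files according to the time period timeline".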
The production chain of a narrative space is called a narrative chain; it is connected in order to bind the narrative index, index elements, narrative rules, multi-line information elements, and general control narrative elements. As shown in connection with figure 3, the narrative space forms a production chain, which is called a narrative chain.
The narrative index is bound at the narrative end, said narrative index being assigned to the indexes of the different narrative spaces. The narrative end needs to bind the narrative index elements first when opening the narrative space. The guiding unit A104 and the special effect unit A101 need to be bound by the user; they are part of the narrative elements, and the other narrative elements are connected automatically. The connection modes are not identical, but the multi-line space control system uniformly schedules and controls the narrative elements. While the present invention is illustrated with a few narrative elements, many narrative elements are compatible with the present invention, and the number of narrative elements may be increased or decreased as desired. First, the special effect unit A101 and the guiding unit A104 are bound at the narrative end, where the special effect unit A101 refers to the multi-line space control system controlling the special effect unit A101 to apply a specified special effect to a narrative object in the narrative space. The guiding unit A104 acts on the narrative object at particular moments in the narrative space. The multi-line space control system schedules and controls the guiding unit A104 to track the narrative object according to the time period timeline; the multi-line space control system schedules and controls the special effect unit A101 based on the special effect information of the second narrative content on the time period timeline.
As shown in fig. 3, the multi-line spatial control system in the present invention refers to multi-line special effect loading (special effects include spatial special effects, directional guiding special effects, random special effects, and inserting special effects), multi-line audio loading, multi-line loading spatial sound field special effects (control of sound wave avoidance line, control of sound wave emission line, control of generation position of second narrative wave source), multi-line special effect release (through release unit a 106), and special effect implementation. The multi-line spatial control system also refers to real-time synchronous distribution of the narrative content to be output in the narrative end to corresponding lines, special effect narrative content to a special effect unit A101, guiding narrative content to a guiding unit A104, multi-line display of the narrative content to a multi-line display end, second narrative wave source route narrative content to a narrative source A102 and manufactured narrative content to a simulation unit A103.
The system comprises a multi-line space control system, where the multi-line space control system refers to multi-line special effect loading (special effects comprise spatial special effects, directional special effects, random special effects, and inserting special effects), multi-line audio loading, multi-line loading of spatial sound field special effects (control of the initial narrative wave source avoidance route, control of the initial narrative wave source emission route, and control of the generation position of the second narrative wave source), multi-line issuing of special effects, and implementation of special effects. The multi-line space control system transmits the narrative space data parameters of the selected narrative content to the portable output end and the control terminal A105. The transmitted data parameters include narrative object parameters, special effect parameters, guidance parameters, transmission parameters, quasi-exchange parameters, pointing parameters, production parameters, simulation parameters, multi-line rule parameters, display parameters, guidance control parameters, and the like.
Specifically, the special effect unit a101 is connected with the narrative source a102, and the narrative source a102 is connected with the power module a109, the first control system B101 and the release unit a106; the simulation unit A103 is connected with the first control system B101 and the control terminal A105; the guiding unit A104 is connected with the first control system B101; the first control system B101 is connected with the narrative source A102, the simulation unit A103, the guiding unit A104 and the control terminal A105; the control terminal A105 is connected with the first control system B101, the simulation unit A103, the release unit A106 and the second control system B102; the issuing unit A106 is connected with the control terminal A105, the narrative source A102, the pointing route unit A107 and the auxiliary module A108; the directing route unit a107 is connected with the issuing unit a106; the auxiliary module A108 is connected with the release unit A106; the power module A109 is connected with the first display end C101, the second display end C102, the third display end C103 and the narrative source A102; the second control system B102 is connected to the first display end C101, the second display end C102, and the third display end C103. The second control system B102 is configured to convert and generate the predetermined pixel information, and the multi-line display end operates according to the predetermined pixel information converted by the second control system B102. The multi-line display terminal comprises a first display terminal C101, a second display terminal C102 and a third display terminal C103, each group comprises a plurality of display modules, and each display module comprises a plurality of small display units. 
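The wiring enumerated in the preceding paragraph can be captured as an adjacency map for inspection. The edge list below transcribes the stated connections (symmetrized, since the patent lists some links from only one side); the data structure itself is an illustration, not part of the disclosed system.

```python
# The unit wiring described above, expressed as undirected edges and
# symmetrized into an adjacency map. Labels follow the patent
# (A101..A109, B101/B102, C101..C103).
edges = [
    ("A101", "A102"),
    ("A102", "A109"), ("A102", "B101"), ("A102", "A106"),
    ("A103", "B101"), ("A103", "A105"),
    ("A104", "B101"),
    ("B101", "A105"),
    ("A105", "A106"), ("A105", "B102"),
    ("A106", "A107"), ("A106", "A108"),
    ("A109", "C101"), ("A109", "C102"), ("A109", "C103"),
    ("B102", "C101"), ("B102", "C102"), ("B102", "C103"),
]

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)
```

Querying `adj` confirms, for example, that the directing route unit A107 connects only to the issuing unit A106, and that the second control system B102 drives all three display ends.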
The first display end C101, the second display end C102, and the third display end C103 are arranged according to a certain rule, where the first display end C101, the second display end C102, and the third display end C103 can identify an optical path information file (a format file of feature number transmission information) converted by the second control system B102, and display contents of the optical path information file converted into display effects in real time.
The portable output end and the control terminal A105 can control the multi-line space control system independently or jointly, and the control terminal A105 and the portable terminal need to dispatch the optical path information file to control the multi-line display end. In the invention, the portable output end refers to a narrative end that is easy to conceal and convenient; it differs from the narrative end in that the user demander can control the multi-line space general control system from concealment within the narrative space. When the narrative object enters the narrative space, it does not see the changes the user makes through the narrative space control; the narrative object only sees the narrative space change as the narrative content and the narrative object change it. In the present invention, the control terminal A105 transmits the narrative space information in the narrative content to the narrative index elements. The control terminal A105 schedules the narrative index elements to operate according to the multi-line information elements. The control terminal A105 controls the narrative source A102 to emit sound waves; it controls the special effect unit A101 to issue special effects; it controls the guiding unit A104 to locate the narrative object; it controls the multi-line display end to display the light source elements sequentially or randomly; and it controls the simulation unit A103 to perform real-time deduction of the three-dimensional narrative content. The control terminal A105 is configured to identify the received multi-line information file and transform it into the narrative space, and the control terminal A105 has a connection unit therein that connects together the narrative index elements operating in the narrative space.
As stated above, the portable output end and the control terminal A105 can control the multi-line space control system independently or jointly. The control terminal A105 is an operation module of the multi-line space control system and is part of that system. In addition, the control terminal A105 controls the position at which the narrative source A102 emits the second narrative wave source, and controls the super visual simulation module to perform real-time deduction of the three-dimensional narrative content.
The control terminal A105 is configured to identify a received multi-line information file and convert the multi-line information file into a narrative space. The simulation unit A103 allows a user to add a plurality of narrative elements; the added narrative elements are not limited to the kinds described in the present invention, and different kinds of narrative elements can be added. After a narrative element is added, the simulation unit A103 identifies the newly added narrative element and acquires the narrative content information it carries. The simulation unit A103 then communicates with the multi-line space control system, and the multi-line space control system connects at least one of the newly added narrative elements in the simulation unit A103 and schedules and controls it to perform the narrative work of the corresponding object; existing narrative elements may likewise be reduced. The control terminal A105 is provided with a connection unit that connects the narrative elements together; the connection unit is also a working module of the multi-line space control system, and the connection unit and the control terminal A105 are both key modules assisting the multi-line space control system in completing the overall scheduling control of the narrative elements working in the narrative space.
The portable output terminal and the control terminal a105 may control the operation of the multi-line space control system separately or may control the operation of the multi-line space control system in combination. The control terminal a105 and portable terminal need to transmit characteristic instruction data to a multi-line space control system, which needs to acquire multi-line information elements in the narrative content for control. The multi-line information element comprises a plurality of light path information files (such as format files of feature number transmission information), simulation information files, special effect information files, instruction information files, narrative source information files and the like.
To obtain the optical path information file, the rule making unit and the conversion unit are required to process the input video and audio to obtain video information and audio information, and the audio information is transmitted to the directional route unit A107; the video picture is captured and its pixel information is acquired. The rule making unit formulates the narrative content that the simulation unit A103 needs to display according to the narrative rules. The video information is directly displayed in the super visual simulation module. The image pixel information in the video is converted and regularly arranged to generate the light source information required by the multi-line display end.
In the invention, the rule making unit obtains the input video information, captures the video picture, obtains the pixels in the picture, and rearranges the pixels according to a set rule. The video is composed of rapid continuous images, each frame representing a time point; the rule making unit therefore identifies the time code of the video image frame, obtains the frame image, and identifies and grabs each pixel of the frame image to obtain the pixel information. The pixels are arranged according to the pixel information and the set rule, and the conversion unit converts the pixel information into pixels arranged by the set rule to obtain the predetermined pixel information.
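The frame-grab and rule-based rearrangement just described can be sketched minimally. The rearrangement rule used here (reversing each row) is purely illustrative; the patent leaves the actual rule to the rule making unit.

```python
# Sketch of the described flow: a frame is identified by its timecode,
# its pixels are captured, and the conversion step reorders them by a
# configurable rule. The rule shown (reverse each row) is an assumption.

def grab_frame(video, timecode):
    """video: dict mapping timecode -> 2-D list of pixel values."""
    return video[timecode]

def rearrange(frame, rule):
    """Apply a per-row rearrangement rule, leaving the source untouched."""
    return [rule(row) for row in frame]

video = {0: [[1, 2, 3], [4, 5, 6]]}   # one 2x3 frame at timecode 0
frame = grab_frame(video, 0)
fixed = rearrange(frame, rule=lambda row: row[::-1])
```

In the patent's terms, `fixed` plays the role of the predetermined pixel information produced by the conversion unit from the grabbed frame.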
The input video and audio are uploaded according to the user's needs, and in the invention both can be converted into the narrative elements required by the narrative space through specific processing means. The video picture is captured and its pixel information is acquired. The conversion unit converts, transforms, and rearranges the pixel information: the pixel points, pixel vectors, and pixel colors of the pixel information data are transformed in a specific set mode, and the two-dimensional image is rasterized into a three-dimensional image, and the like.
As shown in fig. 4, the conversion mode of the conversion unit is to store the pixel information in the storage module in sequence, rearrange the pixel information according to the stored pixel information, and arrange the pixel information so as to convert each pixel value into each light source body of the first display end C101, the second display end C102 and the third display end C103, so that the first display end C101, the second display end C102 and the third display end C103 can emit different light sources according to the pixel information. However, since the arrangement rule of the first display terminal C101, the second display terminal C102, and the third display terminal C103 is different from the arrangement rule of the pixel information of the acquired image, the rule making unit needs to rearrange the pixel information according to the rule made by the present invention.
The rule making unit and the conversion unit of the present invention generate the optical path information file as follows: the frame width pixels and frame height pixels of the image/video are acquired, the resolution of the frame width and frame height (the pixel information of the frame width pixels and frame height pixels) is acquired, and the pixel information is divided into three groups; obtaining the image pixel information in this way is the basic rule of image pixel arrangement.
The rule making unit formulates, according to the narrative rules, the narrative content in the narrative space that the simulation unit A103 is required to display; the display of the simulation unit A103 enables the user to intuitively perceive the three-dimensional narrative content, and the three-dimensional display effect of the narrative content is the three-dimensional narrative content that the rule making unit formulates for the simulation unit A103 to display. The rule making unit formulates the pointing route of the narrative source A102: the narrative source A102 outputs an initial narrative wave source, and the initial wave source reaches the terminal position of the pointing route and rebounds to produce the second narrative wave source. The narrative source A102 changes the size of the initial narrative wave source so that the size of the second narrative wave source changes accordingly. According to the distances in space between the narrative source A102 and the initial and second narrative wave sources, the rule making unit continuously plans and simulates the pointing route of the narrative source A102 according to the second narrative content, and the spatial structure data of the narrative space is obtained through the initial narrative wave source and the second narrative wave source. The rule making unit sets the initial position of the initial narrative wave source and calculates the generation position of the second narrative wave source to obtain the spatial structure of the narrative space, and formulates the generation positions of the initial and second narrative wave sources, their frequencies, and the intensity of the second narrative wave source.
The simulation unit A103 runs and displays the narrative content of the simulation information file when part of the narrative source A102 is generated. The simulation unit A103 is synchronized with part of the narrative source A102: when that part of the narrative source A102 generates a narrative wave source and transmits it to the narrative object, the simulation unit A103 starts working, so that the narrative object receives the narrative wave in synchronization with the displayed three-dimensional picture. The simulation unit A103 displays the content produced by the production unit. The production unit automatically generates a super visual display picture and a two-dimensional display picture according to the selected narrative content. The simulation unit A103 may be any narrative element added for any narrative object, a narrative element being a work unit or module that produces narrative content in the narrative space. In the present invention, the production unit may produce narrative content in any narrative space, and the simulation unit A103 produces the narrative content in the narrative space from the content of the production unit. For example, various types of narrative elements may be added, including units/modules working in the space that can change the space context and content. The simulation unit A103 can simulate the work content of the newly added narrative elements and execute that work content.
The pixels are arranged according to the pixel information and a set rule, and the conversion unit converts the pixel information into pixels arranged by the set rule to obtain the predetermined pixel information. As shown in fig. 4, an image of frame height pixels × frame width pixels, whose values may be random numbers, can be converted into an optical path information file through the specific arrangement of the present invention. The audio information is transmitted to the directional route unit A107, the transmission route of the narrative wave source is specified, and the route information is transmitted to the narrative source A102. The conversion unit of the present invention converts the obtained pixel information into the predetermined pixel information. The conversion unit composes the pixels according to a narrative rule based on the pixel information, where the narrative rule is formulated by the rule making unit in identifying the narrative content. The narrative rule is a special pixel typesetting mode; that is, the pixel rule arranges the parameters of pixel display such as form, intensity, and color, and a plurality of pixels are arranged and converted according to the narrative rule. Obtaining the predetermined pixel information is the core work of the conversion unit. In order that the multi-line display end can recognize and read the predetermined pixel information, so that it can be narratively displayed on the multi-line display end, the predetermined pixel information is converted into an optical path information file, a file of special transmission information that can be recognized by the multi-line display end. The optical path information file has uniqueness, nonlinearity, and real-time properties. When the conversion unit converts the predetermined pixel information into an optical path information file, the multi-line display end displays it in real time.
As shown in fig. 4, the acquired pixel information data is uniformly divided into three groups: the frame width pixels and frame height pixels of the image/video frame are acquired (the video is composed of successive images, resulting in images of frame width pixels × frame height pixels), and the frame width and frame height pixels of the image are divided into three corresponding groups; obtaining the image pixel information in this way is the basic rule of image pixel arrangement. In the present invention, this basic rule of pixel information arrangement is changed; that is, the basic rule of the pixel information arrangement of the first display end C101 and the second display end C102 is changed by a specific rule. After the first display end C101 and the second display end C102 are arranged by the specific rule, the third display end C103 is arranged following them, and a second specific pixel arrangement rule is then applied to the third display end C103. The pixel information of the first display end C101, the second display end C102, and the third display end C103 is rearranged to obtain the predetermined pixel information.
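The three-group division can be illustrated with a small sketch. The split criterion used here (equal thirds by row) is an assumption for illustration; the patent fixes only that the pixel information is divided into three groups for the three display ends.

```python
# Illustrative split of a frame's pixel rows into three groups, one per
# display end (C101, C102, C103). Equal thirds by row is an assumed rule.

def split_three(rows):
    """Divide a list of pixel rows into three consecutive groups."""
    n = len(rows)
    third = n // 3
    return rows[:third], rows[third:2 * third], rows[2 * third:]

rows = [[p for p in range(4)] for _ in range(9)]  # 9 rows x 4 pixels
c101, c102, c103 = split_three(rows)
```

Each group would then be rearranged by its display end's own rule before being folded into the predetermined pixel information.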
The audio information is transmitted to the directional route unit A107, which specifies the transmission route of the initial narrative wave source, and the initial wave source route information is transmitted to the narrative source A102. The narrative source A102 produces the second narrative wave source: the narrative source A102 outputs an initial narrative wave source, which reaches the terminal position of the pointing route and rebounds, resulting in the second narrative wave source. The narrative source A102 changes the size of the initial narrative wave source so that the size of the second narrative wave source changes accordingly. According to the distances in space between the narrative source A102 and the initial and second narrative wave sources, the rule making unit continuously plans and simulates the pointing route of the narrative source A102 according to the second narrative content, and the spatial structure data of the narrative space is obtained through the initial narrative wave source and the second narrative wave source. The rule making unit sets the initial position of the initial narrative wave source and calculates the generation position of the second narrative wave source to obtain the spatial structure of the narrative space, and formulates the generation positions of the initial and second narrative wave sources, their frequencies, and the intensity of the second narrative wave source. The spatial position information of the narrative wave sources and the spatial structure information of the narrative space are obtained by the AI module in the narrative source A102 calculating the rules of the initial and second narrative wave sources.
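The rebound geometry described above can be sketched as a simple time-of-flight calculation. The patent does not specify the medium or the mathematics, so the propagation speed, the unit direction vector, and the round-trip model below are all assumptions for illustration only.

```python
# Hedged sketch of locating the second (rebound) wave source: the initial
# wave travels the pointing route, rebounds at its terminal, and the
# round-trip time gives the distance from the narrative source A102.

SPEED = 343.0  # m/s, speed of sound in air (assumed medium)

def rebound_distance(round_trip_s):
    """Distance to the rebound point from the round-trip travel time."""
    return SPEED * round_trip_s / 2.0

def rebound_point(source, direction, distance):
    """Terminal position of the pointing route; `direction` is assumed
    to be a unit vector from the source."""
    return tuple(s + d * distance for s, d in zip(source, direction))

d = rebound_distance(0.02)  # a 20 ms round trip
p = rebound_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), d)
```

Repeating this for many pointing routes yields a set of rebound points, which is one plausible reading of how the spatial structure data of the narrative space is accumulated from the initial and second wave sources.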
A narrative wave source library file of the second narrative wave source according to the time period timeline is generated, and the second narrative wave source information in the library file is started according to the time period timeline of the narrative rules, so that the second narrative wave source is produced at different positions in the narrative space along the time period timeline.
The predetermined pixel information is converted into an optical path information file, where the optical path information file can be identified by the identification unit, and the multi-line display end, which is also the light source production source, identifies the optical path information file to generate different light sources. In the invention, the multi-line display end identifies the optical path information file through the identification unit to obtain the light source production information; the light source production source is the multi-line display end. The multi-line display end can produce a plurality of different light sources (differing in number, color, intensity, brightness, saturation, range, and the like). The multi-line display end starts working according to the optical path information file in the multi-line information element. One working module in the multi-line display end is called the identification unit; the identification unit identifies the optical path information file generated by the conversion unit, directly recognizing the file generated from the predetermined pixel information, and the multi-line display end obtains the predetermined pixel information with its specific regular arrangement from the optical path information file. The predetermined pixel information is obtained by the joint operation of the rule making unit and the conversion unit, and is then converted into the optical path information file.
The optical path information file is a file that can be directly recognized by the recognition unit of the multi-line display unit. The rules may be formulated differently, the resulting effects may vary, and the resulting files may be different, not limited to generating PSAF files. The multi-line space control system has the function of directly identifying a plurality of files in real time.
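The patent names a PSAF file only as one possible output and does not disclose its layout, so the tiny line-oriented format below is purely a hypothetical stand-in for an optical path information file, meant to show the writer/identifier round trip in miniature.

```python
# Hypothetical stand-in for an "optical path information file": the real
# PSAF layout is not disclosed, so this sketch invents a minimal
# line-oriented encoding purely for illustration.

def write_light_path(entries):
    """entries: list of (display_end, pixel_index, value) tuples."""
    lines = ["PSAF-SKETCH 1"]
    lines += [f"{end} {idx} {val}" for end, idx, val in entries]
    return "\n".join(lines)

def read_light_path(text):
    """The 'identification unit' side: parse the sketch format back."""
    header, *rows = text.splitlines()
    assert header == "PSAF-SKETCH 1", "unrecognized sketch format"
    out = []
    for row in rows:
        end, idx, val = row.split()
        out.append((end, int(idx), int(val)))
    return out

data = [("C101", 0, 255), ("C102", 1, 128)]
round_tripped = read_light_path(write_light_path(data))
```

In the patent's terms, `write_light_path` plays the conversion unit and `read_light_path` the identification unit at the multi-line display end; any real format would of course differ.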
According to the optical path information file, the control terminal A105 controls the spatial content of the narrative at the multi-line display end, the narrative capability of the simulation unit A103, the direction of the second narrative wave source of the narrative source A102, the narrative special effect of the special effect unit A101, and the narrative object of the guiding unit A104. The control terminal A105 controls the multi-line display end to identify the optical path information file so that the multi-line display end can display the spatial content of the narrative. The simulation unit A103 is controlled to simulate the occurrence of narrative events so that the narrative object sees the narrative content in the narrative space more intuitively. The connection unit connects the special effect unit A101, the guiding unit A104, the multi-line display end, the simulation unit A103, the narrative source A102, and the super visual simulation module to realize the deduction of the narrative space. These are index elements, wherein the simulation unit A103 represents a plurality of newly added narrative elements of different or the same types. In the present invention, the connection unit refers to an operation module in the control terminal A105; the connection unit connects together the narrative elements in the narrative space, and connects the narrative elements requiring operation and those not requiring operation according to the narrative content, so that the control terminal A105 can select the narrative elements requiring operation and the index elements not requiring operation.
Because the narrative content may contain a plurality of parts, the working content of the narrative elements may need to change several times; the connection unit then sequentially distributes the working mechanisms of the narrative elements across the entire narrative content, so that the control terminal A105 can make the narrative elements work in order. The connection unit is linked to the multi-line space control system, whose control terminal A105 can exercise overall control over the special effect unit A101, the guiding unit A104, the multi-line display end, the simulation unit A103, the narrative source A102 and the super-visual simulation module through the connection unit. The multi-line space control system may be connected to any device as needed and is not limited to the example work units presented herein. The system automatically opens the narrative space corresponding to the narrative content selected by the narrative object, and then automatically controls the narrative elements in that space so that the selected narrative content is produced in the narrative space in real time.
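The sequential-distribution role of the connection unit can be sketched as a small registry that links named narrative elements and dispatches work to them in the order the narrative content demands. The `NarrativeElement` interface, the element names, and the skip-if-unconnected behavior are assumptions for illustration; the source does not define a concrete API.

```python
# Minimal sketch of the "connection unit" idea, under assumed interfaces:
# elements register by name, and a schedule drives them in narrative order.

class NarrativeElement:
    def __init__(self, name):
        self.name = name
        self.log = []  # record of work steps actually performed

    def work(self, step):
        self.log.append(step)

class ConnectionUnit:
    def __init__(self):
        self.elements = {}

    def connect(self, element):
        self.elements[element.name] = element

    def dispatch(self, schedule):
        """schedule: ordered (element_name, step) pairs; returns what ran."""
        executed = []
        for name, step in schedule:
            element = self.elements.get(name)
            if element is not None:  # elements not required to work are skipped
                element.work(step)
                executed.append((name, step))
        return executed
```

Dispatching `[("display", "open"), ("effects", "flash")]` drives the elements strictly in that order, mirroring the sequential distribution of working mechanisms described above.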
As shown in fig. 5, the multi-line space control system controls a plurality of narrative elements, such as the special effect unit A101, the guiding unit A104, the multi-line display end, the simulation unit A103, the narrative source A102 and a three-dimensional display end, through defined conversion rules and control rules, so that the narrative elements can perform the required narrative content in the narrative space. When a narrative object issues commands and/or instructions, the narrative elements operate in real time, producing a narrative space that carries the narrative content.
The issuing unit A106 issues control instructions that cause the narrative space to pace the narrative; it issues and executes the narrative content in the narrative elements and performs the final deduction in the narrative space.
In another aspect, the invention provides an apparatus for controlling connection of an offline narrative, the apparatus comprising: an acquisition module for respectively acquiring initial narrative contents of multiple input sources, the initial narrative contents comprising video information, audio information and multi-line display information; a first processing module for setting a plurality of three-dimensional narrative spaces and sequentially arranging the initial narrative content; a second processing module for acquiring first pixel information in the video information, rearranging the first pixel information according to a set narrative rule to obtain second pixel information, and converting the second pixel information into second narrative content; a third processing module for providing narrative elements into which said second narrative content is distributed; and a fourth processing module for deducing the initial narrative content in at least one three-dimensional narrative space by controlling the operation of said narrative elements. The device connects and schedules each unit through a master control connection and keeps the units consistent in time through a specific timing mode. Controlling the consistency of time reduces invalid communication between the units and stabilizes the forward circulation between them. The invention opens a narrative chain with story connection according to user demand, where the narrative chain refers to the unit master control system that master-controls the randomly allocated units at specific times according to a specific timing mode and makes those units work.
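The "consistency of time" idea can be sketched as a master controller whose units all act on the same tick counter, so no unit drifts relative to the others. The tick granularity, the `Unit` interface, and the class names are assumptions for illustration; the patent specifies only that units are scheduled at specific times under a specific timing mode.

```python
# Hedged sketch of master-control timing: every unit activates against one
# shared tick counter, under an assumed Unit/MasterController interface.

class Unit:
    def __init__(self, name):
        self.name = name
        self.activations = []  # ticks at which this unit was driven

    def activate(self, tick):
        self.activations.append(tick)

class MasterController:
    def __init__(self):
        self.schedule = {}  # tick -> list of units assigned to that tick

    def assign(self, unit, tick):
        self.schedule.setdefault(tick, []).append(unit)

    def run(self, ticks):
        # One shared clock drives all units, keeping them consistent in time.
        for tick in range(ticks):
            for unit in self.schedule.get(tick, []):
                unit.activate(tick)
```

Because all units read the same `tick`, two units assigned to the same instant activate together, which is the stabilizing property the paragraph attributes to the specific timing mode.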
The method for the general control connection of the offline narrative comprises the following steps: acquiring first narrative content of a multi-path input source, setting at least one narrative space according to the first narrative content, setting a user narrative end, and controlling the user narrative end to set the first narrative content through a multi-line space controller; setting at least one first narrative element according to the first narrative content, and controlling the first narrative element to output the corresponding first narrative content in real time through the multi-line space controller; the multi-line space controller interacting with the first narrative element to obtain second narrative content, wherein the second narrative content is the complete narrative content within a fixed period of time; setting at least one second narrative element according to the second narrative content, acquiring first pixel information in the video information of the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information; and the multi-line space controller controlling the second narrative element to output the second pixel information into the corresponding narrative space during the working hours according to the second narrative content. According to the invention, by controlling multiple input sources, the traditional single-class plane narrative is changed into a real three-dimensional spatial narrative, so that a user can obtain different narrative experiences at the same moment and under the same operation.
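The pixel-rearrangement step can be sketched as re-indexing a 2D raster of RGB values (first pixel information) into records that carry a depth coordinate (second pixel information). The concrete rule used here, deriving depth from pixel brightness, is an invented stand-in; the patent only requires that some set narrative rule governs the rearrangement.

```python
# Sketch of the claimed 2D-to-3D pixel rearrangement, under an assumed rule
# (depth z derived from mean RGB brightness); the real rule is unspecified.

def rearrange_pixels(raster):
    """raster: dict mapping (x, y) -> (r, g, b).

    Returns second pixel information: dict mapping (x, y, z) -> (r, g, b),
    where z is the assumed brightness-derived depth coordinate.
    """
    second = {}
    for (x, y), (r, g, b) in raster.items():
        z = (r + g + b) // 3  # assumed narrative rule: brightness sets depth
        second[(x, y, z)] = (r, g, b)
    return second
```

Any deterministic mapping from 2D pixel data to 3D coordinates could be substituted for the brightness rule; the structural point is that the raster gains a third index while keeping its color payload.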
The optional implementations of the embodiments of the present invention have been described above in detail with reference to the accompanying drawings; however, the embodiments of the present invention are not limited to the specific details of the foregoing implementations. Various simple modifications may be made to the technical solution within the scope of the technical concept of the embodiments of the present invention, and all such simple modifications fall within the protection scope of the embodiments of the present invention.
In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, various possible combinations of embodiments of the present invention are not described in detail.
Those skilled in the art will appreciate that all or part of the steps of the methods in the embodiments described above may be implemented by a program stored in a storage medium, the program including instructions for causing a single-chip microcomputer, chip or processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In addition, the various embodiments of the present invention may be combined in any manner, and as long as such a combination does not depart from the concept of the embodiments of the present invention, it should likewise be regarded as part of the disclosure of the embodiments of the present invention.
Claims (10)
1. A method of controlling connections in an off-line narrative, the method comprising:
acquiring first narrative content of a multi-path input source, setting at least one narrative space according to the first narrative content, setting a user narrative end, and controlling the user narrative end to set the first narrative content through a multi-line space controller;
setting at least one first narrative element according to the first narrative content, and controlling the first narrative element to output corresponding first narrative content in real time through the multi-line space controller;
the multi-line space controller interacts with the first narrative element to obtain second narrative content, wherein the second narrative content is the complete narrative content within a fixed period of time;
setting at least one second narrative element according to the second narrative content, acquiring first pixel information in video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information;
The multi-line space controller controls the second narrative element to output the second pixel information into the corresponding narrative space during the working hours according to the second narrative content.
2. The method of claim 1, characterized in that:
the second narrative content is at least one of video information, special effect information, simulation information, instruction information, multi-line display information, narrative wave source information and narrative source information;
the narrative wave source is wave source position information of the second narrative content for determining a spatial structure of the second narrative content.
3. The method of claim 2, characterized in that:
the multi-line display information includes an optical path information file for displaying the light source according to the pixel information.
4. The method of claim 1, wherein rearranging and converting the first pixel information according to the set narrative rules to obtain second pixel information, comprising:
and rearranging the pixel points, pixel vectors and pixel colors of the first pixel information according to the set narrative rule to obtain the second pixel information, so that the two-dimensional image raster of the first pixel information is converted into a three-dimensional image.
5. The method according to claim 1, characterized in that the method further comprises:
setting a narrative index for said first and/or second narrative element;
connecting the narrative index, the narrative rules, the first narrative element and/or the second narrative element, the narrative space, the second narrative content through a narrative chain;
activating the first and/or second narrative element by invoking a narrative index in the narrative chain;
and distributing the second narrative content to the corresponding first narrative element and/or second narrative element, and scheduling and controlling the first narrative element and/or the second narrative element in real time by the multi-line space controller to display effects.
6. An apparatus for controlling connection in an off-line narrative, comprising:
the system comprises an acquisition module, a multi-line space controller and a storage module, wherein the acquisition module is used for acquiring first narrative contents of a multi-path input source, setting at least one narrative space according to the first narrative contents, setting a user narrative end, and controlling the user narrative end to set the first narrative contents through the multi-line space controller;
the first processing module is used for setting at least one first narrative element according to the first narrative content, and controlling the first narrative element to output corresponding first narrative content in real time through the multi-line space controller;
The second processing module is used for enabling the multi-line space controller to interact with the first narrative element to obtain second narrative content, and the second narrative content is complete narrative content under a fixed period of time;
the third processing module is used for setting at least one second narrative element according to the second narrative content, acquiring first pixel information in video information in the second narrative content, and rearranging and converting the first pixel information according to a set narrative rule to obtain second pixel information;
and a fourth processing module, configured to control, by the multi-line space controller, the second narrative element to output the second pixel information to a corresponding narrative space during a working period according to the second narrative content.
7. The apparatus of claim 6, characterized in that:
the second narrative content is at least one of video information, special effect information, simulation information, instruction information, multi-line display information, narrative wave source information and narrative source information;
the narrative wave source is wave source position information of the second narrative content for determining a spatial structure of the second narrative content.
8. The apparatus of claim 7, characterized in that:
The multi-line display information includes an optical path information file for displaying the light source according to the pixel information.
9. The apparatus of claim 6, wherein the rearranging and converting the first pixel information according to the set narrative rules to obtain second pixel information comprises:
and rearranging the pixel points, pixel vectors and pixel colors of the first pixel information according to the set narrative rule to obtain the second pixel information, so that the two-dimensional image raster of the first pixel information is converted into a three-dimensional image.
10. The apparatus of claim 6, wherein the apparatus further comprises:
a fifth processing module for setting a narrative index for said first and/or second narrative element;
connecting the narrative index, the narrative rules, the first narrative element and/or the second narrative element, the narrative space, the second narrative content through a narrative chain;
activating the first and/or second narrative element by invoking a narrative index in the narrative chain;
and distributing the second narrative content to the corresponding first narrative element and/or second narrative element, and scheduling and controlling the first narrative element and/or the second narrative element in real time by the multi-line space controller to display effects.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311511702.1A CN117255235B (en) | 2023-11-14 | 2023-11-14 | Method and device for general control connection of offline narrative |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117255235A true CN117255235A (en) | 2023-12-19 |
CN117255235B CN117255235B (en) | 2024-03-01 |
Family
ID=89129805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311511702.1A Active CN117255235B (en) | 2023-11-14 | 2023-11-14 | Method and device for general control connection of offline narrative |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117255235B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170245023A1 (en) * | 2014-07-31 | 2017-08-24 | MindsightMedia, Inc. | Method, apparatus and article for delivering media content via a user-selectable narrative presentation |
CN110262661A (en) * | 2019-06-20 | 2019-09-20 | 广东工业大学 | A kind of the narration interaction data processing method and relevant apparatus of learning system |
CN111612917A (en) * | 2020-04-02 | 2020-09-01 | 清华大学 | Augmented reality interaction method based on real scene feedback and touchable prop |
CN112464060A (en) * | 2020-12-12 | 2021-03-09 | 张育英 | Method and system for assisting script writing based on wiener law |
CN114003131A (en) * | 2021-12-31 | 2022-02-01 | 垒途智能教科技术研究院江苏有限公司 | VR narrative method based on attention guidance mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN117255235B (en) | 2024-03-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||