WO2024020794A1 - An Interactive System - Google Patents

An Interactive System

Info

Publication number
WO2024020794A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
target object
module
current position
display screen
Prior art date
Application number
PCT/CN2022/107990
Other languages
English (en)
French (fr)
Other versions
WO2024020794A8 (zh)
Inventor
巩方源
夏友祥
管恩慧
张峰
万中魁
李咸珍
王志懋
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Priority to CN202280002367.XA (CN117769822A)
Priority to PCT/CN2022/107990 (WO2024020794A1)
Publication of WO2024020794A1
Publication of WO2024020794A8

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/32 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials

Definitions

  • the present disclosure belongs to the field of display technology, and specifically relates to an interactive system.
  • in stage performances, the display screen is used in coordination with the actors' performance content to present the program's visual effects.
  • the display screen has unparalleled advantages in presenting program effects, but real-time communication between the conductor and the actors during rehearsals, and adjustment of the stage effects, are time-consuming and laborious work: the conductor must communicate repeatedly with different actors, which requires the coordination and cooperation of a large number of personnel.
  • moreover, to adjust the stage effects, the conductor often needs to communicate with the post-production personnel first, and only after the post-production personnel make the modifications can the adjusted picture be presented on the display. This process consumes a great deal of manpower, material resources, and time, and seriously reduces the efficiency of program rehearsals.
  • the present disclosure aims to solve at least one of the technical problems existing in the prior art and provide an interactive system.
  • embodiments of the present disclosure provide an interactive system, which includes a terminal, a display screen, and a server; the terminal and the display screen are respectively communicatively connected to the server;
  • the server is configured to determine the current position of a target object located on the display screen; generate an object identifier of the target object according to the current position of the target object, and associate the object identifier of the same target object with the current position; and receive adjustment information of the target object, generate an indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display;
  • the terminal is configured to display a frame image pre-configured for the display screen, display the object identifier according to the association between the object identifier and the current position, and generate adjustment information of the target object in response to the user's arrangement operation on the object identifier;
  • the display screen is configured to display the indication pattern so that the target object is adjusted according to the instructions of the indication pattern.
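The server-side flow described above (determine the current position, associate it with the object identifier, then turn adjustment information into an indication pattern for the display screen) can be sketched as follows. The class, method, and message field names are illustrative assumptions; the disclosure does not specify data structures.

```python
# Hypothetical sketch of the claimed server flow; names and formats
# are assumptions, not taken from the disclosure.
class InteractionServer:
    def __init__(self):
        # object identifier -> current (x, y) position on the display screen
        self.positions = {}

    def register_target(self, object_id, current_position):
        # "associate the object identifier of the same target object
        # with the current position"
        self.positions[object_id] = current_position

    def on_adjustment(self, object_id, updated_position):
        # Build an indication pattern from the current position and the
        # adjustment information, then record the new position.
        current = self.positions[object_id]
        pattern = {"object_id": object_id, "from": current, "to": updated_position}
        self.positions[object_id] = updated_position
        return pattern  # in the real system this would be sent to the screen

server = InteractionServer()
server.register_target("actor-1", (2, 3))
msg = server.on_adjustment("actor-1", (5, 7))
```

In this sketch the returned `pattern` dictionary stands in for whatever rendering command the real server would transmit to the display screen.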
  • the server includes a position determination module, an identification generation module, an information association module, a pattern generation module, and a pattern sending module;
  • the position determination module is configured to determine the current position of the target object located on the display screen
  • the identification generation module is configured to generate an object identification of the target object
  • the information association module is configured to associate the object identification of the same target object with the current location
  • the pattern generation module is configured to receive the adjustment information and generate an indication pattern according to the current position of the target object and the adjustment information;
  • the pattern sending module is configured to send the indication pattern to the display screen for display.
  • the position determination module is specifically configured to receive a scene image of a real scene, identify the target object in the scene image, and determine the current position of the target object.
  • the interactive system further includes a sensor configured for the target object; the sensor is configured to send the location information of the target object to the server;
  • the position determination module is specifically configured to use the received position information of the target object as the current position of the target object.
  • the adjustment information includes an updated position of the target object;
  • the pattern generation module is specifically configured to generate a first indication pattern located at the current position based on the current position of the target object and the updated position, and to generate a second indication pattern from the current position to the updated position;
  • the display screen is configured to display the first indication pattern at the current position and to display the second indication pattern between the current position and the updated position.
  • the pattern generation module is specifically configured to determine, according to the frame image, the pictures corresponding to the other areas around the current position and the picture of the area between the current position and the updated position; to generate a first indication pattern located at the current position according to the pictures corresponding to the other areas and the current position; and to generate a second indication pattern from the current position to the updated position according to the area picture, the current position of the target object, and the updated position.
  • the pattern sending module is further configured to send the frame image to the display screen
  • the display screen is configured to display the frame image and, when receiving the indication pattern, superimpose the indication pattern on the frame image for display; or replace the frame image at the current position with the first indication pattern and superimpose the second indication pattern on the frame image for display.
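As a rough illustration of the first and second indication patterns (a marker at the current position plus a path toward the updated position), under the assumption that positions are 2-D screen coordinates and that the path is a straight line:

```python
# Illustrative sketch only; the disclosure does not prescribe how
# the patterns are represented.
def first_indication_pattern(current):
    # Marker drawn only at the target object's current position.
    return {"type": "marker", "pos": current}

def second_indication_pattern(current, updated, steps=4):
    # Straight-line waypoints from the current position to the updated
    # position; a real system could instead path-plan around obstacles.
    (x0, y0), (x1, y1) = current, updated
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(steps + 1)]
```

The waypoint list could back either a static arrow (drawn all at once) or a moving arrow (revealed one waypoint at a time).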
  • the adjustment information includes the updated position of the target object;
  • the pattern generation module is specifically configured to generate a third indication pattern located at the current position according to the current position of the target object, and to generate a fourth indication pattern located at the updated position according to the updated position;
  • the display screen is configured to display the third indication pattern at the current position and the fourth indication pattern at the updated position.
  • the interactive system further includes a storage module
  • the terminal is further configured to, in response to the scheme storage operation, send the location of each currently displayed object identification to the storage module;
  • the storage module is configured to store the location of each object identifier.
  • the terminal further includes a display module and a replacement module
  • the display module is configured to display the frame image and a preset material library file in response to the replacement operation of the frame image; the material library file includes at least one material video for replacing the frame image;
  • the replacement module is configured to, in response to the selection of a start time node and an end time node and a selection operation on a material video, filter a target material video from the material video and replace the frame images between the start time node and the end time node; the duration of the target material video is equal to the duration from the start time node to the end time node.
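The duration constraint on the replacement module (the target material video must last exactly from the start time node to the end time node) can be made concrete; the frame rate and the frame-index representation below are illustrative assumptions:

```python
# Sketch of the duration-matching rule; 25 fps is an assumed frame rate.
def plan_replacement(start_s, end_s, material_len_s, fps=25):
    needed_s = end_s - start_s
    if material_len_s < needed_s:
        raise ValueError("material video shorter than the span being replaced")
    first = int(round(start_s * fps))   # first frame index to replace
    last = int(round(end_s * fps))      # one past the last frame to replace
    # clip_frames equals the replaced span, enforcing the equal-duration rule
    return {"replace_frames": (first, last), "clip_frames": last - first}
```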
  • the terminal further includes a frame selection module;
  • the frame selection module includes a first adjustment unit and an amplification unit;
  • the first adjustment unit is configured to adjust the partial area into a regular area in response to an operation of framing a partial area of the frame image
  • the amplification unit is configured to amplify the sub-image located in the regular area
  • the replacement module is specifically configured to, in response to the selection of the start time node and the end time node and the selection operation on a material video, select from the material video a target material video located at the position corresponding to the regular area, and replace the sub-image located in the regular area in each frame between the start time node and the end time node; the resolution of the material images in the target material video is the same as the resolution of the frame image.
  • the frame selection module further includes a second adjustment unit
  • the second adjustment unit is configured to determine a target area in response to the adjustment of the rule area
  • the amplification unit is configured to amplify a sub-image located in the target area
  • the replacement module is specifically configured to, in response to the selection of a start time node and an end time node and a selection operation on a material video, select from the material video a target material video located at the position corresponding to the target area, and replace the sub-image located in the target area in each frame between the start time node and the end time node.
  • the terminal further includes a boundary optimization module
  • the boundary optimization module is configured to, after the replacement module is executed, determine the image to be displayed after boundary optimization in response to an optimization operation on the replacement boundary; the optimization operation includes one or more of color smoothing, boundary sharpening, and image blurring;
  • the display screen is configured to display the image to be displayed.
  • the terminal also includes a manual adjustment module;
  • the manual adjustment module includes an editing unit and a display unit;
  • the editing unit is configured to generate editing content in response to an editing operation on the frame image
  • the display unit is configured to superimpose and display the editing content on the frame image
  • the display screen is configured to display a frame image superimposed with the editing content.
  • the manual adjustment module further includes an effect adjustment unit
  • the effect adjustment unit is configured to adjust the playback effect of the editing content in response to a playback-effect adjustment operation;
  • the display unit is configured to superimpose and display the editing content and the playback effect on the frame image
  • the display screen is configured to display a frame image superimposed with the editing content and the adjusted playback effect.
  • the interactive system further includes an image capture device
  • the image collection device is configured to collect scene images of real scenes and send them to the server.
  • Figure 1 is a schematic diagram of an interactive system provided by an embodiment of the present disclosure
  • Figure 2 is a schematic structural diagram of a server provided by an embodiment of the present disclosure
  • Figure 3 is a specific schematic diagram of an interactive system provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of an interaction method provided by an embodiment of the present disclosure.
  • a plurality or several mentioned in this disclosure means two or more.
  • "And/or" describes the relationship between related objects, indicating that there can be three relationships. For example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the related objects are in an "or" relationship.
  • Figure 1 is a schematic diagram of an interactive system provided by an embodiment of the present disclosure.
  • the interactive system includes a terminal, a display screen and a server; the terminal and the display screen are respectively connected to the server through communication.
  • the display screen can be a stage screen in a performance scene, including a display screen on the ground, a display screen on the surrounding walls, a display screen on the ceiling, a display screen suspended in the air, etc.
  • the server is configured to determine the current position of the target object located on the display screen; generate an object identifier of the target object according to the current position of the target object, and associate the object identifier of the same target object with the current position; and, receive the object identifier of the target object. Adjust the information, generate an indication pattern based on the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display.
  • the target object can be the person or object being controlled.
  • the target object can be actors, props, etc.
  • the object identifier can be a virtual model, avatar, code name, etc. used to represent the target object.
  • the server can use the information collected by a scene camera or a position sensor to determine the specific position of the target object on the display screen in the stage scene, that is, the current position of the target object.
  • for each target object that needs to be controlled, the server generates a corresponding object identifier, associates the object identifier of the same target object with the current position of that target object, and stores this association for the terminal's subsequent monitoring of the target object's position.
  • the adjustment information includes information on position adjustment of the target object.
  • the server generates, based on the target object's current position and the position adjustment information, an indication pattern describing the movement away from the current position.
  • when the indication pattern is displayed on the display screen, the target object in the real scene can adjust its position by following the instructions of the indication pattern.
  • the terminal is configured to display the frame image pre-configured for the display screen, and to display the object identification according to the association between the object identification and the current position; and to generate adjustment information of the target object in response to the user's arrangement operation of the object identification.
  • in the embodiment of the present disclosure, the operator uses the terminal to direct the controlled person (i.e., the target object) to implement scene scheduling.
  • the terminal can be a portable terminal, such as a mobile phone, a tablet computer, or an all-in-one touch machine.
  • the portable terminal synchronously displays the picture shown on the display screen in the display scene. When the display needs to be adjusted, the operator can operate directly on the portable terminal.
  • the server pre-stores the frame image configured for the display screen. While the display screen displays the frame image, the terminal can obtain the frame image of the display screen to achieve synchronous display of the picture through the display module of the terminal.
  • the display module can be understood as the screen of the terminal.
  • while displaying the frame image, the terminal obtains from the server the current position of the target object in the real scene, the object identifier, and the association between the object identifier and the current position, and displays the object identifier at a specific position of the display module according to that association; the specific position corresponds to the current position of the target object in the real scene.
  • the terminal responds to the operator's arrangement operation on the object identifiers displayed on the display module and generates, for those object identifiers, adjustment information corresponding to the target objects in the real scene. For example, the operator can drag an object identifier displayed on the display module to the adjusted position, or delineate an area and then drag the object identifier into it to arrange the scene of the target object.
  • the adjustment information may include information on position adjustment of the target object.
  • the adjustment information generated by the terminal can be sent to the server through Wi-Fi communication.
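The disclosure does not fix a wire format for the adjustment information; one plausible sketch, assuming a JSON payload carried over the Wi-Fi link, with all field names hypothetical:

```python
import json

# Hypothetical adjustment-information message from terminal to server;
# the field names and JSON encoding are assumptions for illustration.
adjustment = {
    "object_id": "actor-1",
    "current": [2.0, 3.0],   # position on the display screen
    "updated": [5.0, 7.0],   # position after the arrangement operation
}
payload = json.dumps(adjustment)   # what would travel over Wi-Fi
decoded = json.loads(payload)      # what the server would parse back
```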
  • the display screen is configured to display the indication pattern so that the target object adjusts according to the instructions of the indication pattern.
  • the embodiment of the present disclosure can feed back the operator's modification scheme for the scene to the display screen in real time, so that the target object can be adjusted quickly. During a performance, this enables accurate scheduling between the backstage and the scene, greatly reduces the communication cost between the backstage and the scene, and improves the operator's command efficiency.
  • FIG. 2 is a schematic structural diagram of a server provided by an embodiment of the present disclosure.
  • the server includes a position determination module, an identification generation module, an information association module, a pattern generation module, and a pattern sending module.
  • the position determination module is configured to determine the current position of the target object located on the display screen;
  • the identification generation module is configured to generate an object identification of the target object;
  • the information association module is configured to associate the object identifier of the same target object with the current position;
  • the pattern generation module is configured to receive the adjustment information and generate an indication pattern according to the current position of the target object and the adjustment information;
  • the pattern sending module is configured to send the indication pattern to the display screen display.
  • embodiments of the present disclosure provide two different ways of determining the current location of a target object in a real scene.
  • Example 1: Figure 3 is a specific schematic diagram of an interactive system provided by an embodiment of the present disclosure. As shown in Figure 3, the scene image of the real scene is collected by the image collection device, and the current position of the target object is determined from the scene image.
  • Example 2: the current position of the target object is determined through the positioning sensor carried by the target object.
  • the position determination module is specifically configured to receive a scene image of a real scene, identify the target object in the scene image, and determine the current position of the target object.
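A minimal sketch of identifying the target in a scene image, under the assumption that the image has already been segmented into a 2-D grid of object labels (a real system would run an actual detector on the camera image); the target's current position is taken as the centroid of its labeled cells:

```python
# Toy position determination: scene is a 2-D grid of labels, and the
# target's current position is the centroid of cells matching `label`.
# The grid representation is an assumption for illustration.
def locate_target(scene, label):
    cells = [(x, y) for y, row in enumerate(scene)
                    for x, v in enumerate(row) if v == label]
    if not cells:
        return None  # target not visible in this scene image
    n = len(cells)
    return (sum(x for x, _ in cells) / n, sum(y for _, y in cells) / n)
```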
  • the interactive system also includes an image collection device; the image collection device is configured to collect scene images of the real scene and send them to the location determination module in the server.
  • the interactive system also includes a sensor configured for the target object; the sensor is configured to directly send the position information of the target object to the server; the position determination module is specifically configured to use the received position information of the target object as the target object's current position.
  • the sensor can be a positioning sensor, which can feed back the current location information to the server in real time.
  • the location information is the location coordinates in the real scene.
  • the adjustment information includes an updated position of the target object; the pattern generation module is specifically configured to generate a first indication pattern located at the current position based on the current position of the target object and the updated position, and to generate a second indication pattern from the current position to the updated position.
  • the first indication pattern is located only at the current position and is used to indicate that the target object located at the current position needs to be adjusted.
  • the first indication pattern may be randomly selected by the pattern generation module from the pattern library, or may be selected by the user from the pattern library.
  • the shape of the first indication pattern may be a triangle, a square, a circle, a five-pointed star, etc., and the color may be white, red, green, yellow, blue, etc.; or the first indication pattern may also be a dynamic special-effect pattern, for example, highlight flashing, breathing lights, fireworks blooming, etc.
  • the second indication pattern can be a static arrow pattern pointing from the current position to the updated position, or a dynamic moving-arrow pattern, for example, a movement-trajectory pattern that starts at the first indication pattern at the current position on the display screen and moves toward the updated position, instructing the target object to follow the arrow's travel path to reach the destination.
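The dynamic moving-arrow variant can be modeled as a position parameterized by animation progress. Linear interpolation is an assumption here; the disclosure only requires that the arrow trace a path from the current position to the updated position:

```python
# Sketch of one animation frame of the moving arrow; linear motion
# between the two positions is an illustrative assumption.
def moving_arrow_frame(current, updated, progress):
    p = max(0.0, min(1.0, progress))   # clamp progress into [0, 1]
    (x0, y0), (x1, y1) = current, updated
    head = (x0 + (x1 - x0) * p, y0 + (y1 - y0) * p)
    # The tail stays at the current position so the trajectory grows
    # toward the updated position as progress advances.
    return {"tail": current, "head": head}
```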
  • the display screen is configured to display a first indication pattern at the current position and a second indication pattern between the current position and the updated position.
  • the first indication pattern and the second indication pattern may be patterns that are different from the picture currently displayed on the display screen.
  • the pattern generation module is specifically configured to determine, according to the frame image, pictures corresponding to other areas around the current position and pictures of the area between the current position and the updated position; according to the pictures corresponding to other areas and the current position , generate a first indication pattern located at the current position; generate a second indication pattern from the current position to the updated position according to the area screen, the current position of the target object and the updated position.
  • the first indication pattern may be automatically selected by the pattern generation module from the pattern library. Specifically, the pattern generation module identifies the pictures corresponding to the other areas around the current position and selects from the pattern library a first indication pattern whose color and shape contrast clearly with those pictures. For example, if the display screen currently shows a blue ocean, the generated first indication pattern is preferably a sharp-edged figure, such as a five-pointed star, in a contrasting color such as yellow or orange.
  • similarly, the area picture between the current position and the updated position is identified, and a second indication pattern whose color contrasts clearly with that of the area picture is selected from the pattern library. For example, if the area around the current position is blue, the area around the updated position is yellow, and there is a red area between the current position and the updated position, then the color of the second indication pattern should differ from blue, yellow, and red; green is preferable.
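The contrast-based selection described here amounts to picking a palette color not used by any of the surrounding regions; the palette and its ordering below are illustrative assumptions, not taken from the disclosure:

```python
# Toy contrast-color picker: return the first palette color absent
# from the set of colors observed around the indication path.
def pick_contrasting_color(surrounding_colors,
                           palette=("white", "red", "green",
                                    "yellow", "blue", "orange")):
    for color in palette:
        if color not in surrounding_colors:
            return color
    return palette[0]  # fallback when every palette color is in use
```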
  • the method of generating the second indication pattern here is the same as in the above example (i.e., a static indication arrow or a dynamic moving arrow), and the repeated parts will not be described again.
  • the third indication pattern may be displayed only at the current location, and the fourth indication pattern may be displayed at the updated location.
  • the adjustment information includes the updated position of the target object; the pattern generation module is specifically configured to generate a third indication pattern located at the current position according to the current position of the target object, and to generate a fourth indication pattern located at the updated position according to the updated position.
  • the third indication pattern may be the same as the first indication pattern, for example, a triangle, square, circle, or five-pointed star in white, red, green, yellow, or blue; alternatively, the third indication pattern may also be a dynamic special-effect pattern, such as highlight flashing, breathing lights, fireworks blooming, etc.
  • the fourth indication pattern may likewise be a triangle, square, circle, or five-pointed star in white, red, green, yellow, or blue; or the fourth indication pattern may also be a dynamic special-effect pattern, such as highlight flashing, breathing lights, fireworks blooming, etc.
  • in some implementations, the third indication pattern and the fourth indication pattern of the same target object are the same.
  • the display screen is configured to display the third indication pattern at the current position and the fourth indication pattern at the updated position.
  • the third indication pattern and the fourth indication pattern are both patterns that differ clearly from the surrounding display pictures.
  • the pattern sending module includes a video transmitting card, a photoelectric conversion unit and a video receiving card.
  • the video transmitting card is configured to send the indication pattern (such as the above-mentioned first indication pattern and the second indication pattern; or the third indication pattern and the fourth indication pattern) to the photoelectric conversion unit through the optical fiber.
  • the photoelectric conversion unit is configured to convert the indication pattern into an electrical signal and send it to the video receiving card
  • the video receiving card is configured to convert the received electrical signal into a video signal and send it to the display screen for display.
  • the pattern sending module is further configured to send the frame image to the display screen.
  • the video transmitting card is configured to send the frame image to the photoelectric conversion unit through the optical fiber.
  • the photoelectric conversion unit is configured to convert the frame image into an electrical signal and send it to the video receiving card, and the video receiving card is configured to convert the received electrical signal into a video signal and send it to the display screen for display.
  • the display screen is configured to display the frame image, and when receiving the indication pattern, display the indication pattern superimposed on the frame image.
  • the display screen is configured to display the frame image, and when receiving the indication pattern, replace the frame image located at the current position with the first indication pattern, and superimpose the second indication pattern on the frame image for display.
  • overlay display means adding a layer to the currently displayed frame image to specifically display the indication pattern.
  • Image replacement here means that, for the image layer where the frame image is located, the image located at the current position of the image layer is replaced with the first indication pattern.
  • since the second indication pattern spans a longer path, it is only superimposed for display and is not displayed by replacement.
  • of course, the replacement technique corresponding to the first indication pattern can also be used to display the second indication pattern by replacing the corresponding picture; the embodiments of the present disclosure can be configured according to the actual situation, and the present disclosure is not limited in this regard.
  • the third indication pattern and the fourth indication pattern can also be displayed on the display screen based on the above implementation process, and the repeated parts will not be described again.
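The two display strategies (superimposing a new layer versus editing the frame layer itself) can be sketched with frames modeled as position-to-color mappings; this representation is an assumption made purely for illustration:

```python
# Sketch of "overlay" vs "replace" display; frames and indication
# patterns are modeled as {(x, y): color} dictionaries (an assumption).
def show(frame, indication, mode):
    if mode == "overlay":
        # An extra layer is drawn on top; the frame layer is untouched,
        # so removing the indication restores the original picture.
        merged = dict(frame)
        merged.update(indication)
        return merged, frame        # (displayed picture, frame layer)
    if mode == "replace":
        # The frame layer itself is edited at the indicated positions.
        edited = dict(frame)
        edited.update(indication)
        return edited, edited
    raise ValueError("unknown mode: " + mode)
```

Either mode shows the same composite picture; the difference is whether the underlying frame-image layer keeps its original content.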
  • the adjustment information generated by the terminal can be saved to the server for subsequent viewing of the target object's scheduling plan.
  • the interactive system also includes a storage module; the terminal is also configured to, in response to the scheme storage operation, send the position of each currently displayed object identifier to the storage module; the storage module is configured to store the position of each object identifier so that the adjustment scheme of the target object corresponding to each object identifier can be recalled later.
  • the terminal provided by the embodiments of the present disclosure also includes a replacement module for synchronously displaying the modification scheme of the frame image on the display screen.
  • This real-time modification and intuitive display method can reduce the workload of repeated modifications caused by actual results not meeting expectations, reduce modification costs, and improve work efficiency.
  • the display module is configured to respond to the replacement operation of the frame image, display the frame image, and the preset material library file; the material library file includes at least one material video for replacing the frame image.
  • the replacement module is configured to, in response to the selection of the start time node and the end time node, and the selection operation of the material video, filter a target material video from the material video, and replace the frame image between the start time node and the end time node; a target The duration of the material video is equal to the duration from the start time node to the end time node.
  • the user has pre-stored relevant materials, such as videos, pictures, audio, and text, in the terminal.
  • the operator can select the stage modification mode displayed in the display module; at this time, the display module displays the current stage picture (that is, the frame image played on the display screen) and the material library file.
  • the material library file includes a variety of material videos. By viewing the progress bar of the currently played frame image, the operator can select the time period that needs to be modified, that is, select the start time node and the end time node, and then drag the material to be used into the current time period.
  • the replacement module extracts, from the received material video, a target material video segment whose duration equals the modification duration, i.e., the duration from the start time node to the end time node. The segment may be extracted at a random offset within the material video, or taken from the first frame of the material video onward for the modification duration. Image replacement here means replacing the image layer containing the frame images of the corresponding time period with the material video.
  • the terminal further includes a frame-selection module; the frame-selection module includes a first adjustment unit and a magnification unit.
  • the first adjustment unit is configured to, in response to an operation of framing a partial area of the frame image, adjust the partial area into a regular area. It should be noted that when the operator hand-draws the area to be modified on the terminal screen, the drawn outline is not necessarily regular. The first adjustment unit can regularize the area framed by the operator's hand-drawn outline to obtain a regular area, such as a rectangle or a circle.
  • the magnification unit is configured to magnify the sub-image located in the regular area.
  • the magnification unit can proportionally enlarge the sub-image of the regular area on the display module, according to the ratio between the area of the frame image displayed by the display module and the regular area, so that the sub-image occupies as much of the display module's image display area as possible, making it easier for the operator to view.
  • the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the regular area, and replace the sub-images located in the regular area in each frame between the start time node and the end time node.
  • the resolution of the material images in the target material video is the same as the resolution of the frame images.
  • after the regular area is determined, the operator can further adjust it, for example by resizing the box, to determine the final target area to be selected.
  • the frame-selection module further includes a second adjustment unit; the second adjustment unit is configured to determine the target area in response to an adjustment of the regular area.
  • the magnification unit is configured to magnify the sub-image located in the target area, in the same way that it magnifies the sub-image of the regular area in the example above; the repeated parts are not described again.
  • the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the target area, and replace the sub-images located in the target area in each frame between the start time node and the end time node.
  • this example can use the same technique to replace the sub-image of the target area in the frame image; the repeated process is not described again.
  • Image replacement here means that the material sub-image is simply displayed overlaid above the original image layer.
  • the terminal also includes a boundary optimization module.
  • the boundary optimization module is configured to, after the replacement module has executed, determine the boundary-optimized image to be displayed in response to an optimization operation on the replacement boundary.
  • the display screen is configured to display the image to be displayed.
  • by default, the terminal optimizes the boundary between the sub-image of the replaced partial area and the original frame image, so that the boundary transitions smoothly.
  • in special cases, the operator can instead choose to sharpen the boundary, that is, to make the difference between the images on either side of the boundary pronounced. The optimization operation may therefore include one or more of color smoothing, boundary sharpening, and image blurring, which may be set according to the actual scene and is not specifically limited in the embodiments of the present disclosure.
  • the stage picture may also contain display errors.
  • to improve the overall coordination of the performance picture, the terminal is further provided with a manual adjustment module that supports hand-drawing on the picture.
  • the manual adjustment module includes an editing unit and a display unit.
  • the editing unit is configured to respond to the editing operation on the frame image and generate editing content.
  • the display unit is configured to superimpose and display the editing content on the frame image.
  • the display screen is configured to display the frame image with the editing content superimposed.
  • the editing operation can be understood as the operator drawing patterns on the terminal screen to compensate for incompleteness of, or to improve the overall coordination of, the picture shown on the display screen.
  • the manual adjustment module also includes an effects adjustment unit.
  • the effect adjustment unit is configured to respond to adjustment of a playback effect of the edited content.
  • the display unit is configured to superimpose and display editing content and playback effects on the frame image.
  • the display screen is configured to display frame images superimposed with editing content and adjusted playback effects.
  • the display time and playback effect of the hand-drawn part can be adjusted, for example looping over a certain period, with a fade-in/fade-out or gradient playback effect.
  • the manual adjustment module sends the drawing result to the server for rendering; the server then sends the rendered effect to the display screen, where it is superimposed on the frame image for display.
  • the terminal includes a position adjustment module for the display screen, used to adjust the position of the display screen within the display scene, for example real-time control of a lifting platform.
  • the server can save the timestamp of each modification.
  • the modification information generated by the terminal can be saved to the server so that the stage-effect modification scheme can be reviewed later.
  • the interactive system further includes a storage module; the terminal is further configured to, in response to a scheme-storage operation, send the position of each currently displayed object identifier to the storage module; the storage module is further configured to store the stage modification information in voice or text form,
  • that is, the material video replacements, hand-drawn content, stage special effects, lifting-platform adjustments, and so on.
  • the interactive system provided by the embodiments of the present disclosure can be applied to performance rehearsal scenes. By arranging the object identifiers of target objects on the terminal, the operator can help target objects in the real scene quickly adjust their positions, improving the operator's command efficiency.
  • the terminal provided by the embodiments of the present disclosure can support modification of the stage effect, and can quickly modify the background image shown on the display screen to compensate for incompleteness of the stage effect.
  • alternatively, the operator can use the manual adjustment module to compensate for the stage effect.
  • for example, in a scheme where the performance is synchronized with an actor's trajectory, if the server does not receive the actor's trajectory data, it initiates a manual-adjustment request to the terminal's manual adjustment module; the manual adjustment module responds to the operator's drawing operation on the frame image and superimposes the editing content and/or the adjusted playback effect on the frame image.
  • Figure 4 is a schematic diagram of an interaction method provided by an embodiment of the present disclosure. As shown in Figure 4, it includes steps S41 to S46.
  • S41 and S42 can be executed by the server in the examples above; the details are not repeated.
  • S43 and S44 can be executed by the terminal in the examples above; the details are not repeated.
  • S45 can be executed by the server in the examples above; the details are not repeated.
  • S46 can be executed by the display screen in the examples above; the details are not repeated.
  • the disclosed embodiments can feed the operator's on-site modification scheme back to the display screen in real time, so that target objects can be adjusted quickly.
  • during a performance, precise scheduling between backstage and the stage can be achieved, greatly reducing communication costs between backstage and the stage and improving the operator's command efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides an interactive system, belonging to the field of display technology. The interactive system includes a terminal, a display screen, and a server; the terminal and the display screen are each communicatively connected to the server. The server is configured to determine the current position of a target object located on the display screen; generate an object identifier of the target object according to the current position of the target object, and associate the object identifier and the current position of the same target object; and receive adjustment information for the target object, generate an indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display. The terminal is configured to display a frame image pre-configured for the display screen, and display the object identifier according to the association between the object identifier and the current position; and, in response to a user's arrangement operation on the object identifier, generate the adjustment information for the target object. The display screen is configured to display the indication pattern, so that the target object adjusts as directed by the indication pattern.

Description

An Interactive System — Technical Field
The present disclosure belongs to the field of display technology and specifically relates to an interactive system.
Background Art
Display screen technology is now widely used in large-scale performances: images shown on display screens are coordinated with performers to produce spectacular program effects. Display screens have unrivaled advantages in presenting a program, but real-time communication between the director and performers during rehearsal, and adjustment of the stage effect, remain time-consuming and labor-intensive — requiring, for example, repeated communication between the director and different performers, as well as coordination among many staff. Moreover, during a performance, adjustments to the stage effect usually require the director to first communicate with post-production staff, and only after post-production has made the changes can the adjusted picture be presented on the display screen. This process costs considerable manpower, material resources, and time, and seriously affects rehearsal efficiency.
Summary of the Invention
The present disclosure aims to solve at least one of the technical problems in the prior art, and provides an interactive system.
In a first aspect, embodiments of the present disclosure provide an interactive system, which includes a terminal, a display screen, and a server; the terminal and the display screen are each communicatively connected to the server;
the server is configured to determine the current position of a target object located on the display screen; generate an object identifier of the target object according to the current position of the target object, and associate the object identifier and the current position of the same target object; and receive adjustment information for the target object, generate an indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display;
the terminal is configured to display a frame image pre-configured for the display screen, and display the object identifier according to the association between the object identifier and the current position; and, in response to a user's arrangement operation on the object identifier, generate the adjustment information for the target object;
the display screen is configured to display the indication pattern, so that the target object adjusts as directed by the indication pattern.
In some examples, the server includes a position determination module, an identifier generation module, an information association module, a pattern generation module, and a pattern sending module;
the position determination module is configured to determine the current position of the target object located on the display screen;
the identifier generation module is configured to generate the object identifier of the target object;
the information association module is configured to associate the object identifier and the current position of the same target object;
the pattern generation module is configured to receive the adjustment information, generate the indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display;
the pattern sending module is configured to send the indication pattern to the display screen for display.
In some examples, the position determination module is specifically configured to receive a scene image of a real scene, recognize the target object in the scene image, and determine the current position of the target object.
In some examples, the interactive system further includes a sensor configured for the target object; the sensor is configured to send position information of the target object to the server;
the position determination module is specifically configured to take the received position information of the target object as the current position of the target object.
In some examples, the adjustment information includes an updated position of the target object; the pattern generation module is specifically configured to generate, according to the current position and the updated position of the target object, a first indication pattern located at the current position, and generate a second indication pattern from the current position to the updated position;
the display screen is configured to display the first indication pattern at the current position, and display the second indication pattern between the current position and the updated position.
In some examples, the pattern generation module is specifically configured to determine, according to the frame image, the picture corresponding to other areas around the current position and the picture of the area between the current position and the updated position; generate the first indication pattern located at the current position according to the picture corresponding to the other areas and the current position; and generate the second indication pattern from the current position to the updated position according to the area picture, the current position of the target object, and the updated position.
In some examples, the pattern sending module is further configured to send the frame image to the display screen;
the display screen is configured to display the frame image and, upon receiving the indication pattern, superimpose the indication pattern on the frame image for display; or to replace the frame image located at the current position with the first indication pattern and superimpose the second indication pattern on the frame image for display.
In some examples, the adjustment information includes an updated position of the target object; the pattern generation module is specifically configured to generate, according to the current position of the target object, a third indication pattern located at the current position, and generate, according to the updated position, a fourth indication pattern located at the updated position;
the display screen is configured to display the third indication pattern at the current position and the fourth indication pattern at the updated position.
In some examples, the interactive system further includes a storage module;
the terminal is further configured to, in response to a scheme-storage operation, send the position of each currently displayed object identifier to the storage module;
the storage module is configured to store the position of each object identifier.
In some examples, the terminal further includes a display module and a replacement module;
the display module is configured to, in response to a replacement operation on the frame image, display the frame image and a preset material library file; the material library file includes at least one material video for replacing the frame image;
the replacement module is configured to, in response to selection of a start time node and an end time node and a selection operation on a material video, extract a target material video segment from the material video and replace the frame images between the start time node and the end time node; the duration of the target material video segment is equal to the duration from the start time node to the end time node.
In some examples, the terminal further includes a frame-selection module; the frame-selection module includes a first adjustment unit and a magnification unit;
the first adjustment unit is configured to, in response to an operation of framing a partial area of the frame image, adjust the partial area into a regular area;
the magnification unit is configured to magnify the sub-image located in the regular area;
the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the regular area, and replace the sub-images located in the regular area in each frame between the start time node and the end time node; the resolution of the material images in the target material video is the same as the resolution of the frame images.
In some examples, the frame-selection module further includes a second adjustment unit;
the second adjustment unit is configured to determine a target area in response to an adjustment of the regular area;
the magnification unit is configured to magnify the sub-image located in the target area;
the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the target area, and replace the sub-images located in the target area in each frame between the start time node and the end time node.
In some examples, the terminal further includes a boundary optimization module;
the boundary optimization module is configured to, after the replacement module has executed, determine a boundary-optimized image to be displayed in response to an optimization operation on the replacement boundary; the optimization operation includes one or more of color smoothing, boundary sharpening, and image blurring;
the display screen is configured to display the image to be displayed.
In some examples, the terminal further includes a manual adjustment module; the manual adjustment module includes an editing unit and a display unit;
the editing unit is configured to generate editing content in response to an editing operation on the frame image;
the display unit is configured to superimpose the editing content on the frame image for display;
the display screen is configured to display the frame image with the editing content superimposed.
In some examples, the manual adjustment module further includes an effect adjustment unit;
the effect adjustment unit is configured to respond to an adjustment of the playback effect of the editing content;
the display unit is configured to superimpose the editing content and the playback effect on the frame image for display;
the display screen is configured to display the frame image with the editing content and the adjusted playback effect superimposed.
In some examples, the interactive system further includes an image acquisition device;
the image acquisition device is configured to capture a scene image of a real scene and send it to the server.
Brief Description of the Drawings
Figure 1 is a schematic diagram of an interactive system provided by an embodiment of the present disclosure;
Figure 2 is a schematic structural diagram of a server provided by an embodiment of the present disclosure;
Figure 3 is a specific schematic diagram of an interactive system provided by an embodiment of the present disclosure;
Figure 4 is a schematic diagram of an interaction method provided by an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings herein can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Unless otherwise defined, technical or scientific terms used in the present disclosure shall have the ordinary meanings understood by persons of ordinary skill in the field to which the present disclosure belongs. "First", "second", and similar words used in the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different components. Likewise, words such as "a", "an", or "the" do not denote a limitation of quantity, but rather the presence of at least one. Words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are only used to denote relative positional relationships; when the absolute position of the described object changes, the relative positional relationships may change accordingly.
"A plurality of" or "several" mentioned in the present disclosure means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
In a first aspect, Figure 1 is a schematic diagram of an interactive system provided by an embodiment of the present disclosure. As shown in Figure 1, the interactive system includes a terminal, a display screen, and a server; the terminal and the display screen are each communicatively connected to the server. Here, the display screen may be a stage screen in a performance scene, including display screens on the floor, on the surrounding walls, on the ceiling, suspended in the air, and so on.
The structures in the interactive system are described in detail below:
The server is configured to determine the current position of a target object located on the display screen; generate an object identifier of the target object according to the target object's current position, and associate the object identifier and the current position of the same target object; and receive adjustment information for the target object, generate an indication pattern according to the target object's current position and the adjustment information, and send the indication pattern to the display screen for display.
Here, the target object may be a controlled person or thing; for example, in a stage rehearsal scene, the target object may be an actor, a prop, etc. The object identifier may be a virtual model, an avatar, a code name, or the like that represents the target object.
In the stage rehearsal scene, actors stand on the display screen with props arranged around them. From the information collected by scene cameras or position sensors, the server can determine the specific position of the target object on the display screen within the stage scene, that is, the target object's current position on the display screen. A corresponding object identifier is generated for each target object to be controlled, the object identifier of the same target object is associated with that target object's current position, and this association is stored so that the terminal can later monitor the target object's position.
Since the current position represents the target object's current position in the real scene, the adjustment information includes information on adjusting the target object's position. The server generates, according to the target object's current position and the position adjustment information, an indication pattern directing movement away from the current position; when the indication pattern is displayed on the display screen, the target object in the real scene can adjust its position as the pattern indicates.
The terminal is configured to display a frame image pre-configured for the display screen, and display the object identifier according to the association between the object identifier and the current position; and, in response to a user's arrangement operation on the object identifier, generate the adjustment information for the target object.
Here, an operator operates the terminal of the embodiments of the present disclosure to direct the controlled persons (i.e., target objects) for scene scheduling. The terminal may be a portable device, such as a mobile phone, a tablet computer, or an all-in-one touch machine, which synchronously displays the picture shown on the display screen in the display scene. When the display screen needs to be adjusted, the operator can operate directly on the portable device.
The server pre-stores the frame images configured for the display screen. While the display screen displays a frame image, the terminal can obtain the display screen's frame image so that the terminal's display module shows the same picture synchronously. Here, the display module can be understood as the terminal's screen.
While displaying the frame image, the terminal obtains from the server the current position of the target object in the real scene, the object identifier, and the association between the object identifier and the current position, and displays the object identifier at a specific position of the display module according to that association; this specific position corresponds to the target object's current position in the real scene. The terminal then responds to the operator's arrangement operation on the object identifier shown on the display module and generates adjustment information for the target object in the real scene corresponding to that identifier. For example, the operator may drag an object identifier shown on the display module to an adjusted position, or circle an area and then drag object identifiers into it, to arrange the target objects in the scene. The adjustment information may include information on the position adjustment of the target object. The terminal can send the generated adjustment information to the server via Wi-Fi.
The display screen is configured to display the indication pattern, so that the target object adjusts as directed by the indication pattern.
Through information interaction among the server, the terminal, and the display screen, the embodiments of the present disclosure can feed the operator's on-site modification scheme back to the display screen in real time for quick adjustment of target objects. During a performance, precise scheduling between backstage and the stage can be achieved, greatly reducing communication costs between backstage and the stage and improving the operator's command efficiency.
In some embodiments, Figure 2 is a schematic structural diagram of a server provided by an embodiment of the present disclosure. As shown in Figure 2, the server includes a position determination module, an identifier generation module, an information association module, a pattern generation module, and a pattern sending module. The position determination module is configured to determine the current position of the target object located on the display screen; the identifier generation module is configured to generate the object identifier of the target object; the information association module is configured to associate the object identifier and the current position of the same target object; the pattern generation module is configured to receive the adjustment information and generate the indication pattern according to the target object's current position and the adjustment information; the pattern sending module is configured to send the indication pattern to the display screen for display.
The modules of the server are described in detail below:
In some examples, embodiments of the present disclosure propose two different ways of determining the current position of the target object in the real scene. Example one: Figure 3 is a specific schematic diagram of an interactive system provided by an embodiment of the present disclosure. As shown in Figure 3, an image acquisition device captures a scene image of the real scene, and the scene image is used to determine the target object's current position. Example two: the target object's current position is determined by a positioning sensor carried by the target object.
For example one, the position determination module is specifically configured to receive the scene image of the real scene, recognize the target object in the scene image, and determine the target object's current position.
Specifically, the interactive system further includes an image acquisition device; the image acquisition device is configured to capture the scene image of the real scene and send it to the position determination module in the server.
For example two, the interactive system further includes a sensor configured for the target object; the sensor is configured to send the target object's position information directly to the server; the position determination module is specifically configured to take the received position information of the target object as its current position. Here, the sensor may be a positioning sensor that feeds its current position information back to the server in real time; this position information is a position coordinate in the real scene.
In some examples, the adjustment information includes the target object's updated position; the pattern generation module is specifically configured to generate, according to the target object's current position and updated position, a first indication pattern at the current position and a second indication pattern from the current position to the updated position. Here, the first indication pattern is located only at the current position and prompts the target object there that an adjustment is needed. The first indication pattern may be selected at random by the pattern generation module from a pattern library, or may be selected by the user from the pattern library. The first indication pattern may be shaped as a triangle, square, circle, five-pointed star, etc., in a color such as white, red, green, yellow, or blue; alternatively, it may be a dynamic special-effect pattern, such as a bright flash, a breathing light, or blooming fireworks. The second indication pattern may be a static arrow pointing from the current position to the updated position, or a moving-arrow animation — for example, starting when the first indication pattern appears at the current position, an arrow traces a movement trajectory from the current position to the updated position, instructing the target object to follow the arrow's path to the destination. The display screen is configured to display the first indication pattern at the current position and the second indication pattern between the current position and the updated position.
In some examples, the first and second indication patterns may be patterns that stand out from the picture currently shown on the display screen. Accordingly, the pattern generation module is specifically configured to determine, according to the frame image, the picture of the other areas around the current position and the picture of the area between the current position and the updated position; generate the first indication pattern at the current position according to the picture of the other areas and the current position; and generate the second indication pattern from the current position to the updated position according to the area picture, the target object's current position, and the updated position.
The first indication pattern may be automatically selected by the pattern generation module from the pattern library. Specifically, to distinguish the first indication pattern clearly from the frame image currently shown on the display screen, before generating the first indication pattern the pattern generation module recognizes the picture of the other areas around the current position and selects from the pattern library a first indication pattern that contrasts clearly with that picture in color and shape. For example, if the current picture is a blue ocean, the generated first indication pattern is preferably an angular figure, such as a five-pointed star, in a strongly contrasting color such as yellow or orange. For the second indication pattern, the picture of the area between the current position and the updated position is recognized, and a second indication pattern clearly contrasting with the colors of that area is selected from the pattern library. For example, if the area around the current position is blue, the area around the updated position is yellow, and there is a red area between the two positions, the color of the second indication pattern may be one distinct from blue, yellow, and red — preferably green. On the premise that the colors are distinguishable, the second indication pattern is generated in the same way as in the example above (i.e., a static arrow or a moving arrow); the repeated parts are not described again.
In some examples, a third indication pattern may be displayed only at the current position, and a fourth indication pattern at the updated position.
Specifically, the adjustment information includes the target object's updated position; the pattern generation module is specifically configured to generate, according to the target object's current position, a third indication pattern at the current position, and according to the updated position, a fourth indication pattern at the updated position. Here, the third indication pattern may be the same kind of pattern as the first: shaped as a triangle, square, circle, or five-pointed star, in white, red, green, yellow, or blue; or a dynamic special-effect pattern such as a bright flash, a breathing light, or blooming fireworks. The fourth indication pattern may likewise be any of those shapes and colors, or a dynamic special-effect pattern. Illustratively, to help the target object in the real scene locate the adjusted position, the third and fourth indication patterns of a given target object are the same. The display screen is configured to display the third indication pattern at the current position and the fourth indication pattern at the updated position.
Both the third and fourth indication patterns contrast clearly with the surrounding picture; for the specific implementation, see the automatic configuration process in the example above, with repeated parts omitted.
In some examples, as shown in Figure 3, the pattern sending module includes a video transmitting card, a photoelectric conversion unit, and a video receiving card. The video transmitting card is configured to send the indication patterns (e.g., the first and second indication patterns above; or the third and fourth) to the photoelectric conversion unit via optical fiber. The photoelectric conversion unit is configured to convert the indication patterns into electrical signals and send them to the video receiving card; the video receiving card is configured to convert the received electrical signals into video signals and send them to the display screen for display.
In some examples, the pattern sending module is further configured to send the frame image to the display screen. Specifically, as shown in Figure 3, the video transmitting card sends the frame image via optical fiber to the photoelectric conversion unit, which converts it into electrical signals for the video receiving card; the video receiving card converts the received electrical signals into video signals and sends them to the display screen for display.
The display screen is configured to display the frame image and, upon receiving the indication patterns, superimpose them on the frame image. Alternatively, the display screen is configured to display the frame image and, upon receiving the indication patterns, replace the frame image at the current position with the first indication pattern and superimpose the second indication pattern on the frame image for display.
Here, superimposed display means adding a layer above the currently displayed frame image dedicated to showing the indication patterns. Image replacement means that, in the image layer containing the frame image, the image at the current position of that layer is replaced with the first indication pattern. Because the second indication pattern spans a long path, it is only superimposed, not substituted. Of course, the replacement technique used for the first indication pattern could also replace the corresponding picture with the second indication pattern; embodiments of the present disclosure may be configured according to the actual situation, which the present disclosure does not limit.
Likewise, the third and fourth indication patterns can be displayed on the display screen following the same implementation; repeated parts are not described again.
In some examples, after a complete round of scheduling of the target objects, the adjustment information generated by the terminal can be saved to the server for convenient later review of the target objects' scheduling scheme. Specifically, the interactive system further includes a storage module; the terminal is further configured to, in response to a scheme-storage operation, send the positions of the currently displayed object identifiers to the storage module; the storage module is configured to store the position of each object identifier, so that the adjustment scheme of the corresponding target object can be recalled.
In some embodiments, to address the long turnaround of stage-effect modification in the stage scene, the terminal provided by embodiments of the present disclosure further includes a replacement module for synchronously displaying the modification scheme for the frame image on the display screen. This real-time, visually intuitive modification approach can reduce the rework caused by actual results failing to meet expectations, lowering modification costs while improving work efficiency.
Specifically, the display module is configured to, in response to a replacement operation on the frame image, display the frame image together with a preset material library file; the material library file includes at least one material video for replacing the frame image. The replacement module is configured to, in response to selection of a start time node and an end time node and a selection operation on a material video, extract a target material video segment from the material video and replace the frame images between the start and end time nodes; the duration of the target material video segment equals the duration from the start time node to the end time node.
Taking the stage scene as an example, the user pre-stores relevant material videos, pictures, audio, text, and other materials in the terminal. On finding that the content currently displayed on the display screen needs modification, the operator can select the stage modification mode shown in the display module. The display module then displays the picture of the current stage (i.e., the frame image played on the display screen) and the material library file, which contains a variety of material videos. On the progress bar of the currently played frame image, the operator selects the time period to be modified — that is, the start and end time nodes — and then drags the replacement material into that time period. The replacement module extracts from the received material video a target segment equal in length to the modification duration, i.e., the duration from the start time node to the end time node. The segment may be extracted at a random offset within the material video, or taken from the material video's first frame onward for the modification duration. Image replacement here means replacing the image layer containing the frame images of the corresponding time period with the material video.
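As a hypothetical illustration of the clipping step just described (the function names, frame-list representation, and frame rate are illustrative assumptions, not part of the disclosure), the extraction of a segment matching the modification duration might be sketched as:

```python
import random

def extract_target_clip(material_frames, start_node, end_node, fps=25, from_start=False):
    """Extract a clip whose duration equals the modification duration
    (end_node - start_node, in seconds) from a list of material frames."""
    needed = int((end_node - start_node) * fps)  # frames required
    if needed > len(material_frames):
        raise ValueError("material video shorter than modification duration")
    if from_start:
        offset = 0  # take the clip from the first frame onward
    else:
        # or pick a random offset within the material video
        offset = random.randint(0, len(material_frames) - needed)
    return material_frames[offset:offset + needed]

def replace_layer(timeline_frames, clip, start_node, fps=25):
    """Replace the frame images between the start and end nodes with the clip."""
    start = int(start_node * fps)
    timeline_frames[start:start + len(clip)] = clip
    return timeline_frames
```

Either extraction strategy yields a clip of exactly the modification duration, so the replacement never shifts the surrounding timeline.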
In some examples, if only a partial area of the frame image needs modification, the part to be modified can first be framed by hand, and the framed area then adjusted in detail. Specifically, the terminal further includes a frame-selection module; the frame-selection module includes a first adjustment unit and a magnification unit. The first adjustment unit is configured to, in response to an operation of framing a partial area of the frame image, adjust the partial area into a regular area. It should be noted that when the operator hand-draws the area to be modified on the terminal screen, the drawn outline is not necessarily regular; the first adjustment unit can regularize the area framed by the operator's hand-drawn outline to obtain a regular area, such as a rectangle or a circle. The magnification unit is configured to magnify the sub-image located in the regular area. Specifically, according to the ratio between the area of the frame image displayed by the display module and the regular area, the magnification unit can proportionally enlarge the sub-image of the regular area on the display module so that it occupies as much of the display module's image display area as possible, making it easier for the operator to view the sub-image.
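One way to picture the regularization and magnification steps is the minimal sketch below, under the assumption that the hand-drawn outline is a list of (x, y) points and the regular area is its axis-aligned bounding rectangle (the disclosure also allows circles and other regular shapes; all names here are illustrative):

```python
def regularize(points):
    """Snap a hand-drawn outline to its axis-aligned bounding rectangle."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)

def magnify_scale(display_w, display_h, rect):
    """Uniform scale factor so the regular area fills as much of the
    display module's image area as possible without distortion."""
    x0, y0, x1, y1 = rect
    w, h = x1 - x0, y1 - y0
    return min(display_w / w, display_h / h)
```

Taking the minimum of the two axis ratios keeps the enlargement proportional, matching the requirement that the sub-image be scaled uniformly rather than stretched.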
The replacement module is specifically configured to, in response to selection of the start and end time nodes and a selection operation on a material video, extract from the material video a target segment corresponding to the position of the regular area, and replace the sub-images located in the regular area in each frame between the start and end time nodes; the resolution of the material images in the target material video is the same as that of the frame images.
As with replacing the entire image shown by the display module in the embodiment above, this example can use the same technique to replace the sub-image of the regular area in the frame image; the repeated process is not described again.
In some examples, after the regular area is determined, the operator can further adjust it — for example, by resizing the box — to determine the final target area to select. Specifically, the frame-selection module further includes a second adjustment unit; the second adjustment unit is configured to determine the target area in response to an adjustment of the regular area. The magnification unit is configured to magnify the sub-image located in the target area, in the same way that it magnifies the sub-image of the regular area in the example above; repeated parts are not described again.
The replacement module is specifically configured to, in response to selection of the start and end time nodes and a selection operation on a material video, extract from the material video a target segment corresponding to the position of the target area, and replace the sub-images located in the target area in each frame between the start and end time nodes.
As with replacing the entire image shown by the display module in the embodiment above, this example can use the same technique to replace the sub-image of the target area in the frame image; the repeated process is not described again.
Image replacement here means that the material sub-image is simply displayed overlaid above the original image layer.
In some examples, the terminal further includes a boundary optimization module. The boundary optimization module is configured to, after the replacement module has executed, determine the boundary-optimized image to be displayed in response to an optimization operation on the replacement boundary. The display screen is configured to display the image to be displayed.
Here, when only a partial area of the frame image needs modification, after the partial area's image is changed there is an obvious visual change between the sub-image of the partial area and the original frame image of the surrounding area. By default, therefore, the terminal optimizes the boundary between the replaced partial area's sub-image and the original frame image so that the boundary transitions smoothly. In special cases, however, the operator may instead choose to sharpen the boundary, that is, to make the difference between the images on either side of the boundary pronounced. The optimization operation may therefore include one or more of color smoothing, boundary sharpening, and image blurring, set according to the actual scene and not specifically limited in the embodiments of the present disclosure.
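As a sketch of what the default color-smoothing pass might look like in one dimension (the linear feathering and its window width are illustrative assumptions; the disclosure only names the operation categories):

```python
def smooth_boundary(original, replacement, boundary, feather=4):
    """Blend two 1-D pixel rows across a boundary index so the
    transition between replaced and original pixels is gradual."""
    # left of the boundary comes from the replacement, right from the original
    out = list(replacement[:boundary]) + list(original[boundary:])
    for i in range(max(0, boundary - feather), min(len(out), boundary + feather)):
        # alpha ramps 0 -> 1 across the feather window
        alpha = (i - (boundary - feather)) / (2 * feather)
        out[i] = replacement[i] * (1 - alpha) + original[i] * alpha
    return out
```

A boundary-sharpening operation would do the opposite: amplify the difference across the boundary instead of ramping between the two sides.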
In some embodiments, the stage picture may also contain display errors. To improve the overall coordination of the performance picture, the terminal is further provided with a manual adjustment module that supports hand-drawing on the picture. The manual adjustment module includes an editing unit and a display unit. The editing unit is configured to generate editing content in response to an editing operation on the frame image. The display unit is configured to superimpose the editing content on the frame image for display. The display screen is configured to display the frame image with the editing content superimposed.
Here, the editing operation can be understood as the operator drawing patterns on the terminal screen to compensate for incompleteness of, or to improve the overall coordination of, the picture shown on the display screen. The display screen is configured to display the frame image with the editing content superimposed.
In some examples, the manual adjustment module further includes an effect adjustment unit. The effect adjustment unit is configured to respond to an adjustment of the playback effect of the editing content. The display unit is configured to superimpose the editing content and the playback effect on the frame image for display. The display screen is configured to display the frame image with the editing content and the adjusted playback effect superimposed.
The display time and playback effect of the hand-drawn part can be adjusted — for example, looping over a certain period, with a fade-in/fade-out or gradient playback effect. The manual adjustment module sends the drawing result to the server for rendering; the server then sends the rendered effect to the display screen, where it is superimposed above the frame image for display.
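An illustrative sketch of looping playback with a fade-in/fade-out effect for the hand-drawn layer (the linear opacity model, period handling, and function names are assumptions for illustration, not the disclosure's rendering method):

```python
def overlay_opacity(t, period, fade=0.5):
    """Opacity of the hand-drawn layer at time t, looping every `period`
    seconds with linear fade-in and fade-out of `fade` seconds each."""
    t = t % period  # loop playback over the period
    if t < fade:
        return t / fade             # fade in
    if t > period - fade:
        return (period - t) / fade  # fade out
    return 1.0

def composite(frame_px, edit_px, opacity):
    """Alpha-composite one edited pixel over the underlying frame pixel."""
    return edit_px * opacity + frame_px * (1 - opacity)
```

At full opacity the hand-drawn content fully covers the frame image; between fades it blends proportionally, which is what produces the fade-in/fade-out appearance on the display screen.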
In some embodiments, the terminal includes a position adjustment module for the display screen, used to adjust the display screen's position in the display scene — for example, real-time control of a lifting platform; the server can save the timestamp of the modification.
In some examples, after a complete round of stage-effect modification, the modification information generated by the terminal can be saved to the server for convenient later review of the stage-effect modification scheme. Specifically, the interactive system further includes a storage module; the terminal is further configured to, in response to a scheme-storage operation, send the positions of the currently displayed object identifiers to the storage module; the storage module is further configured to store the stage modification information (i.e., the material video replacements, hand-drawn content, stage special effects, lifting-platform adjustments, and so on in the examples above) in voice or text form, so that the stage's modification history can be recalled.
The interactive system provided by embodiments of the present disclosure can be applied to performance rehearsal scenes. By arranging the object identifiers of target objects on the terminal, the operator can help target objects in the real scene quickly adjust their positions, improving the operator's command efficiency. In addition, in case of mistakes or misplacement by target objects during a performance, the terminal provided by embodiments of the present disclosure supports modification of the stage effect, quickly modifying the background picture shown on the display screen to compensate for incompleteness of the stage effect. Alternatively, the operator can use the manual adjustment module to compensate for the stage effect: for example, in a scheme where the performance is synchronized with an actor's trajectory, if the server does not receive the actor's trajectory data, it initiates a manual-adjustment request to the terminal's manual adjustment module, which responds to the operator's drawing operation on the frame image and superimposes the editing content and/or the adjusted playback effect on the frame image.
In a second aspect, embodiments of the present disclosure further provide an interaction method. Figure 4 is a schematic diagram of an interaction method provided by an embodiment of the present disclosure; as shown in Figure 4, it includes steps S41 to S46.
S41: determine the current position of a target object located on the display screen.
S42: generate an object identifier of the target object according to the target object's current position, and associate the object identifier and the current position of the same target object.
Here, S41 and S42 can be executed by the server in the examples above; the details are not repeated.
S43: display the object identifier according to the association between the object identifier and the current position.
S44: in response to a user's arrangement operation on the object identifier, generate adjustment information for the target object.
Here, S43 and S44 can be executed by the terminal in the examples above; the details are not repeated.
S45: generate an indication pattern according to the target object's current position and the adjustment information.
Here, S45 can be executed by the server in the examples above; the details are not repeated.
S46: display the indication pattern, so that the target object adjusts as directed by the indication pattern.
Here, S46 can be executed by the display screen in the examples above; the details are not repeated.
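The division of work across steps S41–S46 can be pictured in a minimal end-to-end sketch (all class and method names here are illustrative assumptions, not the disclosure's API):

```python
class Server:
    def __init__(self):
        self.assoc = {}  # object identifier -> current position

    def register(self, obj_id, current_pos):
        # S41/S42: determine the current position and associate it with the identifier
        self.assoc[obj_id] = current_pos

    def make_pattern(self, obj_id, new_pos):
        # S45: build an indication pattern from the current and updated positions
        return {"from": self.assoc[obj_id], "to": new_pos}

class Terminal:
    def arrange(self, obj_id, new_pos):
        # S43/S44: the operator drags the identifier; emit adjustment information
        return (obj_id, new_pos)

class DisplayScreen:
    def show(self, pattern):
        # S46: display the indication pattern for the target object to follow
        return f"arrow {pattern['from']} -> {pattern['to']}"

server, terminal, screen = Server(), Terminal(), DisplayScreen()
server.register("actor-1", (2, 3))
obj_id, new_pos = terminal.arrange("actor-1", (5, 7))
print(screen.show(server.make_pattern(obj_id, new_pos)))
```

The sketch only shows the data handoff between the three roles; the actual system carries images and patterns over the video transmission chain described above.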
The embodiments of the present disclosure can feed the operator's on-site modification scheme back to the display screen in real time for quick adjustment of target objects. During a performance, precise scheduling between backstage and the stage can be achieved, greatly reducing communication costs between backstage and the stage and improving the operator's command efficiency.
It should be understood that the above embodiments are merely exemplary embodiments adopted to illustrate the principles of the present disclosure, and the present disclosure is not limited thereto. For those of ordinary skill in the art, various variations and improvements can be made without departing from the spirit and essence of the present disclosure, and these variations and improvements are also regarded as within the protection scope of the present disclosure.

Claims (16)

  1. An interactive system, comprising a terminal, a display screen, and a server; the terminal and the display screen are each communicatively connected to the server;
    the server is configured to determine the current position of a target object located on the display screen; generate an object identifier of the target object according to the current position of the target object, and associate the object identifier and the current position of the same target object; and receive adjustment information for the target object, generate an indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display;
    the terminal is configured to display a frame image pre-configured for the display screen, and display the object identifier according to the association between the object identifier and the current position; and, in response to a user's arrangement operation on the object identifier, generate the adjustment information for the target object;
    the display screen is configured to display the indication pattern, so that the target object adjusts as directed by the indication pattern.
  2. The interactive system according to claim 1, wherein the server includes a position determination module, an identifier generation module, an information association module, a pattern generation module, and a pattern sending module;
    the position determination module is configured to determine the current position of the target object located on the display screen;
    the identifier generation module is configured to generate the object identifier of the target object;
    the information association module is configured to associate the object identifier and the current position of the same target object;
    the pattern generation module is configured to receive the adjustment information, generate the indication pattern according to the current position of the target object and the adjustment information, and send the indication pattern to the display screen for display;
    the pattern sending module is configured to send the indication pattern to the display screen for display.
  3. The interactive system according to claim 2, wherein the position determination module is specifically configured to receive a scene image of a real scene, recognize the target object in the scene image, and determine the current position of the target object.
  4. The interactive system according to claim 2, wherein the interactive system further includes a sensor configured for the target object; the sensor is configured to send position information of the target object to the server;
    the position determination module is specifically configured to take the received position information of the target object as the current position of the target object.
  5. The interactive system according to claim 2, wherein the adjustment information includes an updated position of the target object; the pattern generation module is specifically configured to generate, according to the current position and the updated position of the target object, a first indication pattern located at the current position, and generate a second indication pattern from the current position to the updated position;
    the display screen is configured to display the first indication pattern at the current position, and display the second indication pattern between the current position and the updated position.
  6. The interactive system according to claim 5, wherein the pattern generation module is specifically configured to determine, according to the frame image, the picture corresponding to other areas around the current position and the picture of the area between the current position and the updated position; generate the first indication pattern located at the current position according to the picture corresponding to the other areas and the current position; and generate the second indication pattern from the current position to the updated position according to the area picture, the current position of the target object, and the updated position.
  7. The interactive system according to claim 5 or 6, wherein the pattern sending module is further configured to send the frame image to the display screen;
    the display screen is configured to display the frame image and, upon receiving the indication pattern, superimpose the indication pattern on the frame image for display; or to replace the frame image located at the current position with the first indication pattern and superimpose the second indication pattern on the frame image for display.
  8. The interactive system according to claim 2, wherein the adjustment information includes an updated position of the target object; the pattern generation module is specifically configured to generate, according to the current position of the target object, a third indication pattern located at the current position, and generate, according to the updated position, a fourth indication pattern located at the updated position;
    the display screen is configured to display the third indication pattern at the current position and the fourth indication pattern at the updated position.
  9. The interactive system according to claim 1, wherein the interactive system further includes a storage module;
    the terminal is further configured to, in response to a scheme-storage operation, send the position of each currently displayed object identifier to the storage module;
    the storage module is configured to store the position of each object identifier.
  10. The interactive system according to claim 1, wherein the terminal further includes a display module and a replacement module;
    the display module is configured to, in response to a replacement operation on the frame image, display the frame image and a preset material library file; the material library file includes at least one material video for replacing the frame image;
    the replacement module is configured to, in response to selection of a start time node and an end time node and a selection operation on a material video, extract a target material video segment from the material video and replace the frame images between the start time node and the end time node; the duration of the target material video segment is equal to the duration from the start time node to the end time node.
  11. The interactive system according to claim 10, wherein the terminal further includes a frame-selection module; the frame-selection module includes a first adjustment unit and a magnification unit;
    the first adjustment unit is configured to, in response to an operation of framing a partial area of the frame image, adjust the partial area into a regular area;
    the magnification unit is configured to magnify the sub-image located in the regular area;
    the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the regular area, and replace the sub-images located in the regular area in each frame between the start time node and the end time node; the resolution of the material images in the target material video is the same as the resolution of the frame images.
  12. The interactive system according to claim 11, wherein the frame-selection module further includes a second adjustment unit;
    the second adjustment unit is configured to determine a target area in response to an adjustment of the regular area;
    the magnification unit is configured to magnify the sub-image located in the target area;
    the replacement module is specifically configured to, in response to selection of the start time node and the end time node and a selection operation on a material video, extract from the material video a target material video segment corresponding to the position of the target area, and replace the sub-images located in the target area in each frame between the start time node and the end time node.
  13. The interactive system according to claim 11 or 12, wherein the terminal further includes a boundary optimization module;
    the boundary optimization module is configured to, after the replacement module has executed, determine a boundary-optimized image to be displayed in response to an optimization operation on the replacement boundary; the optimization operation includes one or more of color smoothing, boundary sharpening, and image blurring;
    the display screen is configured to display the image to be displayed.
  14. The interactive system according to claim 1, wherein the terminal further includes a manual adjustment module; the manual adjustment module includes an editing unit and a display unit;
    the editing unit is configured to generate editing content in response to an editing operation on the frame image;
    the display unit is configured to superimpose the editing content on the frame image for display;
    the display screen is configured to display the frame image with the editing content superimposed.
  15. The interactive system according to claim 14, wherein the manual adjustment module further includes an effect adjustment unit;
    the effect adjustment unit is configured to respond to an adjustment of the playback effect of the editing content;
    the display unit is configured to superimpose the editing content and the playback effect on the frame image for display;
    the display screen is configured to display the frame image with the editing content and the adjusted playback effect superimposed.
  16. The interactive system according to claim 1, wherein the interactive system further includes an image acquisition device;
    the image acquisition device is configured to capture a scene image of a real scene and send it to the server.
PCT/CN2022/107990 2022-07-26 2022-07-26 一种交互系统 WO2024020794A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280002367.XA CN117769822A (zh) 2022-07-26 2022-07-26 一种交互系统
PCT/CN2022/107990 WO2024020794A1 (zh) 2022-07-26 2022-07-26 一种交互系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/107990 WO2024020794A1 (zh) 2022-07-26 2022-07-26 一种交互系统

Publications (2)

Publication Number Publication Date
WO2024020794A1 true WO2024020794A1 (zh) 2024-02-01
WO2024020794A8 WO2024020794A8 (zh) 2024-03-28

Family

ID=89704716

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107990 WO2024020794A1 (zh) 2022-07-26 2022-07-26 一种交互系统

Country Status (2)

Country Link
CN (1) CN117769822A (zh)
WO (1) WO2024020794A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000149045A (ja) * 1998-11-05 2000-05-30 Matsushita Electric Ind Co Ltd タイトル情報の編集及び再生方法と編集装置
CN108255304A (zh) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 基于增强现实的视频数据处理方法、装置和存储介质
CN111163343A (zh) * 2020-01-20 2020-05-15 海信视像科技股份有限公司 图形识别码的识别方法和显示设备
CN111210577A (zh) * 2020-01-03 2020-05-29 深圳香蕉设计有限公司 一种节日主题虚拟化全息影像交互系统
CN114637890A (zh) * 2020-12-16 2022-06-17 花瓣云科技有限公司 在图像画面中显示标签的方法、终端设备及存储介质


Also Published As

Publication number Publication date
WO2024020794A8 (zh) 2024-03-28
CN117769822A (zh) 2024-03-26

Similar Documents

Publication Publication Date Title
US20180204381A1 (en) Image processing apparatus for generating virtual viewpoint image and method therefor
CN110225224B (zh) 虚拟形象的导播方法、装置及系统
US8363056B2 (en) Content generation system, content generation device, and content generation program
KR20170105444A (ko) 컨텐츠 큐레이션을 포함하는 디스플레이 장치들의 구성 및 동작
US9648272B2 (en) News production system with display controller
JP6429545B2 (ja) 制御装置、制御方法
JP2010160270A (ja) プロジェクタ・システム及びこれを含むビデオ会議システム
KR20230107883A (ko) 촬영 방법, 촬영 장치 및 전자기기
CN103873800A (zh) 一种投影显示图像调节方法及电子设备
KR102371031B1 (ko) 버추얼 프로덕션의 영상 촬영을 위한 장치, 시스템, 방법 및 프로그램
CN109063039A (zh) 一种基于移动端的视频地图动态标签显示方法及系统
WO2015184841A1 (zh) 一种控制投影显示的方法及装置
US9706239B1 (en) Production system with dynamic media server allocation
CN101483742A (zh) 组合式大屏幕前向投影显示方法及控制装置
CN113473207A (zh) 直播方法、装置、存储介质及电子设备
JP2014233056A (ja) 情報処理装置およびプログラム
CN103106700B (zh) 一种基于3d技术的数据中心自动巡检方法
KR20170012109A (ko) 동화상 재생 프로그램, 장치, 및 방법
WO2024020794A1 (zh) 一种交互系统
CN103116330A (zh) 一种指挥大厅控制方法
CN102737567B (zh) 多媒体正投影数字模型互动集成系统
WO2018107318A1 (zh) 一种可视化装修设计方法及其装置、机器人
RU105102U1 (ru) Автоматизированная система для создания, обработки и монтажа видеороликов
CN105161005A (zh) 利用扩展场景和沉浸式弧形大屏幕拍摄mtv的系统
CN116962744A (zh) 网络直播的连麦互动方法、装置及直播系统

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280002367.X

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22952259

Country of ref document: EP

Kind code of ref document: A1