CN113542891B - Video special effect display method and device - Google Patents


Info

Publication number
CN113542891B
CN113542891B (application CN202110692749.7A)
Authority
CN
China
Prior art keywords
terminal
target video
trigger point
preset trigger
special effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110692749.7A
Other languages
Chinese (zh)
Other versions
CN113542891A (en)
Inventor
王冉冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202110692749.7A priority Critical patent/CN113542891B/en
Publication of CN113542891A publication Critical patent/CN113542891A/en
Application granted granted Critical
Publication of CN113542891B publication Critical patent/CN113542891B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N21/4725End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Abstract

The application relates to the technical field of augmented reality (AR) and provides a video special effect display method and device. A first terminal acquires and plays a target video in response to a received target video playing request, and detects at a set time interval whether a second terminal is accessed, the second terminal being provided with a transparent display screen through which its wearer watches the target video played by the first terminal. If the second terminal is accessed, the first terminal sends a control instruction to the second terminal in response to a detected preset trigger point, where the preset trigger point indicates that a playing special effect is set at the moment, associated with the target video, corresponding to the trigger point, and the control instruction carries the identifier of the target video. The second terminal acquires and plays, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point. Displaying the target video and the special effect information through the first terminal and the second terminal respectively reduces video stuttering.

Description

Video special effect display method and device
Technical Field
The application relates to the technical field of augmented reality (Augmented Reality, AR), in particular to a video special effect display method and device.
Background
Augmented reality (Augmented Reality, AR) is a technology that fuses virtual information with the real world. It draws on technical means such as multimedia, three-dimensional modeling, real-time tracking, intelligent interaction, and sensing, and applies generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation, the two kinds of information complementing each other and thereby enhancing the real world.
AR display has wide application in industries such as education and training, fire drills, virtual driving, real estate, and marketing, giving users an immersive visual experience. Superimposing scene special effects on video content through AR technology fully demonstrates the unique visual impact brought by intelligence; when watching a video with a personalized special effect superimposed, a user can produce interesting physical responses, and the approach shows high stability even in extreme environments.
At present, AR-enhanced special effects are generally implemented by having the AR device both play the video and display the special effects, which places high processing-performance requirements on the AR device and easily leads to video stuttering.
Disclosure of Invention
The embodiment of the application provides a video special effect display method and device, which are used for improving the performance of AR display special effects.
In a first aspect, an embodiment of the present application provides a video special effect display method, including:
the first terminal responds to the received target video playing request, and acquires and plays the target video;
the first terminal detects whether the second terminal is accessed according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;
when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to the detected preset trigger point, wherein the preset trigger point represents that playing special effects are set at the moment corresponding to the preset trigger point related to the target video, the control instruction carries an identifier of the target video, and the control instruction is used for enabling the second terminal to acquire and play special effect information set at the moment corresponding to the preset trigger point according to the identifier.
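The first-aspect flow above can be sketched in Python. This is a minimal, illustrative model only: the class and function names, and the shape of the control instruction, are assumptions and not part of the patent.

```python
class SecondTerminal:
    """Minimal stand-in for the AR glasses (second terminal); illustrative only."""
    def __init__(self, connected=True):
        self.connected = connected
        self.received = []

    def is_connected(self):
        return self.connected

    def send(self, instruction):
        self.received.append(instruction)


def run_first_terminal(video, second_terminal):
    """Walk the target video's preset trigger points and, whenever the second
    terminal is accessed, send it a control instruction carrying the target
    video's identifier and the trigger moment."""
    sent = []
    for trigger_time in sorted(video["trigger_points"]):
        # The patent polls for the second terminal at a set time interval;
        # here we simply check connectivity before each trigger fires.
        if second_terminal.is_connected():
            instruction = {"video_id": video["id"], "trigger_time": trigger_time}
            second_terminal.send(instruction)
            sent.append(instruction)
    return sent
```

When no second terminal is accessed, the instruction is simply not sent; the fallback behavior is covered by the optional embodiments below.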
In a second aspect, an embodiment of the present application provides a video special effect display method, including:
when a first terminal is accessed, a second terminal receives a control instruction sent by the first terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at a moment, associated with a target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request; and the second terminal is provided with a transparent display screen through which a wearer of the second terminal watches the target video played by the first terminal;
and the second terminal acquires and plays special effect information set at the moment corresponding to the preset trigger point according to the identifier carried by the control instruction.
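A sketch of the second-terminal side, under the same illustrative assumptions: the effect store keyed by (video identifier, trigger moment) is hypothetical, standing in for whatever the server or local memory provides.

```python
# Hypothetical effect store keyed by (video identifier, trigger moment).
EFFECT_STORE = {
    ("v1", 5.0): "snowfall",
    ("v1", 12.0): "fireworks",
}


def handle_control_instruction(instruction, effect_store=EFFECT_STORE):
    """On receiving a control instruction, use the identifier it carries to
    look up the special effect information set at the trigger point's moment;
    a renderer would then play it (here it is simply returned)."""
    key = (instruction["video_id"], instruction["trigger_time"])
    return effect_store.get(key)  # None if no effect is configured
```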
In a third aspect, an embodiment of the present application provides a first terminal, including a display, a memory, and a controller:
the display is connected with the controller and is configured to display a target video;
the memory is connected with the controller and is configured to store computer program instructions;
the controller is configured to perform the following operations according to the computer program instructions:
responding to a received target video playing request, acquiring a target video and playing the target video;
detecting whether a second terminal is connected or not according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;
when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to the detected preset trigger point, wherein the preset trigger point represents that a playing special effect is set at the moment corresponding to the preset trigger point associated with the target video, the control instruction carries an identifier of the target video, and the control instruction is used for enabling the second terminal to acquire and play, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point.
Optionally, the controller is further configured to:
when the second terminal is not accessed, responding to the detected preset trigger point and sending a special effect acquisition request to a server;
and receiving and playing special effect information which is sent by the server and is set at the moment corresponding to the preset trigger point.
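The dispatch decision above (delegate the effect to the second terminal if accessed, otherwise fetch it from the server and play it on the first terminal) can be sketched as follows. `EffectServer` and `dispatch_effect` are illustrative names, not from the patent.

```python
class EffectServer:
    """Illustrative stand-in for the server that stores special effect information."""
    def get_effect(self, video_id, trigger_time):
        return {"video_id": video_id, "trigger_time": trigger_time, "effect": "sparkle"}


def dispatch_effect(video_id, trigger_time, second_terminal_accessed, server):
    """When no second terminal is accessed, the first terminal requests the
    effect from the server and plays it itself; otherwise the effect is
    delegated to the second terminal via a control instruction."""
    if second_terminal_accessed:
        return ("second_terminal", {"video_id": video_id, "trigger_time": trigger_time})
    return ("first_terminal", server.get_effect(video_id, trigger_time))
```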
Optionally, the controller is further configured to:
and if it is detected that the second terminal is unresponsive to the control instruction, displaying, to the wearer, prompt information indicating that the second terminal is unresponsive, wherein the prompt information includes the error type of the non-response, so that the wearer can reconfigure the second terminal according to the error type to re-establish a communication connection with the first terminal.
Optionally, the controller is configured to send a control instruction to the second terminal in response to the detected preset trigger point:
when the preset trigger point is detected, a control instruction is directly sent to the second terminal at the first moment corresponding to the preset trigger point; or
When the preset trigger point is detected, determining a second moment according to a first moment corresponding to the preset trigger point, and sending a control instruction to the second terminal at the second moment.
Optionally, the controller determining the second moment according to the first moment corresponding to the preset trigger point includes:
acquiring a current playing time corresponding to the currently playing target video frame, and if the difference between the current playing time and the first moment is smaller than a preset threshold, determining the current playing time as the second moment; or
determining the second moment according to the first moment corresponding to the preset trigger point and a preset delayed playing time.
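The two options for choosing the send moment can be sketched as a single function. The parameter names and the 0.5-second default threshold are illustrative assumptions.

```python
def determine_send_time(trigger_time, current_play_time=None,
                        threshold=0.5, delay=None):
    """Pick the moment at which to send the control instruction:
    - with a preset delayed playing time, send at trigger_time + delay;
    - else, if the currently playing frame is within `threshold` seconds of
      the trigger point's first moment, send at the current playing time;
    - otherwise send directly at the trigger point's first moment."""
    if delay is not None:
        return trigger_time + delay
    if current_play_time is not None and abs(current_play_time - trigger_time) < threshold:
        return current_play_time
    return trigger_time
```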
Optionally, the preset trigger point is a tag marking when the target video plays to a preset scene, or a tag marking a preset playing time on the playing time axis of the target video.
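The two kinds of trigger point named above can be modeled with a small data structure; the field names, `fires` method, and 0.1-second tolerance are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TriggerPoint:
    """Either a tag bound to a preset scene of the target video, or a tag
    bound to a preset playing time on its playback time axis."""
    scene_label: Optional[str] = None   # e.g. "snow_scene"
    play_time: Optional[float] = None   # seconds on the playing time axis

    def fires(self, current_scene, current_time, tolerance=0.1):
        if self.scene_label is not None:
            return current_scene == self.scene_label
        return self.play_time is not None and abs(current_time - self.play_time) <= tolerance
```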
Optionally, the second terminal is augmented reality AR glasses.
In a fourth aspect, embodiments of the present application provide a second terminal, including a rendering engine, a memory, and a processor:
the rendering engine is connected with the processor and is configured to render and display special effect information;
the memory is connected with the processor and is configured to store computer program instructions;
the processor is configured to perform the following operations in accordance with the computer program instructions:
when a first terminal is accessed, receiving a control instruction sent by the first terminal in response to a detected preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at a moment, associated with a target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request; and the second terminal is provided with a transparent display screen through which a wearer of the second terminal watches the target video played by the first terminal;
and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.
In a fifth aspect, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the video special effect display method provided by the embodiments of the present application.
In the above embodiments of the present application, the wearer of the second terminal interacts with the first terminal: the first terminal plays the target video selected by the wearer, and the wearer watches it through the transparent display screen of the second terminal. The first terminal detects at a set time interval whether the second terminal is accessed; when it detects that the second terminal is accessed, it sends, in response to a detected preset trigger point, a control instruction carrying the target video identifier to the second terminal. After receiving the control instruction, the second terminal acquires and plays, according to the identifier, the special effect information set at the moment corresponding to the preset trigger point, while the first terminal plays the target video. A scheme of superimposing and displaying special effect information on the basis of the target video is thus realized, bringing the wearer a visual experience and producing interesting responses corresponding to the special effect information.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 schematically illustrates an application scenario provided by an embodiment of the present application;
fig. 2 illustrates a hardware configuration diagram of a first terminal provided in an embodiment of the present application;
fig. 3 illustrates a block diagram of a second terminal provided in an embodiment of the present application;
FIG. 4 illustrates a flow chart of a video special effect display method provided by an embodiment of the present application;
fig. 5 exemplarily shows an effect diagram of displaying special effect information by the second terminal provided in the embodiment of the present application;
FIG. 6 illustrates a relationship diagram of different target videos and special effects information provided by embodiments of the present application;
fig. 7a illustrates an effect diagram of displaying special effect information by the first terminal according to the embodiment of the present application;
Fig. 7b illustrates an effect diagram of displaying prompt information by the first terminal according to the embodiment of the present application;
FIG. 8 illustrates a schematic diagram of a television and AR glasses interaction process provided by an embodiment of the present application;
fig. 9 illustrates a complete method flowchart for displaying video special effects on television and AR glasses provided by an embodiment of the present application.
Detailed Description
For the purposes of clarifying the objectives, embodiments, and advantages of the present application, the exemplary embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
Based on the exemplary embodiments described herein, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the appended claims. Furthermore, while the disclosure is presented in the context of one or more exemplary embodiments, it should be appreciated that individual aspects of the disclosure may each constitute a complete embodiment.
The terms "first," "second," "third," and the like in the description, the claims, and the above drawings are used to distinguish between similar objects or entities and do not necessarily describe a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can, for example, be practiced in sequences other than those illustrated or described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario provided in an embodiment of the present application. As shown in fig. 1, the first terminal 100 is configured to acquire a target video according to a received video playing request and play it, and the wearer of the second terminal 200 watches the target video played by the first terminal through the transparent display screen of the second terminal. The first terminal 100 detects at a set time interval whether the second terminal 200 is accessed. When it detects that the second terminal 200 is accessed, the first terminal 100 sends a control instruction to the second terminal 200, and the second terminal 200 plays, according to the received control instruction, a special effect adapted to the target video scene played by the first terminal 100, so that the wearer enjoys an immersive visual experience as if personally on the scene and, when watching the video with the superimposed personalized special effect, produces interesting physical responses. When it detects that no second terminal 200 is accessed, the first terminal 100 plays the personalized special effect while playing the target video.
As shown in fig. 1, the server 300 is configured to store processed target video and special effect information, the first terminal 100 may acquire the target video and the special effect information from the server 300, and the second terminal may acquire the special effect information from the server 300.
The first terminal 100 and the second terminal 200 may be connected via bluetooth or via the same network.
It should be noted that, when the memories of the first terminal and the second terminal are sufficiently large, the corresponding special effect information may also be stored locally.
Taking the first terminal as an example, fig. 2 illustrates a hardware configuration diagram of the first terminal according to an embodiment of the present application. As shown in fig. 2, the first terminal 100 includes at least one of a controller 250, a modem 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.
In some embodiments, the display 275 includes a display screen component for presenting a picture, and a drive component for driving an image display, a component for receiving image signals derived from the first processor output, for displaying video content and images, and a menu manipulation interface.
In some embodiments, display 275 is a projection display and may further include a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi chip, a bluetooth communication protocol chip, a wired ethernet communication protocol chip, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver.
In some embodiments, the first terminal 100 may establish control signal and data signal transmission and reception between the communicator 220 and an external device.
In some embodiments, the user interface 265 may be used to receive control signals for external devices.
In some embodiments, the detector 230 includes a light receiver, an image collector, a temperature sensor, a sound collector, etc., for collecting signals of an external environment or interacting with the outside.
In some embodiments, the input/output interface 255 is configured to enable data transfer between the controller 250 and external other devices or other controllers 250. Such as receiving video signal data and audio signal data of an external device, command instruction data, or the like.
In some embodiments, the external device interface 240 may include, but is not limited to, any one or more of the following: a high definition multimedia interface (HDMI), an analog or digital high definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like. The plurality of interfaces may form a composite input/output interface.
In some embodiments, the modem 210 is configured to receive the broadcast television signal by a wired or wireless receiving manner, and may perform modulation and demodulation processes such as amplifying, mixing, and resonating, and demodulate the audio/video signal from the plurality of wireless or wired broadcast television signals, where the audio/video signal may include a television audio/video signal carried in a television channel frequency selected by a user, and an EPG data signal.
In some embodiments, the frequency point demodulated by the modem 210 is controlled by the controller 250, and the controller 250 may send a control signal according to the user selection, so that the modem responds to the television signal frequency selected by the user and modulates and demodulates the television signal carried by the frequency.
In some embodiments, the controller 250 controls the operation of the first terminal and responds to the user's operations through various software control programs stored on the memory. The controller 250 may control the overall operation of the first terminal 100. For example: in response to receiving a user command to select to display a UI object on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
As shown in fig. 2, the controller 250 includes at least one of a random access memory 251 (Random Access Memory, RAM), a read-only memory 252 (Read-Only Memory, ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a graphics processor (Graphics Processing Unit, GPU)), a central processing unit 254 (Central Processing Unit, CPU), a communication interface (Communication Interface), and a communication bus 256 (Bus) connecting the respective components.
In some embodiments, RAM 251 is used to store temporary data for the operating system or other on-the-fly programs.
In some embodiments, ROM 252 is used to store instructions for various system boots.
In some embodiments, the ROM 252 is used to store a Basic Input Output System (BIOS), which includes a driver program and a boot operating system and is used to complete power-on self-test of the system, initialization of each functional module in the system, and basic input/output of the system.
In some embodiments, upon receipt of a power-on signal, the first terminal 100 powers up, and the CPU runs the system start-up instructions in the ROM 252, copying the temporary data of the operating system stored in the memory into the RAM 251 so as to start or run the operating system. After the operating system is started, the CPU copies the temporary data of the various applications in the memory into the RAM 251 to facilitate starting or running the various applications.
In some embodiments, CPU processor 254 is used to execute operating system and application program instructions stored in memory. And executing various application programs, data and contents according to various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 254 may comprise a plurality of processors, including one main processor and one or more sub-processors: the main processor performs some operations of the first terminal 100 in the pre-power-up mode and/or displays a picture in the normal mode, and the one or more sub-processors perform operations in a standby mode or the like.
In some embodiments, the graphics processor 253 is configured to generate various graphical objects, such as: icons, operation menus, user input instruction display graphics, and the like. The device comprises an arithmetic unit, wherein the arithmetic unit is used for receiving various interaction instructions input by a user to carry out operation and displaying various objects according to display attributes. And a renderer for rendering the various objects obtained by the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, video processor 270 is configured to receive an external video signal, perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image composition, etc., according to the standard codec protocol of the input signal, and may result in a signal that is directly displayable or playable on first terminal 100.
In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain a sound signal that can be played in a speaker.
The power supply 290 supplies power input from an external power source to the first terminal 100 under the control of the controller 250. The power supply 290 may include a built-in power supply circuit installed inside the first terminal 100, or may be an external power supply installed in the first terminal 100, and a power supply interface for providing an external power supply to the first terminal 100.
The user interface 265 is used to receive an input signal from a user and then transmit the received user input signal to the controller 250. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
The memory 260 includes a memory storing various software modules for driving the first terminal 100. Such as: various software modules stored in the first memory, including: at least one of a base module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
Taking the second terminal as an AR glasses example, fig. 3 exemplarily shows a structure diagram of the second terminal provided in the embodiment of the present application. As shown in fig. 3, the second terminal 200 includes a left display lens 301 and a right display lens 302 through which a wearer can view video images. The camera 303 is used to capture images during the interaction.
In some embodiments, the wearer may control the connection to the external device by opening and/or closing AR glasses through switch 304.
As shown in fig. 3, the wearer may interact with AR glasses through a touch pad 305. For example, a user obtains special effect information to be displayed through a touch area.
Although not shown in fig. 3, the AR glasses further include a rendering engine, a memory, a processor, and other chips, which may be integrated on one integrated circuit board placed inside the AR glasses. The rendering engine, the memory, and the processor are connected through a bus; the rendering engine is configured to render and display special effect information, the memory is configured to store computer program instructions, and the processor is configured to execute, according to the computer program instructions, the second-terminal-side special effect display method in the embodiments of the present application.
It should be noted that figs. 1-3 are only examples; alternatively, the first terminal may be any display device with video playing and interaction functions, such as a smart phone, a notebook computer, a desktop computer, or a tablet computer.
Based on the scenario shown in fig. 1, fig. 4 schematically shows a flowchart of a video special effect display method provided in an embodiment of the present application, and as shown in fig. 4, the flowchart is executed by a first terminal, and mainly includes the following steps:
S401: the first terminal, in response to the received target video playing request, acquires and plays the target video.
In this step, the target video playing request may be triggered by the user or sent by another external device. Taking user triggering as an example: the user selects a target video of interest through the touch screen or a function key of the first terminal, thereby sending a target video playing request to the first terminal. After receiving the request, the first terminal sends a target video acquisition request to the server; the server returns the corresponding target video, and the first terminal plays it.
In some embodiments, to improve video playing efficiency, the first terminal first queries its local video list after receiving a target video playing request. If the target video is available locally, the first terminal loads and plays it from local storage; otherwise, it acquires the target video from the server and plays it.
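The local-first lookup can be sketched as follows. This is a minimal illustration; all names (local_video_list, fetch_from_server, acquire_target_video) and the cache/URL layout are assumptions, not anything specified by the patent:

```python
# Illustrative sketch of S401's local-first acquisition.
local_video_list = {"video_42": "/cache/video_42.mp4"}

def fetch_from_server(video_id):
    # Stand-in for the target video acquisition request sent to the server.
    return "https://server.example/videos/" + video_id

def acquire_target_video(video_id):
    # Prefer the locally stored copy to improve playing efficiency;
    # fall back to the server only when the video is not held locally.
    if video_id in local_video_list:
        return local_video_list[video_id]
    return fetch_from_server(video_id)
```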
S402: the first terminal detects whether the second terminal is accessed according to the set time interval, when detecting that the second terminal is accessed, S403 is executed, and when detecting that the second terminal is not accessed, S404 is executed.
In this step, the second terminal is provided with a transparent display screen through which its wearer views the target video played by the first terminal. The second terminal can display virtual special effect information (such as text, images, three-dimensional models, music, and video) superimposed on the real target video, thereby "augmenting" the real video picture. Optionally, the second terminal is a pair of AR glasses.
The second terminal tracks the first terminal in real space through visual positioning, and connects to the first terminal via Bluetooth or joins the same network via Wi-Fi. In S402, the first terminal detects whether the second terminal is connected at the set time interval and, based on the connection state, determines which device plays the special effect information. Specifically, when the second terminal is detected as connected, the second terminal displays the special effect information; when it is detected as not connected, the first terminal displays the special effect information.
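The periodic detection and device selection of S402 can be sketched as a simplified model; the probe callable and the return labels are assumptions made for the example:

```python
def select_effect_player(connection_probe, polls=3):
    """Check the second terminal's connection state at each detection
    moment (the set time interval is elided here) and pick the device
    that will play the special effect information: the second terminal
    when connected (S403), otherwise the first terminal (S404)."""
    for _ in range(polls):
        if connection_probe():
            return "second_terminal"
    return "first_terminal"

# Glasses never connect: the first terminal plays the effects itself.
assert select_effect_player(lambda: False) == "first_terminal"
# Glasses already connected: the control-instruction path is chosen.
assert select_effect_player(lambda: True) == "second_terminal"
```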
S403: the first terminal responds to the detected preset trigger point and sends a control instruction to the second terminal.
In this step, each target video is associated in advance with at least one preset trigger point, and a playing special effect is set at the moment corresponding to each preset trigger point. The preset trigger points can be set according to actual requirements.
In an alternative embodiment, the preset trigger point is a tag applied when the target video is played to a preset scene. One target video may include different preset scenes, each of which may serve as a preset trigger point tag; when the target video is played to such a scene, the corresponding special effect information is acquired and played.
Taking the target video "Diamond Gourd Baby" as an example: the seven gourd brothers have different skills, for example, the fourth brother can breathe fire and the fifth brother can spout water. The scene in which the fourth brother uses his skill can serve as one preset trigger point, to which fire-breathing special effect information is added, as shown in (a) of fig. 5; the scene in which the fifth brother uses his skill serves as another preset trigger point, to which water-spouting special effect information is added, as shown in (b) of fig. 5.
In another alternative embodiment, the preset trigger point is a tag that marks a predetermined play time on the play time axis of the target video.
For example, if the duration of the target video is 30 seconds and a special effect is preset to play at the 10th second, a preset trigger point is set at the 10th second on the playing time axis of the target video and special effect information is added to it.
Each piece of special effect information corresponds to a unique code; the relationship between the preset trigger points associated with a target video and the special effect information is shown in table 1.
Table 1 preset correspondence between trigger points and special effect information
(Table 1 is rendered as images in the source document; it lists, for each target video, the associated preset trigger points and the unique codes of the corresponding special effect information.)
As can be seen from table 1, different preset trigger points associated with different target videos may correspond to the same special effect information. For example, the "fire-breathing" special effect of the fourth brother in "King's Warrior", shown in (a) of fig. 6, can also be applied as the "fire-breathing" special effect of Red Boy in "Journey to the West", shown in (b) of fig. 6.
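The table 1 relationship can be modeled as a simple mapping. The concrete video identifiers, trigger-point numbers, and effect codes below are illustrative assumptions (only trigger point 1001 and effect A1 appear in the text):

```python
# (video identifier, preset trigger point) -> unique special effect code.
effect_table = {
    ("1", 1001): "A1",  # fire-breathing effect (from the later example)
    ("1", 1002): "A2",  # water-spouting effect (assumed)
    ("2", 2001): "A1",  # a different video reusing the same effect code
}

def effect_for(video_id, trigger_point):
    # Return the effect code, or None for an unknown trigger point.
    return effect_table.get((video_id, trigger_point))

# Trigger points of different videos may share one special effect.
assert effect_for("1", 1001) == effect_for("2", 2001) == "A1"
```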
Because the first terminal and the second terminal use different display technologies, the same special effect information set at the moment corresponding to a preset trigger point produces a different display effect on each. It should be noted that figs. 5 and 6 take the second terminal as the example device displaying the special effect information corresponding to the preset trigger point.
In S403, since the second terminal is connected to the first terminal, and the special effect information played by the second terminal is more realistic and brings a better immersive experience, the first terminal controls the second terminal to play the special effect information. Specifically, the first terminal, in response to the detected preset trigger point, sends a control instruction to the second terminal; the control instruction carries the identifier of the target video, and each target video's identifier is unique. The embodiments of the present application do not limit the type of identifier, which includes, but is not limited to, the uniform resource locator (Uniform Resource Locator, URL) of the target video, the ID of the target video, and the video encoding of the target video. The target video is associated in advance with each preset trigger point, and each preset trigger point corresponds to specific special effect information; see table 1. Because the control instruction carries the identifier of the target video, the second terminal can acquire the special effect information set at the moment corresponding to the preset trigger point according to the identifier, then render and play it, thereby superimposing the special effect information on the target video content. This enhances the visual impact, brings an immersive visual experience to the user, and elicits entertaining physical reactions.
For example, when the preset trigger point 1001 is detected, the first terminal sends a control instruction carrying the identifier "1" to the second terminal. After receiving the control instruction, the second terminal obtains the special effect information A1 corresponding to the preset trigger point 1001 from the server and plays it, thereby augmenting the current video frame and bringing an immersive experience to the wearer, who reacts physically to the displayed special effect information A1. For example, when the special effect information A1 is a flame, the wearer may instinctively lean back to avoid it while watching.
In some scenarios with low requirements on special effect timing, when the first terminal detects a preset trigger point, it sends a control instruction to the second terminal directly at the first moment corresponding to that trigger point.
For example, assuming that a preset trigger point is set at the 10 th second on the target video playing time axis, when the target video is played to the 10 th second, the first terminal detects the preset trigger point and directly sends a control instruction to the second terminal.
In some scenarios with higher requirements on special effect timing, the delays of signaling transmission and special effect acquisition must be considered. When the first terminal detects a preset trigger point, it can send the control instruction to the second terminal before the moment corresponding to the trigger point, so that the second terminal has enough time to acquire the special effect information from the server after receiving the instruction.
For example, assuming a preset trigger point is set at the 10th second on the target video's playing time axis, to ensure the special effect information can be played when the target video reaches the 10th second, the first terminal may send the control instruction to the second terminal at the 9.5th second; how far in advance to send it can be set according to the actual situation.
In implementation, the first terminal determines a second moment according to the first moment corresponding to the preset trigger point, and sends the control instruction to the second terminal at the second moment, where the second moment is earlier than the first moment.
In an optional implementation, the first terminal acquires the current moment corresponding to the currently played target video frame and compares it with the first moment corresponding to the preset trigger point; if the difference between them is smaller than a preset threshold, the first terminal determines the current moment as the second moment.
For example, suppose the first moment corresponding to the preset trigger point 1001 is T and the current moment is t. If T − t < Δt, t is determined as the second moment; if T − t ≥ Δt, the target video continues to play and the current moment t+1 corresponding to the next target video frame is obtained; then, if T − (t+1) < Δt, t+1 is determined as the second moment.
In another optional implementation, the first terminal determines the second moment according to the first moment corresponding to the preset trigger point and a preset delayed-play time: specifically, the difference between the first moment and the delayed-play time is determined as the second moment. The delayed-play time can be set according to practical experience; in the embodiment of the present application, it is measured from experimental data to be 30 or 60 milliseconds.
For example, if the first moment corresponding to the preset trigger point 1001 is T and the preset delayed-play time is Δt, then T − Δt = T′ is determined as the second moment.
It should be noted that the embodiments of the present application do not limit the moment at which the special effect is played; the playing moment of the special effect information may differ from the moment corresponding to the preset trigger point associated with the target video.
For example, suppose the moment corresponding to the preset trigger point is the 10th second and the first terminal sends the control instruction to the second terminal at the 9.5th second. If the network is fast and the second terminal acquires the corresponding special effect information at the 9.9th second, the special effect information can be played when the target video reaches the 9.9th second.
For another example, with the same trigger point at the 10th second and the control instruction sent at the 9.5th second, if the network is slow and the second terminal only acquires the corresponding special effect information at the 10.1th second, the special effect information is played when the target video reaches the 10.1th second.
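The two timing outcomes above reduce to simple arithmetic. This sketch (times in milliseconds, names assumed) takes the effect as playing as soon as it has been fetched:

```python
def actual_play_time(trigger_ms, send_ahead_ms, fetch_latency_ms):
    # The control instruction is sent `send_ahead_ms` before the trigger
    # moment; the special effect plays once the fetch completes.
    return (trigger_ms - send_ahead_ms) + fetch_latency_ms

# Fast network: fetched 400 ms after sending -> plays at the 9.9th second.
assert actual_play_time(10000, 500, 400) == 9900
# Slow network: fetched 600 ms after sending -> plays at the 10.1th second.
assert actual_play_time(10000, 500, 600) == 10100
```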
S404: and the first terminal responds to the detected preset trigger point and sends a special effect acquisition request to the server.
In this step, since the second terminal is detected as not connected, the first terminal itself plays the special effect information set at the moment corresponding to the preset trigger point associated with the target video. Specifically, when the first terminal detects a preset trigger point, it sends a special effect acquisition request carrying the identifier of the target video to the server; the server returns the special effect information set at the moment corresponding to the detected preset trigger point according to that identifier.
S405: and the first terminal receives and plays the special effect information set at the moment corresponding to the preset trigger point sent by the server.
In this step, after receiving the special effect information returned by the server, the first terminal plays the obtained special effect information while playing the target video.
For example, consider a prize-drawing segment in a target video, where the special effects differ according to the prize level. After receiving the special effect information corresponding to the first-prize video frame, the first terminal plays that video frame and simultaneously plays the first-prize special effect information, as shown in (a) of fig. 7a; after receiving the special effect information corresponding to the second-prize video frame, the first terminal plays that video frame and simultaneously plays the second-prize special effect information, as shown in (b) of fig. 7a.
In some embodiments, the wearer of the second terminal may switch the target video played by the first terminal through human-computer interaction. In implementation, the wearer switches the target video through a touch screen or function key, sending a target video switching request; after receiving the request, the first terminal acquires the new target video from the server, plays it, and controls the playing of the corresponding special effect information based on the preset trigger points associated with the new video.
In some embodiments, the wearer of the second terminal may control the connection state between the first terminal and the second terminal through the touch area or switch key of the second terminal, thereby switching which device plays the special effect information.
For example, while the first terminal is playing the target video, it detects at the first detection moment that the second terminal is not connected. When the first preset trigger point associated with the target video is detected, the first terminal obtains the corresponding special effect information from the server and plays it at the moment corresponding to that trigger point. Within a preset time interval, the wearer turns on the second terminal and connects it to the first terminal via Bluetooth or Wi-Fi through the touch area. After the first terminal detects at the second detection moment that the second terminal is connected, it sends a control instruction to the connected second terminal when the second preset trigger point is detected; the second terminal obtains the special effect information corresponding to the second preset trigger point from the server according to the received control instruction and plays it at the moment corresponding to that trigger point.
For another example, while the first terminal is playing the target video, the second terminal is detected as connected at the first detection moment. When the first preset trigger point associated with the target video is detected, a control instruction carrying the identifier of the target video is sent to the second terminal; the second terminal obtains the special effect information corresponding to the first preset trigger point from the server according to the identifier and plays it at the corresponding moment. During playback, the second terminal then disconnects from the first terminal because the network or Bluetooth is interrupted or the switch is turned off. After a preset time interval, the first terminal detects at the second detection moment that the second terminal is not connected; when the second preset trigger point is then detected, the first terminal sends a special effect acquisition request to the server and plays the obtained special effect information at the moment corresponding to the second preset trigger point.
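Because the connection state is re-checked at each detection moment, the effect-playing device can change between consecutive trigger points, as in the two examples above. A minimal sketch, with all names assumed:

```python
events = []

def send_control_instruction(trigger_point):
    # The connected second terminal fetches and plays the effect.
    events.append(("second_terminal", trigger_point))

def play_effect_locally(trigger_point):
    # No second terminal: the first terminal fetches and plays the effect.
    events.append(("first_terminal", trigger_point))

def on_trigger_point(trigger_point, second_terminal_connected):
    if second_terminal_connected:
        send_control_instruction(trigger_point)
    else:
        play_effect_locally(trigger_point)

# First example above: the glasses connect between trigger points 1 and 2.
for trigger_point, connected in [(1, False), (2, True)]:
    on_trigger_point(trigger_point, connected)
assert events == [("first_terminal", 1), ("second_terminal", 2)]
```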
In other embodiments, when the first terminal detects that the second terminal is connected but, for some reason (such as an IP address conflict or a domain name resolution error), the second terminal does not respond to the control instruction, the first terminal displays a prompt message to the wearer indicating that the second terminal is unresponsive. The prompt message includes the error type, so that the wearer can reconfigure the second terminal according to the error type and re-establish the communication connection with the first terminal; after the connection is established, the second terminal acquires and plays the corresponding special effect information, letting the wearer view more vivid, realistic special effects.
It should be noted that the embodiments of the present application do not limit how the prompt information is displayed; for example, the user may be notified by voice broadcast that the second terminal is disconnected from the first terminal, or the prompt information may be shown in a prompt box on the display page of the first terminal.
Optionally, in order not to obstruct the target video played by the first terminal, a prompt message may be displayed in the upper left corner of the display page, as shown in fig. 7 b.
In the above embodiments of the present application, the wearer of the second terminal interacts with the first terminal; the first terminal plays the target video selected by the wearer and detects whether the second terminal is connected at the set time interval. When the second terminal is detected as connected, a control instruction is sent to it so that it acquires and plays the special effect information corresponding to the preset trigger point; the target video and the special effect information are thus displayed by the two terminals independently and in superposition, which lowers the performance requirements on each device, reduces video stuttering, and improves the user experience. When the second terminal is detected as not connected, the first terminal acquires and plays the special effect information corresponding to the preset trigger point, so that the personalized special effect can still be played when the second terminal connection is interrupted.
Taking the first terminal as a television and the second terminal as AR glasses, fig. 8 schematically illustrates the interaction between the television and the AR glasses provided in the embodiment of the present application. As shown in fig. 8, the television plays the target video. When the AR glasses and the television are connected through the same Wi-Fi network, the television sends control instructions to the AR glasses; the AR glasses acquire special effect information according to the received control instructions and play it at the preset trigger points associated with the target video, thereby augmenting the target video played by the television.
The complete interaction flow of the television and the AR glasses is shown in fig. 9, and the flow mainly comprises the following steps:
S901: the television, in response to a target video playing request triggered by the user, sends a target video acquisition request to the server.
S902-S903: the server sends the target video to the television according to the target video acquisition request.
S904: and playing the obtained target video by the television.
S905: the television detects whether AR glasses worn by the user are attached or not according to the set time interval, and when the AR glasses are detected to be attached, S906 is executed, and when the AR glasses are detected to be not attached, 910 is executed.
S906: the television responds to the detected preset trigger point and sends a control instruction to the AR glasses, wherein the preset trigger point represents that playing special effects are set at the moment corresponding to the preset trigger point associated with the target video, and the control instruction carries the identification of the target video.
S907: and the AR glasses send a special effect information acquisition request to the server according to the mark carried by the control instruction so as to acquire the special effect information set at the moment corresponding to the preset trigger point associated with the target video.
S908: and the server sends corresponding special effect information to the AR glasses according to the special effect information acquisition request.
S909: the AR glasses receive and play the special effect information set at the moment corresponding to the preset trigger point sent by the server.
S910: and the television responds to the detected preset trigger point and sends a special effect acquisition request to the server.
S911: and the server sends corresponding special effect information to the television according to the special effect information acquisition request.
S912: and the television broadcast receiving server sends and plays the special effect information set at the moment corresponding to the preset trigger point.
Embodiments of the present application also provide a computer readable storage medium storing instructions that, when executed, perform the method of the foregoing embodiments.
The present application also provides a computer program product storing a computer program for performing the method of the foregoing embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A video special effect display method, comprising:
the first terminal responds to the received target video playing request, and acquires and plays the target video;
the first terminal detects whether a second terminal is connected or not according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;
when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to the detected preset trigger point, wherein the preset trigger point characterizes that playing special effects are set at the moment corresponding to the preset trigger point related to the target video, the control instruction carries an identifier of the target video, and the control instruction is used for enabling the second terminal to acquire and play special effect information set at the moment corresponding to the preset trigger point according to the identifier.
2. The method of claim 1, wherein the method further comprises:
when the second terminal is not accessed, responding to the detected preset trigger point, and sending a special effect acquisition request to a server;
and receiving and playing special effect information which is sent by the server and is set at the moment corresponding to the preset trigger point.
3. The method of claim 1, wherein after sending a control instruction to the second terminal, the method further comprises:
and if the second terminal is detected to be unresponsive to the control instruction, displaying, to the wearer, prompt information indicating that the second terminal is unresponsive, wherein the prompt information comprises the error type of the non-response, so that the wearer reconfigures the second terminal according to the error type to establish a communication connection with the first terminal.
4. A method according to any one of claims 1-3, wherein the sending a control instruction to the second terminal in response to the detected preset trigger point comprises:
when the preset trigger point is detected, a control instruction is directly sent to the second terminal at a first moment corresponding to the preset trigger point; or alternatively
When the preset trigger point is detected, determining a second moment according to a first moment corresponding to the preset trigger point, and sending a control instruction to the second terminal at the second moment, wherein the second moment is smaller than the first moment.
5. The method of claim 4, wherein the determining the second time according to the first time corresponding to the preset trigger point comprises:
acquiring a current playing time corresponding to a current playing target video frame; if the difference value between the current playing time and the first time is smaller than a preset threshold value, determining the current playing time as a second time; or alternatively
And determining a second moment according to the first moment corresponding to the preset trigger point and the preset delay playing time.
6. A method according to any one of claims 1-3, wherein the preset trigger point is a tag when the target video is played to a preset scene, or a tag when a predetermined play time on a play time axis of the target video is marked.
7. The method of any of claims 1-3, wherein the second terminal is augmented reality AR glasses.
8. A video special effect display method, comprising:
When a first terminal is accessed, a second terminal receives a control instruction sent by the first terminal when a detected preset trigger point is detected, wherein the preset trigger point represents that a playing special effect is set at a moment corresponding to the preset trigger point associated with a target video, the control instruction carries an identifier of the target video, the target video is obtained and played by the first terminal in response to a received target video playing request, and the second terminal is provided with a transparent display screen which is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;
and the second terminal acquires and plays special effect information set at the moment corresponding to the preset trigger point according to the identifier carried by the control instruction.
9. A first terminal, comprising a display, a memory, and a controller:
the display is connected with the controller and is configured to display a target video;
the memory is connected with the controller and is configured to store computer program instructions;
the controller is configured to perform the following operations according to the computer program instructions:
Responding to a received target video playing request, acquiring a target video and playing the target video;
detecting whether a second terminal is connected or not according to a set time interval, wherein the second terminal is provided with a transparent display screen, and the transparent display screen is used for enabling a wearer of the second terminal to watch the target video played by the first terminal;
when the second terminal is detected to be accessed, a control instruction is sent to the second terminal in response to the detected preset trigger point, wherein the preset trigger point represents that playing special effects are set at the moment corresponding to the preset trigger point related to the target video, the control instruction carries an identifier of the target video, and the control instruction is used for enabling the second terminal to acquire and play special effect information set at the moment corresponding to the preset trigger point according to the identifier.
10. A second terminal comprising a rendering engine, a memory, and a processor:
the rendering engine is connected with the processor and is configured to render and display special effect information;
the memory is connected with the processor and is configured to store computer program instructions;
the processor is configured to perform the following operations in accordance with the computer program instructions:
when a first terminal is connected, receiving a control instruction sent by the first terminal upon detecting a preset trigger point, wherein the preset trigger point indicates that a playing special effect is set at the moment, associated with a target video, corresponding to the preset trigger point; the control instruction carries an identifier of the target video; the target video is acquired and played by the first terminal in response to a received target video playing request; and the second terminal is provided with a transparent display screen through which a wearer of the second terminal watches the target video played by the first terminal;
and acquiring and playing, according to the identifier carried by the control instruction, the special effect information set at the moment corresponding to the preset trigger point.
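On the receiving side, the second terminal's handling of a control instruction reduces to a lookup keyed by the carried identifier and trigger moment. A minimal sketch follows; the `EFFECTS` table and the function name are hypothetical, and a real second terminal would pass the result to its rendering engine for display on the transparent screen rather than return a string.

```python
# Hypothetical mapping: video identifier -> {trigger moment: special effect info}.
# In the claim the second terminal "acquires" this information; here it is
# simply a local table for illustration.
EFFECTS = {
    "vid-001": {2.0: "confetti overlay", 5.0: "fireworks overlay"},
}

def handle_control_instruction(instruction, effects=EFFECTS):
    """Look up, by the identifier carried in the control instruction, the
    special effect set at the moment of the preset trigger point."""
    video_id = instruction["video_id"]
    trigger_time = instruction["trigger_time"]
    effect = effects.get(video_id, {}).get(trigger_time)
    if effect is None:
        return None  # no special effect configured for this moment
    # A real terminal would hand this to its rendering engine; here we
    # just return a description of what would be played.
    return f"play '{effect}' at t={trigger_time}s"
```

Because the instruction carries only the identifier and trigger moment, the second terminal can fetch the effect assets independently of the first terminal's video stream.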
CN202110692749.7A 2021-06-22 2021-06-22 Video special effect display method and device Active CN113542891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110692749.7A CN113542891B (en) 2021-06-22 2021-06-22 Video special effect display method and device

Publications (2)

Publication Number Publication Date
CN113542891A (en) 2021-10-22
CN113542891B (en) 2023-04-21

Family

ID=78096459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110692749.7A Active CN113542891B (en) 2021-06-22 2021-06-22 Video special effect display method and device

Country Status (1)

Country Link
CN (1) CN113542891B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120787A (en) * 2021-11-23 2022-03-01 AVIC Luoyang Electro-Optical Equipment Research Institute Device for indoor experience of vehicle-mounted AR-HUD simulator

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997235B (en) * 2016-01-25 2018-07-13 HiScene (Shanghai) Information Technology Co., Ltd. Method and device for realizing augmented reality interaction and presentation
CN112348969B (en) * 2020-11-06 2023-04-25 Beijing SenseTime Technology Development Co., Ltd. Display method and device in augmented reality scene, electronic equipment and storage medium
CN112637665B (en) * 2020-12-23 2022-11-04 Beijing SenseTime Technology Development Co., Ltd. Display method and device in augmented reality scene, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113542891A (en) 2021-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant