CN110737385B - Video mouse interaction method, intelligent terminal and storage medium - Google Patents

Video mouse interaction method, intelligent terminal and storage medium

Info

Publication number
CN110737385B
CN110737385B · CN201910972037.3A · CN201910972037A
Authority
CN
China
Prior art keywords
video
mouse
picture
panel control
mouse interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910972037.3A
Other languages
Chinese (zh)
Other versions
CN110737385A (en)
Inventor
陈志芬
卫宣安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Genew Technologies Co Ltd
Original Assignee
Shenzhen Genew Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Genew Technologies Co Ltd filed Critical Shenzhen Genew Technologies Co Ltd
Priority to CN201910972037.3A priority Critical patent/CN110737385B/en
Publication of CN110737385A publication Critical patent/CN110737385A/en
Application granted granted Critical
Publication of CN110737385B publication Critical patent/CN110737385B/en

Classifications

    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0486: Drag-and-drop
    • H04N21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N5/268: Signal distribution or switching
    • G06F2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a video mouse interaction method, an intelligent terminal and a storage medium. The method comprises the following steps: opening a multi-channel video playing picture, receiving a mouse drag on a playing picture, and allowing the positions of the video channels to be swapped arbitrarily; when a mouse double-click on one of the video channels is detected, displaying the double-clicked channel maximized; and dividing a video picture into a plurality of virtual quadrants, executing a pan-tilt rotation function when a mouse press is received, and zooming the focus of the video in or out with the mouse wheel. Through these video mouse interactions, the invention realizes operations such as swapping the positions of video players and rotating or zooming the camera, without affecting the mouse interaction behavior of the whole player.

Description

Video mouse interaction method, intelligent terminal and storage medium
Technical Field
The invention relates to the technical field of computer application, in particular to a video mouse interaction method, an intelligent terminal and a storage medium.
Background
When a client of a scheduling system plays video (the client is a WPF application with a video-monitoring function that draws video using a Winform control), the handle of the Winform control (the handle can be understood as the control's unique identifier within the program; the video SDK locates the control by this identifier and then draws the video stream into it) is passed to the SDK (software development kit) that plays the video, so that the SDK draws the video picture into the control (player) specified by the client. The Winform Panel control natively supports interactions such as mouse dragging and mouse clicking; through these interactions the positions of two video players can be exchanged (that is, when the client plays two camera pictures, dragging swaps the positions of the two playing pictures), and for cameras that support pan-tilt control, mouse clicks or wheel operations on the video picture can be detected to rotate or zoom the camera. In practical application, however, when the handle of the Winform control is handed over to the SDK, the SDK may perform special processing on the control (for example, blocking the Windows message loop), which directly prevents the control from responding to any mouse event and thus destroys the original interaction behavior.
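To make the hand-off concrete, the following is a minimal C# sketch; the VideoSdk.PlayStream helper and its signature are hypothetical stand-ins, since the actual SDK is not specified here:

```csharp
using System;
using System.Windows.Forms;

// Hypothetical stand-in for the vendor video SDK described above.
static class VideoSdk
{
    public static void PlayStream(string cameraUrl, IntPtr hwnd)
    {
        // A real SDK would decode the stream and draw frames onto the window
        // identified by hwnd; this placeholder only marks where that happens.
    }
}

// A player cell: its window handle is handed to the SDK, which then renders into it.
class VideoCell : Panel
{
    public void StartPlayback(string cameraUrl)
    {
        VideoSdk.PlayStream(cameraUrl, this.Handle);
    }
}
```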
Accordingly, there is a need for improvements and developments in the art.
Disclosure of Invention
The main purpose of the invention is to provide a video mouse interaction method, an intelligent terminal and a storage medium that overcome the above defects in the prior art.
In order to achieve the above object, the present invention provides a video mouse interaction method, which comprises the following steps:
opening a multi-channel video playing picture, receiving a mouse drag on a playing picture, and allowing the positions of the video channels to be swapped arbitrarily;
when a mouse double-click on one of the video channels is detected, displaying the double-clicked channel maximized;
and dividing a video picture into a plurality of virtual quadrants, executing a pan-tilt rotation function when a mouse press is received, and zooming the focus of the video in or out with the mouse wheel.
Optionally, in the video mouse interaction method, when a video is accessed, a handle of a Winform control is transmitted to a software development kit for playing the video, and the software development kit draws a video picture to a control specified by a client.
Optionally, in the video mouse interaction method, the Winform control includes dual Panel controls, which are a first Panel control and a second Panel control disposed under the first Panel control in an overlapping manner, respectively.
Optionally, in the video mouse interaction method, the first Panel control is configured to draw a video frame, and transmit a handle to the software development kit;
and the second Panel control is used for subscribing mouse events and controlling mouse interaction.
Optionally, in the video mouse interaction method, the Enabled attribute of the first Panel control is set to False, so that any video software development kit can be docked without affecting the mouse interaction behavior of the whole player.
Optionally, in the video mouse interaction method, the Enabled attribute is an attribute of the first Panel control, and when the Enabled attribute is set to False, the first Panel control cannot respond to any mouse or keyboard operation.
Optionally, in the video mouse interaction method, the multi-channel video playing picture is a 16-channel video playing picture.
Optionally, in the video mouse interaction method, the plurality of virtual quadrants are 8 virtual quadrants.
In addition, to achieve the above object, the present invention further provides an intelligent terminal, wherein the intelligent terminal includes: a memory, a processor, and a video mouse interaction program stored on the memory and executable on the processor, wherein the video mouse interaction program, when executed by the processor, implements the steps of the video mouse interaction method.
In addition, in order to achieve the above object, the present invention further provides a storage medium, wherein the storage medium stores a video mouse interaction program, and the video mouse interaction program implements the steps of the video mouse interaction method when executed by a processor.
The invention opens a multi-channel video playing picture, receives a mouse drag on a playing picture, and allows the positions of the video channels to be swapped arbitrarily; when a mouse double-click on one of the video channels is detected, the double-clicked channel is displayed maximized; and a video picture is divided into a plurality of virtual quadrants, a pan-tilt rotation function is executed when a mouse press is received, and the focus of the video is zoomed in or out with the mouse wheel. Through these video mouse interactions, the invention realizes operations such as swapping the positions of video players and rotating or zooming the camera, without affecting the mouse interaction behavior of the whole player.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a video mouse interaction method of the present invention;
FIG. 2 is a schematic diagram illustrating a principle of avoiding the influence of the video SDK on the mouse interaction in the preferred embodiment of the video mouse interaction method of the present invention;
FIG. 3 is a schematic diagram of a video mouse dragging and exchanging interaction in a preferred embodiment of the video mouse interaction method of the present invention;
FIG. 4 is a schematic diagram illustrating a video screen mouse double-click maximization interaction in a preferred embodiment of the video mouse interaction method according to the present invention;
FIG. 5 is a schematic diagram illustrating a video mouse pressing pan/tilt control interaction in a preferred embodiment of the video mouse interaction method according to the present invention;
FIG. 6 is a schematic diagram of the operating environment of an intelligent terminal according to a preferred embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As shown in fig. 1, the video mouse interaction method according to the preferred embodiment of the present invention includes the following steps:
s10, opening a multi-channel video playing picture, receiving an operation of dragging the picture in playing by a mouse, and controlling the position of the multi-channel video to be exchanged randomly;
s20, when the mouse is detected to double click one path of video in the multi-path video, controlling the double-clicked one path of video to be displayed maximally;
and S30, dividing one path of video image into a plurality of virtual quadrants, executing a pan-tilt rotation function after receiving the mouse press, and amplifying or reducing the focus of the video through the mouse roller.
Specifically, for example, 16 video channels are played on the scheduling and monitoring platform, and the positions of the 16 channels can be swapped arbitrarily by dragging the playing pictures with the mouse; double-clicking a channel with the mouse maximizes that channel; a video picture is divided into 8 virtual quadrants to implement the pan-tilt rotation triggered by a mouse press (some cameras support remotely controlled rotation, focusing, zooming and similar functions, collectively called pan-tilt control), and the mouse wheel zooms the focus of the video in or out.
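As a minimal C# sketch of the wheel-driven zoom (the ZoomCamera helper below is a hypothetical placeholder for the camera SDK's zoom command, not an API named by the patent):

```csharp
using System.Windows.Forms;

// Sketch: roll the mouse wheel over the interaction panel to zoom the camera.
static class WheelZoom
{
    public static void Enable(Panel interactionPanel, string cameraId)
    {
        interactionPanel.MouseWheel += (s, e) =>
        {
            bool zoomIn = e.Delta > 0;   // wheel forward = zoom in, backward = zoom out
            ZoomCamera(cameraId, zoomIn);
        };
    }

    private static void ZoomCamera(string cameraId, bool zoomIn)
    {
        // A real implementation would issue the camera SDK's zoom-in / zoom-out command here.
    }
}
```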
Further, after the mouse interaction behavior is defined, handing the handle of the Winform control to the underlying video SDK (Software Development Kit) must not break these functions. Because a Winform control does not support being made transparent, the control is designed as a pair of Panels (a Panel, literally a panel, is generally used to receive and draw a video stream) overlapped together: the upper Panel draws the video picture and its handle is handed to the SDK, while the lower Panel is mainly used to subscribe to mouse events. To prevent the upper Panel from blocking the lower Panel from receiving mouse events, the Enabled attribute of the upper Panel (an attribute of the Panel control; once it is set to False, the Panel no longer responds to any mouse or keyboard operation) is set to False, so that the mouse interaction behavior of the whole player is unaffected no matter which video SDK is docked.
That is to say, when a video is accessed, the handle of the Winform control is passed to the software development kit that plays the video, and the software development kit draws the video picture into the control specified by the client. The Winform control comprises dual Panel controls, namely a first Panel control and a second Panel control arranged underneath it in an overlapping manner: the first Panel control draws the video picture and its handle is passed to the software development kit, while the second Panel control subscribes to mouse events and controls the mouse interaction. The Enabled attribute of the first Panel control is set to False so that any video software development kit can be docked without affecting the mouse interaction behavior of the whole player; the Enabled attribute is an attribute of the first Panel control, and once it is set to False the first Panel control cannot respond to any mouse or keyboard operation.
As shown in fig. 2, given that a Winform control cannot be made transparent, the dual Panel design in fig. 2 avoids the problem that, after the front Panel control's handle is handed to the video SDK, the Panel control can no longer respond to mouse interaction. The effect achieved is that the presence of the dual Panel controls cannot be perceived on the interface: the video is drawn on the front Panel control, while the mouse interaction is handled by the rear Panel control.
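A minimal C# sketch of one way to realize this dual-Panel cell follows; it reuses the hypothetical VideoSdk helper from the Background sketch, and the exact layering is an assumption, since the text only requires that the disabled upper Panel not swallow mouse input:

```csharp
using System;
using System.Windows.Forms;

// The control itself plays the role of the lower (interaction) Panel; the upper
// render Panel is a disabled child that the SDK draws into. Because it is
// disabled, it does not consume mouse input, and the interaction logic below
// still receives the events.
class DualPanelPlayer : Panel
{
    private readonly Panel renderPanel = new Panel
    {
        Dock = DockStyle.Fill,
        Enabled = false   // key point: the SDK-owned panel must not respond to the mouse
    };

    public DualPanelPlayer()
    {
        AllowDrop = true;            // used by the drag-and-swap interaction
        Controls.Add(renderPanel);

        // All mouse interaction is subscribed on this lower panel.
        MouseDown += (s, e) => { /* begin drag or pan-tilt, see later sketches */ };
        MouseDoubleClick += (s, e) => { /* maximize this channel, see later sketch */ };
        MouseWheel += (s, e) => { /* zoom in or out, see earlier sketch */ };
    }

    public void Play(string cameraUrl)
    {
        // Only the disabled render panel's handle is handed to the video SDK.
        VideoSdk.PlayStream(cameraUrl, renderPanel.Handle);
    }
}
```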
As shown in fig. 3, dragging a video picture swaps the two video pictures with each other. In many cases a user monitors multiple cameras and wants to arrange them in a chosen order. Dragging allows a monitoring picture to appear at any position (video monitoring generally offers several layouts, such as 1, 4, 9, 16 or 32 pictures; when only one camera channel is playing in a 16-picture layout, that channel can be dragged to any of the 16 positions).
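A hedged C# sketch of the drag-to-swap behavior is shown below; the assumption that the cells sit in a TableLayoutPanel grid, and the "videoCell" drag format name, are illustrative choices rather than details from the patent:

```csharp
using System.Windows.Forms;

// Sketch: drag one player cell onto another to swap their grid positions.
static class DragSwap
{
    public static void Enable(Panel cell, TableLayoutPanel grid)
    {
        cell.AllowDrop = true;

        cell.MouseDown += (s, e) =>
        {
            if (e.Button == MouseButtons.Left)
                cell.DoDragDrop(new DataObject("videoCell", cell), DragDropEffects.Move);
        };

        cell.DragEnter += (s, e) => e.Effect = DragDropEffects.Move;

        cell.DragDrop += (s, e) =>
        {
            var source = e.Data.GetData("videoCell") as Panel;
            if (source == null || source == cell) return;

            // Exchange the two cells' positions in the grid so the pictures swap places.
            var sourcePos = grid.GetPositionFromControl(source);
            var targetPos = grid.GetPositionFromControl(cell);
            grid.SetCellPosition(source, targetPos);
            grid.SetCellPosition(cell, sourcePos);
        };
    }
}
```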
As shown in fig. 4, double-clicking a single video picture with the mouse allows that picture to be viewed quickly in full screen.
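One simple way to realize this is sketched below; it assumes the cells share a plain parent panel whose layout does not reassert their bounds, and it omits restoring an exact grid slot:

```csharp
using System.Drawing;
using System.Windows.Forms;

// Sketch: double-click stretches the chosen cell over the whole player area;
// double-clicking again restores its previous bounds.
static class MaximizeOnDoubleClick
{
    public static void Enable(Panel cell)
    {
        Rectangle normalBounds = Rectangle.Empty;

        cell.MouseDoubleClick += (s, e) =>
        {
            if (normalBounds.IsEmpty)
            {
                normalBounds = cell.Bounds;                 // remember the grid position
                cell.Bounds = cell.Parent.ClientRectangle;  // cover the whole player area
                cell.BringToFront();
            }
            else
            {
                cell.Bounds = normalBounds;                 // restore the original cell
                normalBounds = Rectangle.Empty;
            }
        };
    }
}
```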
As shown in fig. 5, the video picture is virtually divided into 8 quadrants; when the left mouse button is pressed, the press position is detected and the camera is rotated in the corresponding direction (pan-tilt control).
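The quadrant test reduces to the angle between the press point and the picture centre. The sketch below maps a mouse-down position to one of eight directions; the PtzController helper and the direction names are hypothetical placeholders for a real camera-control SDK:

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

enum PtzDirection { Right, UpRight, Up, UpLeft, Left, DownLeft, Down, DownRight }

// Hypothetical stand-in for the camera SDK's pan-tilt commands.
static class PtzController
{
    public static void StartMove(string cameraId, PtzDirection direction) { }
    public static void Stop(string cameraId) { }
}

static class PtzByQuadrant
{
    // Map a point inside the picture to one of eight 45-degree sectors around the centre.
    public static PtzDirection DirectionOf(Point p, Size picture)
    {
        double dx = p.X - picture.Width / 2.0;
        double dy = (picture.Height / 2.0) - p.Y;        // screen Y grows downward
        double angle = (Math.Atan2(dy, dx) * 180.0 / Math.PI + 360.0) % 360.0;
        int sector = (int)Math.Round(angle / 45.0) % 8;  // 0 = right, 1 = up-right, ...
        return (PtzDirection)sector;
    }

    public static void Enable(Panel interactionPanel, string cameraId)
    {
        interactionPanel.MouseDown += (s, e) =>
            PtzController.StartMove(cameraId, DirectionOf(e.Location, interactionPanel.ClientSize));
        interactionPanel.MouseUp += (s, e) =>
            PtzController.Stop(cameraId);                // releasing the button stops the rotation
    }
}
```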
For example, four video channels are opened. To change their order from 1-2-3-4 to 2-1-4-3, picture 1 is dragged onto picture 2 and picture 3 onto picture 4. If the lens of picture 1 needs to rotate about 30 degrees to the left, locate picture 1, press the left mouse button in the middle of its left side, and the picture rotates; once the 30 degrees are reached, release the left button to stop the rotation. If the lens of picture 2 needs to be zoomed in, select picture 2 and roll the mouse wheel forward; the lens zooms in accordingly. To show picture 1 maximized, double-click picture 1.
Further, the interactive behavior of the mouse may be extended to the keyboard.
Further, as shown in fig. 6, based on the above video mouse interaction method, the present invention also provides an intelligent terminal, which includes a processor 10, a memory 20, and a display 30. Fig. 6 shows only some of the components of the smart terminal, but it should be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
The memory 20 may be an internal storage unit of the intelligent terminal in some embodiments, such as a hard disk or internal memory of the intelligent terminal. The memory 20 may also be an external storage device of the intelligent terminal in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash memory card (Flash Card) provided on the intelligent terminal. Further, the memory 20 may include both an internal storage unit and an external storage device of the intelligent terminal. The memory 20 is used for storing application software installed on the intelligent terminal and various kinds of data, such as the program code installed on the intelligent terminal, and may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a video mouse interaction program 40, and the video mouse interaction program 40 can be executed by the processor 10 to implement the video mouse interaction method of the present application.
The processor 10 may be, in some embodiments, a Central Processing Unit (CPU), a microprocessor or other data Processing chip, and is configured to execute program codes stored in the memory 20 or process data, such as executing the video mouse interaction method.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the intelligent terminal and for displaying a visual user interface. The components 10-30 of the intelligent terminal communicate with each other via a system bus.
In one embodiment, when the processor 10 executes the video mouse interaction program 40 in the memory 20, the following steps are implemented:
opening a multi-channel video playing picture, receiving a mouse drag on a playing picture, and allowing the positions of the video channels to be swapped arbitrarily;
when a mouse double-click on one of the video channels is detected, displaying the double-clicked channel maximized;
and dividing a video picture into a plurality of virtual quadrants, executing a pan-tilt rotation function when a mouse press is received, and zooming the focus of the video in or out with the mouse wheel.
The video mouse interaction method further comprises the following steps:
when the video is accessed, the handle of the Winform control is transmitted to a software development kit for playing the video, and the software development kit draws a video picture to a control specified by a client.
The Winform control comprises double Panel controls, namely a first Panel control and a second Panel control which is arranged below the first Panel control in an overlapping mode.
The first Panel control is used for drawing a video picture and transmitting a handle to the software development kit;
and the second Panel control is used for subscribing mouse events and controlling mouse interaction.
The Enabled attribute of the first Panel control is set to False, so that any video software development kit can be docked without affecting the mouse interaction behavior of the whole player.
The Enabled attribute is an attribute of the first Panel control, and when the Enabled attribute is set to False, the first Panel control cannot respond to any mouse or keyboard operation.
The multi-channel video playing picture is a 16-channel video playing picture; the virtual quadrants are 8 virtual quadrants.
The invention also provides a storage medium, wherein the storage medium stores a video mouse interaction program, and the video mouse interaction program realizes the steps of the video mouse interaction method when being executed by a processor.
In summary, the present invention provides a video mouse interaction method, an intelligent terminal and a storage medium, wherein the method includes: opening a multi-channel video playing picture, receiving a mouse drag on a playing picture, and allowing the positions of the video channels to be swapped arbitrarily; when a mouse double-click on one of the video channels is detected, displaying the double-clicked channel maximized; and dividing a video picture into a plurality of virtual quadrants, executing a pan-tilt rotation function when a mouse press is received, and zooming the focus of the video in or out with the mouse wheel. Through these video mouse interactions, the invention realizes operations such as swapping the positions of video players and rotating or zooming the camera, without affecting the mouse interaction behavior of the whole player.
Of course, it can be understood by those skilled in the art that all or part of the processes in the methods of the embodiments described above can be implemented by instructing relevant hardware (such as a processor, a controller, etc.) by a computer program, and the program can be stored in a computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the methods described above. The storage medium may be a memory, a magnetic disk, an optical disk, etc.
It will be understood that the invention is not limited to the examples described above, but that modifications and variations will occur to those skilled in the art in light of the above teachings, and that all such modifications and variations are considered to be within the scope of the invention as defined by the appended claims.

Claims (5)

1. A video mouse interaction method is characterized by comprising the following steps:
opening a multi-channel video playing picture, receiving a mouse drag on a playing picture, and allowing the positions of the video channels to be swapped arbitrarily;
when a mouse double-click on one of the video channels is detected, displaying the double-clicked channel maximized;
dividing a video picture into a plurality of virtual quadrants, executing a pan-tilt rotation function after receiving a mouse press, and zooming the focus of the video in or out with the mouse wheel;
when a video is accessed, a handle of a Winform control is transmitted to a software development kit for playing the video, and the software development kit draws a video picture to a control specified by a client;
the Winform control comprises double Panel controls, namely a first Panel control and a second Panel control which is arranged below the first Panel control in an overlapped mode;
the first Panel control is used for drawing a video picture and transmitting a handle to the software development kit;
the second Panel control is used for subscribing mouse events and controlling mouse interaction;
setting an Enabled attribute in the first Panel control to False, wherein the Enabled attribute is used for docking a software development kit of any video, and avoiding influencing the mouse interaction behavior of the whole player;
the Enabled attribute is an attribute of the first Panel control, and when the Enabled attribute is set to False, the first Panel control cannot respond to any mouse or keyboard operation.
2. The video mouse interaction method according to claim 1, wherein the multi-channel video playing picture is a 16-channel video playing picture.
3. The video mouse interaction method of claim 1, wherein the plurality of virtual quadrants are 8 virtual quadrants.
4. An intelligent terminal, characterized in that, intelligent terminal includes: a memory, a processor and a video mouse interaction program stored on the memory and executable on the processor, the video mouse interaction program when executed by the processor implementing the steps of the video mouse interaction method as claimed in any one of claims 1-3.
5. A storage medium, characterized in that the storage medium stores a video mouse interaction program, and the video mouse interaction program, when executed by a processor, implements the steps of the video mouse interaction method according to any one of claims 1-3.
CN201910972037.3A 2019-10-14 2019-10-14 Video mouse interaction method, intelligent terminal and storage medium Active CN110737385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910972037.3A CN110737385B (en) 2019-10-14 2019-10-14 Video mouse interaction method, intelligent terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910972037.3A CN110737385B (en) 2019-10-14 2019-10-14 Video mouse interaction method, intelligent terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110737385A CN110737385A (en) 2020-01-31
CN110737385B (en) 2023-03-24

Family

ID=69269929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910972037.3A Active CN110737385B (en) 2019-10-14 2019-10-14 Video mouse interaction method, intelligent terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110737385B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112099718A (en) * 2020-09-07 2020-12-18 四川傲势科技有限公司 Photoelectric pod operation method and system
CN113064537B (en) * 2021-03-31 2023-03-10 北京达佳互联信息技术有限公司 Media resource playing method, device, equipment, medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011120024A (en) * 2009-12-03 2011-06-16 Canon Inc Video display system
CN102902465A (en) * 2011-07-27 2013-01-30 永泰软件有限公司 Intelligent video control method
CN103024353A (en) * 2012-12-07 2013-04-03 安科智慧城市技术(中国)有限公司 Playing method and device for multi-screen compressed image
CN103491341A (en) * 2013-07-17 2014-01-01 北京汉邦高科数字技术股份有限公司 Method for displaying network hard disk video
CN106937159A (en) * 2017-04-26 2017-07-07 西安诺瓦电子科技有限公司 Many picture output control methods and device


Also Published As

Publication number Publication date
CN110737385A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
US10466951B2 (en) Gallery video player
US11698721B2 (en) Managing an immersive interface in a multi-application immersive environment
US20200059500A1 (en) Simultaneous input system for web browsers and other applications
KR102027612B1 (en) Thumbnail-image selection of applications
US8516393B2 (en) Apparatus, system, and method for presenting images in a multiple display environment
US8508614B2 (en) Teleprompting system and method
US20140223490A1 (en) Apparatus and method for intuitive user interaction between multiple devices
US8381136B2 (en) Handheld electronic device supporting multiple display mechanisms
US20120299968A1 (en) Managing an immersive interface in a multi-application immersive environment
KR20140039210A (en) Multi-application environment
JP5932831B2 (en) Managing an immersive environment
JP5344651B2 (en) Information processing apparatus, control method, program, and information processing system
WO2022161432A1 (en) Display control method and apparatus, and electronic device and medium
CN110062287B (en) Target object control method and device, storage medium and electronic equipment
CN110737385B (en) Video mouse interaction method, intelligent terminal and storage medium
CN112911147A (en) Display control method, display control device and electronic equipment
CN110688190A (en) Control method and device of intelligent interactive panel
WO2024046203A1 (en) Content display method and apparatus
TW201915710A (en) Display device and image display method thereof based on Android platform
US11775127B1 (en) Emissive surfaces and workspaces method and apparatus
US11557065B2 (en) Automatic segmentation for screen-based tutorials using AR image anchors
CN113726953B (en) Display content acquisition method and device
JP7289208B2 (en) Program, Information Processing Apparatus, and Method
US20230289048A1 (en) Managing An Immersive Interface in a Multi-Application Immersive Environment
JP2022137023A (en) Program, Information Processing Apparatus, and Method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant