CN111866548A - Marking method applied to medical video - Google Patents

Marking method applied to medical video

Info

Publication number
CN111866548A
CN111866548A
Authority
CN
China
Prior art keywords
information
video
mark
marking
medical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010690333.7A
Other languages
Chinese (zh)
Inventor
刘峥嵘
王岩
张国强
孟齐源
乔乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ouying Information Technology Co Ltd
Original Assignee
Beijing Ouying Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ouying Information Technology Co Ltd filed Critical Beijing Ouying Information Technology Co Ltd
Priority to CN202010690333.7A priority Critical patent/CN111866548A/en
Publication of CN111866548A publication Critical patent/CN111866548A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2353Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Abstract

Embodiments of the present disclosure provide a marking method, apparatus, and computer-readable storage medium applied to medical video. The method includes acquiring video information; receiving a mark adding instruction; and calling a mark adding tool according to the mark adding instruction and adding mark information to the video. In this way, note content can be presented directly to all users during playback of the medical video, and all users can endorse, quote, and/or comment on the note content.

Description

Marking method applied to medical video
Technical Field
Embodiments of the present disclosure relate generally to the field of medical video processing, and more particularly, to a marking method, apparatus, and computer-readable storage medium applied to medical video.
Background
When a doctor watches a medical video and has insights about certain key points, or needs to record them, the doctor marks those points so that the marked content can be reviewed later.
However, existing methods for marking key points in medical videos only let the person who made the marks view them: the marks cannot be displayed directly to all users (other users watching the video) during playback, and other users cannot view, comment on, quote, or endorse the marked content.
Disclosure of Invention
The present disclosure is directed to solving at least one of the technical problems in the related art.
To this end, in a first aspect of the present disclosure, a tagging method applied to medical video is provided. The method comprises the following steps:
acquiring video information;
receiving a mark adding instruction, calling a mark adding tool according to the mark adding instruction, and adding mark information to the medical video.
Further, the mark information includes:
the marked content information and the time node in the video at which the mark information appears.
Further, the marked content information includes text, picture, audio, and/or video information.
Further, the mark-up tools include graphical drawing tools and/or file editing tools.
Further, adding the mark information to the video comprises:
adding the mark information in a preset mark area of the video.
Further, the method further includes:
when the video is played again, parsing the video, loading the added mark information, and determining the time node of each piece of mark information;
when playback reaches a time node containing mark information, popping up prompt information for that mark information; and
in response to an operation on the prompt information, playing or hiding the mark information.
Further, the method further includes:
endorsing, quoting, and/or commenting on the added mark information.
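The mark-information structure described in this first aspect — marked content plus the time node at which it appears, with support for endorsements and comments — can be sketched as plain data classes. All class and field names below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MarkContent:
    """Marked content; per the disclosure it may combine text, picture,
    audio, and/or video information. Field names are hypothetical."""
    text: Optional[str] = None
    picture_url: Optional[str] = None
    audio_url: Optional[str] = None
    video_url: Optional[str] = None


@dataclass
class Mark:
    """Mark information: marked content plus the time node (in seconds)
    in the video at which the mark information appears."""
    time_node: float
    content: MarkContent
    author: str = ""                       # not specified in the disclosure; illustrative
    endorsements: int = 0                  # supports endorse/quote/comment interactions
    comments: List[str] = field(default_factory=list)


@dataclass
class MedicalVideo:
    video_id: str
    marks: List[Mark] = field(default_factory=list)

    def add_mark(self, mark: Mark) -> None:
        """Add mark information and keep marks ordered by time node,
        so playback can find the next mark efficiently."""
        self.marks.append(mark)
        self.marks.sort(key=lambda m: m.time_node)
```

A sorted mark list makes it cheap to display all marked time nodes on the progress bar and to look up which mark to prompt for during playback.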
In a second aspect of the disclosure, an apparatus is presented, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the above-described method according to the present disclosure.
In a third aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored; the program, when executed by a processor, implements the above-described method according to the present disclosure.
The marking method applied to medical video provided by the embodiments of the present application includes: acquiring video information; receiving a mark adding instruction; and calling a mark adding tool according to the mark adding instruction and adding mark information to the video. In this way, for every user watching the video, marks containing content can pop up directly in the video playing area, and users can view, endorse, and quote the marks and carry out further discussion.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a tagging method applied to medical video according to the present application;
FIG. 3 is a schematic structural diagram of a computer system used for implementing a terminal device or a server according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a tag add command according to the present invention;
FIG. 5 is a schematic diagram illustrating the display effect of the added mark information according to the present invention;
FIG. 6 is a schematic diagram illustrating the display effect of the marking information according to the present invention;
FIG. 7 is a diagram illustrating the details of viewing tagged information according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein merely describes an association between related objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Fig. 1 illustrates an exemplary system architecture 100 to which an embodiment of a tagging method applied to medical video of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a model training application, a video recognition application, a web browser application, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not particularly limited here.
When the terminals 101, 102, and 103 are hardware, a video capture device may also be installed on them. The video capture device may be any device capable of capturing video, such as a camera or a sensor. The user may capture video using the video capture device on the terminals 101, 102, and 103.
The server 105 may be a server that provides various services, such as a background server that processes data displayed on the terminal devices 101, 102, 103. The background server may perform processing such as analysis on the received data, and may feed back a processing result (e.g., an identification result) to the terminal device.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not particularly limited here.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the target data does not need to be acquired from a remote place, the above system architecture may not include a network but only a terminal device or a server.
Fig. 2 is a flowchart of a marking method applied to a medical video according to an embodiment of the present application. As can be seen from fig. 2, the marking method applied to the medical video of the embodiment includes the following steps:
S210, acquiring medical video information.
In this embodiment, the execution subject of the marking method applied to medical video (for example, the server shown in fig. 1) may acquire medical video information through a wired or wireless connection.
Further, the execution subject may acquire medical video information transmitted by an electronic device (for example, a terminal device shown in fig. 1) communicatively connected to it, or may use medical video information stored locally in advance.
S220, receiving a mark adding instruction, calling a mark adding tool according to the mark adding instruction, and adding mark information to the video.
Optionally, the mark adding instruction includes: an instruction input by the user through a key input device such as a keyboard or a remote controller, or through a pointing input device such as a touch screen or a mouse.
As shown in fig. 4, the mark adding instruction is received when a user (doctor) clicks the "write note" button.
Optionally, as shown in fig. 5, the current time node is recorded, video playback is paused, the current playing frame is captured, and the captured frame is shrunk and moved to a position that is easy to observe and does not interfere with entering the mark information (such as the video screenshot in fig. 5); the mark information is then added in the preset mark area.
It should be noted that, in practical applications, video playback need not be paused; the video may simply be shrunk and moved to a position that is easy to observe and does not interfere with entering the mark information.
Optionally, after the user clicks the "write note" button, the mark adding tool is called, video playback is automatically paused, and the mark information added with the mark adding tool is entered into the preset mark area.
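The "write note" flow above — record the current time node, optionally pause playback, then capture and shrink the current frame so it does not block the mark-entry area — might be sketched as follows. The player interface (`current_time`, `pause()`, `grab_frame()`) and the thumbnail layout values are assumptions for illustration, not part of the disclosure:

```python
class AnnotationSession:
    """Sketch of handling a mark adding instruction ('write note' click):
    record the time node, pause playback (optional per the description),
    and keep a shrunken screenshot aside for the mark area."""

    def __init__(self, player):
        # player is assumed to expose current_time, pause(), and grab_frame()
        self.player = player
        self.time_node = None
        self.thumbnail = None

    def on_write_note_clicked(self, pause_playback: bool = True):
        # record the current time node so the mark is bound to this moment
        self.time_node = self.player.current_time
        if pause_playback:
            # pausing is optional; the description notes playback may continue
            self.player.pause()
        frame = self.player.grab_frame()
        # shrink the captured frame and move it aside; values are illustrative
        self.thumbnail = {"image": frame, "scale": 0.25, "position": "top-right"}
        return self.time_node
```

After this handler runs, the mark adding tool (graphical drawing and/or file editing) would be shown so the user can enter the mark content for the recorded time node.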
The mark-up tools include graphical drawing tools and/or file editing tools.
The graphical drawing tool is used to mark key information appearing in the video, such as key point positions in an operation, the shape of a wound in a suturing operation, or key information displayed on a medical instrument; the key information is not further limited here and can be chosen according to the actual situation.
Further, the user may mark with one or more graphical shapes provided by the graphical drawing tool, or may mark the key information directly with a brush.
The file editing tool is used to edit information such as text, pictures, audio, and/or video.
Optionally, the mark adding tool is hidden during video playback; after a mark adding instruction is received (i.e., after the "write note" button is clicked), the video player calls and displays the mark adding tool so that the user can add rich mark content.
Optionally, after the user finishes marking a medical video, the marked medical video is stored and published; according to a preset setting, it may also be synchronized to a corresponding doctor community, for example "Bone Cloud" (a medical community commonly used by orthopedic surgeons), making it convenient for more users to discuss the marked medical video.
Optionally, as shown in fig. 6, when the video to which marks have been added is played again, the video is parsed, the added mark information is loaded, the time node at which each piece of mark information appears is determined, and a summary of the mark information corresponding to each time node is displayed; the summary helps users watching the video understand the mark information. That is, all marked time nodes are visible in the video interface, and when the user moves the cursor to a time node, the summary of the mark information corresponding to that node is displayed automatically.
As the video plays, if the current playback time contains mark information, a prompt asking whether to view the current mark information pops up automatically; the prompt may include the summary of the mark information.
For example, the content of the prompt message is "… … applicable to knee joint lesions … … caused by various causes".
Further, in response to a selection or click by a user watching the medical video: if the user selects or clicks "view (confirm)", the mark information is shown; if the user selects or clicks "don't view (cancel)", the current mark information is skipped, i.e., hidden, and the video continues playing.
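The playback-time check — pop up a prompt whenever playback crosses a marked time node — can be sketched with a sorted list of time nodes and a binary search per playback tick. The function name and the tick-based interface are assumptions for illustration:

```python
import bisect


def marks_to_prompt(mark_times, prev_time, cur_time):
    """Return the indices of marks whose time node lies in (prev_time, cur_time].

    mark_times must be sorted ascending. On each playback tick the player
    would pop up a prompt (with the mark's summary) for every returned
    index; the viewer then chooses to play or hide the mark information.
    """
    lo = bisect.bisect_right(mark_times, prev_time)
    hi = bisect.bisect_right(mark_times, cur_time)
    return list(range(lo, hi))
```

Using the half-open interval (prev_time, cur_time] ensures each mark fires exactly once even when tick boundaries do not land exactly on a mark's time node.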
Optionally, the user may click the play button at any time node, and playback automatically jumps to the corresponding position in the video.
Optionally, as shown in fig. 7, after the user chooses to view the mark information, the detailed mark content (text, images, and the like) is displayed; at the same time, the mark information can be endorsed, quoted, and/or commented on, forming multi-person interactive communication. The detailed mark content includes other users' operations on the mark information (marks, endorsements, quotes, and/or comments).
It should be noted that when a user watches a video to which mark information has been added, if the user has a unique insight or viewpoint about content in the video that has not yet been marked, the user may also add mark information (for the specific adding manner, refer to step S220, which is not repeated here); all mark information (time nodes) is displayed the next time the video is played.
The marking method applied to medical video realizes open, multi-person discussion through marks. It allows different users' insights and viewpoints on a video to be fully expressed and displayed, makes communication about the video content and mark information more efficient and convenient, supports simultaneous marking by multiple users to further improve efficiency, and creates a good atmosphere of use.
An embodiment of the present application further provides an apparatus, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the marking method applied to medical video described above.
Furthermore, an embodiment of the present application also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned marking method applied to medical videos.
Reference is now made to fig. 3, which illustrates a schematic block diagram of a computer system suitable for implementing a terminal device or server of an embodiment of the present application. The terminal device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 3, the computer system includes a Central Processing Unit (CPU)301 that can perform various appropriate actions and processes based on a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for system operation are also stored. The CPU301, ROM 302, and RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
The following components are connected to the I/O interface 305: an input portion 306 including a keyboard, a mouse, and the like; an output portion 307 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card or a modem. The communication section 309 performs communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 310 as needed, so that a computer program read from it can be installed into the storage section 308 as needed.
In particular, based on the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. The computer program performs the above-described functions defined in the method of the present application when executed by the Central Processing Unit (CPU) 301.
It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a unit, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an information measuring unit, a travel track determining unit, a mapping relation determining unit, and a driving strategy generating unit. Here, the names of these units do not constitute a limitation on the unit itself in some cases, and for example, the information measuring unit may also be described as a "unit that measures the state information of the own vehicle and the surrounding scene information".
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the apparatus of the above-described embodiments, or a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs that, when executed by a device, cause the device to: acquire medical video information; receive a mark adding instruction; call a mark adding tool according to the mark adding instruction; and add mark information to the video.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (9)

1. A marking method applied to medical video, comprising:
acquiring video information;
receiving a mark adding instruction, calling a mark adding tool according to the mark adding instruction, and adding mark information to the video.
2. The method of claim 1, wherein the mark information comprises:
the marked content information and the time node in the video at which the mark information appears.
3. The method of claim 2, wherein the marked content information comprises text, picture, audio, and/or video information.
4. The method of claim 3, wherein the mark-up tool comprises a graphical drawing tool and/or a file editing tool.
5. The method of claim 4, wherein adding the mark information to the video comprises:
adding the mark information in a preset mark area of the video.
6. The method of claim 5, further comprising:
when the video is played again, parsing the video, loading the added mark information, and determining the time node of each piece of mark information;
when playback reaches a time node containing mark information, popping up prompt information for that mark information; and
in response to an operation on the prompt information, playing or hiding the mark information.
7. The method of claim 6, further comprising:
endorsing, quoting, and/or commenting on the added mark information.
8. An apparatus, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010690333.7A 2020-07-17 2020-07-17 Marking method applied to medical video Pending CN111866548A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010690333.7A CN111866548A (en) 2020-07-17 2020-07-17 Marking method applied to medical video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010690333.7A CN111866548A (en) 2020-07-17 2020-07-17 Marking method applied to medical video

Publications (1)

Publication Number Publication Date
CN111866548A true CN111866548A (en) 2020-10-30

Family

ID=72983692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010690333.7A Pending CN111866548A (en) 2020-07-17 2020-07-17 Marking method applied to medical video

Country Status (1)

Country Link
CN (1) CN111866548A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077909A1 (en) * 2022-10-12 2024-04-18 腾讯科技(深圳)有限公司 Video-based interaction method and apparatus, computer device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930779A (en) * 2010-07-29 2010-12-29 Huawei Device Co., Ltd. Video commenting method and video player
CN110381382A (en) * 2019-07-23 2019-10-25 Tencent Technology (Shenzhen) Co., Ltd. Video note generation method and apparatus, storage medium and computer device
CN110784754A (en) * 2019-10-30 2020-02-11 Beijing ByteDance Network Technology Co., Ltd. Video display method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN110570698B (en) Online teaching control method and device, storage medium and terminal
US11450350B2 (en) Video recording method and apparatus, video playing method and apparatus, device, and storage medium
US20220385997A1 (en) Video processing method and apparatus, readable medium and electronic device
CN112698769B (en) Information interaction method, device, equipment, storage medium and program product
CN111800671B (en) Method and apparatus for aligning paragraphs and video
US11528535B2 (en) Video file playing method and apparatus, and storage medium
CN109785687B (en) Data processing method, device and system for online video teaching
US20140304730A1 (en) Methods and apparatus for mandatory video viewing
CN109255767B (en) Image processing method and device
CN108763532A (en) For pushed information, show the method and apparatus of information
US20100177122A1 (en) Video-Associated Objects
CN110058854B (en) Method, terminal device and computer-readable medium for generating application
US9372601B2 (en) Information processing apparatus, information processing method, and program
CN110930220A (en) Display method, display device, terminal equipment and medium
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CN109982130A (en) A kind of video capture method, apparatus, electronic equipment and storage medium
CN109271929B (en) Detection method and device
CN110673886B (en) Method and device for generating thermodynamic diagrams
CN112784103A (en) Information pushing method and device
US20200007959A1 (en) Method and apparatus for publishing information, and method and apparatus for processing information
CN111866548A (en) Marking method applied to medical video
CN112492399B (en) Information display method and device and electronic equipment
CN109871465B (en) Time axis calculation method and device, electronic equipment and storage medium
JP6686578B2 (en) Information processing apparatus and information processing program
CN113641853A (en) Dynamic cover generation method, device, electronic equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Du Jiaqi

Inventor after: Liu Zhengrong

Inventor after: Wang Yan

Inventor after: Zhang Guoqiang

Inventor after: Meng Qiyuan

Inventor after: Qiao Le

Inventor before: Liu Zhengrong

Inventor before: Wang Yan

Inventor before: Zhang Guoqiang

Inventor before: Meng Qiyuan

Inventor before: Qiao Le

RJ01 Rejection of invention patent application after publication

Application publication date: 20201030