CN115134635B - Media information processing method, device, equipment and storage medium - Google Patents


Info

Publication number: CN115134635B
Application number: CN202210638029.7A
Authority: CN (China)
Prior art keywords: content, target, media information, displaying, encryption
Legal status: Active (granted)
Inventor: 陈昱志
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by: Tencent Technology Shenzhen Co Ltd
Priority to: CN202210638029.7A
Other languages: Chinese (zh)
Other versions: CN115134635A

Classifications

    • H04N21/2347: Processing of video elementary streams involving video stream encryption
    • H04N21/23476: Video stream encryption by partially encrypting, e.g. encrypting the ending portion of a movie
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F3/04817: GUI interaction techniques using icons
    • G06F3/04842: Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a media information processing method, apparatus, device and computer-readable storage medium. The method includes: displaying target content of media information in a media information display interface, the target content being part of the content included in the media information; receiving a content encryption instruction indicating that all or part of the target content is to be encrypted; in response to the content encryption instruction, covering the content indicated for encryption by the current content encryption instruction with a floating layer; and in response to a content generation instruction, generating target media information and controlling the encrypted content to be in an invisible state when the content in the target media information is displayed. The application enables encryption of partial content in media information and improves the precision and efficiency of the encryption operation.

Description

Media information processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer communication technologies, and in particular, to a method, an apparatus, a device, a computer-readable storage medium and a computer program product for processing media information.
Background
With the popularity of video conferencing, various conference videos may involve sensitive information such as profit data, development plans and employee information. Once such a video is exposed, the sensitive information may be leaked. Therefore, in order to achieve privacy protection, the sensitive information in the video needs to be encrypted.
However, related video encryption technology encrypts the whole video file and cannot encrypt part of the content in the video, so the encryption precision is low and the encryption efficiency is low.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing media information and a computer readable storage medium, which can realize encryption of partial content in the media information and improve the accuracy and encryption efficiency of encryption operation.
The technical scheme of the embodiment of the application is realized as follows:
The embodiment of the application provides a method for processing media information, which comprises the following steps:
Displaying target content of media information in a media information display interface, wherein the target content is part of content in the content included in the media information;
receiving a content encryption instruction, wherein the content encryption instruction is used for indicating that the target content is encrypted in whole or in part;
responding to the content encryption instruction, and covering the encrypted content indicated by the content encryption instruction by adopting a floating layer;
And generating target media information in response to a content generation instruction, and controlling the content to be in an invisible state when the content in the target media information is displayed.
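By way of illustration only, the following is a minimal sketch of the four steps above (hypothetical names and toy data structures, not the patent's reference implementation): the region indicated by a content encryption instruction is recorded as a floating layer, and the generated target media information keeps that region invisible.

```python
# Minimal sketch of the claimed flow (hypothetical names): receive a content
# encryption instruction, cover the indicated region with a floating layer, and
# generate target media information whose covered region stays invisible.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class EncryptionInstruction:
    """Indicates that all or part of the target content is to be encrypted."""
    frame_index: int                   # which content unit (e.g. video frame)
    region: Tuple[int, int, int, int]  # (x, y, width, height) to cover


@dataclass
class MediaInfo:
    frames: List[List[List[int]]]      # toy frames: rows of grayscale pixels
    overlays: List[EncryptionInstruction] = field(default_factory=list)

    def cover_with_floating_layer(self, instruction: EncryptionInstruction) -> None:
        """Record the floating layer covering the indicated content."""
        self.overlays.append(instruction)

    def generate_target_media(self) -> "MediaInfo":
        """Bake the overlays in so the covered content is invisible when displayed."""
        masked = [[row[:] for row in frame] for frame in self.frames]
        for ins in self.overlays:
            x, y, w, h = ins.region
            for r in range(y, y + h):
                for c in range(x, x + w):
                    masked[ins.frame_index][r][c] = 0  # opaque floating layer
        return MediaInfo(frames=masked)


if __name__ == "__main__":
    media = MediaInfo(frames=[[[255] * 8 for _ in range(8)]])
    media.cover_with_floating_layer(EncryptionInstruction(frame_index=0, region=(2, 2, 3, 3)))
    target = media.generate_target_media()
    print(target.frames[0][3][3])  # 0 -> the covered pixel is invisible
```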
The embodiment of the application provides a processing device of media information, which comprises:
the display module is used for displaying target content of the media information in a media information display interface, wherein the target content is part of the content included in the media information;
The receiving module is used for receiving a content encryption instruction, and the content encryption instruction is used for indicating that all or part of the target content is encrypted;
the coverage module is used for responding to the content encryption instruction and adopting a floating layer to cover the encrypted content indicated by the content encryption instruction;
And the generation module is used for responding to the content generation instruction, generating target media information and controlling the content to be in an invisible state when the content in the target media information is displayed.
In the above scheme, the display module is further configured to display a content search area and a media display area in a media information display interface, display at least one piece of recommended content in the content search area, and display the content of the media information in the media display area;
The recommended content is part of content in the content included in the recommended media information;
In the process of displaying the content, responding to a selection operation of target recommended content in the at least one recommended content, skipping the displayed content to first content comprising the target recommended content, and taking the first content as the target content.
In the above scheme, the display module is further configured to display, in the association area of each recommended content in the content search area, location information of the corresponding recommended content;
The position information is used for indicating the display position of the recommended content in the media information.
In the above scheme, the display module is further configured to display the content of the media information in a media information display interface, and display at least one keyword;
in the process of displaying the content, responding to a selection operation for a target keyword in the at least one keyword, skipping the displayed content to second content, and taking the second content as the target content;
And the second content is obtained by searching the content included in the media information based on the target keyword.
In the above scheme, the display module is further configured to display, in a media information display interface, content of the media information, and display a content search function item;
In the process of displaying the content, responding to a search instruction for input content triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as the target content;
And the third content is obtained by searching for the content included in the media information based on the input content.
In the above scheme, the display module is further configured to display a content search area and a media display area in a media information display interface, display at least one content thumbnail in the content search area, and display content of the media information in the media display area;
wherein the content thumbnail is a thumbnail of a content unit of the media information;
In the process of displaying the content, responding to a selection operation of a target content thumbnail in the at least one content thumbnail, skipping the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In the above scheme, when the media information is video, the display module is further configured to play the content of the video in the media information display interface, and display a play progress bar for indicating the play progress of the video;
In the process of displaying the content, responding to a progress adjustment operation triggered based on the playing progress bar, jumping the displayed content to a fifth content which is indicated to be adjusted by the progress adjustment operation, and taking the fifth content as the target content.
In the above scheme, the overlay module is further configured to mark a position of the target content in the media information, and display corresponding mark information;
and, in the process of displaying other content different from the target content, when a triggering operation for the marking information is received, displaying the content that is indicated for encryption by the content encryption instruction and is covered by the floating layer.
In the above scheme, the receiving module is further configured to display an automatic encryption control in the media information display interface;
receiving a content encryption instruction in response to a triggering operation for the automatic encryption control;
Correspondingly, in the above scheme, the coverage module is further configured to automatically adopt a floating layer to cover all contents of the target content in response to the content encryption instruction.
In the above scheme, the receiving module is further configured to display a smearing encryption control in the media information display interface;
And receiving a content encryption instruction in response to a smearing operation triggered based on the smearing encryption control.
In the above scheme, the coverage module is further configured to respond to the content encryption instruction, display an application track of the application operation by using a floating layer, and use the content covered by the application track as the encrypted content indicated by the content encryption instruction.
In the above scheme, the coverage module is further configured to display an icon corresponding to at least one smearing tool in response to a triggering operation for the smearing encryption control;
Responding to a selection operation for a target icon in at least one icon, and displaying a target smearing tool corresponding to the target icon;
the application operation triggered based on the target application tool is received.
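To make the smearing-based encryption concrete, the following is a minimal sketch (hypothetical names; it assumes the smearing operation is reported as a sequence of pointer coordinates together with the brush radius of the selected smearing tool) that rasterises the smear track into the set of pixels to be covered by the floating layer.

```python
# Minimal sketch (assumption: the client reports a smear operation as pointer
# coordinates plus the brush radius of the selected smearing tool).  The track is
# rasterised into the set of pixels that the floating layer must cover.
from typing import Iterable, List, Set, Tuple

Point = Tuple[int, int]


def smear_track_to_mask(track: Iterable[Point], radius: int,
                        width: int, height: int) -> Set[Point]:
    """Return every pixel within `radius` of any sampled point on the track."""
    covered: Set[Point] = set()
    for cx, cy in track:
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                x, y = cx + dx, cy + dy
                if 0 <= x < width and 0 <= y < height and dx * dx + dy * dy <= radius * radius:
                    covered.add((x, y))
    return covered


# The covered pixels are treated as the content covered by the smear track, i.e.
# the content that the content encryption instruction indicates should be encrypted.
mask = smear_track_to_mask(track=[(10, 10), (12, 11), (14, 12)], radius=3,
                           width=1920, height=1080)
print(len(mask))
```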
In the above scheme, the receiving module is further configured to display a frame selection encryption control in the media information display interface;
responding to the triggering operation of the frame selecting encryption control, and controlling the target content to be in an editing state;
Responsive to a content selection operation for the target content in the editing state, displaying a selection frame including the selected content;
And receiving a content encryption instruction for the content included in the selection box.
Correspondingly, in the above scheme, the coverage module is further configured to cover the content included in the frame by adopting a floating layer in response to the content encryption instruction.
In the above scheme, when the media information is video, the target content is a frame image of the video, and the content indicated for encryption by the content encryption instruction is target image content included in the frame image; the generating module is further configured to, in the process of playing the video, when a target video clip in the video includes a plurality of frame images containing the target image content, control the target image content in each of the frame images to be in an invisible state.
In the above scheme, the coverage module is further configured to obtain audio segments corresponding to the plurality of frame images;
encrypting the content of the audio fragment to obtain an encrypted target audio fragment;
and in the process of playing the video, when the video is played to the target audio fragment, shielding the target audio fragment.
In the above scheme, when the media information is video, the coverage module is further configured to obtain an audio file in the video, and perform semantic recognition on the audio file to obtain recognized content;
retrieving, based on the encrypted content indicated by the content encryption instruction, in the identified content to determine an audio clip that matches the encrypted content indicated by the content encryption instruction;
encrypting the content of the audio fragment to obtain an encrypted target audio fragment;
and in the process of playing the video, when the video is played to the target audio fragment, shielding the target audio fragment.
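As an illustration of the audio masking described above, the following is a minimal sketch (assuming raw 16-bit mono PCM samples at a known sample rate; "masking" is realised here by silencing the matched clip so it cannot be heard during playback).

```python
# Minimal sketch of masking a matched audio clip (assumptions: raw 16-bit mono PCM
# samples at a known sample rate; masking means silencing the clip during playback).
from typing import List


def mask_audio_clip(samples: List[int], sample_rate: int,
                    start_s: float, end_s: float) -> List[int]:
    """Zero out the samples between start_s and end_s (seconds)."""
    start = int(start_s * sample_rate)
    end = min(int(end_s * sample_rate), len(samples))
    masked = samples[:]
    for i in range(start, end):
        masked[i] = 0  # silence -> the sensitive speech cannot be heard
    return masked


pcm = [1000, -800, 600, -400, 200, -100] * 8000  # toy sample buffer
protected = mask_audio_clip(pcm, sample_rate=16000, start_s=1.0, end_s=2.0)
```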
In the above scheme, the overlay module is further configured to display encryption prompt information when the content in the target media information is displayed;
The encryption prompt information is used for prompting that the content corresponding to the current display position is encrypted.
In the above aspect, the media information processing device further includes an execution module, where the execution module is configured to receive a target operation instruction that indicates to perform a target operation on the target media information, where the target operation includes one of: sharing operation, uploading operation and exporting operation;
and responding to the target operation instruction, and executing the target operation on the target media information.
In the above scheme, the generating module is further configured to display at least one object having a social relationship with the current object in response to the permission setting instruction;
in response to an object selection operation for the at least one object, determining the selected object as the target object;
Correspondingly, the generating module is further configured to: when a first sharing instruction is received that indicates sharing the target media information with the target object, share the target media information to a first terminal of the target object, so that the first terminal controls the content to be in an invisible state when displaying the content in the target media information;
and when a first sharing instruction is received that indicates sharing the target media information with objects other than the target object among the at least one object, share the target media information to a second terminal of the other objects, so that the second terminal controls the content to be in a visible state when displaying the content in the target media information.
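The permission-dependent sharing above can be sketched as follows (a minimal sketch with hypothetical names): per the scheme, the selected target objects receive the target media information with the covered content kept invisible, while the remaining objects receive it with the content visible.

```python
# Minimal sketch of permission-dependent sharing (hypothetical names): map each
# recipient to whether the floating-layer-covered content stays invisible for them.
from typing import Dict, Set


def build_share_plan(all_objects: Set[str], target_objects: Set[str]) -> Dict[str, bool]:
    """True means the recipient's terminal keeps the covered content invisible."""
    return {obj: (obj in target_objects) for obj in all_objects}


plan = build_share_plan(all_objects={"alice", "bob", "carol"},
                        target_objects={"bob"})
# {'alice': False, 'bob': True, 'carol': False} -> only bob's terminal keeps the
# covered content in an invisible state; the others display it as visible.
print(plan)
```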
In the above scheme, the overlay module is further configured to remove a floating layer overlaid on the content in response to a revocation operation for the content encryption instruction, and display the content in a visible state.
The embodiment of the application also provides a processing method of the media information, which comprises the following steps:
Acquiring target media information, wherein the target media information comprises a plurality of continuous content units, and partial content in the content units is encrypted in a floating layer coverage mode;
responsive to a presentation instruction for the target media information, presenting content included in the target media information;
when the encrypted partial content is presented, the partial content is controlled to be in an invisible state.
The embodiment of the application also provides a device for processing the media information, which comprises the following steps:
The acquisition module is used for acquiring target media information, wherein the target media information comprises a plurality of continuous content units, and part of content in the content units is encrypted in a floating layer coverage mode;
the information display module is used for responding to the display instruction aiming at the target media information and displaying the content included in the target media information;
and the control module is used for controlling the partial content to be in an invisible state when the partial content is displayed to be encrypted.
In the above scheme, the control module is further configured to display an identity verification function item;
And responding to the identity verification operation for the current object triggered based on the identity verification function item, and controlling the part of content to be switched from the invisible state to the visible state when the identity verification of the current object passes.
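A minimal sketch of this identity-verification gate follows (the verification itself is platform-specific and is stubbed here with a hypothetical whitelist).

```python
# Minimal sketch (assumption: identity verification is an opaque platform check,
# stubbed here with a hypothetical whitelist of verified identities).
ALLOWED_VIEWERS = {"alice"}


def content_visible_after_verification(current_object: str) -> bool:
    """Switch the covered content from invisible to visible only if verification passes."""
    return current_object in ALLOWED_VIEWERS


print(content_visible_after_verification("alice"))    # True  -> floating layer removed
print(content_visible_after_verification("mallory"))  # False -> content stays invisible
```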
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the media information provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium which stores executable instructions for causing a processor to execute, thereby realizing the processing method of media information provided by the embodiment of the application.
The embodiment of the application provides a computer program product, which comprises a computer program or instructions, wherein the computer program or instructions realize the processing method of the media information provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
By applying the embodiments of the present application, during the display of media information the target content is encrypted in response to a content encryption instruction indicating that all or part of the target content of the media information is to be encrypted, so that encryption of part of the content in the media information can be realized and the precision and efficiency of the encryption operation are improved; then, target media information is generated based on a content generation instruction, and during the display of the target media information the encrypted content is controlled to be in an invisible state by means of floating layer coverage, so that the human-computer interaction experience can be improved on the premise of ensuring data security.
Drawings
FIG. 1 is a schematic diagram of a system architecture of a method for processing media information according to an embodiment of the present application;
FIGS. 2A-2B are schematic structural diagrams of an electronic device implementing a method for processing media information according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for processing media information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a media information presentation interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of target content location information provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a second content determination interface provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface for content search function items provided by an embodiment of the present application;
FIG. 8 is a schematic thumbnail view of a content unit provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a playing progress bar according to an embodiment of the present application;
FIG. 10 is a schematic diagram of smearing encryption provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of frame encryption provided in an embodiment of the present application;
FIG. 12 is a schematic illustration of a floating layer according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a location marker for target content provided by an embodiment of the present application;
FIG. 14 is a schematic illustration of a revocation operation provided by an example of the present application;
FIG. 15 is a schematic diagram of a content unit provided by an embodiment of the present application;
FIG. 16 is a flow chart of an audio clip masking process provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an encryption hint provided by an embodiment of the present application;
FIG. 18 is a schematic diagram of a target operation provided by an embodiment of the present application;
FIG. 19 is a schematic view of an object rights setting interface provided by an embodiment of the present application;
FIG. 20 is a flowchart of a method for processing media information according to an embodiment of the present application;
FIG. 21 is a schematic diagram of an identity verification function interface provided by an embodiment of the present application;
FIG. 22 is a flowchart of a processing interface for media information provided by an embodiment of the present application;
fig. 23 is a schematic diagram of a method for processing media information according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Where a description such as "first/second" appears in this application document, the following explanation is added: the terms "first/second/third" merely distinguish similar objects and do not represent a particular ordering of the objects. It should be understood that, where permitted, "first/second/third" may be interchanged in a particular order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Before the embodiments of the present application are described in further detail, the terms involved in the embodiments of the present application are explained as follows.
1) Client: an application program running in the terminal to provide various services, such as a video playing client, an instant messaging client, a live streaming client, and the like.
2) "In response to": used to represent the condition or state on which a performed operation depends. When the condition or state on which it depends is satisfied, the one or more performed operations may be carried out in real time or with a set delay; unless otherwise specified, there is no limitation on the order in which multiple operations are performed.
3) Real-time smearing and encryption of video or pictures: image and text content, such as the video content of a live video conference or the pictures, presentation files and labels played in a video, is encrypted or smeared in real time through an encryption algorithm and a real-time smearing technique, and the whole video is re-synthesized after processing.
4) Pulse-code modulation (PCM): a digital signal generated by sampling, quantizing and encoding a continuously varying analog signal; such an electrical digital signal is called a digital baseband signal and is generated by PCM terminal equipment. In an optical fiber communication system, the binary optical pulses "0" and "1" transmitted in the fiber are generated by on-off modulating a light source with a binary digital signal. Digital transmission systems generally adopt the pulse-code modulation scheme.
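As a toy illustration of the PCM process just described (illustrative parameters only, not tied to the patent), the following sketch samples a continuous signal, quantizes each sample to 8 bits and encodes the result as a byte sequence.

```python
# Toy illustration of pulse-code modulation: sample a continuous signal, quantize
# each sample to 8 bits, and encode the result as bytes (illustrative parameters).
import math


def pcm_encode(duration_s: float = 0.01, sample_rate: int = 8000,
               freq_hz: float = 440.0) -> bytes:
    n = int(duration_s * sample_rate)
    codes = []
    for i in range(n):
        analog = math.sin(2 * math.pi * freq_hz * i / sample_rate)  # continuous signal
        quantized = int(round((analog + 1.0) / 2.0 * 255))          # 8-bit quantization
        codes.append(quantized)                                     # encoding
    return bytes(codes)


digital_baseband = pcm_encode()
print(len(digital_baseband), digital_baseband[:8])
```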
Based on the above explanation of the terms involved in the embodiments of the present application, the media information processing method and system provided by the embodiments of the present application are described below. Referring to fig. 1, fig. 1 is a schematic architecture diagram of the media information processing system provided by an embodiment of the present application. In order to support a media information processing application, in the media information processing system 100, terminals (terminal 400-1 and terminal 400-2 are shown as an example) are connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two. The server 200 may belong to a target server cluster, and the target server cluster includes at least one of a single server, a plurality of servers, a cloud computing platform and a virtualization center. The server cluster may be used to provide background services for applications that support a three-dimensional virtual environment.
The terminal is provided with a client, such as a video playing client, a live streaming client, and the like. When a user opens the client on the terminal to display media information, the terminal 400-1 (on which a display client 410-1 for media information is deployed) serves as the execution end of media information encryption (and also the publishing end of the target media information), and is configured to display the target content of the media information in the media information display interface, where the target content is part of the content included in the media information; receive a content encryption instruction indicating that all or part of the target content is to be encrypted; in response to the content encryption instruction, cover the content indicated for encryption by the current content encryption instruction with a floating layer; and in response to a content generation instruction, generate the target media information and control the content to be in an invisible state when the content in the target media information is displayed.
The terminal 400-2 (on which a display client 410-2 for media information is deployed) serves as the display end of the target media information and is configured to obtain the target media information, where the target media information includes a plurality of continuous content units and part of the content in the plurality of content units is encrypted by means of floating layer coverage; in response to a display instruction for the target media information, display the content included in the target media information in the media information display interface; and when the encrypted partial content is displayed, control the partial content to be in an invisible state.
The server 200 is configured to receive a media information acquisition request sent by the terminal 400-1, send media information to the terminal 400-1 in response to the request, receive target media information sent by the terminal 400-1, and cache the target media information; when receiving the acquisition request for the target media information sent by the terminal 400-2, the target media information is sent to the terminal 400-2, so that the terminal 400-2 displays the target media information through the display client 410-2 of the media information, and when displaying the target content, the terminal presents a corresponding floating layer to cover the target content, namely, the target content is controlled to be in an invisible state.
In practical applications, the server 200 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), big data and artificial intelligence platforms. The terminals (e.g., terminal 400-1 and terminal 400-2) may be, but are not limited to, smart phones, tablet computers, notebook computers, desktop computers, smart speakers, smart televisions, smart watches, and the like. The terminals (e.g., terminal 400-1 and terminal 400-2) and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the present application.
The embodiment of the application can also be realized by means of Cloud Technology (Cloud Technology), wherein the Cloud Technology refers to a hosting Technology for integrating serial resources such as hardware, software, network and the like in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
Fig. 2A-2B are schematic structural diagrams of an electronic device implementing the method for processing media information according to an embodiment of the present application. Referring to fig. 2A (or fig. 2B), in practical applications the electronic device 500 may be implemented as the server or a terminal in fig. 1; the electronic device implementing the method for processing media information according to the embodiment of the present application is described below. The electronic device 500 shown in fig. 2A (or fig. 2B) includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to implement connection and communication between these components. In addition to the data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled as the bus system 540 in fig. 2A (or fig. 2B).
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor (for example a microprocessor or any conventional processor), a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
Memory 550 includes volatile memory or non-volatile memory, and may also include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 550 described in embodiments of the present application is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
Network communication module 552 is used to reach other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 include: bluetooth, wireless compatibility authentication (WiFi), and universal serial bus (USB, universal Serial Bus), etc.;
A presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the apparatus for processing media information provided in the embodiments of the present application may be implemented in software. Fig. 2A shows a schematic structural diagram of the electronic device, provided in the embodiments of the present application, that performs the method for processing media information; the apparatus 555 for processing media information stored in the memory 550 may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a presentation module 5551, a receiving module 5552, an overlay module 5553, and a generation module 5554. These modules are logical and thus may be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the apparatus for processing media information provided in the embodiments of the present application may likewise be implemented in software. Referring to fig. 2B, fig. 2B shows a schematic structural diagram of the electronic device, provided in the embodiments of the present application, that performs the method for processing media information; the apparatus 556 for processing media information stored in the memory 550 may be software in the form of a program, a plug-in, or the like, and includes the following software modules: an acquisition module 5561, an information presentation module 5562, and a control module 5563. These modules are logical and thus may be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be described hereinafter.
In other embodiments, the apparatus for processing media information provided in the embodiments of the present application may be implemented in hardware. As an example, the apparatus may be a processor in the form of a hardware decoding processor that is programmed to perform the method for processing media information provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may use one or more application-specific integrated circuits (ASIC, Application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, Programmable Logic Device), complex programmable logic devices (CPLD, Complex Programmable Logic Device), field-programmable gate arrays (FPGA, Field-Programmable Gate Array), or other electronic components.
Based on the above description of the media information processing method system and the electronic device provided by the embodiment of the present application, the media information processing method provided by the embodiment of the present application is described below. In some embodiments, the method for processing media information provided by the embodiments of the present application may be implemented by a server or a terminal alone or in cooperation with the server and the terminal. In some embodiments, the terminal or the server may implement the method for processing media information provided by the embodiment of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a Native Application (APP), i.e. a program that needs to be installed in an operating system to run, such as a client that supports virtual scenarios, such as a game APP; the method can also be an applet, namely a program which can be run only by being downloaded into a browser environment; but also an applet that can be embedded in any APP. In general, the computer programs described above may be any form of application, module or plug-in.
The method for processing media information provided by the embodiment of the present application is described below by taking a terminal-side embodiment as an example. Referring to fig. 3, fig. 3 is a flowchart of a method for processing media information according to an embodiment of the present application; it should be noted that the terminal in the embodiment of the present application is the publishing end of the target media information. The method for processing media information according to the embodiment of the present application includes:
In step 101, the terminal displays a target content of the media information in the media information display interface, where the target content is a part of the content included in the media information.
In practical implementation, a display client for the media information is installed on the terminal, the media information display interface is displayed through the display client, and the user can view the target content of the media information through the display interface. Taking video as an example of the media information, the corresponding display client may be a video playing client; in response to a start operation for the video playing client, part of the content of the video may be displayed in the playing interface of the playing client. Taking an article as an example of the media information, the corresponding display client may be a reading client; in response to a start operation for the reading client, part of the content of the article may be displayed in the display interface of the reading client.
Describing the manner in which the target content of the media information is displayed, in some embodiments, the terminal may implement the foregoing manner in which the target content of the media information is displayed: the terminal displays a content searching area and a media display area in a media information display interface, displays at least one piece of recommended content in the content searching area, and displays the content of media information in the media display area; the recommended content is part of content in the content included in the recommended media information; in the process of displaying the content, in response to a selection operation of a target recommended content in at least one recommended content, the displayed content is jumped to a first content including the target recommended content, and the first content is taken as the target content.
In actual implementation, in the process of displaying the media information, the content of the media information can be searched and located based on the recommended content to obtain the first content including the target recommended content, and the content displayed in the media information display interface is jumped to the determined first content (namely the target content), so that accurate locating and rapid display of the target content are realized. In addition, in an application scenario of encrypting content in media information, the recommended content may be content in the media information that may need to be protected, where the protected content may be determined based on corresponding privacy protection keywords, for example "important", "privacy", "secret" or "confidential". A search algorithm based on artificial intelligence can automatically search the media information for the frame images in which the privacy protection keywords appear. When the media information contains speech, the semantics of the speech part of the media information can also be understood and recognized based on speech recognition, so as to screen out the key frames of the (image) page where key information appears, that is, information that is not explicitly marked for encryption protection in the image but needs encryption protection after semantic understanding of the speech; such key frames are determined after semantic understanding of the speech in the media information by a machine-learned speech recognition model. The speech recognition model may be constructed based on a Gaussian mixture model-hidden Markov model (GMM-HMM). The GMM-HMM models phonemes and recognizes the pronunciation phonemes in the target content (for Chinese, the phonemes are pinyin corresponding to Chinese characters; for English, the phonemes are phonetic symbols corresponding to English words), finds the corresponding characters or words in a dictionary, and determines the position (including the time point) of the speech content corresponding to the target content within the speech of the media information.
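As an illustration of the speech-based location step described above, the following is a minimal sketch (hypothetical names; it assumes the speech recognition model has already produced a transcript with word-level timestamps, and GMM-HMM decoding itself is out of scope) that maps privacy-protection keywords in the recognized speech to candidate frame images.

```python
# Minimal sketch of locating candidate frames from recognized speech (assumption:
# a transcript with word-level timestamps is already available from the speech
# recognition model; decoding itself is out of scope here).
from typing import List, Tuple

Transcript = List[Tuple[str, float]]  # (word, start time in seconds)

PRIVACY_KEYWORDS = {"important", "privacy", "secret", "confidential"}


def find_sensitive_timestamps(transcript: Transcript) -> List[float]:
    """Return the time points at which privacy-protection keywords are spoken."""
    return [t for word, t in transcript if word.lower() in PRIVACY_KEYWORDS]


def timestamps_to_frames(timestamps: List[float], fps: float) -> List[int]:
    """Map the time points to the frame images offered as recommended content."""
    return [int(t * fps) for t in timestamps]


transcript = [("the", 63.2), ("secret", 64.0), ("revenue", 64.5)]
print(timestamps_to_frames(find_sensitive_timestamps(transcript), fps=25))  # [1600]
```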
Referring to fig. 4, fig. 4 is a schematic diagram of a media information display interface provided by an embodiment of the present application. In the figure, reference numeral 1 shows a content search area for displaying at least one recommended content, and reference numeral 2 shows a media display area for playing the content of the media information. When the user clicks any recommended content in the content search area, the content displayed in the media display area can be controlled to jump directly to the target content that includes that recommended content. For example, if the user clicks "key data", the display jumps directly to the time point at which that content appears in the media information (i.e. jumps to the time point "1:04") for playing. The content search area can be shown or hidden based on the show/hide function item indicated by reference numeral 3, so that the screen display area of the terminal can be used effectively.
The method for determining the target content based on the at least one recommended content can achieve rapid positioning of the target content.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the position information of the corresponding recommended content in the associated area of each recommended content in the content searching area; the position information is used for indicating the display position of the recommended content in the media information.
In actual implementation, the content searching area can display not only the recommended content but also position information for indicating the display position of the current recommended content in the media information, and quick positioning for the target content can be realized based on the position information. If the media information is video, the location information may be a point in time when the recommended content appears in the entire video; if the media information is a document, the location information may be a page number of the recommended content in the document, which is not limited in the embodiment of the present application.
For example, referring to fig. 5, fig. 5 is a schematic diagram of target content location information provided by an embodiment of the present application, where location information shown by reference numeral 1 in the figure is a play time point, and location information shown by reference numeral 2 in the figure is a page number appearing in a document.
The above-mentioned mode of displaying the position information can further improve the accuracy of locating the target content.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the content of the media information in a media information display interface, and displaying at least one keyword; in the process of displaying the content, responding to a selection operation for a target keyword in the at least one keyword, skipping the displayed content to second content, and taking the second content as the target content; and the second content is obtained by searching the content included in the media information based on the target keyword.
In practical implementation, the target content may be determined based on preset keywords: a plurality of keywords for determining the protected content are presented in the media information display interface, and the second content is determined in response to a selection operation for the target keyword. It should be noted that the keywords can be displayed in various ways: they may be displayed in suspension over the display area of the media information, or displayed in a dedicated keyword display area (the aforementioned content search area). In addition, when there are a plurality of second contents determined based on the keywords, since the plurality of second contents have a time-sequence relationship, the earliest-appearing second content may be automatically taken as the target content, or a list of the second contents may be displayed for selection so that the user can manually choose the target content from the plurality of second contents.
Referring to fig. 6, fig. 6 is a schematic diagram of a second content determination interface provided by an embodiment of the present application, in which reference numeral 1 indicates keywords for media information, such as "key data", "secret", "security", "protection", "privacy", and the like, and the keywords are shown in a floating layer style in the interface.
The method for determining the target content based on the keywords can be more targeted, and meanwhile, the man-machine interaction experience is improved, namely, the participation feeling of the user is stronger.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the content of the media information in a media information display interface and displaying a content search function item; in the process of displaying the content, responding to a search instruction for the input content triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as target content; the third content is obtained by searching for the content included in the media information based on the input content.
In practical implementation, the content search function item may be suspended over the display area of the media information; alternatively, a content search area and a media display area may be displayed in the media information display interface, and the search function item (including a search box and a corresponding search control) is displayed in the search area. The user can determine the target content directly by entering the content to be searched into the content search function item (for example, a content search box).
Referring to fig. 7, fig. 7 is an interface schematic diagram of the content search function item provided by an embodiment of the present application: reference numeral 1 shows the content search area in the media information display interface, reference numeral 2 shows the media display area, reference numeral 3 shows the content search function item presented in the content search area, and reference numeral 4 shows a content search function item displayed directly in the display area in the form of a floating layer.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying a content searching area and a media displaying area in a media information displaying interface, displaying at least one content thumbnail in the content searching area, and displaying the content of the media information in the media displaying area; wherein the content thumbnail is a thumbnail of a content unit of media information; in the process of displaying the content, responding to the selection operation of the target content thumbnail in at least one content thumbnail, skipping the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In actual implementation, the terminal may also determine the target content according to the content units of the media information. A content unit may be displayed in the form of a content thumbnail; for example, the thumbnail of the content unit may be displayed in the content search area. In addition, the content units of the media information may be divided according to the type of the media information: when the media information is a video recording of a conference whose content is a presentation, a content unit may be each presentation; when the media information is a scene video, a content unit may be the scene content at the time of shooting or a shot in the scene; when the media information is an ordinary document (article), a content unit may be the content corresponding to each page. The embodiment of the present application does not limit the specific form of a content unit. Meanwhile, due to the size limitation of the terminal screen, an enlarged display function can be provided for a thumbnail, that is, after a thumbnail obtains focus, it can be enlarged and displayed in an associated area to facilitate the user's selection of the target content.
For example, referring to fig. 8, fig. 8 is a schematic diagram of thumbnails of content units provided in an embodiment of the present application, in which the media information is a video. Reference numeral 1 indicates the thumbnails for the content units, where each thumbnail may correspond to a frame image containing target content. After the thumbnail indicated by reference numeral 2 gains focus, an enlarged image of its content (indicated by reference numeral 3) may be presented so that the target content in the thumbnail is shown more clearly. In response to a trigger operation on a target thumbnail among the thumbnails, such as double-clicking the thumbnail indicated by numeral 2, the displayed content of the media information jumps directly to the content corresponding to that thumbnail (the content indicated by numeral 4). In practical application, the position of the content corresponding to a thumbnail within the media information may also be displayed; for example, numeral 5 indicates that the content of the thumbnail at numeral 2 appears in the video at the time point "1:18". When the media information is an article and the content unit is a page, numeral 2 likewise indicates a thumbnail, and numeral 3 indicates the enlarged view of the thumbnail's content displayed when the cursor hovers over it.
According to the method for displaying the content unit of the media information through the thumbnail, the user can conveniently preview the content of the corresponding content unit, and then the target content unit is determined.
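The thumbnail behaviour of fig. 8 (preview, enlarge on focus, jump on selection) could be modelled roughly as in the sketch below; the `Thumbnail` fields, the `player.seek` call, and the scale factor are assumptions for illustration, not details taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Thumbnail:
    unit_index: int        # which content unit (frame group, slide, or page) it previews
    image_path: str        # small preview image
    position_label: str    # e.g. "1:18" for video, "p. 8" for a document

    def enlarged(self, scale: float = 3.0):
        # a UI layer would render the same image at `scale` times its size
        # once the thumbnail gains focus (hover/tap), as described above
        return (self.image_path, scale)

def jump_to_unit(player, thumb: Thumbnail, units) -> None:
    """On selection of a thumbnail, jump the displayed content to its unit."""
    player.seek(units[thumb.unit_index].start_time)   # `player` and `units` assumed
```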
In some embodiments, the terminal may present the target content of the media information in the following manner: when the media information is a video, playing the content of the video in the media information display interface and displaying a playing progress bar indicating the playing progress of the video; in the process of displaying the content, in response to a progress adjustment operation triggered on the playing progress bar, jumping the displayed content to fifth content indicated by the progress adjustment operation and taking the fifth content as the target content.
In practical implementation, when the media information is a video, a playing progress bar indicating the playing progress can be displayed while the video is played, and the user switches the displayed content by adjusting the playing progress bar; that is, the target content is determined by dragging the playing progress bar.
For example, referring to fig. 9, fig. 9 is a schematic diagram of a playing progress bar provided by an embodiment of the present application, where reference numeral 1 shows a playing progress bar for controlling a playing progress of media information, and in response to an adjustment operation for the playing progress bar, the playing progress of the media information may be adjusted. In addition, the target content of the media information can be marked on the playing progress bar, and when the progress bar is dragged to the corresponding time point, a thumbnail of the target content corresponding to the current time point can be displayed so that a user can select whether to jump to the target content corresponding to the thumbnail.
In step 102, a content encryption instruction is received, where the content encryption instruction is used to instruct to encrypt all or part of the target content.
In actual implementation, after determining the target content based on step 101, the terminal may further perform a content encryption operation for the target content, where the content encryption may be performed for all of the content of the target content or may be performed for a portion of the content of the target content. That is, the terminal performs an encryption operation corresponding to the content encryption instruction on the target content based on the received content encryption instruction.
Describing the manner in which the content encryption instructions are triggered, in some embodiments, the terminal may receive the content encryption instructions by: the terminal displays an automatic encryption control in a media information display interface; and receiving a content encryption instruction in response to a triggering operation for the automatic encryption control.
In actual implementation, the terminal may receive a content encryption instruction based on the automatic encryption control, the content encryption instruction instructing automatic encryption of the target content. When the media information is a video, the video content (including voice) must be automatically encrypted from the start time point at which the target content appears in the video to the end time point at which the target content disappears from the video.
For example, referring to fig. 4, in response to the triggering operation on the "encrypt" control in the figure, a portion of the content in the currently displayed media information (i.e., the content corresponding to the time point "1:04" in the figure) may be automatically encrypted, so that this portion of the content can be controlled to be in an invisible state in the subsequent display process.
In some embodiments, the terminal may also receive the content encryption instruction by: displaying a smearing encryption control in a media information display interface; and receiving a content encryption instruction in response to a smearing operation triggered based on the smearing encryption control.
In actual implementation, the terminal can also receive a corresponding content encryption instruction through a smearing encryption control presented in the media information display interface. In practical application, when a user needs to encrypt a part of the target content, a content encryption instruction of the part of the target content can be triggered by triggering the smearing encryption control.
Referring to fig. 10, fig. 10 is a schematic diagram of smearing encryption provided by an embodiment of the present application, where reference numeral 1 indicates a smearing encryption control and reference numeral 3 indicates the target content, which is part of the data on a certain page of a presentation. In practical application, with continued reference to fig. 10, in response to a trigger operation on the "encrypt" control in the figure, the media information may be controlled to be in an encryptable state, and an encryption manner selection interface containing the two function items "automatic encryption" and "smear encryption" is presented. The process implemented by "automatic encryption" (which may also be referred to as "one-key encryption") is the same as that of the "encrypt" control in fig. 4.
In some embodiments, the terminal may also receive the smearing operation in the following manner: in response to the triggering operation on the smearing encryption control, the terminal displays an icon corresponding to each of at least one smearing tool; in response to a selection operation on a target icon among the at least one icon, the terminal displays the target smearing tool corresponding to the target icon; and the smearing operation triggered with the target smearing tool is received.
In actual implementation, after the terminal responds to the triggering operation on the smearing encryption control, the target content can be controlled to be in a smearable state, and the target smearing tool is selected to trigger the smearing operation on the target content. In practical applications, the fineness of smearing over the target content can be controlled through different types of smearing tools. For example, the smearing tool may be a "pen" for line smearing, a "brush" for smearing an arbitrary area, or a "shape brush" for smearing a regular area; the specific style of the smearing tool is not limited.
Illustratively, referring to fig. 10, in response to a trigger operation on the "smear encryption" function item indicated by numeral 1 in the figure, a smearing tool selection interface is presented; in response to a selection operation on a target smearing tool, a smearing operation is performed in the display area with the target smearing tool (for example, the "pen", which performs line smearing).
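Purely as a hedged sketch, a smearing tool might record a smear track and turn it into a mask over the target content along the following lines; the tool names, the stroke model, and the rasterisation are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[int, int]

@dataclass
class SmearTrack:
    tool: str                       # "pen" (line), "brush" (free area), "shape" (regular area)
    width: int                      # stroke width in pixels, controlling smearing fineness
    points: List[Point] = field(default_factory=list)

    def add_point(self, x: int, y: int) -> None:
        # called repeatedly while the user drags the selected smearing tool
        self.points.append((x, y))

def track_to_mask(track: SmearTrack, frame_w: int, frame_h: int):
    """Rasterise the smear track into a boolean mask marking the content to encrypt."""
    mask = [[False] * frame_w for _ in range(frame_h)]
    r = track.width // 2
    for (x, y) in track.points:
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if 0 <= y + dy < frame_h and 0 <= x + dx < frame_w:
                    mask[y + dy][x + dx] = True
    return mask
```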
In some embodiments, the terminal may also receive the content encryption instruction by: the terminal displays a frame selection encryption control in a media information display interface; responding to triggering operation for the frame selection encryption control, and controlling the target content to be in an editing state; in response to a content selection operation for a target content in an editing state, displaying a selection frame including the selected content; a content encryption instruction is received for content included in the box.
In actual implementation, the terminal can also select the region of the content in the target content through the frame selection encryption control to obtain part of the content in the target content, and receive a content encryption instruction for the content included in the frame selection when the region selection operation triggered based on the frame selection encryption control is completed.
Illustratively, referring to fig. 11, fig. 11 shows an example in which the media information is a video based on a presentation and the target content is a picture in the current presentation page. The frame selection encryption control indicated by numeral 1 is triggered, and a selection frame (a regular quadrangle or another shape) indicated by numeral 2 is presented in the display area. The starting point of the selection frame is determined and the frame is dragged so that part of the content of the current frame image of the video falls within the area formed by the dragging; a corresponding completion operation (the "v" function item indicated by numeral 3) is then triggered, at which point a content encryption instruction for the content selected within the frame can be received. When the cancel operation (the "x" function item indicated by numeral 3) is triggered, the frame-selection encryption operation can be cancelled.
Smearing and encrypting the target content with different types of smearing tools can meet encryption requirements of various accuracies, improves the universality of encryption, makes the approach suitable for encrypting various kinds of target content, and at the same time effectively improves encryption accuracy.
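For illustration only, the frame-selection interaction described above might be modelled as follows; the `BoxSelection` and `ContentEncryptionInstruction` structures are assumptions, not the embodiment's data model.

```python
from dataclasses import dataclass

@dataclass
class BoxSelection:
    x: int
    y: int
    width: int = 0
    height: int = 0

    def drag_to(self, x2: int, y2: int) -> None:
        # update the rectangle while the user drags from the starting point
        self.width, self.height = x2 - self.x, y2 - self.y

@dataclass
class ContentEncryptionInstruction:
    region: BoxSelection      # area whose content is to be covered by the floating layer
    confirmed: bool = False   # True once the "v" completion item is triggered

def confirm_box(box: BoxSelection) -> ContentEncryptionInstruction:
    """Triggered by the completion control; cancelling simply discards the box."""
    return ContentEncryptionInstruction(region=box, confirmed=True)
```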
In step 103, in response to the content encryption instruction, the encrypted content indicated by the content encryption instruction is covered with a floating layer.
In actual implementation, the terminal encrypts the target content of the media information through encryption processing logic in response to the content encryption instruction. The encryption processing logic comprises two processes: encrypting the key page corresponding to the target content in the media information, and encrypting the voice information corresponding to the target content in the media information. First, the key page corresponding to the target content is converted into a frame image sequence, and the position at which the first frame image of the sequence starts to appear in the media information and the position at which the last frame image of the sequence appears are determined (the position of a frame image can be expressed as a time point on the playing progress bar). For media information containing image-text content, a key page may occupy a plurality of consecutive frame images in the frame image sequence. For example, if, in media information based on a presentation, the presentation page to be encrypted is displayed for 2 minutes, encrypting each frame image within those 2 minutes (identical frame images may be merged and encrypted together) completes the encryption of that presentation page. Second, an encryption operation is performed on the voice information corresponding to the target content: voice recognition is performed on that voice information, and the recognized voice content is masked. The encryption of the voice data can be implemented as compression coding through the mobile terminal and the communication network, that is, PCM coding followed by regular pulse excitation-long term prediction (RPE-LTP) coding through a vocoder, with the compressed and coded voice finally output. After the image information and the voice information corresponding to the target content are encrypted, the encrypted content can be covered with a floating layer; the floating layer coverage keeps the encrypted content in an invisible state, thereby ensuring data security. For example, after the media information is encrypted, when the target content of the media information is displayed again, the floating layer over the encrypted content is displayed directly, or an encryption watermark covering the complete page is provided, and the user is informed through text and graphics that the content has been encrypted.
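The mapping from a key page to the interval that must be covered (and whose audio must be masked) can be sketched as below; `frame_times` and `page_frame_indices` are assumed inputs, and the cipher and masking steps themselves are deliberately omitted.

```python
from dataclasses import dataclass

@dataclass
class EncryptionTask:
    start_time: float    # first appearance of the key page in the media information (seconds)
    end_time: float      # last appearance of the key page (seconds)
    encrypt_audio: bool  # whether the matching speech segment is also masked

def task_for_key_page(frame_times: list, page_frame_indices: list,
                      with_audio: bool = True) -> EncryptionTask:
    """Map a key page (a run of consecutive frame indices) to a time interval.

    `frame_times[i]` is the presentation time of frame i; a frame's position can
    equally be expressed as a point on the playing progress bar.
    """
    first, last = page_frame_indices[0], page_frame_indices[-1]
    return EncryptionTask(start_time=frame_times[first],
                          end_time=frame_times[last],
                          encrypt_audio=with_audio)
```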
In some embodiments, the terminal may also receive a content encryption instruction triggered based on the automatic encryption control, and automatically adopt the floating layer to cover the whole content of the target content in response to the content encryption instruction.
In actual implementation, when the encryption type for the target content is automatic encryption, a floating layer may be employed to cover the entire target content. When the media information is a video, the duration of the floating layer display equals the duration for which the target content is displayed in the video; that is, the terminal determines the start time point and the end time point of the target content in the video, determines the display duration from these two points, and, when the target content is displayed, shows the floating layer from the start time point until it disappears at the end time point. When the media information is a document comprising a plurality of pages, the floating layer directly covers the page containing the target content.
For example, referring to fig. 12, fig. 12 is a schematic illustration of floating layer presentation provided by an embodiment of the present application, taking a video as the media information. The presentation of the target content begins at "1:18" (indicated by numeral 2) and ends just before "1:58" (indicated by numeral 3), that is, at "1:57". For the encrypted video, during playback the video is presented normally at "1:17" (before "1:18", indicated by numeral 1); from "1:18" the floating layer is presented to cover the target content, through "1:57", immediately before "1:58"; and at "1:58" the related content of the video is again presented normally.
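A minimal sketch of the playback-time check implied by the fig. 12 example, assuming the encrypted intervals are kept as (start, end) pairs in seconds; the renderer calls in the usage comment are hypothetical.

```python
def overlay_active(current_time: float, intervals: list) -> bool:
    """Return True when the floating layer must cover the picture.

    `intervals` is a list of (start, end) pairs, e.g. [(78.0, 118.0)] for the
    "1:18"-"1:58" example above; the overlay is shown from start up to, but not
    including, end, after which normal presentation resumes.
    """
    return any(start <= current_time < end for start, end in intervals)

# hypothetical usage inside a render loop:
#   if overlay_active(player.position, encrypted_intervals):
#       renderer.draw_floating_layer()
```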
In some embodiments, the terminal may further receive a content encryption instruction triggered based on the application encryption control, and in response to the content encryption instruction, display an application track of the application operation by using the floating layer, and use content covered by the application track as the encrypted content indicated by the content encryption instruction.
In practical implementations, the encryption may be applied by erasing or masking the target content by a pen touch or other encrypted masking means to protect the content. When the encryption type aiming at the target content is coating encryption, a floating layer can be adopted to display a coating track corresponding to the coating operation, namely the shape and the size of the floating layer are the same as those of the coating track. At this time, the floating layer does not cover the entire content of the target content, but only a part of the content of the target content that is subjected to the painting operation. It should be noted that, when the media information is video, the floating layer appears for the same period as the period of the smeared content presentation.
Illustratively, referring to fig. 10, clicking the "smear encryption" function item indicated by numeral 1 presents the smearing tool selection interface indicated by numeral 2; in response to a selection operation on a target smearing tool (the "pen" is selected in the figure), the image-text information presented in the media information is smeared to obtain the smear track indicated by numeral 3. When the smeared content is displayed, the smear track indicated by numeral 3 is displayed directly, so that the smeared content is in an invisible state.
In some embodiments, the terminal may further receive a content encryption instruction triggered based on the frame encryption control, and in response to the content encryption instruction, overlay the content included in the frame with a floating layer.
In actual implementation, for the target content with larger area, the terminal can realize frame selection for the target content through a frame selection encryption control. In practical application, a frame selecting tool is presented by triggering a frame selecting encryption control, then the starting point of a frame is determined, the content selected by the frame is used as target content in response to the dragging operation for the frame, when the dragging operation is completed, a content encryption instruction for the content included by the frame is received, and at the moment, the content included by the frame is covered by a floating layer.
For example, referring to fig. 11, in the example where the media information is a video based on a presentation and the target content is a picture in the current presentation page, the frame selection encryption control indicated by numeral 1 is triggered and a selection frame (a regular quadrangle or another shape) is presented in the display area. The starting point of the selection frame is determined and the frame is dragged so that part of the content of the current frame image of the video falls within the area formed by the dragging; a corresponding completion operation (the "v" function item indicated by numeral 3) is then triggered, at which point a content encryption instruction for the content selected within the frame can be received, and the content included in the frame is covered with a floating layer. When the cancel operation (the "x" function item indicated by numeral 3) is triggered, the frame-selection encryption operation can be cancelled.
According to the content encryption instruction triggered based on the frame selection encryption control, the selection operation of the content with larger area can be realized, so that the acquisition efficiency of the target content can be improved, and the encryption operation speed can be improved.
In some embodiments, the terminal may also locate the encrypted target content in the following manner: marking the position of the target content in the media information and displaying corresponding marking information; and, in the process of displaying content other than the target content, when a trigger operation on the marking information is received, displaying the encrypted content indicated by the content encryption instruction, covered with the floating layer.
In actual implementation, after encrypting the target content in the media information, operations such as editing the encrypted target content again can be performed. Such as when the user finds that the encrypted target content is inaccurate (e.g., too much, too little), the encrypted target content may be re-edited, etc. Thus, in order to be able to quickly locate the position of the encrypted target content, the position of the encrypted target content in the media information may be marked, and thus, during presentation of other content than the media information of the target content, the encrypted target content may be quickly located based on the relevant mark. Wherein the other content may be regarded as content included in the media information other than the target content. In addition, different encrypted target content may be marked by different marking patterns.
For example, referring to fig. 13, fig. 13 is a schematic diagram of a position mark of a target content provided by an embodiment of the present application, taking a video as an example of media information, in which in a playing progress bar of the video, positioning information a1-a2 and b1-b2 of 2 encrypted target contents in the video are shown in different display styles, that is, in a process of playing the video, when the video is played to a time point a1, a floating layer is presented to cover the corresponding content until the time point a2, and the floating layer is used for prompting a user that the corresponding target content of a1-a2 is encrypted, after the video is played to the time point a2, the media information is normally played until the playing time point b1 presents the floating layer to cover the corresponding content again until the time point b2, and then the media information is normally played. At this time, the user finds that the target content at a1-a2 is more than the content actually needed to be encrypted, and needs to edit the encrypted content at a1-a2 again, at this time, the user only needs to click the mark of a1-a2, or directly adjust the playing progress bar of the video to the time point a1, so that the target content at a1-a2 can be located.
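The fig. 13 marks could be looked up at playback or click time roughly as follows; the (start, end, label) tuple layout and the `player.seek` call are assumptions made purely for illustration.

```python
from bisect import bisect_right

def mark_for_position(position: float, marks: list):
    """Return the encrypted-content mark containing `position`, if any.

    `marks` is a sorted list of (start, end, label) tuples, e.g.
    [(a1, a2, "mark-A"), (b1, b2, "mark-B")] in the fig. 13 example.
    """
    starts = [m[0] for m in marks]
    i = bisect_right(starts, position) - 1
    if i >= 0 and marks[i][0] <= position < marks[i][1]:
        return marks[i]
    return None

def jump_to_mark(player, mark) -> None:
    """Clicking a mark relocates playback to the start of the encrypted content."""
    player.seek(mark[0])   # `player` is an assumed object with a seek() method
```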
Marking the encrypted target content in this way allows the user to quickly locate its position in order to modify the encrypted content or perform similar operations; the marked content can be located quickly, improving the efficiency of subsequent operations on the encrypted content.
In some embodiments, after covering the encrypted content indicated by the content encryption instruction with the floating layer, the terminal may also cancel the corresponding encryption operation by: and in response to the revocation operation for the content encryption instruction, removing the floating layer which is covered on the content, and displaying the content in a visible state.
In actual implementation, after the encryption operation is performed on the target content, the encryption on the target content may also be revoked, i.e., the overlay floating layer is removed, in response to the revocation operation on the content encryption instruction, so that the target content is in a visible state.
For example, referring to fig. 14, fig. 14 is a schematic view of a revocation operation provided by an embodiment of the present application, in which reference numeral 1 shows a previous revocation control, clicking on "previous revocation" may revoke an encryption operation performed on a target content last time, and reference numeral 2 shows a next revocation control, clicking on "next revocation" may revoke an encryption operation performed on the target content next time; clicking the "one-click revocation" control, shown by number 3, may revoke the encryption operation performed on the content of the media information during the present media information presentation.
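One possible reading of the "previous revocation", "next revocation", and "one-click revocation" controls is an undo/redo history like the sketch below; treating "next revocation" as a redo of the most recently revoked operation is an assumption, since the embodiment does not spell this out.

```python
class EncryptionHistory:
    """Illustrative undo model for the revocation controls of fig. 14."""

    def __init__(self):
        self._applied = []    # encryption operations in the order they were applied
        self._undone = []     # operations undone and available again

    def apply(self, operation) -> None:
        self._applied.append(operation)
        self._undone.clear()

    def undo_previous(self):
        if self._applied:
            op = self._applied.pop()
            self._undone.append(op)
            return op                      # caller removes the matching floating layer

    def redo_next(self):
        if self._undone:
            op = self._undone.pop()
            self._applied.append(op)
            return op

    def undo_all(self):
        ops, self._applied = self._applied[::-1], []
        self._undone.clear()
        return ops                         # one-click revocation for this session
```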
By providing different types of revocation controls, the encrypted content can be revoked with different granularities, and the media information can be timely restored.
In step 104, in response to the content generation instruction, target media information is generated, and when the content in the target media information is presented, the content is controlled to be in an invisible state.
In actual implementation, after the encryption operation on the target content of the media information is completed, the target media information including the encrypted content may be generated, and the generated target media information may be stored for other users to perform related operations. When the target media information is presented again, the encrypted target content is in an invisible state. Thus, the security of the media information can be effectively improved.
In some embodiments, when the media information is video, the target content is one frame image of the video, and the content indicated by the content encryption instruction is target image content included in the frame image. The terminal may control the content to be in an invisible state by: in the process of playing the video, when a target video clip in the video is played and the target video clip comprises a plurality of frame images containing target image contents, the target image contents in the frame images are controlled to be in an invisible state.
In practical implementation, when the media information is video, because the media information is formed by a plurality of frame images with time sequence relation, if the encrypted content indicated by the content encryption instruction received by the terminal is the target image content included in the frame images, the terminal needs to acquire all frame images corresponding to the current target image content in the video, and in the process of playing the video, the frame images including the target image content are controlled to be in an invisible state, and corresponding voice information is shielded (namely, the voice related to the target image content is in an inaudible state).
For example, referring to fig. 15, fig. 15 is a schematic diagram of content units provided by an embodiment of the present application. The video shown in the figure is a conference recording, the content unit is a frame image, and the video presents a 15-page annual summary presentation. Suppose the content of page 8, i.e. the content indicated by numeral 2 in the frame image at the time point "1:24" of the video, is encrypted. Since the content indicated by numeral 2 appears on page 8 of the presentation corresponding to the video, the start time point and the end time point of the appearance of page 8 in the video (whose total playing duration is 3 minutes 45 seconds) are acquired; assume the start time point is "1:10" (indicated by numeral 3) and the end time point is "2:38" (indicated by numeral 4). Then each frame image in the period from "1:10" to "2:38" is in an invisible state during playback (i.e., blocked by a floating layer), and at the same time the voice in the period from "1:10" to "2:38" is masked (i.e., the voice is in an inaudible state).
In some embodiments, when the media information is video, the terminal may mask the audio segment corresponding to the target video segment by: the terminal acquires audio clips corresponding to a plurality of frame images; encrypting the content of the audio fragment to obtain an encrypted target audio fragment; in the process of playing video, when playing to the target audio clip, the target audio clip is masked.
In practical implementation, when the media information is a video and the frame images of the target video segment that contain the target image content are controlled to be in an invisible state, the terminal also needs to encrypt the audio segment (the target audio segment) corresponding to those frame images, and when playback of the video reaches the target audio segment, the terminal masks it directly. In practical application, when playback reaches the target audio segment, the terminal can replace the segment with another audio segment of equal duration, or it can directly control the target audio segment to be in a mute state. In addition, the terminal can determine the target audio segment by means of voice recognition, and the encryption processing logic for the target audio segment can be implemented as follows: performing compression coding on the target audio segment by combining PCM coding and RPE-LTP coding, and outputting the compressed and coded audio segment.
Illustratively, a video conference M (including 48 presentation files) with a duration of 36 minutes, where the 3 rd presentation file P is a protected content (i.e. needs encryption processing), a frame image sequence Q of the 3 rd presentation file in the video M is determined, assuming that 25 frame images are included, an audio clip corresponding to the 25 frame images is determined, compression encryption is performed on the obtained audio clip, and masking processing is performed on the target audio clip.
In some embodiments, referring to fig. 16, fig. 16 is a flowchart of an audio clip masking process provided in an embodiment of the present application, and the steps shown in fig. 16 are described.
Step 201, when the media information is video, the terminal acquires an audio file in the video, and performs semantic recognition on the audio file to obtain recognized content.
In actual implementation, the terminal performs semantic understanding on the audio file corresponding to the video through voice recognition to obtain the voice content of the video. The terminal can identify the pronunciation phonemes in the audio file through a voice recognition model built on the GMM-HMM model, look up the corresponding Chinese characters (words) or English words in the dictionary, and thereby obtain the recognized content.
Step 202, based on the encrypted content indicated by the content encryption instruction, searching in the identified content to determine an audio clip matching the encrypted content indicated by the content encryption instruction.
In actual implementation, the terminal analyzes the content encryption instruction to obtain the encrypted content indicated by the content encryption instruction, retrieves the encrypted content indicated by the content encryption instruction from the voice content obtained in step 201, and determines the specific position of the audio clip by corresponding to the start time point and the end time point of the audio clip in the video, and takes the audio clip as the matched audio clip.
And 203, encrypting the content of the audio fragment to obtain an encrypted target audio fragment.
In actual implementation, the terminal encrypts the resulting audio piece (i.e., performs masking processing on the audio piece). The implementation logic of encryption can be to perform compression coding processing on the target audio segment by combining a PCM coding mode and an RPE-LTP coding mode, and output the target audio segment after compression coding.
In step 204, during playback of the video, when playback reaches the target audio segment, the target audio segment is masked. In practical implementation, when playback reaches the target audio segment, the terminal can replace it with another audio segment (such as advertisement audio, a busy tone, or a looped voice prompt "this audio has been masked") so as to mask the target audio segment; alternatively, the target audio segment can be controlled to be in a mute state and prompt information about the masked audio segment displayed, so as to inform the user that the target audio segment has been encrypted. When the image-text information in the video corresponding to the target audio segment is also encrypted content, the terminal controls that image-text information to be in an invisible state while masking the target audio segment.
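Steps 201–204 can be tied together in a short sketch, assuming a speech recogniser that returns timed text segments and audio held as a list of samples; silencing the samples stands in for the PCM/RPE-LTP compression coding described elsewhere, which is not reproduced here.

```python
from typing import Callable, List, Tuple

def mask_audio_for_content(samples: List[float], sample_rate: int,
                           recognise: Callable[[List[float], int], List[Tuple[str, float, float]]],
                           protected_text: str) -> List[float]:
    """Steps 201-204 in miniature.

    `recognise` stands in for a speech-recognition model (e.g. one built on
    GMM-HMM); it is assumed to return (text, start_s, end_s) segments.
    Matching segments are silenced here; a real client could instead substitute
    another clip of equal duration or a spoken notice that the audio is masked.
    """
    segments = recognise(samples, sample_rate)                       # step 201
    out = list(samples)
    for text, start_s, end_s in segments:
        if protected_text.lower() in text.lower():                   # step 202
            lo, hi = int(start_s * sample_rate), int(end_s * sample_rate)
            for i in range(lo, min(hi, len(out))):                   # steps 203-204
                out[i] = 0.0                                         # mute the target audio clip
    return out
```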
In some embodiments, when the terminal displays the content in the target media information, the terminal displays an encryption prompt, where the encryption prompt is used for prompting that the content corresponding to the current display position is encrypted.
In actual implementation, when the terminal displays the content of the target media information and reaches the encrypted target content, it can display prompt information indicating that the content at the current display position has been encrypted. The encryption prompt information may be at least one of text, image, or animation information, where the image or animation information may serve a promotional purpose, such as playing an animation related to the content of the media information; the embodiment of the present application does not limit the specific form of the encryption prompt information.
For example, referring to fig. 17, fig. 17 is a schematic diagram of an encryption hint provided by an embodiment of the present application, where the text type encryption hint shown by reference numeral 1 is "the content has been encrypted"; the animation type information shown in number 2, which may be set by a publisher of media information, may be related advertisements for service promotion.
In some embodiments, after generating the target media information, the terminal may also perform the following target operations: the terminal receives a target operation instruction for indicating to execute target operation on target media information, wherein the target operation comprises one of the following steps: sharing operation, uploading operation and exporting operation; and responding to the target operation instruction, and executing target operation on the target media information.
In actual implementation, operations such as sharing, uploading, exporting and the like for the target media information can be performed for the generated target media file, so that other users can perform other operations such as viewing and the like for the target media file.
For example, referring to fig. 18, fig. 18 is a schematic diagram of a target operation provided by an embodiment of the present application, in which after a target media file is generated, a user clicks a "share" control shown in a number 1, so that target media information may be shared with other users having social relationships with the current user; clicking number 2 shows that the "upload" control can save the target media file to the cloud; click number 3 shows that the "export" control may save the target media file to the local terminal in the target format.
The target operation executed for the target media information can effectively increase the applicable scenes of the target media information.
In some embodiments, the terminal may also determine the target object before generating the target media information by: responding to the permission setting instruction, and displaying at least one object with social relation with the current object; in response to an object selection operation for at least one object, the selected object is determined to be a target object.
In actual implementation, when the target operation performed on the target media information involves a corresponding target object, before the target media information is generated, corresponding permissions may be set for one or more other objects having a social relationship with the current object, so that the target operation produces different results for different objects. For example, when a sharing operation is performed on target media information containing encrypted content, different rights can be set for the objects indicated by the sharing operation: for some objects, the encrypted target content remains in an invisible state when their terminals display the target media information; for others, the encrypted target content is in a visible state when their terminals display the target media information. That is, a whitelist and a blacklist may be set for the other objects having a social relationship with the current object.
For example, referring to fig. 19, fig. 19 is a schematic view of an object rights setting interface provided by an embodiment of the present application, where a current object clicks on a "rights setting" function item, and the rights setting interface shown in the figure is presented, and a number 1 in the interface shows a plurality of other objects having social association with the current object, where the current object may set whether the encrypted content in the media information may be normally displayed for each object shown in the number 1. The current object adds "object 3", "object 5" and "object 7" to the rights whitelist, so that when the object 3 displays the target media information corresponding to the media information, the encrypted content can be controlled to be in a visible state (i.e. normally displayed).
Accordingly, in some embodiments, after generating the target media information, the terminal may also control the presentation state of the target media information as follows: when a first sharing instruction is received, the first sharing instruction instructing the target media information to be shared with the target object, the target media information is shared with the first terminal of the target object, so that the first terminal keeps the encrypted content in an invisible state when displaying the content of the target media information; when a second sharing instruction is received, the second sharing instruction instructing the target media information to be shared with objects other than the target object among the at least one object, the target media information is shared with the second terminals of those other objects, so that the second terminals control the content to be in a visible state when displaying the content of the target media information.
In actual implementation, when the target operation for the target media information is a sharing operation, if the determined target object for receiving the target media information is in a permission blacklist set by the current object for the media information, displaying a floating layer for covering the target content when the target object displays the target media information through the terminal where the target object is located, namely, the target content is in an invisible state at the moment; when the target object is in a right white list set by the current object for the media information, the target object displays the target media information through the terminal, and normally displays the target content, namely the target content is in a visible state.
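A minimal sketch of the whitelist-based sharing decision, assuming the whitelist is a simple set of object identifiers attached to the target media information; the payload shape is illustrative only.

```python
def visible_for(viewer_id: str, whitelist: set) -> bool:
    """Decide whether the encrypted content is shown normally to a given viewer.

    Objects on the whitelist (e.g. "object 3", "object 5", "object 7" in fig. 19)
    see the encrypted content in a visible state; everyone else sees the floating layer.
    """
    return viewer_id in whitelist

def share(target_media, viewer_id: str, whitelist: set) -> dict:
    # illustrative payload attached to the shared copy; the receiving client
    # would use the flag to decide whether to remove the covering floating layer
    return {"media": target_media, "show_encrypted": visible_for(viewer_id, whitelist)}
```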
By applying the embodiment, the position of the target content in the media information is determined through various positioning modes in the display process of the media information, so that the searching efficiency of the target content is improved, and the encryption efficiency is improved; the target content is encrypted in response to a content encryption instruction for indicating to encrypt all or part of the target content of the media information, so that encryption operation for part of the content in the media information can be realized, the accuracy of the encrypted content is improved, and the accuracy of the encryption operation is improved; based on the content generation instruction, generating target media information, and controlling the encrypted content to be in an invisible state by adopting a floating layer coverage mode in the target media information display process, so that the man-machine interaction experience can be improved on the premise of ensuring the data security.
Next, a method for processing media information provided by the embodiment of the application is described by taking a terminal as a receiving end embodiment of target media information. Referring to fig. 20, fig. 20 is a flowchart of a method for processing media information according to an embodiment of the present application, where the method for processing media information according to the embodiment of the present application includes:
In step 301, the receiving end acquires target media information, where the target media information includes a plurality of continuous content units, and part of content in the plurality of content units is encrypted by adopting a floating layer coverage manner.
In actual implementation, the target media information acquired by the terminal contains a plurality of continuous content units, and part of the content is encrypted. Taking as an example that the target media information is a video containing encrypted content, the corresponding content units may be image frames, shots, scenes, etc.; taking the example that the target media information is a multi-page document containing encrypted content, the content units may be paragraphs, pages, etc.
Step 302, in response to the display instruction for the target media information, displaying the content included in the target media information.
In practical implementation, the terminal is deployed with a client for playing the media information, and when receiving a display instruction for the target media information, the terminal displays the target media information in the media information interface. The manner of receiving the display instruction may be that the user triggers a display function item for displaying media information.
Step 303, when the encrypted partial content is presented, the partial content is controlled to be in an invisible state.
In actual implementation, when the terminal displays the encrypted part of the content in the process of displaying the target media information, the display floating layer shields the encrypted content, namely, the part of the content is controlled to be in an invisible state.
In some embodiments, the partial content may be controlled to be in a visible state in the following manner: displaying an identity verification function item; and, in response to an identity verification operation for the current object triggered through the identity verification function item, controlling the partial content to switch from the invisible state to the visible state when the identity verification of the current object passes.
In actual implementation, since rights are set for the objects (users) receiving the target media information when the target media information is generated, the encrypted content can be displayed normally when an object on the rights whitelist displays the target media information through its terminal. That is, when the object displaying the target media information is on the rights whitelist, the target media information can be displayed normally. In practical application, to further ensure the security of the media information, the identity of the current object at the receiving end may be verified before the receiving end displays the media information; the verification may rely on underlying system-level authorization granted by the user or on verification of user information entered by the user. When the identity verification passes, the target media information can be displayed normally, that is, the partial content is switched from the invisible state to the visible state.
Illustratively, referring to fig. 21, the object indicated by reference numeral 1 displays the media information corresponding to the "2020 final meeting" on its terminal. When the content corresponding to the time point "1:18" is displayed, a floating layer is shown to block the related content and to prompt that the content has been encrypted. At this point, the object of numeral 1 may perform identity verification by clicking the "identity verification" function item indicated by numeral 2 in the display interface, which presents the verification interface indicated by numeral 3; that is, identity confirmation information is sent to the publisher of the media information (whose avatar is indicated by numeral 4) by scanning the two-dimensional code in the verification interface. After the identity verification passes (the publisher and the object of numeral 1 have a social association), the content corresponding to the time point "1:18" is controlled to be in a visible state (that is, displayed normally, as indicated by numeral 4). When the identity verification does not pass, a message prompt interface such as "Cannot play: you do not have permission to view this content; please contact the publisher" can be popped up.
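On the receiving end, the verification-then-reveal flow of fig. 21 could be sketched as follows; `verify_identity` is a stand-in for whichever check the publisher configures, and the rendering callbacks are hypothetical names introduced only for this example.

```python
def present_protected_content(viewer_id: str, verify_identity, render_plain, render_overlay):
    """Receiving-end flow: content stays invisible until identity verification passes.

    `verify_identity` stands in for the configured check (system-level
    authorisation, scanning the code in the verification interface, or entering
    user information); it is assumed to return True or False.
    """
    if verify_identity(viewer_id):
        render_plain()        # switch the encrypted part from invisible to visible
    else:
        render_overlay("The content is encrypted. Please contact the publisher.")
```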
By means of the method for verifying the identity of the object receiving the target media information, the encrypted content can be prevented from being propagated and leaked, and the safety of the encrypted content is improved.
By applying the embodiment, when the target media message containing the protected content is displayed, the target object can be accurately screened by carrying out corresponding identity verification on the target object, so that the protected content can be prevented from being propagated and leaked, and the safety of the protected content is improved.
In the following, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In the related art, video protection encryption falls into three schemes: download prevention, screen-recording prevention, and hotlink prevention (anti-theft chain). The download-prevention scheme mainly adopts a file slicing technique, that is, an uploaded video is sliced into countless small fragments and a different encryption algorithm is applied to each fragment. With such encryption, even if the video is downloaded it cannot be played back completely, because the critical data sequence of the video has been scrambled.
The screen-recording prevention scheme mainly relies on measures such as adding a scrolling ticker and prohibiting browser screen recording; although screen recording cannot be completely prevented, these measures increase the difficulty of recording for pirates. In addition, an educational institution or an individual teacher can add the institution's logo or trademark to the live video to help identify anyone who steals the video; it should be noted that such a logo or trademark should be neither too conspicuous nor so faint as to go unnoticed. The hotlink-prevention (anti-theft chain) scheme mainly restricts the identity of the viewer through custom functions of the live-streaming backend, such as whitelisting, identity-verified viewing, and pay-per-view, thereby effectively suppressing video piracy. Meanwhile, to prevent viewers of pirated videos from watching the live video, users also need to pass identity verification; that is, a different address can be generated for the video every millisecond, and each address allows only a single viewing, preventing illegal propagation during playback.
In the scheme, the related video encryption technology only aims at the whole video file and can not encrypt or smear part of information of the content in the video file; in addition, the video encryption technology cannot locate specific video contents through key information in the conference process, and smearing protection of part of information is implemented.
In view of the above, an embodiment of the present application provides a method for processing media information that protects video information, such as conference recordings or classroom videos, by encryption and smearing. By recognizing emphasized speech in a live conference scene, or by reviewing the subsequently generated video, video content or image-text information appearing in the video can be encrypted and smeared before the user generates the cloud storage record; the result is re-synthesized into a new video recording and then uploaded and stored in the cloud, so that the user's content remains protected during subsequent transmission and review.
The usage scenario and processing flow of the media information processing method are described next. When a user uses video conferencing (live-streaming) software, keywords mentioned during the conference, such as "secret", "important", and "key", as well as the semantics they convey (for example, someone mentioning that "sensitive data cannot be passed on"), can be recognized. After the video conference ends, the user can locate the key pages where these keywords were mentioned in the generated video recording temporarily stored in the cloud, and directly encrypt the whole video page of the current frame or smear part of its information. Alternatively, the user can smear the pages or partial information to be encrypted by adjusting the progress bar of the video conference recording. During encryption, the user can undo encryption/smearing step by step or all at once with one key. Finally, clicking the record-generation button stores the current video conference recording in the cloud (if the user performs no operation, the recording is stored directly in the cloud within a limited period). Once stored in the cloud, the recording can be transmitted or shared; when it is shared with others, the conference video administrator can choose, by editing the sharing permission, whether to share it encrypted or unencrypted.
Next, the interactive flow of the media information processing method is described from the product side. Referring to fig. 22, fig. 22 is a flowchart of a media information processing interface provided by an embodiment of the present application. In part (1) of the figure, a video conference recording already temporarily stored in the cloud is played; the video pauses at the position of a keyword spoken during the conference, or automatically locates to a position determined from the semantics understood during the conference, and the smearing position can also be confirmed by dragging the video playing progress bar. Part (2) shows that after the user clicks the "encrypt" button, the video file is placed in an encryptable state, and the "one-key encryption" (the automatic encryption described above) and "smear encryption" function items are presented in the video playing interface. In response to a click on the "one-key encryption" button, the presentation page shown at the current position of the video is encrypted, that is, all frames related to that presentation page are encrypted; the "undo" state is activated at this point, and the user can use it to revoke the previous step. After the user clicks "complete", the video file is encrypted successfully and a "cancel" button appears; clicking it cancels the preceding one-key encryption. The user can then click "encrypt and save" to save the current video recording. The user may also drag the playing progress bar of the video to another playing time point, "01:18" (another located position), to start encryption there: in response to another click on the "encrypt" button as in part (2), the video information is placed in an encryptable state and the "one-key encryption" and "smear encryption" function items are presented. Part (4) shows that, in response to a click on the "smear encryption" button, a "pencil" icon is presented, and important information can be covered by clicking or by smearing manually. In response to a click on the "done" button, the video information is encrypted successfully and the "undo" button is presented; in response to a click on the "undo" button, the encryption operation on the video information can be cancelled with one key. In addition, in response to a click on "encryption save", the current video information is saved: the encrypted video information (also referred to as target video information) is generated and stored in the cloud, after which operations such as sharing and exporting can be performed on it.
Next, from the technical implementation aspect, a hardware environment, implementation logic, and a processing flow of data of the media information processing method provided by the embodiment of the present application are described. Referring to fig. 23, fig. 23 is a schematic diagram of a processing method of media information according to an embodiment of the present application.
First (the first step in the figure), in the video conference (live-streaming) client of any mobile or desktop terminal, keywords such as "confidential", "important", and "key" mentioned by a presenter are automatically identified through intelligent voice recognition during the live video conference. After the preliminary video conference recording is generated, the key video page (one or more consecutive frame images) where the keywords appear is searched for automatically using artificial intelligence (AI); alternatively, an AI algorithm performs semantic understanding over the entire conference video and selects the key frames of the pages where the key information appears. After the keywords are identified, the time-sequence information is modeled with the HMM in the GMM-HMM acoustic model; given an HMM state, the GMM models the probability distribution of the voice feature vectors belonging to that state. The recognized pronunciation corresponds to pinyin for Chinese characters and to phonetic symbols for English words. According to the phonemes recognized by the acoustic model, the corresponding Chinese characters (words) or English words are found in the dictionary, establishing a bridge between the acoustic model and the language model and connecting the two. Finally, the video segments where the keywords appear are located in the generated conference recording, and playback pauses at the current key page for display (that is, the view jumps to the key frames corresponding to the keywords).
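A hedged sketch of the keyword-to-key-frame location step, assuming the recogniser's output is available as timed text segments; it does not model the GMM-HMM acoustic model itself, only the lookup built on its output.

```python
def locate_key_frames(transcript_segments, frame_times,
                      keywords=("secret", "important", "key")):
    """Find frames whose narration contains sensitive keywords.

    `transcript_segments` is assumed to be the recogniser's output as
    (text, start_s, end_s) tuples; `frame_times` maps frame index -> seconds.
    Returns, per matched segment, the indices of the frames shown while the
    keyword was spoken, i.e. the candidate key page.
    """
    hits = []
    for text, start_s, end_s in transcript_segments:
        if any(k in text.lower() for k in keywords):
            frames = [i for i, t in enumerate(frame_times) if start_s <= t <= end_s]
            hits.append((text, frames))
    return hits
```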
Second (the second step in the figure), the key page in the video information is converted into a plurality of pictures with a time-sequence relation, and the playing progress bar of the video information is adjusted to the time point of the key page. For encryption of a complete page, the pictures can be encrypted directly with an encryption algorithm, and the image-text information on the same page of the live video conference, such as pictures, presentation content, and annotations, is automatically merged and encrypted. Specifically, a scalable lightweight video encryption algorithm based on H.264/AVC (a video compression format) addresses the diversity of video application scenarios, security requirements, and computing resources, and can meet the needs of most media application platforms. Balancing the mutually constraining factors of security, encryption speed, and compression ratio, key data such as the intra-frame prediction modes, motion vector differences, and luminance quantized transform coefficients are encrypted with a standard cryptographic algorithm. In practical application, the algorithm can achieve several encryption levels from low to high according to the security requirements and available computing resources, with little change in compression ratio, relatively low computational complexity, and good operability. During encryption, individual frames are encrypted and identical frames can be merged and encrypted together; the encrypted content is displayed as encrypted, or an encryption watermark covering the complete page is provided, and the user is informed through text and graphics that the content has been encrypted.
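Encrypting the selected syntax elements of an H.264/AVC stream is beyond a short example; as a simplified stand-in under that caveat, the sketch below encrypts the raw bytes of the selected frames with AES-CTR from the `cryptography` package and reuses the ciphertext for identical frames, following the merging idea mentioned above. The key handling and the frame representation are assumptions for illustration.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_frames(frames: list, key: bytes) -> list:
    """Encrypt each selected frame (raw bytes); identical frames are encrypted once."""
    cache = {}
    out = []
    for frame in frames:
        digest = hashlib.sha256(frame).digest()
        if digest not in cache:                      # merge identical frames
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            cache[digest] = nonce + enc.update(frame) + enc.finalize()
        out.append(cache[digest])
    return out

# hypothetical usage: key = os.urandom(32); protected = encrypt_frames(key_page_frames, key)
```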
In addition, encryption of partial information on a key page can be realized by real-time smearing processing: any text or graphics on pictures, presentations, and annotations can be erased or masked by pen strokes or other encrypted covering modes to protect the content. The essence of smearing is to complete (inpaint) the information in the video that should not be displayed. The video completion method is based on optical flow: colors and optical flow are synthesized jointly, and colors are propagated along the optical-flow trajectories, which improves the temporal continuity of the video, alleviates memory problems, and achieves high-resolution output. With the optical-flow-based method, the edges of the moving object are first extracted and completed, and the completed edges are then used to guide the completion of the optical flow. Since not all missing regions in a video can be completed in this way, researchers have introduced non-local optical flow so that video content can be propagated across motion boundaries.
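A single-frame, hedged sketch of the smearing operation follows (in Python with OpenCV; it covers or inpaints one rectangular region per frame and does not implement the optical-flow-based video completion itself, which propagates content across frames):

```python
import cv2
import numpy as np

def smear_region(frame: np.ndarray, x: int, y: int, w: int, h: int,
                 inpaint: bool = False) -> np.ndarray:
    """Erase or mask a rectangular region of a frame.

    With inpaint=False the region is covered with an opaque block (the
    floating-layer style of masking); with inpaint=True the region is filled
    from its surroundings, a per-frame stand-in for video completion.
    """
    out = frame.copy()
    if inpaint:
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        mask[y:y + h, x:x + w] = 255
        out = cv2.inpaint(out, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    else:
        cv2.rectangle(out, (x, y), (x + w, y + h), color=(0, 0, 0), thickness=-1)
    return out

# Example: mask a 200x80 region starting at (50, 40) in a dummy white frame.
frame = np.full((480, 640, 3), 255, dtype=np.uint8)
masked = smear_region(frame, 50, 40, 200, 80)
```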
Meanwhile, the voice information in the video also needs to be encrypted. Specifically, speech recognition is performed on the voice information to obtain the speech content; when an entire page of content is encrypted, the corresponding speech is masked as a whole, and when a single piece of information is encrypted, only the speech covering that point or short passage is masked. The encryption processing of the data consists of compression coding performed through the mobile terminal and the communication network: PCM coding is followed by RPE-LTP coding, and the compressed and coded speech is finally output.
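As a hedged sketch of masking only the sensitive speech segment (in Python; the segment boundaries would come from the speech recognition step above, and re-encoding such as RPE-LTP after PCM is omitted):

```python
import numpy as np

def mask_audio_segment(samples: np.ndarray, sample_rate: int,
                       start_sec: float, end_sec: float) -> np.ndarray:
    """Silence the PCM samples that fall inside a sensitive speech segment.

    A full implementation would re-encode the masked audio afterwards;
    here the segment is simply zeroed out.
    """
    out = samples.copy()
    start = int(start_sec * sample_rate)
    end = int(end_sec * sample_rate)
    out[start:end] = 0
    return out

# Example: silence the speech between 10.2 s and 10.9 s of 16 kHz mono audio.
audio = np.random.randint(-32768, 32767, size=16000 * 20, dtype=np.int16)
masked_audio = mask_audio_segment(audio, 16000, 10.2, 10.9)
```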
Third (the third step in the figure), after part of the information in the video content is encrypted or smeared, the real-time processing results are re-integrated to form a new video record. The encryption can be undone by canceling the last step or by canceling all encryption with one key, and the encrypted video information can then be re-stored in the cloud.
Finally (the fourth step in the figure), for the sharing operation on the generated encrypted video information, encrypted sharing or non-encrypted sharing can be selected. In the sharing process, the encrypted video information stored in the cloud is encoded and converted into a corresponding link. The video is converted into a link according to the target object (user) that the sharing operation indicates should be granted access, and the encryption is revoked by reverse decoding of the previous encryption.
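As an illustrative sketch of turning an encrypted cloud recording into a shareable link bound to a target object (in Python using only the standard library; the domain, secret, and field names are placeholders and not part of the described system):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side signing secret"  # illustrative only

def make_share_link(video_id: str, target_user: str, encrypted: bool,
                    ttl_sec: int = 7 * 24 * 3600) -> str:
    """Encode an encrypted cloud recording into a shareable link.

    The token records which user the link was issued to and whether the
    encrypted or the plain variant should be served; the server verifies
    the HMAC before resolving the link.
    """
    payload = {
        "video_id": video_id,
        "user": target_user,
        "encrypted": encrypted,
        "exp": int(time.time()) + ttl_sec,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"https://example.invalid/share/{body}.{sig}"

print(make_share_link("rec-20220607-001", "alice", encrypted=True))
```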
By applying the embodiments of the present application, part of the content in a video file can be encrypted or smeared, including but not limited to image and text information such as pictures, presentation files, and annotations, so that the recorded video information is protected from being stolen or misused in subsequent propagation and sharing.
The embodiments of the present application encrypt or smear part of the information in the video content of a recording temporarily stored in the cloud, including but not limited to image and text information such as pictures, presentations, and annotations, by encrypting and smearing the generated video conference record during and after the conference, so as to protect the recorded information from being stolen or misused when it is propagated and shared after being stored in the cloud.
The embodiments of the present application involve related data such as user information. When the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use, and processing of the related data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Continuing with the description of an exemplary architecture of the media information processing device 555 provided by the embodiments of the present application and implemented as software modules, in some embodiments, as shown in fig. 2A, the software modules stored in the media information processing device 555 of the memory 540 may include the following (an illustrative sketch of this module structure is given after the list):
The display module 5551 is configured to display, in a media information display interface, target content of media information, where the target content is part of content included in the media information;
a receiving module 5552, configured to receive a content encryption instruction, where the content encryption instruction is configured to instruct to encrypt all or part of the target content;
an overlay module, configured to cover, in response to the content encryption instruction, the content indicated to be encrypted by the content encryption instruction with a floating layer;
a generating module 5553, configured to generate target media information in response to a content generation instruction, and to control the encrypted content to be in an invisible state when the content in the target media information is presented.
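As noted above, an illustrative sketch of this module structure follows (in Python; the method names, region encoding, and data layout are placeholders chosen for the sketch, not the actual interfaces of device 555):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MediaInfoProcessingDevice:
    """Minimal sketch of the module split of device 555 (fig. 2A).

    Overlay regions are kept as (content_unit_index, x, y, w, h) tuples;
    everything else stands in for the real presentation logic.
    """
    overlays: List[Tuple[int, int, int, int, int]] = field(default_factory=list)

    # Display module 5551: show the target content in the interface.
    def display_target_content(self, media_info, unit_index: int):
        return media_info["units"][unit_index]

    # Receiving module 5552: accept a content encryption instruction.
    def receive_encryption_instruction(self, unit_index: int, region):
        x, y, w, h = region
        return ("encrypt", unit_index, x, y, w, h)

    # Overlay module: cover the indicated content with a floating layer.
    def overlay_encrypted_content(self, instruction):
        _, unit_index, x, y, w, h = instruction
        self.overlays.append((unit_index, x, y, w, h))

    # Generating module 5553: produce target media information in which
    # the overlaid regions stay invisible when presented.
    def generate_target_media(self, media_info):
        return {"units": media_info["units"],
                "hidden_regions": list(self.overlays)}
```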
In some embodiments, the display module is further configured to display, in a media information display interface, a content search area and a media display area, and display at least one piece of recommended content in the content search area, and display content of the media information in the media display area; the recommended content is part of content in the content included in the recommended media information; in the process of displaying the content, responding to a selection operation of target recommended content in the at least one recommended content, skipping the displayed content to first content comprising the target recommended content, and taking the first content as the target content.
In some embodiments, the display module is further configured to display, in an associated area of each of the recommended content in the content search area, location information of the corresponding recommended content; the position information is used for indicating the display position of the recommended content in the media information.
In some embodiments, the display module is further configured to display the content of the media information in a media information display interface, and display at least one keyword; in the process of displaying the content, responding to a selection operation for a target keyword in the at least one keyword, skipping the displayed content to second content, and taking the second content as the target content; and the second content is obtained by searching the content included in the media information based on the target keyword.
In some embodiments, the display module is further configured to display, in a media information display interface, content of the media information and display a content search function item; in the process of displaying the content, responding to a search instruction for input content triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as the target content; and the third content is obtained by searching for the content included in the media information based on the input content.
In some embodiments, the display module is further configured to display, in a media information display interface, a content search area and a media display area, and display at least one content thumbnail in the content search area, and display content of the media information in the media display area; wherein the content thumbnail is a thumbnail of a content unit of the media information; in the process of displaying the content, responding to a selection operation of a target content thumbnail in the at least one content thumbnail, skipping the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In some embodiments, when the media information is video information, the display module is further configured to play the content of the video in the media information display interface and display a play progress bar for indicating the play progress of the video; and, in the process of displaying the content, in response to a progress adjustment operation triggered based on the play progress bar, to jump the displayed content to fifth content indicated by the progress adjustment operation, and take the fifth content as the target content.
In some embodiments, the overlay module is further configured to mark a location of the target content in the media information, and display corresponding mark information; and displaying the content which is encrypted by the content encryption instruction covered by the floating layer when the triggering operation for the marking information is received in the process of displaying other content which is different from the target content.
In some embodiments, the receiving module is further configured to display an automatic encryption control in the media information presentation interface, and to receive a content encryption instruction in response to a triggering operation for the automatic encryption control.
Accordingly, in some embodiments, the overlay module is further configured to automatically overlay the entire content of the target content with a floating layer in response to the content encryption instruction.
In some embodiments, the receiving module is further configured to display a smearing encryption control in the media information presentation interface, and to receive a content encryption instruction in response to a smearing operation triggered based on the smearing encryption control.
In some embodiments, the overlay module is further configured to display, in response to the content encryption instruction, the smearing track of the smearing operation using a floating layer, and to take the content covered by the smearing track as the encrypted content indicated by the content encryption instruction.
In some embodiments, the overlay module is further configured to display icons corresponding to at least one smearing tool in response to a triggering operation for the smearing encryption control; to display, in response to a selection operation for a target icon among the at least one icon, a target smearing tool corresponding to the target icon; and to receive the smearing operation triggered based on the target smearing tool.
In some embodiments, the receiving module is further configured to display a frame encryption control in the media information presentation interface; responding to the triggering operation of the frame selecting encryption control, and controlling the target content to be in an editing state; responsive to a content selection operation for the target content in the editing state, displaying a selection frame including the selected content; and receiving a content encryption instruction for the content included in the selection box.
Accordingly, in some embodiments, the overlay module is further configured to overlay, with a floating layer, content included in the frame in response to the content encryption instruction.
In some embodiments, when the media information is a video, the target content is a frame image of the video, and the content indicated by the content encryption instruction is target image content included in the frame image; the generating module is further configured to, in the process of playing the video, when a target video clip being played includes a plurality of frame images containing the target image content, control the target image content in each frame image to be in an invisible state.
In some embodiments, the overlay module is further configured to obtain the audio clips corresponding to the plurality of frame images, and to encrypt the content of the audio clips to obtain encrypted target audio clips; in the process of playing the video, when the video is played to a target audio clip, the target audio clip is masked.
In some embodiments, the overlay module is further configured to, when the media information is a video, obtain an audio file in the video, and perform semantic recognition on the audio file to obtain recognized content; retrieving, based on the encrypted content indicated by the content encryption instruction, in the identified content to determine an audio clip that matches the encrypted content indicated by the content encryption instruction; encrypting the content of the audio fragment to obtain an encrypted target audio fragment; and in the process of playing the video, when the video is played to the target audio fragment, shielding the target audio fragment.
In some embodiments, the overlay module is further configured to display encryption prompt information when the content in the target media information is presented; the encryption prompt information is used for prompting that the content corresponding to the current display position is encrypted.
In some embodiments, the media information processing device further includes an execution module, configured to receive a target operation instruction that instructs performing a target operation on the target media information, the target operation including one of a sharing operation, an uploading operation, and an exporting operation; and, in response to the target operation instruction, to execute the target operation on the target media information.
In some embodiments, before generating the target media information in response to the content generation instruction, the generating module is further configured to display, in response to a rights setting instruction, at least one object having a social relationship with the current object, and to determine, in response to an object selection operation for the at least one object, the selected object as a target object.
In some embodiments, after generating the target media information, the generating module is further configured to: when a first sharing instruction is received that instructs sharing the target media information to the target object, share the target media information to a first terminal of the target object, so that the first terminal controls the content to be in an invisible state when displaying the content in the target media information; and when a first sharing instruction is received that instructs sharing the target media information to an object other than the target object among the at least one object, share the target media information to a second terminal of the other object, so that the second terminal controls the content to be in a visible state when displaying the content in the target media information.
In some embodiments, the overlay module is further configured to remove a floating layer overlaid on the content and display the content in a visible state in response to a revocation operation for the content encryption instruction.
In some embodiments, as shown in fig. 2B, the software modules stored in the media information processing device 556 of the memory 540 may include:
An acquisition module 5561, configured to acquire target media information, where the target media information includes a plurality of continuous content units, and a part of content in the plurality of content units is encrypted by adopting a floating layer coverage manner;
An information presentation module 5562, configured to present content included in the target media information in response to a presentation instruction for the target media information;
a control module 5563, configured to control the partial content to be in an invisible state when the encrypted partial content is presented.
In some embodiments, the control module is further configured to display an identity verification function item; and responding to the identity verification operation for the current object triggered based on the identity verification function item, and controlling the part of content to be switched from the invisible state to the visible state when the identity verification of the current object passes.
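As an illustrative counterpart to the sketch of device 555 above, the following Python fragment sketches the receiving-side modules of device 556 together with the identity verification function item (the "hidden_regions" layout and the verify_identity callable are assumptions carried over from that earlier sketch, not the actual interfaces of device 556):

```python
class MediaInfoPresentationDevice:
    """Minimal sketch of the receiving-side device 556 (fig. 2B)."""

    def __init__(self, verify_identity):
        # verify_identity is a callable supplied by the host application,
        # e.g. a password or account check; its implementation is assumed.
        self._verify_identity = verify_identity
        self._verified = False

    # Identity verification function item: unlock hidden content on success.
    def authenticate(self, credentials) -> bool:
        self._verified = bool(self._verify_identity(credentials))
        return self._verified

    # Acquisition + information presentation + control modules 5561-5563:
    # return the unit content plus the regions that must stay invisible.
    def present_unit(self, target_media, unit_index):
        hidden = [] if self._verified else [
            r for r in target_media["hidden_regions"] if r[0] == unit_index
        ]
        return {"content": target_media["units"][unit_index],
                "hidden_regions": hidden}
```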
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for processing media information according to the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to perform a method for processing media information provided by the embodiment of the present application, for example, a method for processing media information as shown in fig. 3.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disk, or CD-ROM; it may also be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, such as in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiments of the present application can encrypt part of the content in the media information, thereby improving the accuracy of encryption and, in turn, the encryption efficiency.
The foregoing are merely exemplary embodiments of the present application and are not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A method for processing media information, the method comprising:
Displaying target content of media information in a media information display interface, wherein the target content is part of content in the content included in the media information, and the media information is video information in a conference live broadcast scene;
receiving a content encryption instruction, wherein the content encryption instruction is used for indicating that the target content is encrypted in whole or in part;
responding to the content encryption instruction, and covering the encrypted content indicated by the content encryption instruction by adopting a floating layer;
generating target media information in response to a content generation instruction, and controlling the encrypted content indicated by the content encryption instruction to be in an invisible state when the encrypted content indicated by the content encryption instruction is displayed in the target media information;
Wherein, in the media information display interface, displaying the target content of the media information includes:
Displaying a content search area and a media display area in the media information display interface, displaying at least one piece of recommended content in the content search area, and displaying the content of the media information in the media display area, wherein the recommended content is a keyword or semantic content, automatically recommended during the live conference, that indicates content to be encrypted in the media information; in the process of displaying the content of the media information, responding to the selection operation of target recommended content in the at least one recommended content, skipping the displayed content of the media information to first content comprising the target recommended content, and taking the first content as the target content;
Or displaying the content of the media information in the media information display interface, and displaying at least one keyword; in the process of displaying the content of the media information, responding to the selection operation of a target keyword in the at least one keyword, skipping the displayed content of the media information to second content, and taking the second content as the target content, wherein the second content is obtained by searching in the content included in the media information based on the target keyword.
2. The method of claim 1, wherein the method further comprises:
Displaying the position information of the corresponding recommended content in the associated area of each recommended content in the content searching area;
the position information is used for indicating the display position of the recommended content in the media information.
3. The method of claim 1, wherein after the employing a floating layer to overlay the encrypted content indicated by the content encryption instruction, the method further comprises:
Marking the position of the target content in the media information and displaying corresponding marking information;
And displaying the content which is encrypted by the content encryption instruction covered by the floating layer when the triggering operation for the marking information is received in the process of displaying other content which is different from the target content.
4. The method of claim 1, wherein the receiving the content encryption instruction comprises:
displaying an automatic encryption control in a media information display interface;
Receiving a content encryption instruction in response to a triggering operation for an automatic encryption control;
wherein the responding to the content encryption instruction and adopting a floating layer to cover the encrypted content indicated by the content encryption instruction comprises:
and responding to the content encryption instruction, and automatically adopting a floating layer to cover the whole content of the target content.
5. The method of claim 1, wherein the receiving the content encryption instruction comprises:
Displaying a smearing encryption control in a media information display interface;
And receiving a content encryption instruction in response to a smearing operation triggered based on the smearing encryption control.
6. The method of claim 5, wherein the overlaying the encrypted content indicated by the content encryption instruction with a floating layer in response to the content encryption instruction comprises:
And responding to the content encryption instruction, displaying a smearing track of the smearing operation by adopting a floating layer, and taking the content covered by the smearing track as the encrypted content indicated by the content encryption instruction.
7. The method of claim 5, wherein the method further comprises:
responding to the triggering operation for the smearing encryption control, and displaying icons corresponding to at least one smearing tool;
Responding to a selection operation for a target icon in at least one icon, and displaying a target smearing tool corresponding to the target icon;
and receiving the smearing operation triggered based on the target smearing tool.
8. The method of claim 1, wherein the receiving the content encryption instruction comprises:
displaying a frame selection encryption control in a media information display interface;
responding to the triggering operation of the frame selecting encryption control, and controlling the target content to be in an editing state;
Responsive to a content selection operation for the target content in the editing state, displaying a selection frame including the selected content;
And receiving a content encryption instruction for the content included in the selection box.
9. The method of claim 1, wherein the target content is a frame image of the video information, and the content indicated by the content encryption instruction is target image content included in the frame image;
The controlling the encrypted content indicated by the content encryption instruction to be in an invisible state when the encrypted content indicated by the content encryption instruction is presented into the target media information includes:
In the process of playing the video information, when a target video clip in the video information is played and the target video clip comprises a plurality of frame images containing the target image content, controlling the target image content in each frame image to be in an invisible state.
10. The method of claim 9, wherein, in response to the content generation instruction, prior to generating the target media information, the method further comprises:
Acquiring audio clips corresponding to the plurality of frame images;
encrypting the content of the audio fragment to obtain an encrypted target audio fragment;
During the playing of the video information, the method further comprises:
And shielding the target audio fragment when the target audio fragment is played.
11. The method of claim 1, wherein, in response to the content generation instruction, prior to generating the target media information, the method further comprises:
acquiring an audio file in the video information, and carrying out semantic recognition on the audio file to obtain recognized content;
retrieving, based on the encrypted content indicated by the content encryption instruction, in the identified content to determine an audio clip that matches the encrypted content indicated by the content encryption instruction;
encrypting the content of the audio fragment to obtain an encrypted target audio fragment;
During the playing of the video information, the method further comprises:
And shielding the target audio fragment when the target audio fragment is played.
12. The method of claim 1, wherein the method further comprises:
Displaying encryption prompt information when the content indicated to be encrypted by the content encryption instruction in the target media information is displayed;
the encryption prompt information is used for prompting that the content corresponding to the current display position is encrypted.
13. The method of claim 1, wherein, in response to the content generation instruction, prior to generating the target media information, the method further comprises:
responding to the permission setting instruction, and displaying at least one object with social relation with the current object;
In response to an object selection operation for the at least one object, determining the selected object as a target object;
After the generating the target media information, the method further includes:
When a first sharing instruction is received, the first sharing instruction is used for indicating to share the target media information to the target object, and the target media information is shared to a first terminal of the target object, so that the first terminal controls the encrypted content indicated by the content encryption instruction to be in an invisible state when displaying the encrypted content indicated by the content encryption instruction in the target media information;
and when a first sharing instruction is received, the first sharing instruction is used for indicating to share the target media information to other objects except the target object in at least one object, and the target media information is shared to a second terminal of the other objects, so that the second terminal controls the encrypted content indicated by the content encryption instruction to be in a visible state when displaying the encrypted content indicated by the content encryption instruction in the target media information.
14. The method of claim 1, wherein after the employing a floating layer to overlay the encrypted content indicated by the content encryption instruction, the method further comprises:
In response to a revocation operation for the content encryption instruction, removing the floating layer overlaid on the content indicated to be encrypted by the content encryption instruction, and displaying the content indicated to be encrypted by the content encryption instruction in a visible state.
15. A method for processing media information, the method comprising:
Acquiring target media information, wherein the target media information is video information in a live conference scene, the target media information comprises a plurality of continuous content units, and part of the content in the content units is encrypted in a floating layer coverage mode;
responsive to a presentation instruction for the target media information, presenting content included in the target media information;
Controlling the partial content to be in an invisible state when the encrypted partial content is presented;
wherein the partial content is determined by one of the following means:
Displaying a content search area and a media display area in a media information display interface, displaying at least one piece of recommended content in the content search area, and displaying the content of the media information in the media display area; the recommended content is a keyword or semantic content, automatically recommended during the live conference, that indicates content to be encrypted in the media information; in the process of displaying the content, responding to a selection operation of target recommended content in the at least one recommended content, skipping the displayed content to first content comprising the target recommended content, and taking the first content as the part of content;
Or displaying the content of the media information in the media information display interface, and displaying at least one keyword; and in the process of displaying the content of the media information, responding to the selection operation of the target keyword in the at least one keyword, skipping the displayed content of the media information to second content, and taking the second content as the part of content, wherein the second content is obtained by searching in the content included in the media information based on the target keyword.
16. The method of claim 15, wherein the method further comprises:
displaying an identity verification function item;
And responding to the identity verification operation for the current object triggered based on the identity verification function item, and controlling the part of content to be switched from the invisible state to the visible state when the identity verification of the current object passes.
17. A device for processing media information, the device comprising:
The display module is used for displaying target content of media information in a media information display interface, wherein the target content is part of content in the content included in the media information, and the media information is video information in a conference live broadcast scene;
The display module is further used for displaying a content search area and a media display area in the media information display interface, displaying at least one piece of recommended content in the content search area, and displaying the content of the media information in the media display area; the recommended content is a keyword or semantic content, automatically recommended during the live conference, that indicates content to be encrypted in the media information; in the process of displaying the content of the media information, responding to the selection operation of target recommended content in the at least one recommended content, skipping the displayed content of the media information to first content comprising the target recommended content, and taking the first content as the target content;
The display module is further used for displaying the content of the media information in the media information display interface and displaying at least one keyword; in the process of displaying the content of the media information, responding to a selection operation of a target keyword in at least one keyword, skipping the displayed content of the media information to second content, and taking the second content as the target content, wherein the second content is obtained by searching in the content included in the media information based on the target keyword;
The receiving module is used for receiving a content encryption instruction, and the content encryption instruction is used for indicating that all or part of the target content is encrypted;
the coverage module is used for responding to the content encryption instruction and adopting a floating layer to cover the encrypted content indicated by the content encryption instruction;
And the generation module is used for responding to the content generation instruction, generating target media information and controlling the encrypted content indicated by the content encryption instruction to be in an invisible state when the encrypted content indicated by the content encryption instruction is displayed in the target media information.
18. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
A processor for implementing the method of processing media information according to any one of claims 1 to 16 when executing executable instructions stored in said memory.
19. A computer readable storage medium storing executable instructions which when executed by a processor implement the method of processing media information according to any one of claims 1 to 16.
20. A computer program product comprising computer-executable instructions or a computer program, which when executed by a processor implements the method of processing media information according to any one of claims 1 to 16.
CN202210638029.7A 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium Active CN115134635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210638029.7A CN115134635B (en) 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115134635A CN115134635A (en) 2022-09-30
CN115134635B true CN115134635B (en) 2024-04-19

Family

ID=83377858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210638029.7A Active CN115134635B (en) 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115134635B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103442300A (en) * 2013-08-27 2013-12-11 Tcl集团股份有限公司 Audio and video skip playing method and device
CN106485173A (en) * 2015-08-25 2017-03-08 腾讯科技(深圳)有限公司 Sensitive information methods of exhibiting and device
CN110719527A (en) * 2019-09-30 2020-01-21 维沃移动通信有限公司 Video processing method, electronic equipment and mobile terminal
CN110881033A (en) * 2019-11-07 2020-03-13 腾讯科技(深圳)有限公司 Data encryption method, device, equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10037413B2 (en) * 2016-12-31 2018-07-31 Entefy Inc. System and method of applying multiple adaptive privacy control layers to encoded media file types


Also Published As

Publication number Publication date
CN115134635A (en) 2022-09-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant