CN115134635A - Method, device and equipment for processing media information and storage medium - Google Patents

Method, device and equipment for processing media information and storage medium

Info

Publication number
CN115134635A
Authority
CN
China
Prior art keywords
content
target
media information
encryption
displaying
Prior art date
Legal status
Granted
Application number
CN202210638029.7A
Other languages
Chinese (zh)
Other versions
CN115134635B (en)
Inventor
陈昱志
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210638029.7A priority Critical patent/CN115134635B/en
Publication of CN115134635A publication Critical patent/CN115134635A/en
Application granted granted Critical
Publication of CN115134635B publication Critical patent/CN115134635B/en
Status: Active

Classifications

    • H04N21/2347 — Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving video stream encryption
    • H04N21/23476 — Processing of video elementary streams involving video stream encryption by partially encrypting, e.g. encrypting the ending portion of a movie
    • G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 — Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F3/04842 — Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a method, a device, and equipment for processing media information, and a computer-readable storage medium. The method includes: displaying target content of the media information in a media information display interface, where the target content is part of the content included in the media information; receiving a content encryption instruction that instructs encrypting all or part of the target content; in response to the content encryption instruction, covering the encrypted content indicated by the content encryption instruction with a floating layer; and generating target media information in response to a content generation instruction, and controlling the encrypted content to be in an invisible state when the content of the target media information is displayed. The method and device enable encryption of part of the content in the media information, improving the accuracy and efficiency of the encryption operation.

Description

Method, device and equipment for processing media information and storage medium
Technical Field
The present application relates to the field of computer communication technology, and in particular, to a method, an apparatus, a device, a computer-readable storage medium, and a computer program product for processing media information.
Background
With the popularization of video conferencing, conference videos may involve sensitive information such as profit data, development plans, and employee information. Once such a video is exposed, the sensitive information may be leaked. Therefore, for the purpose of privacy protection, sensitive information in the video needs to be protected through encryption.
However, related video encryption techniques encrypt the whole video file and cannot encrypt only part of the content in the video, which results in low encryption precision and low encryption efficiency.
Disclosure of Invention
Embodiments of the present application provide a method and an apparatus for processing media information, and a computer-readable storage medium, which can implement encryption of part of content in the media information, and improve accuracy and encryption efficiency of an encryption operation.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a method for processing media information, which comprises the following steps:
displaying target content of media information in a media information display interface, wherein the target content is part of content included in the media information;
receiving a content encryption instruction, wherein the content encryption instruction is used for indicating that the target content is completely or partially encrypted;
in response to the content encryption instruction, overlaying the encrypted content indicated by the content encryption instruction with a floating layer;
and responding to a content generation instruction, generating target media information, and controlling the encrypted content to be in an invisible state when the content in the target media information is displayed.
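The following is a minimal, hypothetical Python sketch of these four steps; the class and function names are illustrative assumptions and are not part of the claimed implementation (the floating layer is modelled simply as a stored region):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Region = Tuple[int, int, int, int]  # assumed region format: (x, y, width, height)


@dataclass
class ContentUnit:
    index: int                      # e.g. frame number or page number
    text: str                       # simplified stand-in for the rendered content
    encrypted_regions: List[Region] = field(default_factory=list)


@dataclass
class MediaInfo:
    units: List[ContentUnit]


def display_target_content(media: MediaInfo, index: int) -> ContentUnit:
    """Step 1: display the target content (part of the media information)."""
    return media.units[index]


def on_content_encryption_instruction(unit: ContentUnit, region: Region) -> None:
    """Steps 2-3: record the region indicated by the encryption instruction and
    cover it with a floating layer (modelled here as storing the region)."""
    unit.encrypted_regions.append(region)


def generate_target_media(media: MediaInfo) -> MediaInfo:
    """Step 4: generate the target media information; encrypted regions stay attached."""
    return MediaInfo(units=[ContentUnit(u.index, u.text, list(u.encrypted_regions))
                            for u in media.units])


def render(unit: ContentUnit) -> str:
    """When displaying, content under a floating layer is kept invisible."""
    if unit.encrypted_regions:
        return f"[unit {unit.index}: encrypted region hidden]"
    return unit.text


if __name__ == "__main__":
    media = MediaInfo(units=[ContentUnit(0, "public slide"), ContentUnit(1, "key data: 42")])
    target = display_target_content(media, 1)
    on_content_encryption_instruction(target, (100, 50, 200, 30))
    protected = generate_target_media(media)
    print([render(u) for u in protected.units])
```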
An embodiment of the present application provides a device for processing media information, including:
the display module is used for displaying target content of the media information in a media information display interface, wherein the target content is part of content included in the media information;
a receiving module, configured to receive a content encryption instruction, where the content encryption instruction is used to instruct to encrypt all or part of the target content;
the covering module is used for responding to the content encryption instruction and covering the encrypted content indicated by the content encryption instruction by adopting a floating layer;
and the generating module is used for responding to a content generating instruction, generating target media information and controlling the content to be in an invisible state when the content in the target media information is displayed.
In the above solution, the display module is further configured to display a content search area and a media display area in a media information display interface, display at least one piece of recommended content in the content search area, and display the content of the media information in the media display area;
the recommended content is part of the content included in the media information;
in the process of presenting the content, responding to a selection operation aiming at a target recommended content in the at least one piece of recommended content, skipping the presented content to a first content comprising the target recommended content, and taking the first content as the target content.
In the above scheme, the display module is further configured to display the location information of the corresponding recommended content in the associated area of each recommended content in the content search area;
the position information is used for indicating the display position of the recommended content in the media information.
In the above scheme, the display module is further configured to display the content of the media information in a media information display interface, and display at least one keyword;
in the process of displaying the content, responding to the selection operation aiming at a target keyword in the at least one keyword, jumping to second content from the displayed content, and taking the second content as the target content;
and the second content is obtained by searching the content included in the media information based on the target keyword.
In the above scheme, the display module is further configured to display the content of the media information and display a content search function item in a media information display interface;
in the process of displaying the content, responding to a search instruction aiming at input content and triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as the target content;
the third content is obtained by searching the content included in the media information based on the input content.
In the above scheme, the display module is further configured to display a content search area and a media display area in a media information display interface, display at least one content thumbnail in the content search area, and display the content of the media information in the media display area;
the content thumbnail is a thumbnail of a content unit of the media information;
in the process of displaying the content, responding to the selection operation of a target content thumbnail in the at least one content thumbnail, jumping to the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In the above scheme, the display module is further configured to, when the media information is a video, play content of the video in the media information display interface, and display a play progress bar for indicating a play progress of the video;
in the process of displaying the content, responding to a progress adjustment operation triggered based on the playing progress bar, skipping the displayed content to a fifth content adjusted by the progress adjustment operation, and taking the fifth content as the target content.
In the above solution, the overlay module is further configured to mark a position of the target content in the media information, and display corresponding mark information;
in the process of displaying other content different from the target content, when a trigger operation aiming at the mark information is received, displaying the content which is indicated to be encrypted by the content encryption instruction covered by a floating layer.
In the above solution, the receiving module is further configured to display an automatic encryption control in a media information display interface;
receiving a content encryption instruction in response to a triggering operation for the automatic encryption control;
correspondingly, in the above scheme, the overlay module is further configured to automatically overlay all of the target content with a floating layer in response to the content encryption instruction.
In the above scheme, the receiving module is further configured to display a smearing encryption control in a media information display interface;
and receiving a content encryption instruction in response to the smearing operation triggered based on the smearing encryption control.
In the above scheme, the overlay module is further configured to respond to the content encryption instruction, display a smearing track of the smearing operation by using a floating layer, and use content overlaid on the smearing track as encrypted content indicated by the content encryption instruction.
In the above scheme, the overlay module is further configured to display an icon corresponding to at least one smearing tool in response to a trigger operation for the smearing encryption control;
responding to the selection operation of a target icon in at least one icon, and displaying a target smearing tool corresponding to the target icon;
receiving the application operation triggered based on the target application tool.
In the above scheme, the receiving module is further configured to display a box selection encryption control in the media information display interface;
controlling the target content to be in an editing state in response to a triggering operation for the box selection encryption control;
in response to a content selection operation for the target content in the editing state, displaying a selection box including the selected content;
and receiving a content encryption instruction aiming at the content included in the selection box.
Correspondingly, in the above scheme, the overlay module is further configured to overlay, in response to the content encryption instruction, the content included in the box with a floating layer.
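As an illustration only, the following sketch (with assumed, simplified data structures rather than the patented implementation) shows how a selection box could be mapped to the content covered by the floating layer: any displayed element whose bounds fall entirely inside the box is treated as the content indicated by the content encryption instruction.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, other: "Rect") -> bool:
        # True if `other` lies entirely inside this rectangle.
        return (other.x >= self.x and other.y >= self.y
                and other.x + other.w <= self.x + self.w
                and other.y + other.h <= self.y + self.h)


@dataclass
class Element:
    label: str
    bounds: Rect


def content_in_selection_box(elements: List[Element], box: Rect) -> List[Element]:
    """Return the displayed elements inside the selection box; these are the
    contents indicated by the content encryption instruction."""
    return [e for e in elements if box.contains(e.bounds)]


if __name__ == "__main__":
    elements = [Element("title", Rect(0, 0, 100, 20)),
                Element("profit table", Rect(10, 40, 80, 30))]
    box = Rect(5, 30, 95, 50)               # selection box drawn by the user
    to_encrypt = content_in_selection_box(elements, box)
    print([e.label for e in to_encrypt])    # -> ['profit table']
```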
In the above scheme, when the media information is a video, the target content is a frame image of the video, and the content indicated by the content encryption instruction is target image content included in the frame image; the generating module is further configured to, in the process of playing the video, when playing to a target video clip of the video that includes a plurality of frame images containing the target image content, control the target image content in each of the frame images to be in an invisible state.
In the above scheme, the overlay module is further configured to obtain audio segments corresponding to the plurality of frame images;
encrypting the content of the audio clip to obtain an encrypted target audio clip;
and shielding the target audio clip when the target audio clip is played in the process of playing the video.
In the above scheme, the overlay module is further configured to, when the media information is a video, obtain an audio file in the video, and perform semantic recognition on the audio file to obtain a recognized content;
based on the encrypted content indicated by the content encryption instruction, retrieving in the identified content to determine an audio clip matching the encrypted content indicated by the content encryption instruction;
encrypting the content of the audio clip to obtain an encrypted target audio clip;
and shielding the target audio clip when the target audio clip is played in the process of playing the video.
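A simplified sketch of this audio-masking scheme follows, assuming the semantic recognition step has already produced a transcript with segment-level time stamps; the segment format and the muting-by-zeroing approach are illustrative assumptions, not the specific implementation of this application.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class TranscriptSegment:
    text: str
    start: float   # seconds
    end: float


def find_matching_segment(transcript: List[TranscriptSegment],
                          encrypted_text: str) -> Optional[Tuple[float, float]]:
    """Retrieve the encrypted content in the recognized transcript and return
    the time range of the matching audio clip, if any."""
    for seg in transcript:
        if encrypted_text in seg.text:
            return seg.start, seg.end
    return None


def mask_audio(samples: List[int], sample_rate: int,
               time_range: Tuple[float, float]) -> List[int]:
    """Mute (zero) the PCM samples inside the matched time range so the clip
    is shielded when the video is played."""
    start_idx = int(time_range[0] * sample_rate)
    end_idx = int(time_range[1] * sample_rate)
    return [0 if start_idx <= i < end_idx else s for i, s in enumerate(samples)]


if __name__ == "__main__":
    transcript = [TranscriptSegment("welcome everyone", 0.0, 2.0),
                  TranscriptSegment("this quarter's profit data is confidential", 2.0, 6.0)]
    rng = find_matching_segment(transcript, "profit data")
    audio = [1] * 8 * 100                   # fake 8-second mono PCM at 100 Hz
    if rng is not None:
        audio = mask_audio(audio, sample_rate=100, time_range=rng)
    print(rng, sum(audio))                  # masked samples are zeroed
```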
In the above scheme, the overlay module is further configured to display an encrypted prompt message when the content in the target media information is displayed;
and the encryption prompt information is used for prompting that the content corresponding to the current display position is encrypted.
In the foregoing solution, the apparatus for processing media information further includes an execution module, where the execution module is configured to receive a target operation instruction indicating to execute a target operation on the target media information, where the target operation includes one of: sharing operation, uploading operation and exporting operation;
and responding to the target operation instruction, and executing the target operation on the target media information.
In the above scheme, the generating module is further configured to respond to the permission setting instruction, and display at least one object having a social relationship with the current object;
in response to an object selection operation for the at least one object, determining the selected object as a target object;
correspondingly, the generating module is further configured to, when a first sharing instruction is received, where the first sharing instruction is used to indicate that the target media information is shared to the target object, share the target media information to a first terminal of the target object, so that the first terminal controls the content to be in an invisible state when the content in the target media information is displayed;
and when a second sharing instruction is received, where the second sharing instruction is used to indicate that the target media information is shared to objects other than the target object among the at least one object, share the target media information to second terminals of the other objects, so that the second terminals control the content to be in a visible state when the content in the target media information is displayed.
In the foregoing solution, the overlay module is further configured to remove a floating layer overlaid on the content in response to a revocation operation for the content encryption instruction, and display the content in a visible state.
An embodiment of the present application further provides a method for processing media information, including:
acquiring target media information, wherein the target media information comprises a plurality of continuous content units, and partial content in the content units is encrypted in a floating layer covering mode;
in response to a display instruction for the target media information, displaying content included in the target media information;
controlling the partial content to be in an invisible state when being presented to the encrypted partial content.
An embodiment of the present application further provides a device for processing media information, including:
the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring target media information, the target media information comprises a plurality of continuous content units, and partial content in the content units is encrypted in a floating layer covering mode;
the information display module is used for responding to a display instruction aiming at the target media information and displaying the content included by the target media information;
and the control module is used for controlling the part of the content to be in an invisible state when the encrypted part of the content is displayed.
In the above scheme, the control module is further configured to display an identity verification function item;
and in response to the identity verification operation for the current object triggered based on the identity verification function item, controlling the partial content to be switched from the invisible state to the visible state when the identity verification for the current object passes.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the media information processing method provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for processing media information provided by the embodiment of the application.
The embodiment of the present application provides a computer program product, which includes a computer program or instructions, and the computer program or instructions, when executed by a processor, implement the method for processing media information provided by the embodiment of the present application.
The embodiment of the application has the following beneficial effects:
by applying the embodiment of the application, in the display process of the media information, the target content is encrypted by responding to the content encryption instruction for indicating that the target content of the media information is completely or partially encrypted, so that the encryption operation of partial content in the media information can be realized, and the accuracy and the encryption efficiency of the encryption operation are improved; and then, generating target media information based on a content generation instruction, and controlling the encrypted content to be in an invisible state in a floating layer covering mode in the target media information display process, so that the human-computer interaction experience can be improved on the premise of ensuring the data security.
Drawings
Fig. 1 is a schematic architecture diagram of a system for processing media information according to an embodiment of the present application;
Fig. 2A-2B are schematic structural diagrams of an electronic device implementing a method for processing media information according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for processing media information according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a media information presentation interface provided by an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating location information of target content according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a second content determination interface provided by an embodiment of the present application;
FIG. 7 is an interface diagram of a content search function provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a thumbnail of a content unit provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a play progress bar provided in an embodiment of the present application;
FIG. 10 is a schematic view of a smear encryption provided by an embodiment of the present application;
FIG. 11 is a block diagram of a boxed encryption scheme provided by an embodiment of the present application;
FIG. 12 is a schematic illustration of a buoyant layer provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of location tagging of target content provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an undo operation provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of a content unit provided by an embodiment of the present application;
FIG. 16 is a flowchart of an audio segment masking process provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of an encrypted hint provided by an embodiment of the present application;
FIG. 18 is a schematic view of the target operation provided by an embodiment of the present application;
FIG. 19 is a diagram of an object rights setting interface provided in an embodiment of the present application;
fig. 20 is a flowchart illustrating a method for processing media information according to an embodiment of the present application;
FIG. 21 is a schematic diagram of an identity verification function interface provided in an embodiment of the present application;
FIG. 22 is a flow chart of a processing interface for media information provided by an embodiment of the present application;
fig. 23 is a schematic diagram of a media information processing method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where descriptions such as "first/second" appear in this specification, the following applies: the terms "first", "second", and "third" are merely used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that "first", "second", and "third" may be interchanged, where permitted, in a specific order or sequence, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running in the terminal to provide various services, such as a video playing client, an instant messaging client, a live streaming client, and the like.
2) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) Real-time smearing and encryption of video or pictures: a technique for encrypting or smearing, in real time, image-and-text information such as the video content of a live video conference, or pictures, presentations, and labels played in a video, through an encryption algorithm combined with real-time smearing; after processing, the whole video can be re-synthesized.
4) Pulse-code modulation (PCM): in an optical fiber communication system, binary optical pulses ('0' and '1' codes) are transmitted in the optical fiber; they are generated by on-off modulation of a light source with a binary digital signal. The digital signal itself is produced by sampling, quantizing, and encoding a continuously varying analog signal, a process called PCM. The resulting electrical digital signal, called the digital baseband signal, is generated by the PCM terminal equipment. Digital transmission systems generally employ pulse-code modulation.
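As a brief illustration of the sampling-quantizing-encoding chain described above, the following is a minimal Python sketch; the sample rate and bit depth are arbitrary example values, not parameters from this application.

```python
import math


def pcm_encode(signal, sample_rate=8, bits=4, duration=1.0):
    """Sample, quantize, and encode a continuous analog signal into PCM code words.
    `signal` is a function of time returning an amplitude in [-1, 1]."""
    levels = 2 ** bits
    n_samples = int(sample_rate * duration)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate                        # sampling
        value = signal(t)                          # analog amplitude
        q = round((value + 1) / 2 * (levels - 1))  # quantizing to 2**bits levels
        codes.append(format(q, f"0{bits}b"))       # encoding as a binary code word
    return codes


def analog(t):
    return math.sin(2 * math.pi * 1.0 * t)  # 1 Hz sine wave as the analog signal


if __name__ == "__main__":
    # Binary code words that would, in turn, on-off modulate the light source.
    print(pcm_encode(analog))
```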
Based on the above explanations of the terms and expressions involved in the embodiments of the present application, the following describes the system for the media information processing method provided by the embodiments of the present application. Referring to fig. 1, fig. 1 is a schematic architecture diagram of a system for processing media information according to an embodiment of the present application. In the system 100 for processing media information, terminals (terminal 400-1 and terminal 400-2 are shown as examples) are connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two. The server 200 may belong to a target server cluster, which includes at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. The server cluster may be used to provide background services for applications that support a three-dimensional virtual environment.
A client, such as a video playing client or a live streaming client, is installed on each terminal. When a user opens the client on the terminal to display media information, the terminal 400-1 (on which the media information display client 410-1 is deployed) serves as the execution end for encrypting the media information (and also as the publishing end of the target media information), and is configured to: display target content of the media information in a media information display interface, where the target content is part of the content included in the media information; receive a content encryption instruction for instructing to encrypt all or part of the target content; in response to the content encryption instruction, cover the encrypted content indicated by the content encryption instruction with a floating layer; and generate the target media information in response to a content generation instruction, and control the encrypted content to be in an invisible state when the content of the target media information is displayed.
The terminal 400-2 (on which the media information display client 410-2 is deployed) serves as the display end of the target media information, and is configured to: acquire the target media information, where the target media information includes a plurality of continuous content units and part of the content in the content units is encrypted by means of a floating-layer cover; in response to a display instruction for the target media information, display the content included in the target media information in the media information display interface; and when the encrypted partial content is displayed, control the partial content to be in an invisible state.
The server 200 is configured to receive a media information acquisition request sent by the terminal 400-1, send the media information to the terminal 400-1 in response to the request, receive the target media information sent by the terminal 400-1, and cache the target media information; when an acquisition request for the target media information sent by the terminal 400-2 is received, the target media information is sent to the terminal 400-2, so that the terminal 400-2 displays the target media information through the media information display client 410-2, and when the target content is displayed, the corresponding floating layer covering the target content is presented, that is, the target content is controlled to be in an invisible state.
In practical applications, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data, and an artificial intelligence platform. The terminals (e.g., terminal 400-1 and terminal 400-2) may be, but are not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart television, a smart watch, and the like. The terminals (e.g., terminal 400-1 and terminal 400-2) and the server 200 may be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
The embodiments of the present application can also be implemented by means of cloud technology, which refers to a hosting technology that unifies resources such as hardware, software, and networks in a wide area network or a local area network to implement the calculation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like that are applied based on the cloud computing business model; it can form a resource pool that is used on demand and is flexible and convenient. Cloud computing technology will become an important support, since the background services of a technical network system require a large amount of computing and storage resources.
Fig. 2A-2B are schematic structural diagrams of an electronic device implementing the method for processing media information according to an embodiment of the present application. Referring to fig. 2A (or fig. 2B), in practical applications, the electronic device 500 may be implemented as the server or a terminal in fig. 1. The electronic device 500 shown in fig. 2A (or fig. 2B) includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It can be understood that the bus system 540 is used to enable connection and communication among these components. In addition to a data bus, the bus system 540 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, the various buses are labeled as the bus system 540 in fig. 2A (or fig. 2B).
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the apparatus for processing media information provided in this embodiment of the present application may be implemented in software. Fig. 2A shows a schematic structural diagram of an electronic device, provided in this embodiment of the present application, for carrying out the media information processing method; the apparatus 555 for processing media information stored in the memory 550 may be software in the form of programs and plug-ins, and includes the following software modules: a presentation module 5551, a receiving module 5552, an overlay module 5553, and a generation module 5554. These modules are logical, and thus can be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the apparatus for processing media information provided in this embodiment of the present application may also be implemented in software. Referring to fig. 2B, fig. 2B shows a schematic structural diagram of an electronic device, provided in this embodiment of the present application, for carrying out the media information processing method; the apparatus 556 for processing media information stored in the memory 550 may be software in the form of programs and plug-ins, and includes the following software modules: an acquisition module 5561, an information presentation module 5562, and a control module 5563. These modules are logical, and thus can be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules will be explained below.
In other embodiments, the method and apparatus for processing media information provided in this embodiment may be implemented in hardware, and for example, the method and apparatus for processing media information provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the method for processing media information provided in this embodiment, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the system and the electronic device for processing media information provided in the embodiments of the present application, the following describes the method for processing media information provided in the embodiments of the present application. In some embodiments, the method may be implemented by a server or a terminal alone, or by a server and a terminal in cooperation. In some embodiments, the terminal or the server may implement the method by running a computer program. For example, the computer program may be a native program or a software module in an operating system; a native application (APP), i.e., a program that needs to be installed in the operating system to run, such as a client supporting a virtual scene, e.g., a game APP; an applet, i.e., a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program described above may be any form of application, module, or plug-in.
The following describes a method for processing media information provided in the embodiments of the present application by taking a terminal as an example. Referring to fig. 3, fig. 3 is a schematic flowchart of a method for processing media information provided in the embodiment of the present application, and it should be noted that a terminal in the embodiment of the present application is a sending end of a session message, and the method for processing media information provided in the embodiment of the present application includes:
in step 101, the terminal displays target content of the media information in a media information display interface, wherein the target content is a part of content included in the media information.
In actual implementation, a media information display client is installed on the terminal, and a media information display interface is presented through the display client, in which the user can view the target content of the media information. Taking video information as an example of the media information, the corresponding display client may be a video playing client, and in response to a start operation for the video playing client, part of the content of the video may be displayed in the playing interface of the playing client. Taking an article as an example of the media information, the corresponding display client may be a reading client, and in response to a start operation for the reading client, part of the content of the article may be displayed in the display interface of the reading client.
The following describes the manner of presenting the target content of the media information. In some embodiments, the terminal may present the target content of the media information as follows: the terminal displays a content search area and a media display area in the media information display interface, displays at least one piece of recommended content in the content search area, and displays the content of the media information in the media display area, where the recommended content is part of the content included in the media information; in the process of presenting the content, in response to a selection operation for a target recommended content in the at least one piece of recommended content, the presented content jumps to a first content including the target recommended content, and the first content is taken as the target content.
In actual implementation, in the process of displaying the media information, the content of the media information may be searched and located based on the recommended content to obtain a first content including the target recommended content, and the content displayed in the media information display interface jumps to the determined first content (i.e., the target content), so that accurate location and rapid display of the target content are achieved. It should be noted that, in an application scenario where content in the media information is encrypted, the recommended content may be content in the media information that may need to be protected. Such protected content may be determined based on corresponding privacy protection keywords, for example "important", "privacy", "non-disclosure", and the like: frame images containing the privacy protection keywords are automatically searched in the media information by a search algorithm based on artificial intelligence learning. When the media information contains speech, the protected content can also be determined by understanding and recognizing the semantics of the speech in the media information through speech recognition, so as to screen out the key frames of the (image) pages where key information is located (the image itself does not explicitly indicate that encryption protection is needed, but after the speech is semantically understood the information is determined to need encryption protection); that is, semantic understanding of the speech in the media information is performed by a speech recognition model based on artificial intelligence machine learning. The speech recognition model can be constructed based on a Gaussian mixture model-hidden Markov model (GMM-HMM). The GMM-HMM models at the phoneme level: the pronunciation phonemes in the target content are recognized (the phonemes of Chinese are pinyin, corresponding to Chinese characters; the phonemes of English are phonetic symbols, corresponding to English words), the corresponding character or word is found in a dictionary, and the position of the speech content corresponding to the target content (including its time point) is determined within the speech content of the media information.
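Purely as an illustration of the keyword-driven search described above, the following sketch assumes the frame text has already been extracted (e.g., by OCR or from slide metadata) and that the speech transcript carries time stamps; it does not reproduce any detail of the GMM-HMM recognizer itself, and all names and keywords are hypothetical.

```python
from dataclasses import dataclass
from typing import List

PRIVACY_KEYWORDS = ["important", "privacy", "non-disclosure", "key data"]  # example keywords


@dataclass
class Frame:
    time: float        # seconds into the video
    text: str          # text already extracted from the frame image


@dataclass
class SpeechSegment:
    start: float
    end: float
    text: str          # result of semantic recognition of the audio


def recommend_frames(frames: List[Frame], speech: List[SpeechSegment]) -> List[Frame]:
    """Return frames that either show a privacy keyword directly, or are on
    screen while the recognized speech mentions one (so they may need protection)."""
    hits = []
    for f in frames:
        on_screen_hit = any(k in f.text for k in PRIVACY_KEYWORDS)
        spoken_hit = any(k in s.text and s.start <= f.time < s.end
                         for k in PRIVACY_KEYWORDS for s in speech)
        if on_screen_hit or spoken_hit:
            hits.append(f)
    return hits


if __name__ == "__main__":
    frames = [Frame(10.0, "agenda"), Frame(64.0, "key data: Q3 revenue")]
    speech = [SpeechSegment(60.0, 70.0, "this slide is non-disclosure material")]
    print([f.time for f in recommend_frames(frames, speech)])   # -> [64.0]
```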
For example, referring to fig. 4, fig. 4 is a schematic diagram of a media information presentation interface provided in an embodiment of the present application. In the media information presentation interface, reference numeral 1 shows a content search area for displaying at least one piece of recommended content, and reference numeral 2 shows a media presentation area for playing the content of the media information. When the user clicks any piece of recommended content in the content search area, the content displayed in the media display area can be controlled to jump directly to the target content including that recommended content. That is, after the user clicks "key data", the display jumps directly to the time point in the media information at which that content appears (i.e., directly to "1:04") and plays from there. The content search area can be switched between being displayed and hidden based on the show/hide function item shown at reference numeral 3, so that the screen display area of the terminal can be used effectively.
The above manner of determining the target content based on at least one piece of recommended content can achieve rapid positioning for the target content.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the position information of the corresponding recommended content in the association area of each recommended content in the content searching area; the position information is used for indicating the display position of the recommended content in the media information.
In practical implementation, in addition to the recommended content, the content search area may also display position information indicating a display position of the currently recommended content in the media information, and based on the position information, quick positioning for the target content may be achieved. If the media information is a video, the location information may be a time point at which the recommended content appears in the entire video; if the media information is a document, the location information may be a page number of the recommended content in the document, which is not limited in this embodiment of the present application.
Illustratively, referring to fig. 5, fig. 5 is a schematic diagram of position information of target content provided by an embodiment of the present application, where the position information shown by number 1 in the diagram is a playing time point, and the position information shown by number 2 in the diagram is a page number appearing in a document.
The above-mentioned manner of displaying the position information can further improve the accuracy of the positioning target content.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the content of the media information in a media information display interface, and displaying at least one keyword; in the process of displaying the content, responding to the selection operation aiming at a target keyword in the at least one keyword, jumping to second content from the displayed content, and taking the second content as the target content; and the second content is obtained by searching the content included in the media information based on the target keyword.
In practical implementation, the target content may be determined based on preset keywords: a plurality of keywords for determining the protected content are presented in the media information presentation interface, and the second content is determined in response to a selection operation for a target keyword. The keywords can be displayed in various ways: they may be displayed in a floating manner over the media information display area of the presentation interface, or in a dedicated keyword display area (the content search area mentioned above). In addition, when there are multiple second contents determined based on the keyword, since the second contents have a time-series relationship, the earliest-appearing second content may automatically be taken as the target content, or a list of the second contents may be displayed for selection, with the user manually choosing the target content from the multiple second contents.
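For the time-ordered case described above, the following is a minimal illustrative sketch (the names and data layout are assumptions) of taking the earliest match as the default target content while keeping the full list available for manual selection.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Match:
    position: float   # playing time point (video) or page number (document)
    snippet: str


def search_media(contents: List[Tuple[float, str]], keyword: str) -> List[Match]:
    """Search the media contents for the target keyword, ordered by position."""
    return sorted((Match(pos, text) for pos, text in contents if keyword in text),
                  key=lambda m: m.position)


def default_target(matches: List[Match]) -> Optional[Match]:
    """The earliest second content is used automatically; the full list can
    still be shown so the user may pick another match manually."""
    return matches[0] if matches else None


if __name__ == "__main__":
    contents = [(12.0, "agenda"), (64.0, "key data overview"), (230.0, "key data details")]
    matches = search_media(contents, "key data")
    print(default_target(matches), len(matches))   # earliest match, list size 2
```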
Exemplarily, referring to fig. 6, fig. 6 is a schematic diagram of a second content determination interface provided in an embodiment of the present application, in which reference numeral 1 shows keywords for media information, such as "key data", "privacy", "security", "protection", "privacy", and the like, and the keywords are presented in a floating-layer style in the interface.
The mode of determining the target content based on the keywords can be more targeted, and meanwhile, the human-computer interaction experience is improved, namely the participation of the user is stronger.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying the content of the media information in a media information display interface, and displaying a content search function item; in the process of displaying the content, responding to a search instruction aiming at the input content and triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as target content; the third content is obtained by searching the content included in the media information based on the input content.
In practical implementation, the content search function item may be suspended in the display area of the media information, or the content search area and the media information display area may be displayed in the media information display interface, and the search function item (including the search box and the corresponding search control) is displayed in the search area. The user can directly determine the target content by inputting the content to be searched in the content search function item (such as a content search box).
For example, referring to fig. 7, fig. 7 is an interface schematic diagram of a content search function item provided in an embodiment of the present application, where reference numeral 1 shows a content search area in the media information presentation interface, reference numeral 2 shows a content presentation area in that interface, reference numeral 3 shows a content search function item presented in the content search area, and reference numeral 4 shows a content search function item presented directly in the presentation area as a floating layer.
In some embodiments, the terminal may implement the presentation of the target content of the media information in such a manner that: displaying a content searching area and a media displaying area in a media information displaying interface, displaying at least one content thumbnail in the content searching area, and displaying the content of media information in the media displaying area; wherein, the content thumbnail is the thumbnail of the content unit of the media information; in the process of displaying the content, responding to the selection operation of the target content thumbnail in the at least one content thumbnail, jumping the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In practical implementation, the terminal may also determine the target content according to the content units of the media information. A content unit may be presented in the form of a content thumbnail, for example, a thumbnail of the content unit displayed in the content search area. The content units of the media information may be divided according to the type of the media information: when the media information is a conference recording video whose content is a presentation, each slide of the presentation may be a content unit; when the media information is a scene video, a content unit may be the scene content at the time of shooting or a shot in the scene; when the media information is an ordinary document (article), a content unit may be the content of each page. The embodiment of the present application does not limit the specific form of the content unit. Meanwhile, due to the size limitation of the terminal screen, the thumbnail may provide an enlarged display function, that is, after a thumbnail obtains focus, it can be enlarged and displayed in an associated area, which makes it convenient for the user to select the target content.
Exemplarily, referring to fig. 8, fig. 8 is a schematic diagram of thumbnails of content units, taking the media information being a video as an example. Number 1 in the figure shows thumbnails of content units, where each thumbnail may correspond to a frame image containing target content; after the thumbnail shown by number 2 in the figure gains focus, an enlarged image of the content contained in the thumbnail (shown by number 3 in the figure) can be presented, so that the target content in the thumbnail is shown more clearly. In response to a trigger operation for a target thumbnail among the thumbnails, such as double-clicking the thumbnail shown by number 2, the presented content of the media information can jump directly to the content corresponding to that thumbnail (the content shown by number 4 in the figure). In practical applications, the position information of the content corresponding to a thumbnail in the media information can also be displayed; for example, number 5 in the figure shows that the thumbnail corresponding to number 2 appears in the video at the time point "1:18". When the media information is an article, number 2 in the figure indicates a thumbnail whose content unit is a page. Number 3 shows that when the cursor hovers over a thumbnail, an enlarged view of the content indicated by the thumbnail is presented.
By displaying the content units of the media information as thumbnails in this way, the user can conveniently preview the content of each content unit and thus determine the target content unit.
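A minimal sketch, assuming the content units of a presentation-based conference video have already been divided and thumbnailed, of how selecting a target thumbnail could jump the displayed content to the corresponding fourth content; the ThumbnailUnit fields and the player.seek call are illustrative assumptions, not the patent's interface.

```python
# Each content unit keeps its thumbnail, the presentation page it corresponds to,
# and the time interval during which it appears; selecting a thumbnail seeks there.
from dataclasses import dataclass
from typing import List

@dataclass
class ThumbnailUnit:
    page_index: int       # e.g. page 8 of the presentation
    thumbnail_path: str   # small preview image shown in the content search area
    start_seconds: float  # first frame in which the page appears
    end_seconds: float    # last frame in which the page appears

def on_thumbnail_selected(player, units: List[ThumbnailUnit], page_index: int) -> None:
    """Jump the displayed content to the unit of the selected thumbnail (the 'fourth content')."""
    for unit in units:
        if unit.page_index == page_index:
            player.seek(unit.start_seconds)  # hypothetical player API
            return
```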
In some embodiments, the terminal may implement the presentation of the target content of the media information in the following manner: when the media information is a video, playing the content of the video in the media information display interface, and displaying a playing progress bar indicating the playing progress of the video; in the process of displaying the content, in response to a progress adjustment operation triggered based on the playing progress bar, jumping the displayed content to fifth content indicated by the progress adjustment operation, and taking the fifth content as the target content.
In actual implementation, when the media information is a video, a playing progress bar indicating the playing progress may be displayed while the video is playing, and the user switches the displayed content by adjusting the playing progress bar; that is, the target content is determined by dragging the playing progress bar.
Illustratively, referring to fig. 9, fig. 9 is a schematic diagram of a play progress bar provided in an embodiment of the present application, in which reference numeral 1 shows the play progress bar for controlling the play progress of media information, and in response to an adjustment operation on the play progress bar, the play progress of the media information may be adjusted. In addition, the target content of the media information can be marked on the playing progress bar, and when the progress bar is dragged to the corresponding time point, a thumbnail of the target content corresponding to the current time point can be displayed, so that a user can select whether to jump to the target content corresponding to the thumbnail.
In step 102, a content encryption instruction is received, where the content encryption instruction is used for indicating that all or part of the target content is encrypted.
In actual implementation, after the terminal determines the target content based on step 101, a content encryption operation for the target content may be further performed, where the content encryption may be performed for all the content of the target content or for part of the content of the target content. Namely, the terminal executes the encryption operation corresponding to the content encryption instruction on the target content based on the received content encryption instruction.
To describe the triggering manner of the content encryption instruction, in some embodiments, the terminal may receive the content encryption instruction as follows: the terminal displays an automatic encryption control in the media information display interface, and receives the content encryption instruction in response to a trigger operation for the automatic encryption control.
In actual implementation, the terminal may receive, based on the automatic encryption control, a content encryption instruction indicating that the target content is to be encrypted automatically. When the media information is a video, the starting time point and the ending time point at which the target content appears in the video need to be determined first, and the video content (including voice) from the starting time point to the ending time point is then encrypted automatically.
For example, referring to fig. 4, in response to the triggering operation of the "encryption" control in the figure, a part of content (i.e., content corresponding to the time point "1: 04" in the figure) in the currently presented media information may be automatically encrypted, so that the part of content may be controlled to be in an invisible state in the subsequent presentation process.
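To illustrate how automatic encryption might determine the starting and ending time points of the target content in a video, the following sketch assumes that each frame has already been mapped to the presentation page it shows; the frame-to-page mapping and the function name are assumptions for the example, not the patent's implementation.

```python
# Every frame showing the same page as the current frame falls inside the
# range to be encrypted; the contiguous run of that page defines the range.
from typing import List, Tuple

def auto_encrypt_range(frame_pages: List[int], fps: float, current_second: float) -> Tuple[float, float]:
    """Return (start_seconds, end_seconds) of the page shown at current_second."""
    current_frame = int(current_second * fps)
    page = frame_pages[current_frame]
    first = current_frame
    while first > 0 and frame_pages[first - 1] == page:
        first -= 1
    last = current_frame
    while last + 1 < len(frame_pages) and frame_pages[last + 1] == page:
        last += 1
    return first / fps, (last + 1) / fps
```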
In some embodiments, the terminal may further receive the content encryption instruction by: displaying a smearing encryption control in a media information display interface; and receiving a content encryption instruction in response to the smearing operation triggered based on the smearing encryption control.
In actual implementation, the terminal may further receive a corresponding content encryption instruction through a smearing encryption control presented in the media information presentation interface. In practical application, when a user needs to encrypt part of content in target content, a content encryption instruction for the part of content in the target content can be triggered by triggering the smearing encryption control.
Exemplarily, referring to fig. 10, fig. 10 is a schematic view of smearing encryption provided by an embodiment of the present application, where reference numeral 1 in the figure shows a smearing encryption control and reference numeral 3 shows target content, the target content being part of the data on a certain page of a presentation. In practical applications, with continued reference to fig. 10, in response to a trigger operation for the "encryption" control in the figure, the media information may be controlled to be in an encryptable state, and an encryption mode selection interface including two function items, "automatic encryption" and "smearing encryption", is presented; the process of implementing "automatic encryption", which may also be referred to as "one-key encryption", is the same as that of the "encryption" control in fig. 4.
In some embodiments, the terminal may also receive the smearing operation by: displaying, in response to the trigger operation for the smearing encryption control, an icon corresponding to at least one smearing tool; displaying, in response to a selection operation for a target icon among the at least one icon, a target smearing tool corresponding to the target icon; and receiving the smearing operation triggered based on the target smearing tool.
In practical implementation, after responding to the trigger operation for the smearing encryption control, the terminal can control the target content to be in a smearable state, and the smearing operation for the target content is triggered by selecting the target smearing tool. In practical applications, the fineness of the smearing on the target content can be controlled by different types of smearing tools. For example, the smearing tool may be a "pen" for line smearing, a "brush" for smearing irregular areas, or a "shape brush" for smearing regular areas; the specific form of the smearing tool is not limited.
Illustratively, referring to fig. 10, in response to a trigger operation for a "smear encryption" function item shown by the number 1 in the drawing, a smear tool selection interface is presented, and in response to a selection operation for a target smear tool, a smear operation is performed with the target smear tool (a "pen" that performs line smearing) in the presentation area.
In some embodiments, the terminal may also receive the content encryption instruction by: the terminal displays a box selection encryption control in a media information display interface; responding to the trigger operation aiming at the box selection encryption control, and controlling the target content to be in an editing state; in response to a content selection operation for the target content in the editing state, displaying a selection box including the selected content; content encryption instructions for the content included in the box are received.
In actual implementation, the terminal may also select, through the box selection encryption control, an area of the target content to obtain part of the content of the target content, and receive a content encryption instruction for the content included in the box when the area selection operation triggered based on the box selection encryption control is completed.
Exemplarily, referring to fig. 11, fig. 11 is a schematic view of box selection encryption provided in an embodiment of the present application. The media information is a video based on a presentation, and the target content is a picture on the current presentation page. Triggering the "box encryption" control shown by number 1 presents the box (a regular quadrangle, which may also take other shapes) shown by number 2 in the presentation area; the starting point of the box is determined and the box is dragged so that part of the content of the current frame image of the video falls within the area formed by dragging the box. When the corresponding completion operation is triggered (the "√" function item shown by number 3), a content encryption instruction for the content selected in the box can be received; when a cancel operation is triggered (the "×" function item shown by number 3), the box selection encryption can be cancelled.
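A simplified sketch of box selection encryption: the drag gesture defines a rectangle in the presentation area, and confirming the selection (the "√" function item) emits a content encryption instruction for the region inside the box; the SelectionBox structure and the instruction format are illustrative assumptions, not the patent's actual interface.

```python
from dataclasses import dataclass

@dataclass
class SelectionBox:
    x0: float
    y0: float
    x1: float
    y1: float

    def normalized(self) -> "SelectionBox":
        """Return the box with corners ordered regardless of drag direction."""
        return SelectionBox(min(self.x0, self.x1), min(self.y0, self.y1),
                            max(self.x0, self.x1), max(self.y0, self.y1))

def confirm_box_selection(box: SelectionBox, time_seconds: float) -> dict:
    """Build a content encryption instruction for the content inside the box."""
    b = box.normalized()
    return {
        "type": "box_encrypt",
        "region": (b.x0, b.y0, b.x1, b.y1),
        "time": time_seconds,  # frame in which the box was drawn
    }
```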
Smearing encryption of the target content with different types of smearing tools as described above can meet encryption requirements of various precision levels, which improves the universality of encryption, makes it applicable to various kinds of target content, and also effectively improves the accuracy of encryption.
In step 103, in response to the content encryption instruction, the encrypted content indicated by the content encryption instruction is covered with a floating layer.
In actual implementation, in response to the content encryption instruction, the terminal encrypts the target content of the media information through encryption processing logic. The encryption processing logic comprises two processes: encrypting the key page corresponding to the target content in the media information, and encrypting the voice information corresponding to the target content in the media information. First, the key page corresponding to the target content is converted into a frame image sequence, and the position where the first frame image of the sequence begins to appear in the media information and the position where the last frame image of the sequence finally appears are determined (the position of a frame image can be expressed as a time point on the playing progress bar). For media information containing image-text content, a key page may occupy a plurality of successive frame images in the frame image sequence. For example, for media information based on a presentation, if the playing time of a presentation page to be encrypted is 2 minutes, each frame image corresponding to these 2 minutes of media information is encrypted individually (identical frame images may be encrypted in a combined manner), which completes the encryption of the presentation page to be encrypted. Second, an encryption operation is performed on the voice information corresponding to the target content: voice recognition is performed on the voice information, and the recognized voice content is masked; for the encryption of a single specific piece of information (point information or a short message), the data is encrypted by compression coding performed by the mobile terminal and the communication network, namely PCM coding followed by regular pulse excitation-long term prediction (RPE-LTP) coding at a vocoder, and the compression-coded voice is finally output. After the image information and the voice information corresponding to the target content are encrypted, the encrypted content can be covered with the floating layer, that is, the encrypted content is made invisible by floating layer covering, thereby ensuring data security. For example, after the media information is encrypted, when the target content of the media information is shown again, a floating layer stating that "the content is encrypted" is displayed directly, or an encryption watermark over the complete page is provided, informing the user through text and graphics that the content is encrypted.
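The two-part processing above can be pictured with the following simplified sketch, in which the floating layer is modeled as an opaque overlay over the frames between the start and end positions of the key page, and the corresponding audio is masked by muting; muting stands in for the PCM / RPE-LTP compression coding described in the text, and all names are assumptions for the example.

```python
import numpy as np

def overlay_frames(frames: np.ndarray, start_idx: int, end_idx: int,
                   region=None, color=(32, 32, 32)) -> np.ndarray:
    """Cover the encrypted region of frames[start_idx:end_idx] with a floating layer."""
    frames = frames.copy()                      # frames: (N, H, W, 3) uint8
    for i in range(start_idx, end_idx):
        if region is None:                      # automatic encryption: whole frame
            frames[i, :, :] = color
        else:                                   # partial encryption: selected region only
            x0, y0, x1, y1 = region
            frames[i, y0:y1, x0:x1] = color
    return frames

def mask_audio(samples: np.ndarray, sample_rate: int,
               start_seconds: float, end_seconds: float) -> np.ndarray:
    """Silence the audio corresponding to the encrypted key page."""
    samples = samples.copy()
    samples[int(start_seconds * sample_rate):int(end_seconds * sample_rate)] = 0
    return samples
```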
In some embodiments, the terminal may further receive a content encryption instruction triggered based on the automatic encryption control, and automatically overlay all content of the target content with the floating layer in response to the content encryption instruction.
In practical implementation, when the encryption type for the target content is automatic encryption, the floating layer may be used to cover the entire target content. It should be noted that, when the media information is a video, the display duration of the floating layer equals the display duration of the target content in the video: the terminal determines the starting time point and the ending time point at which the target content appears in the video, determines the display duration of the target content from these two time points, and when the target content is displayed, the floating layer is displayed from the starting time point until it disappears at the ending time point. When the media information is a document comprising a plurality of pages, the floating layer directly covers the page on which the target content is located.
Exemplarily, referring to fig. 12, fig. 12 is a schematic diagram of floating layer presentation provided in an embodiment of the present application, taking a video as the media information. The target content starts to be presented at the time point "1:18" shown by number 2 and ends at "1:57", one second before the time point "1:58" shown by number 3. For the video whose encryption is completed, during presentation the video is presented normally at "1:17" shown by number 1 (i.e. before "1:18"); from "1:18" shown by number 2, the floating layer is presented to cover the target content until "1:57"; and from "1:58" shown by number 3, the related content of the video is presented normally again.
In some embodiments, the terminal may further receive a content encryption instruction triggered based on the smearing encryption control, and in response to the content encryption instruction, display a smearing track of the smearing operation by using the floating layer, and use the content covered by the smearing track as the encrypted content indicated by the content encryption instruction.
In practical implementation, smearing encryption may erase or block the target content by pen strokes or another covering manner to protect the content. When the encryption type for the target content is smearing encryption, the floating layer can be used to display the smear track corresponding to the smearing operation, that is, the shape and size of the floating layer are the same as those of the smear track. In this case, the floating layer does not cover the entire target content, but only the part of the target content subjected to the smearing operation. It should be noted that, when the media information is a video, the floating layer appears for the same length of time as the smeared content is displayed.
Illustratively, referring to fig. 10, clicking the "smearing encryption" function item shown by number 1 in the figure presents the smearing tool selection interface shown by number 2; in response to a selection operation for a target smearing tool (the "pen" in the figure), the image-text information presented in the media information is smeared to obtain the smear track shown by number 3 in the figure. When the smeared content is presented, the smear track shown by number 3 is presented directly, so that the smeared content is in an invisible state.
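A hypothetical sketch of how a smear track could define the floating layer: the smearing operation is recorded as a sequence of brush points, a mask is rendered by stamping a circular brush at each point, and only the smeared part of the frame is covered; the brush radius and mask representation are assumptions, not the patent's implementation.

```python
import numpy as np
from typing import List, Tuple

def render_smear_mask(height: int, width: int,
                      track: List[Tuple[int, int]], radius: int = 12) -> np.ndarray:
    """Return a boolean mask that is True where the smear track covers the frame."""
    mask = np.zeros((height, width), dtype=bool)
    yy, xx = np.mgrid[0:height, 0:width]
    for (x, y) in track:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return mask

def apply_smear_layer(frame: np.ndarray, mask: np.ndarray,
                      color=(32, 32, 32)) -> np.ndarray:
    """Cover the smeared pixels of a frame with the floating layer color."""
    covered = frame.copy()          # frame: (H, W, 3) uint8
    covered[mask] = color
    return covered
```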
In some embodiments, the terminal may further receive a content encryption instruction triggered by the box-based encryption control, and in response to the content encryption instruction, overlay the content included in the box with the floating layer.
In actual implementation, for target content covering a large area, the terminal can select the target content with a box through the box selection encryption control. In practical applications, a box selection tool is presented by triggering the box selection encryption control; the starting point of the box is then determined, the content selected by the box in response to the dragging of the box is taken as the target content, and when the dragging operation is completed, a content encryption instruction for the content included in the box is received, at which point the content included in the box is covered with the floating layer.
Exemplarily, referring to fig. 11, taking a video based on a presentation as the media information, the target content is a picture on the current presentation page. Triggering the "box encryption" control shown by number 1 presents the box (a regular quadrangle, which may also take other shapes) shown by number 2 in the presentation area; the starting point of the box is determined and the box is dragged so that part of the content of the current frame image of the video falls within the area formed by dragging the box. When the corresponding completion operation is triggered (the "√" function item shown by number 3), a content encryption instruction for the content selected in the box can be received, and the content in the box is covered with the floating layer; when a cancel operation is triggered (the "×" function item shown by number 3), the box selection encryption can be cancelled.
The content encryption instruction triggered by the box selection encryption control enables selection of content covering a large area, which improves the efficiency of acquiring the target content and thus the efficiency of the encryption operation.
In some embodiments, the terminal may further locate the encrypted target content by: marking the position of the target content in the media information, and displaying corresponding mark information; and in the process of showing other content different from the target content, when a trigger operation for the mark information is received, displaying the encrypted content, covered by the floating layer, indicated by the content encryption instruction.
In actual implementation, after the target content in the media information is encrypted, the encrypted target content may be edited again. For example, when the user finds that the encrypted target content is not accurate (e.g., too much or too little), the encrypted target content may be edited again. Therefore, in order to quickly locate the position of the encrypted target content, the position of the encrypted target content in the media information can be marked, and thus, the encrypted target content can be quickly located based on the relevant mark in the process of showing other content of the media information different from the target content. Wherein the other content may be regarded as content included in the media information other than the target content. In addition, different encrypted target contents may be marked by different marking patterns.
Exemplarily, referring to fig. 13, fig. 13 is a schematic diagram of position marks of target content provided by an embodiment of the present application, taking a video as the media information. In the figure, the positioning information a1-a2 and b1-b2 of two pieces of encrypted target content in the video are shown on the playing progress bar in different display styles. That is, during playback, when the playing reaches time point a1, a floating layer is presented to cover the corresponding content until time point a2, to indicate that the target content corresponding to a1-a2 is encrypted; after time point a2, the media information plays normally until time point b1, at which the floating layer is presented again to cover the corresponding content until time point b2, after which the media information plays normally again. If the user finds that the target content at a1-a2 covers more than what actually needs to be encrypted and the encrypted content at a1-a2 needs to be edited again, the user only needs to click the a1-a2 mark or directly adjust the playing progress bar of the video to time point a1 to locate the target content at a1-a2.
Marking the encrypted target content in this way allows the user to quickly locate its position, so that operations such as modifying the encrypted content can be performed; the marked content can be located quickly, improving the efficiency of secondary operations on the encrypted content.
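A minimal sketch, under the assumption that each encrypted target content is stored as a (start, end) range in seconds, of how the marks could be mapped onto the playing progress bar and used to locate encrypted content when clicked; the player.seek call is a hypothetical API.

```python
from typing import List, Tuple

def marks_in_pixels(encrypted_ranges: List[Tuple[float, float]],
                    duration_seconds: float, bar_width_px: int) -> List[Tuple[int, int]]:
    """Map encrypted time ranges onto a progress bar of the given pixel width."""
    return [(int(start / duration_seconds * bar_width_px),
             int(end / duration_seconds * bar_width_px))
            for start, end in encrypted_ranges]

def on_mark_clicked(player, encrypted_ranges: List[Tuple[float, float]], index: int) -> None:
    """Locate the encrypted target content by jumping to the start of the marked range."""
    start, _ = encrypted_ranges[index]
    player.seek(start)  # hypothetical player API
```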
In some embodiments, after the encrypted content indicated by the content encryption instruction has been covered with the floating layer, the terminal may also revoke the corresponding encryption operation by: in response to a revocation operation for the content encryption instruction, removing the floating layer covering the content and displaying the content in a visible state.
In practical implementation, after the encryption operation is performed on the target content, the encryption on the target content may also be revoked in response to a revocation operation on the content encryption instruction, that is, the overlay floating layer is removed, so that the target content is in a visible state.
Exemplarily, referring to fig. 14, fig. 14 is a schematic diagram of a revocation operation provided by an embodiment of the present application, where number 1 shows a "revoke previous step" control: clicking it revokes one encryption operation previously performed on the target content; number 2 shows a "revoke next step" control: clicking it revokes the next encryption operation performed on the target content; and clicking the "one-key revoke" control shown by number 3 revokes all encryption operations performed on the content of the media information during its presentation.
By providing different types of revocation controls, revocation of different granularities can be performed on encrypted content, and media information can be restored in time.
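The revocation controls described above can be modeled, as a rough sketch, by a stack of encryption operations: "previous revocation" pops the most recent operation, and "one-key revocation" clears them all, restoring the covered content to a visible state; the operation objects here are purely illustrative.

```python
from typing import List

class EncryptionHistory:
    def __init__(self) -> None:
        self._operations: List[dict] = []   # each dict describes one encryption operation

    def push(self, operation: dict) -> None:
        self._operations.append(operation)

    def undo_last(self) -> None:
        """Revoke the most recent encryption operation (remove its floating layer)."""
        if self._operations:
            self._operations.pop()

    def undo_all(self) -> None:
        """One-key revocation of every encryption performed in this session."""
        self._operations.clear()

    def active_operations(self) -> List[dict]:
        """Operations still in effect; their floating layers remain displayed."""
        return list(self._operations)
```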
In step 104, in response to a content generation instruction, target media information is generated, and when the encrypted content in the target media information is displayed, the content is controlled to be in an invisible state.
In actual implementation, after the encryption operation on the target content of the media information is completed, the target media information including the encrypted content may be generated, and the generated target media information may be stored for other users to perform related operations. When the target media information is presented again, the encrypted target content is in an invisible state. Thus, the security of the media information can be effectively improved.
In some embodiments, when the media information is a video, the target content is a frame image of the video, and the content encryption instruction indicates that the encrypted content is the target image content included in the frame image. The terminal may control the content to be in the invisible state by: in the process of playing the video, when a target video clip in the video is played and the target video clip comprises a plurality of frame images containing target image content, the target image content in each frame image is controlled to be in an invisible state.
In practical implementation, when the media information is a video, the media information is composed of a plurality of frame images with a time sequence relationship, and if the encrypted content indicated by the content encryption instruction received by the terminal is the target image content included in the frame images, the terminal needs to acquire all the frame images corresponding to the current target image content in the video, and in the process of playing the video, control all the frame images including the target image content to be in an invisible state, and shield corresponding voice information (that is, voice related to the target image content is in an inaudible state).
Illustratively, referring to fig. 15, fig. 15 is a schematic diagram of content units provided by an embodiment of the present application. The video shown in the figure is a conference record whose content units are frame images, and an annual summary presentation comprising 15 pages is shown in the video. The content to be encrypted is the content shown by number 2, which appears in the frame image at the time point "1:24" of the video. Since the content shown by number 2 appears on the 8th page of the presentation corresponding to the video, the starting time point and the ending time point of the presentation of the 8th page in the video are obtained. Assuming that the 8th page is presented in the video (whose playing time length is 3 minutes 45 seconds) from the starting time point "1:10" (the time point shown by number 3) to the ending time point "2:38" (the time point shown by number 4), the corresponding frame images in the period from "1:10" to "2:38" are in an invisible state during playing (i.e., shielded by the floating layer), and the voice in the period from "1:10" to "2:38" is shielded (i.e., inaudible).
In some embodiments, when the media information is a video, the terminal may mask an audio segment corresponding to the target video segment by: the terminal acquires audio clips corresponding to a plurality of frame images; encrypting the content of the audio clip to obtain an encrypted target audio clip; in the process of playing the video, when the target audio clip is played, the target audio clip is shielded.
In practical implementation, when the media information is a video and the target video segment, which includes a plurality of frame images containing the target image content, is controlled to be in an invisible state, the terminal also needs to encrypt the audio clip corresponding to these frame images (the target audio clip); when the target audio clip is reached while the video is playing, it is shielded directly. In practical application, when playing the target audio clip, the terminal can replace it with another audio clip of equal duration, or directly control the target audio clip to be in a mute state. In addition, the terminal may determine the target audio clip by means of speech recognition, and the encryption processing logic (technical implementation) for the target audio clip may be as follows: the target audio clip is compression-coded by combining PCM coding and RPE-LTP coding, and the compression-coded audio clip is output.
Illustratively, consider a video conference M with a duration of 36 minutes (including 48 presentation pages), in which the 3rd presentation page P is protected content (i.e., requires encryption). The frame image sequence Q of the 3rd page in the video M is determined; assuming it contains 25 frame images, the audio clip corresponding to these 25 frame images is determined, the obtained audio clip is compressed and encrypted, and masking is performed on the target audio clip.
In some embodiments, referring to fig. 16, fig. 16 is a flowchart of an audio segment masking process provided by an embodiment of the present application, and is described in conjunction with the steps shown in fig. 16.
Step 201, when the media information is a video, the terminal acquires the audio file in the video, and performs semantic recognition on the audio file to obtain recognized content.
In practical implementation, the terminal performs semantic understanding on the audio file corresponding to the video by means of voice recognition to obtain the voice content corresponding to the video. The terminal can recognize pronunciation phonemes in the audio file through a speech recognition model built on a GMM-HMM model, and find the Chinese characters (words) or words with the corresponding semantics in a dictionary to obtain the recognized content.
Step 202, based on the encrypted content indicated by the content encryption instruction, a search is performed in the recognized content to determine an audio clip matching the encrypted content indicated by the content encryption instruction.
In actual implementation, the terminal parses the content encryption instruction to obtain the encrypted content it indicates, retrieves that encrypted content in the voice content obtained in step 201, determines the specific position of the audio clip from the starting time point and the ending time point of the corresponding audio in the video, and takes that audio clip as the matched audio clip.
Step 203, encrypting the content of the audio clip to obtain the encrypted target audio clip.
In actual implementation, the terminal encrypts the obtained audio clip (i.e., performs a masking process on the audio clip). The implementation logic of encryption may be to perform compression coding processing on the target audio segment by combining PCM coding and RPE-LTP coding, and output the target audio segment after compression coding.
Step 204, in the process of playing the video, when the target audio clip is played, the target audio clip is shielded. In practical implementation, when the terminal plays the video and reaches the target audio clip, it can replace the target audio clip with another audio clip (for example, an advertisement audio, a busy tone, or a looped voice prompt indicating that the voice is shielded) to implement the shielding of the target audio clip; alternatively, the target audio clip can be controlled to be in a mute state while prompt information indicating that the audio clip is shielded is shown, to inform the user that the target audio clip has been encrypted. It should be noted that, when the image-text information in the video corresponding to the target audio clip is also encrypted content, the terminal controls the image-text information to be in an invisible state while the target audio clip is being shielded.
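Steps 201 to 204 can be sketched as follows, under the assumption that speech recognition has produced a word-level transcript with timings; the encrypted text is searched in the transcript to locate the matching audio clip, which is then masked, with muting used as a simple stand-in for the compression-coding based processing (PCM / RPE-LTP) described above. The Word structure and function names are assumptions for the example.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import numpy as np

@dataclass
class Word:
    text: str
    start_seconds: float
    end_seconds: float

def find_audio_clip(transcript: List[Word], encrypted_text: str) -> Optional[Tuple[float, float]]:
    """Step 202: locate the audio clip whose recognized words contain the encrypted content."""
    tokens = encrypted_text.split()
    if not tokens:
        return None
    words = [w.text for w in transcript]
    for i in range(len(words) - len(tokens) + 1):
        if words[i:i + len(tokens)] == tokens:
            return transcript[i].start_seconds, transcript[i + len(tokens) - 1].end_seconds
    return None

def mask_clip(samples: np.ndarray, sample_rate: int, clip: Tuple[float, float]) -> np.ndarray:
    """Steps 203-204: shield the matched clip so it is inaudible during playback."""
    start, end = clip
    masked = samples.copy()
    masked[int(start * sample_rate):int(end * sample_rate)] = 0
    return masked
```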
In some embodiments, when the terminal displays the content in the target media information, the terminal displays an encrypted prompt message, where the encrypted prompt message is used to prompt that the content corresponding to the current display position is encrypted.
In actual implementation, when the terminal displays the content in the target media information and reaches the encrypted target content, prompt information indicating that the content corresponding to the current display position has been encrypted can be displayed. The encrypted prompt information may be at least one of text, image or animation information, where the image or animation information may carry an advertising function, such as playing an animation related to the content of the media information; the embodiment of the present application does not limit the specific form of the encrypted prompt information.
Exemplarily, referring to fig. 17, fig. 17 is a schematic diagram of an encrypted hint provided by an embodiment of the present application, where the encrypted hint "the content is encrypted" is a text type encrypted hint shown by number 1 in the figure; the number 2 indicates the animation type information, which may be set by the publisher of the media information, and may be related advertisement for service promotion.
In some embodiments, the terminal may further perform the following target operations after generating the target media information: the terminal receives a target operation instruction which indicates to execute target operation on the target media information, wherein the target operation comprises one of the following operations: sharing operation, uploading operation and exporting operation; and responding to the target operation instruction, and executing the target operation on the target media information.
In actual implementation, for the generated target media file, operations such as sharing, uploading, exporting and the like for the target media information may be further performed, so that other users may perform other operations, such as viewing and the like, for the target media file.
For example, referring to fig. 18, fig. 18 is a schematic view of a target operation provided in an embodiment of the present application. After the target media file is generated, the user can click the "share" control shown by number 1 to share the target media information with other users having a social relationship with the current user; click the "upload" control shown by number 2 to save the target media file to the cloud; or click the "export" control shown by number 3 to save the target media file to the local terminal in a target format.
The target operation executed aiming at the target media information can effectively increase the applicable scenes of the target media information.
In some embodiments, before generating the target media information, the terminal may further determine the target object by: responding to the permission setting instruction, and displaying at least one object having a social relation with the current object; in response to an object selection operation for at least one object, the selected object is determined as a target object.
In actual implementation, when the target operation to be performed on the target media information has a corresponding target object, corresponding permissions may be set, before the target media information is generated, for one or more other objects having a social relationship with the current object, so that different objects respond differently to the target operation. For example, for a sharing operation on target media information containing encrypted content, different permissions may be set for the objects indicated by the sharing operation: when some objects show the target media information on their terminals, the encrypted target content remains in an invisible state, while when other objects show the target media information on their terminals, the encrypted target content may be in a visible state. That is, a white list or a black list may be set for the other objects having a social relationship with the current object.
For example, referring to fig. 19, fig. 19 is a schematic diagram of an object permission setting interface provided in an embodiment of the present application. The current object clicks the "permission setting" function item, and the permission setting interface shown in the figure is presented; number 1 in the interface shows a plurality of other objects having a social association relationship with the current object, and for each object shown by number 1 the current object can set whether the encrypted content in the media information can be presented normally. The current object adds "object 3", "object 5" and "object 7" to the permission white list, so that when object 3 displays the target media information corresponding to the media information, the encrypted content can be controlled to be in a visible state (i.e., presented normally).
Accordingly, in some embodiments, after generating the target media information, the terminal may further control the presentation state of the target media information as follows: when a first sharing instruction is received, where the first sharing instruction is used for indicating that the target media information is shared with the target object, the target media information is shared with a first terminal of the target object, so that when the first terminal displays the content in the target media information, the content is controlled to be in an invisible state; when a second sharing instruction is received, where the second sharing instruction is used for indicating that the target media information is shared with objects other than the target object among the at least one object, the target media information is shared with second terminals of the other objects, so that when a second terminal displays the content in the target media information, the content is controlled to be in a visible state.
In actual implementation, when the target operation for the target media information is a sharing operation: if the target object determined to receive the target media information is in the permission black list set by the current object for the media information, then when the target object displays the target media information through its terminal, the floating layer covering the target content is displayed, i.e., the target content is in an invisible state; when the target object is in the permission white list set by the current object for the media information, the target object displays the target media information through its terminal and the target content is displayed normally, i.e., the target content is in a visible state.
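A hypothetical sketch of the permission check applied at sharing time: objects on the permission white list receive the target media information with the encrypted content in a visible state, while all other objects see the floating layer; the object identifiers are illustrative.

```python
from typing import Set

def presentation_state_for(object_id: str, white_list: Set[str]) -> str:
    """Decide how the receiving terminal should present the encrypted content."""
    return "visible" if object_id in white_list else "invisible"

# Usage: the publisher added object_3, object_5 and object_7 to the white list.
white_list = {"object_3", "object_5", "object_7"}
assert presentation_state_for("object_3", white_list) == "visible"
assert presentation_state_for("object_1", white_list) == "invisible"
```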
By applying the above embodiment, the position of the target content in the media information is determined in various positioning modes during the display of the media information, which improves the efficiency of finding the target content and thus the efficiency of encryption; the target content is encrypted in response to a content encryption instruction indicating that all or part of the target content of the media information is to be encrypted, so that an encryption operation on part of the content of the media information can be realized, improving the accuracy of the encrypted content and of the encryption operation; and target media information is generated based on a content generation instruction, with the encrypted content controlled to be in an invisible state by floating layer covering while the target media information is displayed, so that the human-computer interaction experience can be improved while data security is ensured.
Next, the method for processing media information provided in the embodiments of the present application is described taking the terminal as the receiving end of the target media information. Referring to fig. 20, fig. 20 is a schematic flowchart of a method for processing media information according to an embodiment of the present application; the method includes:
step 301, a receiving end acquires target media information, wherein the target media information comprises a plurality of continuous content units, and part of content in the content units is encrypted by adopting a floating layer covering mode.
In actual implementation, the target media information acquired by the terminal comprises a plurality of continuous content units, part of whose content has been encrypted. Taking as an example that the target media information is a video containing encrypted content, the corresponding content units may be image frames, shots, scenes, etc.; taking as an example that the target media information is a multi-page document containing encrypted content, the content units may be paragraphs, pages, etc.
And step 302, responding to a display instruction aiming at the target media information, and displaying the content included in the target media information.
In practical implementation, the terminal deploys a client for playing the media information, and when receiving a display instruction for the target media information, the terminal displays the target media information in the media information interface. The manner of receiving the display instruction may be that the user triggers a display function item for displaying the media information.
And step 303, controlling the partial content to be in an invisible state when the encrypted partial content is displayed.
In practical implementation, when the terminal displays the encrypted partial content in the process of displaying the target media information, the floating layer is displayed to shield the encrypted content, namely, the partial content is controlled to be in an invisible state.
In some embodiments, the partial content may be controlled to be in a visible state by: displaying an identity verification function item; and in response to an identity verification operation for the current object triggered based on the identity verification function item, when the identity verification of the current object passes, switching the partial content from the invisible state to the visible state.
In actual implementation, because permission settings are made for the relevant objects (users) receiving the target media information when the target media information is generated, an object in the permission white list can have the encrypted content displayed normally when displaying the target media information through its terminal. That is, when the object displaying the target media information is in the permission white list, the target media information can be displayed normally. In practical applications, to further ensure the security of the media information, the receiving end may verify the identity of its current object before displaying the media information; the specific verification may be an underlying check based on user authorization or a check based on user-entered information. When the identity verification passes, the target media information can be displayed normally, that is, the partial content is switched from the invisible state to the visible state.
Exemplarily, referring to fig. 21, fig. 21 is a schematic diagram of an identity verification interface provided in an embodiment of the present application. Number 1 in the figure shows an object for which the media information corresponding to "2020 conference end" is shown on the corresponding terminal; when the content corresponding to the time point "1:18" is shown, a floating layer is displayed to shield the related content and prompt that "the content is encrypted". At this point, the object shown by number 1 can perform the corresponding identity verification by clicking the "identity verification" function item shown by number 2 in the display interface; clicking it presents the verification interface shown by number 3, that is, by scanning the two-dimensional code in the verification interface, identity confirmation information is sent to the publisher of the media information (whose avatar is shown by number 4). After the identity verification passes (the publisher has a social association relationship with the object shown by number 1), the content corresponding to the time point "1:18" is controlled to be in a visible state (i.e., the normally presented content shown by number 4). When the identity verification fails, a message prompt interface may pop up, stating "No permission to view this content; please contact the publisher."
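A minimal sketch of the receiving-end identity verification flow: the current object triggers the identity verification function item, an authorization check (here an abstract callback, not a concrete API) decides whether the object is permitted, and only on success is the partial content switched from the invisible to the visible state.

```python
from typing import Callable

def on_identity_verification(current_object: str,
                             is_authorized: Callable[[str], bool],
                             reveal_content: Callable[[], None],
                             show_denied_prompt: Callable[[], None]) -> None:
    """Switch the encrypted partial content to a visible state only if verification passes."""
    if is_authorized(current_object):
        reveal_content()          # remove the floating layer for this viewer
    else:
        show_denied_prompt()      # e.g. "No permission to view; please contact the publisher"
```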
Verifying the identity of the object that receives the target media information in this way can prevent the encrypted content from being propagated and leaked, improving the security of the encrypted content.
By applying this embodiment, when target media information containing protected content is displayed, the corresponding identity verification is performed on the target object, so that target objects can be accurately screened, the protected content can be prevented from being propagated and leaked, and the security of the protected content is improved.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In the related art, video protection encryption falls into three schemes: anti-download, anti-screen-recording and anti-hotlinking (anti-theft chain). The anti-download scheme mainly relies on file slicing, i.e., the uploaded video is cut into numerous small segments and a different encryption algorithm is applied to each segment. With such encryption, even if the video is downloaded it cannot be played back completely, because the order of the video's critical data has been disturbed.
For the anti-screen-recording scheme, measures such as displaying a marquee and prohibiting the browser from recording the screen are mainly adopted; although screen recording cannot be prevented completely, these measures greatly increase the difficulty of recording for pirates. In addition, an educational institution or an individual teacher can add the institution's mark or trademark to the live video, so that even a pirated copy still publicizes the institution; it should be noted that the trademark and institution mark must be clear and must not be omitted. The anti-hotlinking scheme mainly restricts the identity of the watching user and effectively curbs pirated videos through custom functions of the live-broadcast backend, such as white list setting, identity-verified watching and paid watching. Meanwhile, to prevent anyone among the watching users from pirating the live video, the users' identities need to be checked; that is, the video generates a different address every millisecond and each address is allowed to be watched only once, preventing the video from being illegally spread during playing.
In these schemes, the related video encryption technologies target only the entire video file and cannot encrypt or smear partial information of the content within the video file; in addition, these technologies cannot locate specific video content through the key information mentioned during the conference, and therefore cannot implement smearing protection of partial information.
Based on this, an embodiment of the present application provides a method for processing media information, which protects video information in an encrypted smearing manner, where the video information may be a conference record, a teacher's classroom video, and the like. In a live conference scene, by recognizing voice keywords, or subsequently in the video before the user generates the cloud storage record, the video content or the image-text information appearing in the video can be encrypted and smeared; finally a new video record is synthesized again and uploaded to the cloud, so that the user can protect the content information of the video during subsequent transmission/review.
Next, a usage scenario and the processing flow of the processing method of media information are explained. When the user uses video conference (live broadcast) software, keywords mentioned during the conference, such as "secret", "important" and "key", as well as the semantics of what is communicated (for example, a participant saying that certain sensitive data cannot be uploaded), can be recognized. After the video conference ends, the user can locate the key pages where these keywords appear through the generated video record temporarily stored in the cloud, and directly encrypt or smear part of the information in the entire video page of the current frame; alternatively, the user can smear the page or partial information to be encrypted by adjusting the progress bar of the video conference record. During encryption, the user can undo each encryption/smearing step individually or undo all of them with one key. Finally, clicking the record generation button stores the current video conference record in the cloud again (if the user performs no operation, the record is stored directly in the cloud within a limited period); after being stored in the cloud, the video conference record can be transmitted or shared.
Next, the interaction flow of the processing method of media information is explained from the product side. Referring to fig. 22, fig. 22 is a flowchart of a processing interface of media information provided in an embodiment of the present application. The video played in sub-figure (1) is a video conference record that has been temporarily stored in the cloud; the video pauses at the position of a keyword spoken by the user during the conference, or is automatically located at a pause position according to semantic understanding of the conference, and the smearing position can also be confirmed by dragging the video playing progress bar. Sub-figure (2) shows that after the user clicks the "encrypt" button, the video file is controlled to be in an encryptable state, and the "one-key encryption" (the automatic encryption described above) and "smearing encryption" function items are presented in the video playing interface. In response to a click operation on the "one-key encryption" button, the presentation page to which the content at the current position in the video belongs is encrypted, and all frame pictures related to the encrypted presentation page are encrypted; at this moment the "undo" state is activated, and the user can revoke the operation by returning to the previous step. After the user clicks "finish", the video file is encrypted successfully and a "cancel" button appears; clicking it revokes with one key the encryption performed before. The user can also click "save encrypted" to save the current video record. The user may then drag the playing progress bar of the video to another playing time point "01:18" (another located position) to start encryption again: in response to another click operation on the "encrypt" button as in sub-figure (2), the video information is controlled to be in an encryptable state, and the "one-key encryption" and "smearing encryption" function items are presented in the playing interface. Sub-figure (4) shows that, in response to a click operation on the "smearing encryption" button, a "pencil" icon is presented, which can be clicked or dragged to manually smear over important information; in response to a click operation on the "finish" button, the video information is encrypted successfully and a "cancel" button is presented, and in response to a click operation on the "cancel" button, the encryption operation on the video information can be revoked with one key; in addition, in response to a click operation on "save encrypted", the current video information can be saved. In response to the click operation on "save encrypted", the encrypted video information (also called target video information) is generated and stored in the cloud, after which operations such as sharing and exporting can be performed on it. During sharing, the user can select which users have viewing permission, and can set an open permission for people who do not need confidentiality, i.e., turn off encryption for them.
Next, from the aspect of technical implementation, a hardware environment, implementation logic, and a data processing flow of the media information processing method provided in the embodiment of the present application are described. Referring to fig. 23, fig. 23 is a schematic diagram of a media information processing method according to an embodiment of the present application.
Firstly (the first step in the figure), in the video conference (live broadcast) client of any mobile terminal or desktop terminal, keywords such as "secret", "important" and "key" mentioned by a speaker during the live video conference are automatically recognized through an intelligent voice recognition technology. After the initial video conference record is generated, the key video page (one or more continuous frame images) where the keyword is located is automatically found through Artificial Intelligence (AI); in addition, key frames of the pages where the key information is located are selected through semantic understanding and recognition performed by the AI algorithm over the whole conference video. After the corresponding keyword is recognized, the time sequence information is modeled using the HMM in the acoustic model GMM-HMM; given a state of the HMM, the GMM models the probability distribution of the speech feature vectors belonging to that state. Pronunciation is then recognized: for Chinese this is the correspondence between pinyin and Chinese characters, and for English the correspondence between phonetic symbols and words. According to the phonemes recognized by the acoustic model, the corresponding Chinese characters (words) or words are found in a dictionary, which bridges and connects the acoustic model and the language model; the video segment in which the keyword is spoken is thus identified in the video information of the generated conference record and located, and the display pauses at the current key page (i.e., jumps to the key frame corresponding to the keyword).
Secondly (the second step in the figure), the key page in the video information is converted into a plurality of frames of pictures with a time sequence relationship, and the playing progress bar of the video information is adjusted to the time point of the key page. For encryption of a complete page, the pictures can be encrypted directly according to an encryption algorithm, and the image-text information such as pictures, presentations and annotations belonging to the same page in the live video conference is automatically combined for encryption. Specifically, a scalable and lightweight video encryption algorithm based on H.264/AVC (a video compression format) is used; it is designed for the diversity of video application scenes, security requirements and computing resources, and can satisfy most media application platforms. Considering the mutual constraints among factors such as the security of the encryption algorithm, the encryption speed and the compression ratio, key data such as the intra-frame prediction mode, the motion vector difference and the luminance quantized transform coefficients are encrypted with a standard encryption algorithm. The algorithm can achieve multiple levels of encryption from low to high according to the security requirement level and the computing resource level in practical applications; the compression ratio does not change much, the computational complexity is relatively low, and the algorithm has good operability. In the encryption process, single frames are encrypted and identical frames can be combined and encrypted; an encrypted frame is displayed as "the content is encrypted", or an encryption watermark over the complete page is provided, informing the user through text and graphics that the content is encrypted.
In addition, to encrypt only part of the information on a key page, any characters and graphics on the pictures, presentation slides and annotations can be erased or blocked by real-time smearing, using strokes or other encrypted covering modes, so as to protect their content. The essence of smearing is video completion: the information that should not be displayed is replaced by content synthesized for that region. The completion method is based on optical flow; colour and optical flow are synthesized jointly, and colour is propagated along the optical-flow trajectories, which improves the temporal continuity of the video, relieves the memory problem and enables high-resolution output. With this optical-flow-based method, the edges of moving objects are first extracted and completed, and the optical flow is then completed using the flow edges as guidance. Since not all missing regions in a video can be completed in this way, researchers have introduced non-local optical flow, which allows video content to propagate across motion boundaries.
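For the simpler "block" form of smearing (covering a region rather than flow-guided completion), a per-frame mask can be applied before re-encoding. The sketch below is a minimal illustration using numpy arrays as frames; it does not implement the optical-flow completion described above, and the region, fill colour and names are assumptions.

```python
# Minimal sketch of the "block" form of smearing: the selected region of each
# frame is covered with an opaque fill before the video is re-encoded.
import numpy as np

def smear_region(frames: list[np.ndarray], box: tuple[int, int, int, int],
                 fill=(128, 128, 128)) -> list[np.ndarray]:
    """Cover box=(y0, y1, x0, x1) in every frame with a flat fill colour."""
    y0, y1, x0, x1 = box
    out = []
    for f in frames:
        g = f.copy()
        g[y0:y1, x0:x1] = fill   # broadcast fill colour over the region
        out.append(g)
    return out

frames = [np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(3)]
masked = smear_region(frames, box=(100, 200, 300, 500))
```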
Meanwhile, the voice information in the video needs to be encrypted as well. Specifically, speech recognition is performed on the voice information to obtain the speech content; for the content of a whole page, the corresponding voice is masked, while for a single piece of information (for example a single point of information or a short message) the data itself is encrypted: the speech is compression-coded through the mobile terminal and the communication network, with PCM coding followed by RPE-LTP coding in the vocoder, and the compression-coded speech is finally output.
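Masking the corresponding speech can be as simple as zeroing the samples inside the located time ranges before the audio track is re-encoded. A minimal sketch, assuming mono PCM samples held in a numpy array; the sample rate, ranges and names are illustrative.

```python
# Minimal sketch of voice masking: samples falling inside the time ranges of the
# protected speech are zeroed (muted) before the audio track is re-encoded.
import numpy as np

def mute_ranges(samples: np.ndarray, sr: int, ranges: list[tuple[float, float]]) -> np.ndarray:
    out = samples.copy()
    for start_s, end_s in ranges:
        out[int(start_s * sr):int(end_s * sr)] = 0
    return out

sr = 16_000
samples = np.random.randn(sr * 60).astype(np.float32)   # one minute of placeholder audio
masked = mute_ranges(samples, sr, [(10.2, 10.8)])        # mute the keyword span
```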
Thirdly (the third step in the figure), after part of the information in the video content has been encrypted or smeared, the result of this real-time processing is re-integrated into a new video record. Encryption can be revoked either by stepping back to the previous step or by revoking all encryption with one click, and the encrypted video information can then be stored in the cloud again.
Finally (the fourth step in the figure), for the sharing of the generated encrypted video information, either encrypted sharing or non-encrypted sharing can be selected; during sharing, the encrypted video information stored in the cloud is encoded and converted into a corresponding link. For a target object (user) to whom the sharing operation grants an open right, the video with the encryption revoked is converted into a link; the specific revocation is the reverse of the coding performed when the content was encrypted.
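One way to turn the cloud-stored recording into a shareable link is to encode a record identifier and a permission flag into a signed token. The sketch below is a hypothetical illustration only: the token format, the secret, the URL and the flag names are assumptions, not the encoding scheme of any particular service.

```python
# Minimal sketch of building a share link that carries a permission flag:
# "open"   -> the encrypted content is revoked (decoded) before sharing,
# "locked" -> the encrypted content stays hidden on the receiving side.
import base64, hashlib, hmac

SECRET = b"server-side-secret"          # placeholder secret

def make_share_link(record_id: str, permission: str) -> str:
    payload = f"{record_id}:{permission}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    token = base64.urlsafe_b64encode(payload).decode().rstrip("=")
    return f"https://example.com/share/{token}?sig={sig}"

print(make_share_link("rec-20220607-001", "open"))
print(make_share_link("rec-20220607-001", "locked"))
```

Signing the token keeps the permission flag tamper-evident, so the receiving side can trust whether the encrypted content should remain hidden.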
By applying the embodiment of the application, part of the information in the video file content, including but not limited to image-and-text information such as pictures, presentation slides and annotations, can be encrypted or smeared, so that the video recording is protected from being stolen or misused in subsequent transmission and sharing.
According to the embodiment of the application, by encrypting and smearing the generated video conference record both during and after the conference, part of the information in the video recording temporarily stored in the cloud, including but not limited to image-and-text information such as pictures, presentation slides and annotations, is encrypted or smeared, so that after the recording is stored in the cloud again, the video recording information is protected from being stolen or misused during transmission and sharing.
The embodiments of the present application involve related data such as user information. When the embodiments of the present application are applied to specific products or technologies, user permission or consent must be obtained, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
Continuing with the exemplary structure, implemented as software modules, of the media information processing device 555 provided by the embodiments of the present application, in some embodiments, as shown in fig. 2A, the software modules of the media information processing device 555 stored in the memory 540 may include:
the display module 5551 is configured to display target content of the media information in a media information display interface, where the target content is a part of content included in the media information;
a receiving module 5552, configured to receive a content encryption instruction, where the content encryption instruction is used to instruct to encrypt all or part of the target content;
the covering module is used for responding to the content encryption instruction and covering the encrypted content indicated by the content encryption instruction by adopting a floating layer;
the generating module 5553 is configured to generate target media information in response to a content generating instruction, and control the content to be in an invisible state when the content in the target media information is displayed.
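One possible arrangement of these modules as plain Python classes is sketched below; the class and method names simply mirror the module descriptions above and are assumptions for illustration, not the actual interfaces of device 555.

```python
# Illustrative structure only: the software modules of device 555 mirrored as
# plain classes. Names and signatures are assumptions, not an actual API.
class DisplayModule:
    def show_target_content(self, media_info, target_content):
        """Present target_content (a part of media_info) in the display interface."""
        ...

class ReceivingModule:
    def on_content_encryption_instruction(self, instruction):
        """Receive an instruction to encrypt all or part of the target content."""
        ...

class OverlayModule:
    def cover_with_floating_layer(self, content):
        """Cover the content indicated by the encryption instruction with a floating layer."""
        ...

class GeneratingModule:
    def generate_target_media(self, media_info, encrypted_regions):
        """Generate target media information in which encrypted regions stay invisible."""
        ...
```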
In some embodiments, the presentation module is further configured to present a content search area and a media presentation area in the media information presentation interface, display at least one piece of recommended content in the content search area, and present the content of the media information in the media presentation area, where the recommended content is a recommended portion of the content included in the media information; and, in the process of presenting the content, in response to a selection operation for a target recommended content among the at least one piece of recommended content, to jump the presented content to first content that includes the target recommended content and take the first content as the target content.
In some embodiments, the presentation module is further configured to present, in an associated area of each recommended content in the content search area, location information of the corresponding recommended content; the position information is used for indicating the display position of the recommended content in the media information.
In some embodiments, the presentation module is further configured to present the content of the media information in a media information presentation interface and display at least one keyword; in the process of displaying the content, responding to the selection operation aiming at a target keyword in the at least one keyword, jumping to second content from the displayed content, and taking the second content as the target content; and the second content is obtained by searching the content included in the media information based on the target keyword.
In some embodiments, the presentation module is further configured to present the content of the media information and display a content search function item in a media information presentation interface; in the process of displaying the content, responding to a search instruction for input content triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as the target content; the third content is obtained by searching the content included in the media information based on the input content.
In some embodiments, the presentation module is further configured to present a content search area and a media presentation area in a media information presentation interface, display at least one content thumbnail in the content search area, and present the content of the media information in the media presentation area; the content thumbnail is a thumbnail of a content unit of the media information; in the process of displaying the content, responding to the selection operation of a target content thumbnail in the at least one content thumbnail, jumping to the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
In some embodiments, when the media information is a video, the display module is further configured to play the content of the video in the media information display interface and to display a play progress bar indicating the play progress of the video; and, in the process of displaying the content, in response to a progress adjustment operation triggered based on the play progress bar, to skip the displayed content to fifth content determined by the progress adjustment operation and take the fifth content as the target content.
In some embodiments, the overlay module is further configured to mark the position of the target content in the media information and display corresponding mark information; and, in the process of displaying other content different from the target content, when a trigger operation for the mark information is received, to display the content indicated as encrypted by the content encryption instruction, covered by a floating layer.
In some embodiments, the receiving module is further configured to display an automatic encryption control in the media information presentation interface, and to receive a content encryption instruction in response to a triggering operation for the automatic encryption control.
Accordingly, in some embodiments, the overlay module is further configured to automatically cover the entire target content with a floating layer in response to the content encryption instruction.
In some embodiments, the receiving module is further configured to display a smearing encryption control in a media information presentation interface; and receiving a content encryption instruction in response to the smearing operation triggered based on the smearing encryption control.
In some embodiments, the overlay module is further configured to, in response to the content encryption instruction, display a smearing track of the smearing operation in a floating layer, and use content overlaid by the smearing track as the encrypted content indicated by the content encryption instruction.
In some embodiments, the overlay module is further configured to display an icon corresponding to at least one smearing tool in response to a trigger operation for the smearing encryption control; to display, in response to a selection operation for a target icon among the at least one icon, a target smearing tool corresponding to the target icon; and to receive the smearing operation triggered based on the target smearing tool.
In some embodiments, the receiving module is further configured to display a box encryption control in the media information presentation interface; controlling the target content to be in an editing state in response to a triggering operation for the box selection encryption control; in response to a content selection operation for the target content in the editing state, displaying a selection box including the selected content; and receiving a content encryption instruction aiming at the content included in the selection box.
Accordingly, in some embodiments, the overlay module is further configured to overlay the content included in the box with a floating layer in response to the content encryption instruction.
In some embodiments, when the media information is a video, the target content is a frame image of the video, and the encrypted content indicated by the content encryption instruction is target image content included in the frame image; the generating module is further configured to, in the process of playing the video, control the target image content in each frame image to be in an invisible state when playback reaches a target video clip of the video that includes a plurality of frame images containing the target image content.
In some embodiments, the overlay module is further configured to obtain audio segments corresponding to the plurality of frame images; encrypting the content of the audio clip to obtain an encrypted target audio clip; and in the process of playing the video, shielding the target audio clip when the target audio clip is played.
In some embodiments, the overlay module is further configured to, when the media information is a video, obtain an audio file in the video, and perform semantic recognition on the audio file to obtain a recognized content; based on the encrypted content indicated by the content encryption instruction, retrieving in the identified content to determine an audio clip matching the encrypted content indicated by the content encryption instruction; encrypting the content of the audio clip to obtain an encrypted target audio clip; and in the process of playing the video, shielding the target audio clip when the target audio clip is played.
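The retrieval step described here can be illustrated by searching a timestamped transcript of the recognized audio for the overlaid text and returning the matching time spans, which can then be muted in the audio track (for example with a routine like the muting sketch earlier). A minimal sketch; the data structures and names are assumptions for illustration.

```python
# Minimal sketch of locating the audio clip that matches the encrypted content:
# the recognized transcript (with timestamps) is searched for the overlaid text,
# and the matching spans are returned as (start_s, end_s) ranges for masking.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    start_s: float
    end_s: float

def match_encrypted_text(transcript: list[Segment], encrypted_text: str) -> list[tuple[float, float]]:
    needle = encrypted_text.lower()
    return [(s.start_s, s.end_s) for s in transcript if needle in s.text.lower()]

transcript = [Segment("the quarterly revenue target is confidential", 42.0, 46.5)]
print(match_encrypted_text(transcript, "revenue target"))   # [(42.0, 46.5)]
```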
In some embodiments, the overlay module is further configured to display encryption prompt information when the content in the target media information is presented, where the encryption prompt information is used for prompting that the content corresponding to the current display position has been encrypted.
In some embodiments, the apparatus for processing media information further comprises an execution module, configured to receive a target operation instruction indicating a target operation to be performed on the target media information, where the target operation includes one of: sharing operation, uploading operation and exporting operation; and responding to the target operation instruction, and executing the target operation on the target media information.
In some embodiments, before generating the target media information in response to the content generation instruction, the generating module is further configured to display, in response to a permission setting instruction, at least one object having a social relationship with the current object, and to determine the selected object as a target object in response to an object selection operation for the at least one object.
Accordingly, in some embodiments, after generating the target media information, the generating module is further configured to: when a first sharing instruction is received, the first sharing instruction instructing that the target media information be shared to the target object, share the target media information to a first terminal of the target object, so that the first terminal controls the content to be in an invisible state when displaying the content in the target media information; and, when a second sharing instruction is received, the second sharing instruction instructing that the target media information be shared to objects other than the target object among the at least one object, share the target media information to second terminals of the other objects, so that the second terminals control the content to be in a visible state when displaying the content in the target media information.
In some embodiments, the overlay module is further configured to remove a floating layer overlaid on the content and display the content in a visible state in response to a revocation operation with respect to the content encryption instruction.
In some embodiments, as shown in FIG. 2B, the software modules stored in the media information processing device 556 of the memory 540 may include:
an obtaining module 5561, configured to obtain target media information, where the target media information includes a plurality of continuous content units, and a part of content in the content units is encrypted in a floating layer overlay manner;
an information presentation module 5562, configured to present content included in the target media information in response to a presentation instruction for the target media information;
a control module 5563, configured to control the partial content to be in an invisible state when the encrypted partial content is presented.
In some embodiments, the control module is further configured to display an identity check function item; and in response to the identity verification operation for the current object triggered based on the identity verification function item, controlling the partial content to be switched from the invisible state to the visible state when the identity verification for the current object passes.
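On the viewing side, the identity check acts as a gate that switches the encrypted part from invisible to visible. A minimal sketch; the check itself is a placeholder (a real system would verify credentials server-side) and all names are illustrative.

```python
# Minimal sketch of the viewing-side gate: encrypted parts stay invisible until
# the current object passes an identity check.
class EncryptedRegionView:
    def __init__(self, ciphertext: bytes):
        self.ciphertext = ciphertext
        self.visible = False          # invisible state by default

    def render(self) -> str:
        return "<decrypted content>" if self.visible else "[content encrypted]"

def on_identity_check(view: EncryptedRegionView, identity_ok: bool) -> None:
    """Switch the partial content from invisible to visible only if the check passes."""
    if identity_ok:
        view.visible = True

region = EncryptedRegionView(b"...")
print(region.render())            # [content encrypted]
on_identity_check(region, identity_ok=True)
print(region.render())            # <decrypted content>
```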
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the method for processing the media information according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, will cause the processor to perform a method for processing media information provided by embodiments of the present application, for example, the method for processing media information as shown in fig. 3.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, according to the embodiment of the present application, encryption of part of content in media information can be achieved, so that accuracy of encryption is improved, and encryption efficiency is further improved.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (25)

1. A method for processing media information, the method comprising:
displaying target content of media information in a media information display interface, wherein the target content is part of content included in the media information;
receiving a content encryption instruction, wherein the content encryption instruction is used for indicating that the target content is completely or partially encrypted;
in response to the content encryption instruction, overlaying the encrypted content indicated by the content encryption instruction with a floating layer;
and responding to a content generation instruction, generating target media information, and controlling the content to be in an invisible state when the content in the target media information is displayed.
2. The method of claim 1, wherein presenting the target content of the media information in the media information presentation interface comprises:
displaying a content search area and a media display area in a media information display interface, displaying at least one piece of recommended content in the content search area, and displaying the content of the media information in the media display area;
the recommended content is part of content included in the recommended media information;
in the process of presenting the content, responding to a selection operation aiming at a target recommended content in the at least one piece of recommended content, skipping the presented content to a first content comprising the target recommended content, and taking the first content as the target content.
3. The method of claim 2, wherein the method further comprises:
displaying the position information of the corresponding recommended content in the association area of each recommended content in the content searching area;
the position information is used for indicating the display position of the recommended content in the media information.
4. The method of claim 1, wherein presenting the target content of the media information in the media information presentation interface comprises:
displaying the content of the media information in a media information display interface, and displaying at least one keyword;
in the process of displaying the content, responding to the selection operation of a target keyword in the at least one keyword, skipping the displayed content to a second content, and taking the second content as the target content;
and the second content is obtained by searching the content included in the media information based on the target keyword.
5. The method of claim 1, wherein presenting the target content of the media information in the media information presentation interface comprises:
displaying the content of the media information in a media information display interface, and displaying a content search function item;
in the process of displaying the content, responding to a search instruction for input content triggered based on the content search function item, skipping the displayed content to third content, and taking the third content as the target content;
the third content is obtained by searching the content included in the media information based on the input content.
6. The method of claim 1, wherein presenting the target content of the media information in the media information presentation interface comprises:
displaying a content search area and a media display area in a media information display interface, displaying at least one content thumbnail in the content search area, and displaying the content of the media information in the media display area;
the content thumbnail is a thumbnail of a content unit of the media information;
in the process of displaying the content, responding to the selection operation of a target content thumbnail in the at least one content thumbnail, skipping the displayed content to fourth content corresponding to the target content thumbnail, and taking the fourth content as the target content.
7. The method of claim 1, wherein presenting the target content of the media information in the media information presentation interface comprises:
when the media information is a video, playing the content of the video in the media information display interface, and displaying a playing progress bar for indicating the playing progress of the video;
in the process of displaying the content, responding to a progress adjustment operation triggered based on the playing progress bar, skipping the displayed content to a fifth content adjusted by the progress adjustment operation, and taking the fifth content as the target content.
8. The method of claim 1, wherein after covering the encrypted content indicated by the content encryption instruction with a floating layer, the method further comprises:
marking the position of the target content in the media information, and displaying corresponding marking information;
in the process of displaying other content different from the target content, when a trigger operation aiming at the mark information is received, displaying the content which is indicated to be encrypted by the content encryption instruction covered by a floating layer.
9. The method of claim 1, wherein receiving content encryption instructions comprises:
displaying an automatic encryption control in a media information display interface;
receiving a content encryption instruction in response to a triggering operation for the automatic encryption control;
the responding to the content encryption instruction, and covering the encrypted content indicated by the content encryption instruction with a floating layer, including:
and automatically adopting a floating layer to cover the whole content of the target content in response to the content encryption instruction.
10. The method of claim 1, wherein the receiving content encryption instructions comprises:
displaying a smearing encryption control in a media information display interface;
and receiving a content encryption instruction in response to the smearing operation triggered based on the smearing encryption control.
11. The method of claim 10, wherein the covering, in response to the content encryption instruction, the encrypted content indicated by the content encryption instruction with a floating layer comprises:
and responding to the content encryption instruction, displaying a smearing track of the smearing operation by adopting a floating layer, and taking the content covered by the smearing track as the encrypted content indicated by the content encryption instruction.
12. The method of claim 10, wherein the method further comprises:
responding to the triggering operation aiming at the smearing encryption control, and displaying an icon corresponding to at least one smearing tool;
responding to the selection operation of a target icon in at least one icon, and displaying a target smearing tool corresponding to the target icon;
and receiving the smearing operation triggered based on the target smearing tool.
13. The method of claim 1, wherein receiving content encryption instructions comprises:
displaying a box selection encryption control in a media information display interface;
controlling the target content to be in an editing state in response to a triggering operation for the box selection encryption control;
in response to a content selection operation for the target content in the editing state, displaying a selection box including the selected content;
and receiving a content encryption instruction aiming at the content included in the selection box.
14. The method according to claim 1, wherein when the media information is a video, the target content is a frame image of the video, and the content encryption instruction indicates that the encrypted content is the target image content included in the frame image;
the controlling the content to be in an invisible state when the content in the target media information is shown comprises:
in the process of playing the video, when a target video clip in the video is played and the target video clip comprises a plurality of frame images containing the target image content, controlling the target image content in each frame image to be in an invisible state.
15. The method of claim 14, wherein prior to generating the target media information in response to the content generation instructions, the method further comprises:
acquiring audio clips corresponding to the plurality of frame images;
encrypting the content of the audio clip to obtain an encrypted target audio clip;
and shielding the target audio clip when the target audio clip is played in the process of playing the video.
16. The method of claim 1, wherein when the media information is video, the method further comprises:
acquiring an audio file in the video, and performing semantic recognition on the audio file to obtain recognized content;
based on the encrypted content indicated by the content encryption instructions, retrieving from the identified content to determine an audio clip that matches the encrypted content indicated by the content encryption instructions;
encrypting the content of the audio clip to obtain an encrypted target audio clip;
and shielding the target audio clip when the target audio clip is played in the process of playing the video.
17. The method of claim 1, wherein the method further comprises:
displaying encrypted prompt information when the content in the target media information is displayed;
and the encryption prompting information is used for prompting that the content corresponding to the current display position is encrypted.
18. The method of claim 1, wherein prior to generating the target media information in response to the content generation instructions, the method further comprises:
responding to the permission setting instruction, and displaying at least one object having a social relation with the current object;
determining the selected object as a target object in response to an object selection operation for the at least one object;
after the generating the target media information, the method further comprises:
when a first sharing instruction is received, wherein the first sharing instruction is used for indicating that the target media information is shared to the target object, the target media information is shared to a first terminal of the target object, so that the first terminal controls the content to be in an invisible state when the content in the target media information is displayed;
when a second sharing instruction is received, the second sharing instruction being used for indicating that the target media information is shared to objects other than the target object among the at least one object, sharing the target media information to second terminals of the other objects, so that the second terminals control the content to be in a visible state when the content in the target media information is displayed.
19. The method of claim 1, wherein after covering the encrypted content indicated by the content encryption instruction with a floating layer, the method further comprises:
in response to a revocation operation directed to the content encryption instruction, removing a floating layer overlaid on the content and displaying the content in a visible state.
20. A method for processing media information, the method comprising:
acquiring target media information, wherein the target media information comprises a plurality of continuous content units, and partial content in the content units is encrypted in a floating layer covering mode;
in response to a display instruction for the target media information, displaying content included in the target media information;
controlling the partial content to be in an invisible state when the encrypted partial content is presented.
21. The method of claim 20, wherein the method further comprises:
displaying an identity verification function item;
and in response to the identity verification operation for the current object triggered based on the identity verification function item, controlling the partial content to be switched from the invisible state to the visible state when the identity verification for the current object passes.
22. An apparatus for processing media information, the apparatus comprising:
the display module is used for displaying target content of the media information in a media information display interface, wherein the target content is part of content included in the media information;
a receiving module, configured to receive a content encryption instruction, where the content encryption instruction is used to instruct to perform full or partial encryption on the target content;
the covering module is used for responding to the content encryption instruction and covering the encrypted content indicated by the content encryption instruction by adopting a floating layer;
and the generating module is used for responding to a content generating instruction, generating target media information and controlling the content to be in an invisible state when the content in the target media information is displayed.
23. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the method of processing media information of any of claims 1 to 21 when executing the executable instructions stored in the memory.
24. A computer-readable storage medium storing executable instructions, wherein the executable instructions when executed by a processor implement the method for processing media information according to any one of claims 1 to 21.
25. A computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the method of processing media information according to any one of claims 1 to 21.
CN202210638029.7A 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium Active CN115134635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210638029.7A CN115134635B (en) 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115134635A true CN115134635A (en) 2022-09-30
CN115134635B CN115134635B (en) 2024-04-19

Family

ID=83377858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210638029.7A Active CN115134635B (en) 2022-06-07 2022-06-07 Media information processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115134635B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103442300A (en) * 2013-08-27 2013-12-11 Tcl集团股份有限公司 Audio and video skip playing method and device
CN106485173A (en) * 2015-08-25 2017-03-08 腾讯科技(深圳)有限公司 Sensitive information methods of exhibiting and device
US20180189461A1 (en) * 2016-12-31 2018-07-05 Entefy Inc. System and method of applying multiple adaptive privacy control layers to encoded media file types
CN110719527A (en) * 2019-09-30 2020-01-21 维沃移动通信有限公司 Video processing method, electronic equipment and mobile terminal
CN110881033A (en) * 2019-11-07 2020-03-13 腾讯科技(深圳)有限公司 Data encryption method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN115134635B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
WO2021114881A1 (en) Intelligent commentary generation method, apparatus and device, intelligent commentary playback method, apparatus and device, and computer storage medium
CN101639943B (en) Method and apparatus for producing animation
CN112822542A (en) Video synthesis method and device, computer equipment and storage medium
CN112135160A (en) Virtual object control method and device in live broadcast, storage medium and electronic equipment
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
CN111800668B (en) Barrage processing method, barrage processing device, barrage processing equipment and storage medium
CN112423081B (en) Video data processing method, device and equipment and readable storage medium
US20190087081A1 (en) Interactive media reproduction, simulation, and playback
CN112104908A (en) Audio and video file playing method and device, computer equipment and readable storage medium
US20190034213A1 (en) Application reproduction in an application store environment
US11758218B2 (en) Integrating overlaid digital content into displayed data via graphics processing circuitry
CN113806570A (en) Image generation method and generation device, electronic device and storage medium
KR20090124240A (en) Device for caption edit and method thereof
CN113191184A (en) Real-time video processing method and device, electronic equipment and storage medium
CN115134635B (en) Media information processing method, device, equipment and storage medium
CN116956019A (en) Text generation method, text generation device, electronic equipment and computer readable storage medium
CN110636320A (en) Animation generation method and device for live broadcast, storage medium and electronic equipment
CN109168025B (en) Video playing method capable of marking audit video sensitive operation and crossing platform
US11682101B2 (en) Overlaying displayed digital content transmitted over a communication network via graphics processing circuitry using a frame buffer
CN111935493B (en) Anchor photo album processing method and device, storage medium and electronic equipment
US20230326108A1 (en) Overlaying displayed digital content transmitted over a communication network via processing circuitry using a frame buffer
KR20190010405A (en) System, method and program for protecting copying webtoon
KR20100134022A (en) Photo realistic talking head creation, content creation, and distribution system and method
CN115481598A (en) Document display method and device
CN116980718A (en) Scenario recomposition method and device for video, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant