CN113259721B - Video data sending method and electronic equipment - Google Patents

Video data sending method and electronic equipment

Info

Publication number
CN113259721B
CN113259721B (application CN202110680442.5A)
Authority
CN
China
Prior art keywords
sensitive
video data
shooting
area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110680442.5A
Other languages
Chinese (zh)
Other versions
CN113259721A (en)
Inventor
全绍军
洪伟
熊旭
林格
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longse Technology Co ltd
Original Assignee
Longse Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longse Technology Co ltd filed Critical Longse Technology Co ltd
Priority to CN202110680442.5A
Publication of CN113259721A
Application granted
Publication of CN113259721B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2347 Processing of video elementary streams involving video stream encryption
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4405 Processing of video elementary streams involving video stream decryption

Abstract

The invention relates to the field of multimedia and provides a video data sending method and an electronic device. The method comprises the following steps: determining original video data; acquiring shooting attribute information associated with the original video data and generating a sensitive object list based on that information; marking, in each video image frame of the original video data, the sensitive areas that contain sensitive objects, and generating marking information that records the positions of all sensitive areas within their frames; processing each sensitive area with a preset preprocessing algorithm to obtain processed video image frames; and generating target video data from all the processed video image frames and sending the target video data together with the marking information to a target terminal. The invention ensures the confidentiality and security of private information in the video data while still allowing the video data to be played normally.

Description

Video data sending method and electronic equipment
Technical Field
The invention belongs to the technical field of multimedia, and particularly relates to a video data sending method and electronic equipment.
Background
With the continuous development of multimedia technology, video playing is applied ever more widely, and in most application scenarios video data must be sent from one electronic device to another, for example in video monitoring and video sharing. How to transmit video data safely to the peer device therefore directly affects the development of video playing applications.
In existing video data transmission technology, video image frames may be encoded in a preset encoding mode to generate video data before transmission, and the video data is then transmitted to the communication peer. However, video data carries a great deal of information and may contain a user's private information, such as the user's name, appearance, and address. If such video data is transmitted directly to the peer and is intercepted, or obtained by monitoring and downloading, during transmission, the user's private information is easily exposed. Existing video data transmission technology therefore leaks users' private information easily and offers poor security and confidentiality.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video data sending method and an electronic device, so as to solve the problems that existing video data sending technology easily reveals users' private information and offers low security and confidentiality.
A first aspect of an embodiment of the present invention provides a method for transmitting video data, including:
determining original video data, wherein the original video data meets a preset video sending condition;
acquiring shooting attribute information associated with the original video data, and generating a sensitive object list based on the shooting attribute information, wherein the sensitive object list contains at least one sensitive object, the shooting attribute information describes attribute information related to the process of shooting the original video data, and a sensitive object is a shooting object related to user privacy;
marking the sensitive areas containing the sensitive objects in each video image frame of the original video data, and generating, based on the positions of all the sensitive areas in their corresponding video image frames, marking information for locating the marked positions;
processing each sensitive area through a preset preprocessing algorithm to obtain a plurality of processed video image frames, wherein the sensitive objects in the processed video image frames are hidden; and
generating target video data based on all the processed video image frames, and sending the target video data and the marking information to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the marking information.
A second aspect of an embodiment of the present invention provides a video data transmitting apparatus, including:
an original video data acquisition unit, configured to determine original video data that meets a preset video sending condition;
a sensitive object list generating unit, configured to acquire shooting attribute information associated with the original video data and to generate a sensitive object list based on the shooting attribute information, wherein the sensitive object list contains at least one sensitive object, the shooting attribute information describes attribute information related to the process of shooting the original video data, and a sensitive object is a shooting object related to user privacy;
a marking information generating unit, configured to mark the sensitive areas containing the sensitive objects in each video image frame of the original video data, and to generate, based on the positions of the sensitive areas in their corresponding video image frames, marking information for locating the marked positions;
a preprocessing unit, configured to process each sensitive area through a preset preprocessing algorithm to obtain a plurality of processed video image frames in which the sensitive objects are hidden; and
a video data sending unit, configured to generate target video data based on all the processed video image frames and to send the target video data and the marking information to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the marking information.
A third aspect of embodiments of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the first aspect.
The video data sending method and the electronic device provided by the embodiments of the present invention have the following beneficial effects:
When the video sending condition is met, original video data matching the condition is acquired; the shooting attribute information associated with the video data is then obtained, a sensitive object list is determined from that information, the sensitive areas containing sensitive objects are marked in each video image frame of the original video data, and marking information recording the positions of all the sensitive areas is generated. Each sensitive area is then processed so that the sensitive objects it contains are hidden, target video data is generated from the processed video image frames, and the target video data and its marking information are sent to the target terminal. After receiving the target video data, the target terminal can restore the processed sensitive areas in each frame based on the marking information, so the private information in the video data is protected. Compared with existing video data sending technology, the video data sent to the target terminal is not obtained by generic encoding alone: the sensitive areas related to private information are preprocessed before the sending operation, and the private data is determined automatically from the shooting attribute information of the video data, which improves the accuracy and automation of identifying private data and the efficiency of obtaining private information. On the other hand, sending the processed target video data together with the corresponding marking information lets the target terminal restore the target video data, ensuring normal playback while preserving the confidentiality and security of the private information.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating an implementation of a method for transmitting video data according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific implementation of S102 of the video data sending method according to a second embodiment of the present invention;
fig. 3 is a flowchart illustrating a specific implementation of S105 of the video data sending method according to a third embodiment of the present invention;
fig. 4 is a flowchart illustrating a specific implementation of a video data sending method according to a fourth embodiment of the present invention;
fig. 5 is a flowchart illustrating a specific implementation of S104 of the video data sending method according to a fifth embodiment of the present invention;
fig. 6 is a block diagram of a video data transmitting apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and are not intended to limit it.
When the video sending condition is met, original video data matching the condition is acquired; the shooting attribute information associated with the video data is then obtained, a sensitive object list is determined from that information, the sensitive areas containing sensitive objects are marked in each video image frame of the original video data, and marking information recording the positions of all the sensitive areas is generated. Each sensitive area is then processed so that the sensitive objects it contains are hidden, target video data is generated from the processed video image frames, and the target video data and its marking information are sent to the target terminal. After receiving the target video data, the target terminal can restore the processed sensitive areas in each frame based on the marking information. This protects the private information in the video data and solves the problems that existing video data sending technology easily leaks users' private information and offers low security and confidentiality.
In the embodiments of the present invention, the process is executed by an electronic device, including but not limited to: servers, computers, smartphones, notebook computers, tablet computers, and other devices capable of sending video data. Fig. 1 shows a flowchart of an implementation of the video data sending method provided by the first embodiment of the present invention, detailed as follows:
in S101, original video data that satisfies a preset video transmission condition is determined.
In this embodiment, the electronic device may be configured with video sending conditions, and when it detects that a preset video sending condition is currently satisfied, it may perform the operation of S101. A video sending condition may be an event-triggered condition: for example, when the electronic device receives a video sending request from a target terminal, it recognizes that the video sending condition is satisfied and executes S101. As another example, the electronic device may monitor the online notification of a video capture device that is associated with a corresponding monitor terminal, i.e. the target terminal; in this case the electronic device recognizes that the video sending condition is met and sends the original video data uploaded by the video capture device to the monitor terminal associated with it.
In a possible implementation, the video sending condition may also be a time-triggered condition. In this case, the electronic device may be configured with at least one trigger time node or trigger period, and if it detects that the current time has reached a preset trigger time node or the start time of a trigger period, it recognizes that the video sending condition is satisfied and performs the operation of S101.
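As a concrete illustration, the time-triggered condition described above can be sketched as follows. This is a minimal Python sketch; the trigger nodes, period values, and function name are illustrative assumptions, not taken from the patent:

```python
from datetime import datetime, time

# Hypothetical configuration: discrete trigger time nodes and
# (start, end) trigger periods, compared at minute granularity.
TRIGGER_NODES = [time(8, 0), time(20, 0)]
TRIGGER_PERIODS = [(time(12, 0), time(13, 0))]

def video_send_condition_met(now: datetime) -> bool:
    """Return True when the current time reaches a preset trigger
    time node or the start time of a trigger period."""
    current = time(now.hour, now.minute)
    if current in TRIGGER_NODES:
        return True
    return any(current == start for start, _ in TRIGGER_PERIODS)
```

In practice the electronic device would poll or schedule this check and, on a True result, execute the operation of S101.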
In this embodiment, each piece of original video data may be associated with a corresponding video sending condition, and when the electronic device detects that a video sending condition is satisfied at some moment, it may identify the original video data associated with that condition as the video data to be sent.
For example, when the electronic device is a video capture terminal, such as a distributed monitoring device or a camera, the video data associated with all video sending conditions is the same: the video data currently being captured.
By contrast, when the electronic device is a server, if the server receives a monitoring request sent by a monitoring device, it may determine, based on the device identifier of the monitoring device, the video data collected by the capture device associated with that monitoring device; this is the video data to be sent, that is, the original video data matching the video sending condition.
In S102, shooting attribute information associated with the original video data is acquired, and a sensitive object list is generated based on the shooting attribute information; the sensitive object list contains at least one sensitive object; the shooting attribute information describes attribute information related to the process of shooting the original video data; a sensitive object is a shooting object related to user privacy.
In this embodiment, the electronic device may obtain shooting attribute information associated with the original video data to be sent. The shooting attribute information describes attribute information related to the process of shooting the original video data and may include, but is not limited to, information such as the shooting time and the shooting scene; it may further include shooting contents determined by recognizing the original video data.
In a possible implementation, the capture device of the original video data may configure the shooting attribute information directly when generating the video data, that is, each piece of original video data is associated with corresponding shooting attribute information. In this case, the electronic device can obtain the shooting attribute information by parsing the original video data.
In a possible implementation, if the original video data is not associated with shooting attribute information in advance, the electronic device may analyze the original video data, determine the attribute values of the original video data in multiple attribute dimensions, and generate the shooting attribute information from those attribute values. For example, by identifying the background image of each video image frame, the electronic device may estimate the illumination intensity of the scene from the background image to obtain shooting time information, and may derive shooting scene information from the shooting objects contained in the background image, such as buildings and ornaments, thereby generating the shooting attribute information of the original video data.
In this embodiment, after acquiring the shooting attribute information associated with the original video data, the electronic device may determine, based on that information, the sensitive objects that the original video may contain and generate a corresponding sensitive object list from all of them, so as to automatically determine the objects that may relate to user privacy. For example, if the shooting scene of the original video data is an office, the objects related to user privacy, i.e. the sensitive objects, may include company documents, computer screens, company stamps, and the like; if the shooting scene is a conference, the sensitive objects may include table labels marking the names of conference participants, conference files, the faces of conference attendees, and the like. The sensitive objects contained in the original video data therefore correlate strongly with the shooting attribute information, so the electronic device can generate the sensitive object list corresponding to the acquired shooting attribute information.
In a possible implementation, the electronic device may store a correspondence between shooting attribute information and sensitive objects. If the shooting attribute information contains multiple attribute dimensions, the correspondence may record, for each attribute dimension, the sensitive objects corresponding to its different attribute values, where each attribute value may correspond to one or more sensitive objects. The electronic device can then determine, by querying this correspondence, the sensitive objects corresponding to the attribute value of each attribute dimension in the shooting attribute information (for example, an office corresponds to company documents, computer screens, and the like) and generate the sensitive object list from the sensitive objects corresponding to all the attribute values.
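The correspondence lookup described above might be sketched as follows, assuming a simple nested mapping from attribute dimensions and values to sensitive objects. The office and conference entries follow the examples in the text; the schema and the extra "time" dimension are assumptions:

```python
# Illustrative correspondence table: attribute dimension -> value -> objects.
SENSITIVE_OBJECT_MAP = {
    "scene": {
        "office": ["company document", "computer screen", "company stamp"],
        "conference": ["table label", "conference file", "attendee face"],
    },
    "time": {
        "night": ["license plate"],  # hypothetical extra dimension
    },
}

def build_sensitive_object_list(shooting_attributes: dict) -> list:
    """Collect the sensitive objects corresponding to every attribute
    value in the shooting attribute information, without duplicates."""
    objects = []
    for dimension, value in shooting_attributes.items():
        for obj in SENSITIVE_OBJECT_MAP.get(dimension, {}).get(value, []):
            if obj not in objects:
                objects.append(obj)
    return objects
```

A table lookup like this is what makes the later claim plausible that building the list is far cheaper than recognizing every frame.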
In this embodiment, the electronic device determines the sensitive object list from the shooting attribute information, so it can determine which sensitive objects may be present without first recognizing every video image frame in the original video data, which greatly improves the efficiency of obtaining the sensitive object list and reduces the amount of data the electronic device must process. Moreover, because a lookup costs far less computation than image recognition, determining the sensitive object list first and then searching the video image frames for the corresponding sensitive areas further reduces the resources consumed in processing the original video data and improves the efficiency of sending the video data.
In S103, the sensitive areas containing the sensitive objects in each video image frame of the original video data are marked, and marking information for locating the marked positions is generated based on the positions of all the sensitive areas in their corresponding video image frames.
In this embodiment, after determining the sensitive object list, the electronic device may use it to identify the sensitive areas in the original video data, i.e. the areas involving user privacy. Specifically, the sensitive areas are identified as follows: each video image frame in the original video data is obtained, whether the frame contains any sensitive object in the sensitive object list is judged, and each area of the frame containing a sensitive object is taken as a sensitive area. Note that if a video image frame contains two or more sensitive objects, the number of sensitive areas in that frame equals the number of sensitive objects it contains.
In a possible implementation, the sensitive areas may be identified as follows: the electronic device obtains an object template for each sensitive object in the sensitive object list, performs sliding matching over the video image frame, and judges whether the similarity between any area and the object template exceeds a preset similarity threshold; if it does, the area is recognized as containing the sensitive object associated with that template and is identified as a sensitive area. In particular, since consecutive video image frames are strongly correlated, once the electronic device recognizes a sensitive object in some frame, it can locate the area of that object in subsequent frames with an object tracking algorithm, so sliding matching need not be repeated for every frame, which improves the efficiency of identifying sensitive areas.
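A minimal sketch of the sliding template matching described above, on grayscale frames represented as nested lists of pixel intensities. The similarity measure (1 minus the normalized mean absolute difference) is an illustrative assumption; the text does not fix a specific metric:

```python
def match_score(frame, template, top, left):
    """Similarity in [0, 1] between the template and the equally sized
    window of `frame` at (top, left): 1 - mean abs diff / 255."""
    h, w = len(template), len(template[0])
    diff = sum(
        abs(frame[top + i][left + j] - template[i][j])
        for i in range(h) for j in range(w)
    )
    return 1.0 - diff / (255.0 * h * w)

def find_sensitive_regions(frame, template, threshold=0.9):
    """Slide the template over the frame and return (top, left, h, w)
    for every window whose similarity exceeds the threshold."""
    h, w = len(template), len(template[0])
    regions = []
    for top in range(len(frame) - h + 1):
        for left in range(len(frame[0]) - w + 1):
            if match_score(frame, template, top, left) > threshold:
                regions.append((top, left, h, w))
    return regions
```

A production system would more likely use an optimized routine (e.g. normalized cross-correlation) plus the object tracking mentioned above, but the control flow is the same.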
In a possible implementation, the sensitive areas may instead be identified as follows: the electronic device imports the video image frame into a preset convolutional neural network to extract an image feature vector, determines from that vector the shooting objects contained in the frame, and, if they include any sensitive object in the sensitive object list, locates the area of that sensitive object in the frame as a sensitive area.
In this embodiment, the electronic device may record the area size, center coordinates, and contour of each sensitive area together with the frame number of its video image frame to generate a sensitive position parameter, and generate the marking information from the sensitive position parameters of all the sensitive areas.
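The sensitive position parameter and the marking information built from it might look like this; the field names are assumptions chosen to mirror the text:

```python
from dataclasses import dataclass, asdict

@dataclass
class SensitivePositionParameter:
    """One sensitive area: which frame it is in and where."""
    frame_number: int
    center: tuple   # (x, y) center coordinates of the region
    size: tuple     # (width, height) of the region
    contour: list   # list of (x, y) outline points

def build_marking_info(parameters):
    """Bundle the per-region parameters into the marking information
    sent to the target terminal alongside the target video data."""
    return {"regions": [asdict(p) for p in parameters]}
```

In a real system this structure would then be serialized (e.g. to JSON) for transmission with the target video data.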
In S104, each of the sensitive regions is processed by a preset preprocessing algorithm to obtain a plurality of processed video image frames, and the sensitive objects in the processed video image frames are hidden.
In this embodiment, to avoid leaking user privacy and to improve the confidentiality and security of the transmission process, the electronic device may, after identifying the sensitive areas in each video image frame, process those areas with a preprocessing algorithm so as to hide the sensitive objects they contain. The preprocessing algorithms include, but are not limited to: blurring algorithms, false-point repair algorithms, area image replacement algorithms, area content removal algorithms, and the like. A blurring algorithm blurs the sensitive area; the blurring process includes, but is not limited to, Gaussian blur, motion blur, blurring the area with a preset mosaic pattern, and the like. Different sensitive areas may use the same or different preprocessing algorithms, determined according to the actual situation.
In a possible implementation, the electronic device may choose the preprocessing algorithm according to the object type of the sensitive object in the sensitive area. For example, if the sensitive object is a face, the sensitive area may be processed with an area image replacement algorithm, e.g. covering the corresponding area with a preset standard face. If the sensitive object is an electronic screen, the sensitive area may be processed with a false-point repair algorithm: an interface displaying private information on an electronic screen consists of the private content and a background image, with the display layer of the private content above that of the background image, so the private content can be removed and the area filled in from the background layer.
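As one concrete instance of the mosaic-pattern blurring mentioned above, here is a sketch that pixelates a rectangular region of a grayscale frame in place. The block size and the nested-list frame representation are illustrative choices:

```python
def mosaic_region(frame, top, left, height, width, block=2):
    """Hide a sensitive region in place by replacing each block x block
    cell with the average of its pixels (a simple mosaic pattern)."""
    for bi in range(top, top + height, block):
        for bj in range(left, left + width, block):
            # Clip the cell at the region boundary.
            cells = [
                (i, j)
                for i in range(bi, min(bi + block, top + height))
                for j in range(bj, min(bj + block, left + width))
            ]
            avg = sum(frame[i][j] for i, j in cells) // len(cells)
            for i, j in cells:
                frame[i][j] = avg
    return frame
```

The (top, left, height, width) arguments would come from the sensitive position parameters recorded in S103.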
In S105, target video data is generated based on all the processed video image frames, and the target video data and the marking information are sent to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the marking information.
In this embodiment, after hiding the sensitive objects in each video image frame, the electronic device obtains a plurality of processed video image frames, rearranges the processed frames together with the frames containing no sensitive object according to their frame numbers, and encapsulates the rearranged frames to generate the target video data. The electronic device can then send the target video data and the generated marking information to the target terminal to complete the video sending task. The target terminal may be determined from the video sending condition: if the condition is an event-triggered condition, i.e. the sending process is executed upon receiving a video sending request from a correspondent node, that correspondent node is the target terminal; if the condition is a time-triggered condition, different sending time nodes may be associated with corresponding receiving terminals, and the target terminal is determined from the sending time node matching the current time. Of course, if the electronic device is a video capture device associated with a corresponding monitor terminal, that monitor terminal is identified as the target terminal, and the target video data is sent to it.
In this embodiment, after receiving the target video data and the corresponding marking information, the target terminal can identify the sensitive areas contained in each processed video image frame of the target video data and restore each sensitive area through a restoration algorithm paired with the preprocessing algorithm, thereby recovering the sensitive objects in each sensitive area, ensuring the consistency of the image content in the video data, and enabling the video data to play normally. The restoration algorithm and the preprocessing algorithm are agreed in advance between the electronic device and the target terminal, which guarantees the confidentiality and security of the restoration operation.
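Note that blurring and mosaicking are not invertible, so restoration as described here implies a reversible preprocessing/restoration pair agreed in advance. One minimal sketch of such a pair is a keyed XOR stream applied to the region's pixel bytes; the key derivation and the way the key is shared are assumptions, since the patent does not specify the algorithms:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from the pre-agreed key (counter-mode
    hashing; illustrative, not a vetted cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def transform_region(pixels: bytes, key: bytes) -> bytes:
    """XOR the region's pixel bytes with the key-derived stream.
    Applying the same transform again restores the original bytes,
    so sender (hide) and target terminal (restore) share one routine."""
    stream = _keystream(key, len(pixels))
    return bytes(p ^ s for p, s in zip(pixels, stream))
```

The sender applies the transform to each sensitive area before encapsulation; the target terminal, using the marking information to locate the areas, applies it again to restore them.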
As can be seen from the above, when the sending method of video data provided by the embodiment of the present invention meets the video sending condition, the original video data matched with the video sending condition is obtained, then the shooting attribute information associated with the video data is obtained, the sensitive object list determined based on the shooting attribute information is obtained, the sensitive area containing the sensitive object is marked in each video image frame of the original video data, and the marking information containing the positions of all the sensitive areas is recorded; the method comprises the steps of processing the sensitive areas in each video image frame respectively, hiding sensitive objects contained in each video image frame, generating target video data based on each processed video image frame, and sending the target video data and corresponding mark information to a target terminal, so that after the target terminal receives the target video data, the processed sensitive areas in each video image frame can be restored based on the mark information, and privacy information in the video data can be protected. 
Compared with the existing video data sending technology, the video data obtained through general-purpose encoding is not sent to the target terminal directly; instead, the sensitive areas related to privacy information are preprocessed before the sending operation. On the one hand, the private data is determined automatically based on the shooting attribute information of the video data, which improves the accuracy and automation of private data determination and the efficiency of obtaining the privacy information; on the other hand, the processed target video data and the corresponding mark information are sent to the target terminal, so that the target terminal can restore the target video data, ensuring normal playing of the video data while preserving the confidentiality and security of the privacy information.
Fig. 2 shows a flowchart of a specific implementation of S102 in the method for sending video data according to a second embodiment of the present invention. Referring to fig. 2, with respect to the embodiment shown in fig. 1, in the sending method of video data provided by this embodiment, S102 includes S1021 to S1027, which are detailed as follows:
Further, the shooting attribute information comprises a shooting scene and a user identifier to which the original video data belongs;
the acquiring shooting attribute information associated with the original video data and generating a sensitive object list based on the shooting attribute information includes:
in S1021, position information when the original video data is captured is determined, the position information is marked on a preset map application, and at least one candidate scene type corresponding to the position information is determined.
In S1022, any one of the video image frames in the original video data is parsed, a shooting object included in the video image frame is determined, and matching degrees between the candidate scene types and the video image frames are respectively calculated based on the shooting object.
In S1023, the shooting scene of the original video data is determined based on the matching degree corresponding to the candidate scene type, and a first candidate object list corresponding to the shooting scene is obtained.
In this embodiment, the shooting attribute information acquired by the electronic device includes at least two types, namely the shooting scene in which the original video data was shot and the user identifier corresponding to the user to which the original video data belongs. Specifically, the shooting scene of the original video data is determined through the three steps S1021 to S1023.
In this embodiment, the video capture device may be configured with a positioning module, and the positioning module may acquire position information of the video capture device, where the position information may be represented in a form of latitude and longitude, that is, an absolute position, or may be a relative position, for example, a relative position determined in a form of a wireless local area network or a bluetooth network, and the manner and the method for acquiring the position information are not limited herein. When the video acquisition device generates original video data, the position information determined by the positioning module can be added into the original video data, and the electronic device can analyze the original video data to obtain the position information.
In this embodiment, the position information is generally a two-dimensional position coordinate, while most shooting scenes are inside buildings: a building contains a plurality of floors, one floor contains a plurality of different units, and different units may correspond to different scene types. The position information alone is therefore not accurate enough, and in order to improve the accuracy of the shooting scene obtained by recognition, the area corresponding to the position information may be marked on a preset map application to determine the building associated with the area, and at least one candidate scene type may be obtained according to the name and type of the building. For example, the location information marked on the map application may indicate that the location corresponds to a shopping mall; the shopping mall includes different shops, and each shop may correspond to a scene type, so the location information may be associated with a plurality of different candidate scene types.
In this embodiment, the electronic device may extract any one video image frame from the original video data for parsing, so as to determine the shooting objects contained within the video image frame. Because the original video data corresponds to one shooting scene, that is, the shooting scenes corresponding to all the video image frames in the original video data are the same, the electronic device can arbitrarily select one video image frame from the original video data to analyze. Of course, the electronic device may perform a preliminary analysis on the picture in the video image frame, and if most of the selected video image frame is blocked by an obstacle, the electronic device may reselect another video image frame for image analysis to determine the shooting objects it contains. The electronic device may identify the object type of each shooting object and calculate the degree of matching between the video image frame and each candidate scene type based on the object types.
In a possible implementation manner, the manner of calculating the matching degree may be: the electronic device may determine, according to whether the identified photographic subject is included in the candidate object list, a matching degree corresponding to the candidate scene type based on the number of the included photographic subjects and the total number of the photographic subjects.
In this embodiment, the electronic device may select, from the matching degrees of the multiple candidate scene types, the candidate scene type with the largest matching degree value as the shooting scene corresponding to the original video data, and acquire the first candidate object list associated with the shooting scene in advance.
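The matching-degree rule described above (count how many identified shooting objects fall in a candidate scene's object list, then pick the scene with the largest value) can be sketched as follows; the function and variable names, the example objects, and the scene lists are all hypothetical:

```python
def matching_degree(identified_objects, candidate_object_list):
    """S1022 rule: fraction of the identified shooting objects that are
    included in the candidate scene's object list."""
    if not identified_objects:
        return 0.0
    hits = sum(1 for obj in identified_objects if obj in candidate_object_list)
    return hits / len(identified_objects)

def select_shooting_scene(identified_objects, scene_candidates):
    """S1023: pick the candidate scene type with the largest matching degree.
    `scene_candidates` maps each scene type to its candidate object list."""
    return max(scene_candidates,
               key=lambda scene: matching_degree(identified_objects,
                                                 scene_candidates[scene]))

objects = ["shelf", "cashier", "shopping cart"]   # objects parsed from one frame
scenes = {
    "supermarket": ["shelf", "cashier", "shopping cart", "freezer"],
    "pharmacy": ["shelf", "cashier", "medicine cabinet"],
}
print(select_shooting_scene(objects, scenes))  # supermarket (3/3 vs 2/3 match)
```

The first candidate object list of S1023 would then simply be `scenes[...]` for the selected scene.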
In S1024, the security level associated with the user identifier and the user identity information are queried.
In this embodiment, the electronic device may obtain the user identifier corresponding to the original video data, where the user identifier may be a user name, a user number, or other information that can represent the identity of the user to which the original video data belongs. The user identifier may be encapsulated in a data packet of the original video data, and the electronic device may obtain it directly by parsing the data packet.
In this embodiment, the electronic device may store a user database, and based on the user identifier carried in the original video data, the user-related information, such as the above security level and the user identity information, may be determined from the user database. For example, the user identity information may be information that may be used to represent the user identity, the user's profession, the user's title, and the like.
In S1025, at least one feature sensitive content associated with the user identity information is obtained, and the extended sensitive content associated with the feature sensitive content is determined according to the security level.
In this embodiment, different user identity information may correspond to different sensitive content, and based on this, after determining the user identity information corresponding to the original video data, the electronic device may determine the feature sensitive content associated therewith. For example, if the user identity information is a doctor, the corresponding feature-sensitive content may be patient information; if the user identity information is an administrative chief, the corresponding feature sensitive content may be a company document. The electronic equipment can determine feature sensitive content associated with the user identity information, and can also determine extension sensitive content corresponding to each feature sensitive content according to different security levels; wherein, the higher the security level is, the more the corresponding extension sensitive content is; conversely, the lower the security level, the less the corresponding extended sensitive content.
Further, as another embodiment of the present application, the determination in S1025 of the extended sensitive content associated with the feature sensitive content according to the security level may specifically include the following steps:
step 1: and determining an effective extension range related to the security level based on the corresponding relation between the preset security level and the extension range.
Step 2: and calculating content association distances between each candidate sensitive content in the sensitive content dictionary and the characteristic sensitive content based on a preset sensitive content dictionary. Wherein, the calculation algorithm for calculating the content association distance is as follows:
Figure 995404DEST_PATH_IMAGE001
wherein the content of the first and second substances,RelatedDistassociating a distance to the content;Targetthe feature sensitive content;Optionthe candidate sensitive content is selected;Seman(x-y) Is a semantic similarity function;BaseSemanis a preset reference semantic similarity;Pic(x) Characterizing a function for the graph;γandφis a preset weighting coefficient;
Figure 606513DEST_PATH_IMAGE002
the value corresponding to the feature sensitive content in the ith semantic dimension is obtained;
Figure 288031DEST_PATH_IMAGE003
the value corresponding to the ith semantic dimension of the candidate sensitive content is obtained;nis the total number of semantic dimensions.
Step 3: select the candidate sensitive content whose content association distance is smaller than the effective extension range as the extended sensitive content associated with the feature sensitive content.
In this embodiment, the electronic device may determine the extension range corresponding to the security level; the higher the security level, the larger the corresponding extension range, and the lower the security level, the smaller the corresponding extension range. The electronic device also stores a sensitive content dictionary in advance, and the dictionary stores a plurality of sensitive words related to privacy information, namely the candidate sensitive content. The electronic device can determine the content association distance between each candidate sensitive content and the feature sensitive content through the preset content association distance calculation algorithm, and select the extended sensitive content matching the security level based on the extension range related to the security level. The content association distance between candidate sensitive content and feature sensitive content is determined by three aspects, namely semantic similarity, graphic characterization capability and word vector distance: the higher the semantic similarity between the two, the closer the corresponding content association distance; the closer the graphic characterization capability between the two, the closer the corresponding content association distance. In addition, the electronic device may mark the candidate sensitive content and the feature sensitive content in a preset word vector coordinate system, in which each coordinate axis represents a different semantic dimension; based on the feature values corresponding to the plurality of semantic dimensions, the coordinate points of the candidate sensitive content and the feature sensitive content in the word vector coordinate system can be determined, and the vector distance between the two coordinate points is calculated as the word vector distance.
In the embodiment of the application, the effective extension range corresponding to the security level is determined through the security level, the content association distance between the candidate sensitive content and the feature sensitive content is calculated, and the candidate sensitive content with higher association degree and matched with the security level is selected as the extension sensitive content, so that the accuracy of selecting the extension sensitive content is improved.
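Since the original distance formula survives only as an embedded image, steps 1 to 3 can be sketched with a simplified stand-in for the content association distance; the semantic-similarity values, graphic term, and dictionary entries below are illustrative assumptions, not the patented algorithm itself:

```python
import math

def content_association_distance(target_vec, option_vec, seman_sim,
                                 base_seman=1.0, gamma=1.0, phi=1.0,
                                 pic_diff=0.0):
    """Toy stand-in for RelatedDist: higher semantic similarity and a closer
    graphic characterization shrink the distance; the last term is the
    word-vector (Euclidean) distance over the n semantic dimensions."""
    word_vec_dist = math.sqrt(sum((t - o) ** 2
                                  for t, o in zip(target_vec, option_vec)))
    return gamma * (base_seman - seman_sim) + phi * pic_diff + word_vec_dist

def select_extended_content(feature_vec, candidates, effective_range):
    """Step 3: keep the candidates whose association distance is smaller
    than the effective extension range tied to the security level."""
    return [name for name, (vec, sim) in sorted(candidates.items())
            if content_association_distance(feature_vec, vec, sim) < effective_range]

feature = [1.0, 0.0]                        # feature sensitive content, 2 dims
dictionary = {
    "patient record": ([0.9, 0.1], 0.9),    # close in meaning and vector space
    "weather report": ([0.0, 3.0], 0.1),    # unrelated candidate
}
print(select_extended_content(feature, dictionary, effective_range=0.5))
```

A larger `effective_range` (a higher security level) would admit more distant candidates, matching the behavior described above.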
In S1026, a second candidate list is generated according to all the feature sensitive content and all the extension sensitive content.
In this embodiment, the electronic device may determine the sensitive objects associated with the user identity based on all the feature sensitive content and all the extended sensitive content, and generate the second candidate object list accordingly.
In S1027, the sensitive object list is generated based on the first candidate object list and the second candidate object list.
In this embodiment, after determining the first candidate object list determined based on the shooting scene and the second candidate object list determined based on the user identifier, the electronic device may generate the sensitive object list corresponding to the original video data based on the sensitive objects included in the two lists. The electronic device may select a sensitive object corresponding to a union or an intersection of the two candidate object lists to obtain the sensitive object list.
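A minimal sketch of S1027, assuming the two candidate lists are plain collections of object names; the patent allows either the union or the intersection:

```python
def build_sensitive_object_list(first_candidates, second_candidates, mode="union"):
    """S1027: merge the scene-based and identity-based candidate lists into
    the final sensitive object list, by union or by intersection."""
    a, b = set(first_candidates), set(second_candidates)
    merged = a | b if mode == "union" else a & b
    return sorted(merged)   # sorted for a deterministic list order

print(build_sensitive_object_list(["face", "document"], ["document", "screen"]))
# ['document', 'face', 'screen']: the union keeps objects named by either list
```

The union favors recall (anything sensitive under either the scene or the identity is hidden), while the intersection favors precision.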
In the embodiment of the application, a first candidate object list and a second candidate object list are generated through a shooting scene and a user identifier of original video data, and a sensitive object list is generated based on the two lists, so that the sensitive object list not only can consider the scene factor of shooting the original video data, but also considers the identity factor of a user to which the original video data belongs, the accuracy of sensitive objects in the sensitive object list is improved, the accuracy of subsequent identification of sensitive contents in the video data can be further improved, and the confidentiality degree of private information is improved.
Fig. 3 shows a flowchart of a specific implementation of S105 in the method for sending video data according to a third embodiment of the present invention. Referring to fig. 3, with respect to the embodiment shown in fig. 1, in the sending method of video data provided by this embodiment, S105 includes S301 to S304, which are detailed as follows:
further, the generating target video data based on all the processed video image frames and sending the target video data and the tag information to a target terminal includes:
in S301, a routing path corresponding to transmission of the target video data is determined based on the communication address of the target terminal.
In this embodiment, the electronic device may obtain a communication address of the target terminal, and generate a routing path corresponding to forwarding target video data based on the local address and the communication address.
In S302, the number of routes passed by the routing path and the network environment in which each routing device is located are identified, and a risk level corresponding to the routing path is determined.
In this embodiment, the electronic device may determine, from the determined routing path, each routing device that needs to be passed through during forwarding, so as to identify the number of routes, and may also identify the network environment in which each routing device is located; for example, the network environment may indicate whether a routing device connects a local area network to the internet, or is accessed from the internet into a local area network. The electronic device calculates the risk level corresponding to the routing path based on these two parameters; the greater the number of routes, the higher the corresponding risk level, and the lower the security level of the network environment, the higher the risk level corresponding to the routing path.
In a possible implementation manner, each network environment may correspond to a risk coefficient, and the electronic device may superimpose the risk coefficients corresponding to the network environments of the routing devices passed through, so as to obtain the risk level of the routing path.
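Under the assumption of this implementation (one risk coefficient per network environment, summed over the routing devices), the risk-level computation might look like the following; the environment names and coefficient values are invented for illustration:

```python
# Assumed risk coefficients for the network environment of each routing device.
NETWORK_RISK = {"lan": 1, "lan_to_internet": 3, "internet": 5}

def route_risk_level(route_environments):
    """S302 as described: superimpose (sum) the risk coefficient of each
    routing device's network environment; more hops therefore also raise
    the total risk level of the routing path."""
    return sum(NETWORK_RISK[env] for env in route_environments)

print(route_risk_level(["lan", "lan_to_internet", "internet"]))  # 1 + 3 + 5 = 9
```

Because the sum runs over every hop, a longer path automatically yields a higher level, matching the "more routes, higher risk" rule above.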
In S303, an encryption key associated with the risk level is generated, and the target video data is encrypted by the encryption key to generate an encrypted video file; the number of bits of the encryption key is determined by the risk level.
In this embodiment, after obtaining the risk level corresponding to the routing path, the electronic device may generate an encryption key corresponding to the routing path. For example, the higher the risk level, the more the number of bits of the generated encryption key. Or, different key generation algorithms may be associated with different risk levels, the encryption key may be generated based on a key generation algorithm matching the risk levels, and the target video data may be encrypted by the encryption key, so as to obtain a corresponding encrypted video file.
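A hedged sketch of S303 under the first variant described above (key length grows with risk level); the level-to-bit-length thresholds are arbitrary assumptions, and a production system would feed the key into a real cipher such as AES rather than stop at key generation:

```python
import secrets

def key_bits_for_risk(risk_level):
    """Assumed mapping from risk level to key length; only the monotonic
    'higher risk, longer key' property comes from the description above."""
    if risk_level <= 3:
        return 128
    if risk_level <= 8:
        return 192
    return 256

def generate_encryption_key(risk_level):
    """Generate a random symmetric key whose bit length follows the risk
    level of the routing path."""
    return secrets.token_bytes(key_bits_for_risk(risk_level) // 8)

key = generate_encryption_key(9)
print(len(key) * 8)  # 256: a high-risk path gets the longest key
```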
In S304, the encrypted video file and the tag information are sent to the target terminal.
In this embodiment, the electronic device sends the encrypted video file and the tag information to the target terminal. The target terminal may store a decryption key corresponding to the encryption key, and may decrypt the encrypted video file through the locally stored decryption key to obtain the target video data.
In the embodiment of the application, the corresponding encryption key is generated by determining the risk level corresponding to the routing path for sending the video data, and the encryption processing is performed on the encrypted video file through the encryption key, so that the security and confidentiality of video data transmission are improved.
Fig. 4 is a flowchart illustrating a specific implementation of a method for sending video data according to a fourth embodiment of the present invention. Referring to fig. 4, compared with the embodiment of fig. 1, in the sending method of video data provided in this embodiment, S101 specifically includes S401 to S402, and S104 specifically includes S405 to S406, which are detailed as follows:
in S401, receiving a video transmission request transmitted by the target terminal; the video sending request comprises an object identification of at least one monitored object.
In S402, if the video data associated with the object identifier is stored, it is recognized that the video transmission condition is satisfied, and the video data associated with the object identifier is determined as the original video data.
In this embodiment, the electronic device may receive a video sending request sent by a target terminal, where the video sending request may specifically be a request for acquiring monitoring video data, and therefore the video sending request may carry an object identifier of a monitoring object. The object identification may be a user name, a user code, and the like of the monitoring object, which may represent the identity of the monitoring object. The electronic device may detect whether video data associated with the object identifier exists from within a stored video database, and if so, identify that a video transmission condition is satisfied, and identify the video data associated with the object identifier as original video data. On the contrary, if the video data associated with the object identifier of the monitored object is not stored, a prompt message that the video does not exist can be fed back to the target terminal.
In S403, acquiring shooting attribute information associated with the original video data, and generating a sensitive object list based on the shooting attribute information; at least one sensitive object is contained in the sensitive object list; the shooting attribute information is used for determining related attribute information in the process of shooting original video data; the sensitive object is a shooting object related to the privacy of the user.
In S404, sensitive regions including the sensitive object in each video image frame in the original video data are marked respectively, and based on the positions of all the sensitive regions in the corresponding video image frame, marking information for determining to mark the positions is generated.
In S405, a face image associated with the object identifier is acquired, and the face image is imported into an electronic seal generation algorithm to generate a blurred electronic seal corresponding to the monitored object.
In this embodiment, the implementation manners of S403 and S404 are completely the same as those of S102 and S103 in the embodiment of fig. 1; refer to the description of S102 and S103 for details, which are not repeated here.
In S406, a monitoring object associated with the sensitive region in the video image frame is identified, the sensitive region is blurred based on a blurred electronic seal associated with the monitoring object, and an object identifier of the monitoring object associated with the sensitive region is recorded in the tag information, so that the target terminal performs blur restoration processing on the sensitive region based on a face image associated with the locally stored object identifier.
In this embodiment, the preprocessing mode of the electronic device for the sensitive area is specifically fuzzy processing, and the face image associated with the object identifier of the monitored object is imported into a preset electronic seal generation algorithm, so as to generate an electronic seal associated with the monitored object. Then, when the electronic device preprocesses the sensitive area, the electronic device can identify whether the sensitive area contains the face of the monitored object, and if so, the electronic device performs fuzzy processing on the sensitive area through the fuzzy electronic seal corresponding to the monitored object. If the sensitive area does not contain the face of any monitored object, the sensitive area can be processed by other preprocessing algorithms. The object identification of the monitoring object is sent to the electronic equipment by the target terminal, namely the target terminal locally stores the face image of the monitoring object, and the same fuzzy electronic seal can be generated based on the locally stored face image, and the fuzzy sensitive area is restored based on the fuzzy electronic seal.
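The patent does not disclose the electronic seal generation algorithm, but the key property it relies on (both ends can derive the identical "blurred electronic seal" from the same locally stored face image, so the seal itself never travels over the network) can be illustrated with a hash-derived XOR mask; this is purely an illustrative stand-in, not the actual algorithm:

```python
import hashlib

def blurred_seal_mask(face_image_bytes, length):
    """Derive a deterministic pseudo-random mask (the 'blurred electronic
    seal') from the monitored object's face image; since both ends hold the
    same face image, both can regenerate the identical seal locally."""
    mask = b""
    counter = 0
    while len(mask) < length:
        mask += hashlib.sha256(face_image_bytes
                               + counter.to_bytes(4, "big")).digest()
        counter += 1
    return mask[:length]

def apply_seal(region_bytes, mask):
    """XOR the sensitive region with the seal mask; applying the same seal
    a second time restores the original pixels (the blur-restoration step
    performed by the target terminal)."""
    return bytes(p ^ m for p, m in zip(region_bytes, mask))

face = bytes.fromhex("a1b2c3d4")   # stand-in for a stored face image
region = bytes(range(16))          # stand-in for sensitive-region pixels
mask = blurred_seal_mask(face, len(region))
blurred = apply_seal(region, mask)
restored = apply_seal(blurred, mask)
print(restored == region)  # True: the blur is exactly reversible
```

The reversibility is what lets the target terminal perform blur restoration from its locally stored face image alone, as described above.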
In S407, target video data is generated based on all the processed video image frames, and the target video data and the tag information are sent to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the tag information.
In this embodiment, the implementation manner of S407 is completely the same as the implementation manner of S105 in the embodiment of fig. 1, and reference may be specifically made to the description of S105, which is not described herein again.
In the embodiment of the application, the video sending request sent by the target terminal is received, the object identifier of the monitored object is obtained from the request, the blurred electronic seal is generated from the face image associated with the object identifier, and the sensitive areas associated with the monitored object are blurred. Since the target terminal can regenerate the same seal locally, the blurred electronic seal does not need to be transmitted in the communication process, which improves the confidentiality and security of private data.
Fig. 5 shows a flowchart of a specific implementation of S104 in the method for sending video data according to a fifth embodiment of the present invention. Referring to fig. 5, compared with any one of the embodiments in fig. 1 to 3, in the sending method of video data provided by this embodiment, S104 specifically includes steps S501 to S508, which are detailed as follows:
in S501, an object type corresponding to the sensitive region is identified.
In this embodiment, the electronic device may identify an object type of a sensitive object associated with the sensitive region. The object type at least comprises two types, namely a face type and a non-face type. For the non-face type sensitive area, preprocessing can be carried out in a mode of S502-S506; for the sensitive region of the face type, the preprocessing may be performed in the manners of S507 and S508.
In S502, if the object type is a non-face type, extracting an associated background region corresponding to the sensitive region from the video image frame; the associated background region is a region surrounding the sensitive region in the video image frame.
In S503, a regional sub-image is extracted from the sensitive region through a preset sliding window, and an image similarity between the regional sub-image and an associated background region adjacent to the regional sub-image is calculated.
In S504, if the image similarity is greater than a preset similarity threshold, an extended replacement image matched with the size of the sliding window is generated based on the associated background region, and the extended replacement image is overlaid on the region sub-image to block the region sub-image.
In S505, if there is an area of the sensitive area that does not cover the extended replacement image, the operation of extracting the area sub-image from the sensitive area through the preset sliding window is executed again until all the sensitive areas are covered by the extended replacement images.
In this embodiment, if it is detected that the sensitive area is a non-face area, the associated background area corresponding to the sensitive area may be determined, and the sensitive content in the sensitive area is subjected to background restoration based on the associated background area, so that the sensitive content is shielded by the associated background content and the purpose of hiding the sensitive content is achieved. The electronic device can slide a framing window over the sensitive area and judge the image similarity between each region sub-image and the adjacent associated background area; if the similarity between the two is high, an extended replacement image matched with the size of the sliding window is generated directly based on the associated background area connected with the region sub-image, so as to cover the sensitive content in the sensitive area, and the sliding window continues framing the region sub-images of the sensitive area that have not yet been covered, until every sensitive area is covered by the extended replacement images.
In S506, if the image similarity is less than or equal to a preset similarity threshold, increasing an area of an associated background region, and returning to perform the operation of extracting the associated background region corresponding to the sensitive region in the video image frame based on the increased area.
In this embodiment, if the similarity between the framed region sub-image and the adjacent associated background image is small, that is, smaller than or equal to the preset similarity threshold, overlaying an extended replacement image generated from that background onto the region sub-image would make the image inharmonious, and other people would be able to see the trace of repair. Therefore, the area of the associated background region is increased, and the operation of extracting the associated background region corresponding to the sensitive region is performed again based on the increased area, so that a more suitable background sample can be obtained.
In the embodiment of the application, the extended alternative image with higher similarity to the associated background image is generated, and the extended alternative image is overlaid on the sensitive area step by step to shield the sensitive object contained in the sensitive area, so that the protection of the privacy content of the user is realized.
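A one-dimensional toy version of the sliding-window covering in S503 to S506 is sketched below; the similarity measure, window size, threshold, and the fallback used in place of the background-enlargement loop are all simplifying assumptions:

```python
def cover_sensitive_region(region, background, window=4, threshold=0.8):
    """Slide a window over the sensitive region (S503); when a sub-block is
    similar enough to the adjacent background sample, overlay it with that
    sample (S504-S505), otherwise fall back to a coarse background fill
    (standing in for the background enlargement of S506). Pixels: 0-255."""
    def similarity(a, b):
        # crude similarity: 1 minus the normalized mean absolute difference
        return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / (255.0 * len(a))

    out = list(region)
    for start in range(0, len(region), window):
        sub = region[start:start + window]
        patch = background[start % len(background):][:len(sub)]
        if len(patch) < len(sub):                 # wrap the background sample
            patch = (patch + background)[:len(sub)]
        if similarity(sub, patch) > threshold:
            out[start:start + window] = patch     # extended replacement image
        else:
            mean = sum(background) // len(background)
            out[start:start + window] = [mean] * len(sub)
    return out

print(cover_sensitive_region([200] * 8, [10] * 8))
# [10, 10, 10, 10, 10, 10, 10, 10]: every sub-block replaced by background
```

A real implementation would work on 2-D image patches and iterate the background enlargement exactly as S506 describes; the sketch only shows the cover-or-retry control flow.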
In S507, if the object type is a face type, a substitute face matched with the sensitive object associated with the sensitive area is selected from a preset face image library.
In S508, the replacement face is overlaid on the sensitive area to shield the sensitive area.
In this embodiment, if the electronic device recognizes that the sensitive object in the sensitive area is of a face type, the electronic device may cover the sensitive area with a replacement face matched with the sensitive object, so as to block the sensitive area in the video image frame, and make the blocking operation invisible to other users, that is, no repair trace is perceived.
In the embodiment of the application, the face in the sensitive area is shielded through the preset replaced face, so that the restoration trace of the video image frame generated by preprocessing can be reduced while the privacy data of a user is prevented from being leaked, and the uniformity of the whole display effect of the image is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 6 is a block diagram illustrating the structure of a video data sending apparatus according to an embodiment of the present invention; the apparatus includes units for performing the steps in the corresponding embodiment of fig. 1. Please refer to fig. 1 and the corresponding description of the embodiment of fig. 1 for related details. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 6, the video data transmitting apparatus includes:
an original video data obtaining unit 61, configured to determine original video data, where the original video data meets a preset video sending condition;
a sensitive object list generating unit 62, configured to obtain shooting attribute information associated with the original video data, and generate a sensitive object list based on the shooting attribute information; at least one sensitive object is contained in the sensitive object list; the shooting attribute information is used for determining related attribute information in the process of shooting original video data; the sensitive object is a shooting object related to the privacy of a user;
a marking information generating unit 63, configured to mark a sensitive area including the sensitive object in each video image frame in the original video data, and generate marking information for determining to mark the position based on the positions of all the sensitive areas in the corresponding video image frame;
the preprocessing unit 64 is configured to process each of the sensitive regions through a preset preprocessing algorithm to obtain a plurality of processed video image frames, where a sensitive object in each of the processed video image frames is hidden;
a video data sending unit 65, configured to generate target video data based on all the processed video image frames, and send the target video data and the tag information to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the tag information.
Optionally, the shooting attribute information includes a shooting scene and a user identifier to which the original video data belongs;
the sensitive object list generating unit 62 includes:
the candidate scene type determining unit is used for determining position information when the original video data are shot, marking the position information on a preset map application, and determining at least one candidate scene type corresponding to the position information;
the matching degree calculation unit is used for analyzing any one of the video image frames in the original video data, determining a shooting object contained in the video image frame, and respectively calculating the matching degree between the candidate scene type and the video image frame based on the shooting object;
a first candidate object list determining unit, configured to determine the shooting scene of the original video data based on a matching degree corresponding to the candidate scene type, and obtain a first candidate object list corresponding to the shooting scene;
the user identification query unit is used for querying the security level and the user identity information associated with the user identification;
the extended sensitive content determining unit is used for acquiring at least one feature sensitive content associated with the user identity information and determining the extended sensitive content associated with the feature sensitive content according to the security level;
a second candidate object list generating unit, configured to generate a second candidate object list according to all the feature sensitive contents and all the extension sensitive contents;
and the candidate object list merging unit is used for generating the sensitive object list based on the first candidate object list and the second candidate object list.
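The sub-units of unit 62 combine a scene-derived first candidate object list with an identity-derived second list. A minimal sketch follows, assuming a hypothetical overlap-ratio matching degree and an order-preserving union for the merge; the patent specifies neither.

```python
def scene_matching_degree(scene_objects, detected_objects):
    # hypothetical metric: fraction of the scene type's characteristic
    # objects that appear among the objects detected in the frame
    if not scene_objects:
        return 0.0
    return sum(1 for o in scene_objects if o in detected_objects) / len(scene_objects)

def build_sensitive_list(scene_vocab, detected, first_lists, second_list):
    # pick the candidate scene type with the highest matching degree,
    # take its first candidate object list, then merge it with the
    # identity-derived second list (order-preserving union)
    best = max(scene_vocab,
               key=lambda s: scene_matching_degree(scene_vocab[s], detected))
    seen, merged = set(), []
    for obj in first_lists[best] + second_list:
        if obj not in seen:
            seen.add(obj)
            merged.append(obj)
    return merged
```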
Optionally, the extension-sensitive content determining unit includes:
an effective extension range determining unit, configured to determine an effective extension range related to a preset security level based on a corresponding relationship between the security level and the extension range;
the content association distance calculation unit is used for calculating content association distances between each candidate sensitive content in the sensitive content dictionary and the characteristic sensitive content based on a preset sensitive content dictionary;
an extension sensitive content selecting unit, configured to select the candidate sensitive content of which the content association distance is smaller than the effective extension range, as the extension sensitive content associated with the feature sensitive content;
wherein, the content association distance is calculated as:

RelatedDist = γ · |Seman(Target − Option) − BaseSeman| + φ · |Pic(Target) − Pic(Option)|

Seman(Target − Option) = (Σ_{i=1..n} Target_i · Option_i) / (√(Σ_{i=1..n} Target_i²) · √(Σ_{i=1..n} Option_i²))

wherein RelatedDist is the content association distance; Target is the feature sensitive content; Option is the candidate sensitive content; Seman(x − y) is a semantic similarity function; BaseSeman is a preset reference semantic similarity; Pic(x) is the graph characterization function; γ and φ are preset weighting coefficients; Target_i is the value corresponding to the feature sensitive content in the i-th semantic dimension; Option_i is the value corresponding to the candidate sensitive content in the i-th semantic dimension; and n is the total number of semantic dimensions.
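The content association distance equation survives in this text only as an image placeholder, so the sketch below assumes one plausible reading of the surrounding variable definitions: cosine similarity over the n semantic dimensions for Seman, and a weighted sum of the semantic deviation from BaseSeman plus the gap between the graph characterization values. Both choices are assumptions, not the patent's actual formula.

```python
import math

def seman(target_vec, option_vec):
    # cosine similarity over the n semantic dimensions (assumed form)
    dot = sum(t * o for t, o in zip(target_vec, option_vec))
    nt = math.sqrt(sum(t * t for t in target_vec))
    no = math.sqrt(sum(o * o for o in option_vec))
    return dot / (nt * no) if nt and no else 0.0

def related_dist(target_vec, option_vec, pic_t, pic_o,
                 base_seman=0.5, gamma=1.0, phi=1.0):
    # assumed combination of the named terms: deviation of the semantic
    # similarity from the reference plus the graph-characterization gap
    return gamma * abs(seman(target_vec, option_vec) - base_seman) \
         + phi * abs(pic_t - pic_o)

def select_extension_content(target, candidates, effective_range, **kw):
    # candidates: {name: (semantic_vector, pic_value)}; keep those whose
    # content association distance is below the effective extension range
    tvec, tpic = target
    return [name for name, (vec, pic) in candidates.items()
            if related_dist(tvec, vec, tpic, pic, **kw) < effective_range]
```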
Optionally, the video data transmitting unit 65 includes:
a routing path determining unit, configured to determine, based on a communication address of the target terminal, a routing path corresponding to sending the target video data;
a risk level determining unit, configured to identify the number of routes that the routing path passes through and a network environment in which each routing device is located, and determine a risk level corresponding to the routing path;
an encryption key generation unit, configured to generate an encryption key associated with the risk level, and encrypt the target video data with the encryption key to generate an encrypted video file; the number of bits of the encryption key is determined by the risk level;
and the encrypted file sending unit is used for sending the encrypted video file and the mark information to the target terminal.
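A sketch of the risk-graded key generation described by these units, with a hypothetical hop-count risk rule and a toy XOR keystream standing in for the real cipher; a production system would use an authenticated cipher such as AES-GCM rather than this construction.

```python
import hashlib
import secrets

RISK_TO_KEY_BITS = {1: 128, 2: 192, 3: 256}   # hypothetical mapping

def risk_level(route_hops, untrusted_hops):
    # toy rule: more hops through untrusted network environments -> higher risk
    if untrusted_hops == 0:
        return 1
    return 2 if untrusted_hops < route_hops / 2 else 3

def make_key(level):
    # key length (number of bits) determined by the risk level
    return secrets.token_bytes(RISK_TO_KEY_BITS[level] // 8)

def xor_keystream(key, data):
    # illustrative stream transform only: SHA-256-derived keystream XORed
    # with the payload; applying it twice with the same key is the identity
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))
```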
Optionally, the original video data obtaining unit 61 includes:
a video sending request receiving unit, configured to receive a video sending request sent by the target terminal; the video sending request comprises an object identifier of at least one monitored object;
the object identifier searching unit is used for identifying that the video sending condition is met if the video data associated with the object identifier is stored, and determining the video data associated with the object identifier as the original video data;
correspondingly, the preprocessing unit 64 includes:
the fuzzy electronic seal generating unit is used for acquiring a face image associated with the object identifier, importing the face image into an electronic seal generating algorithm and generating a fuzzy electronic seal corresponding to the monitored object;
and the fuzzy electronic seal processing unit is used for identifying a monitoring object associated with the sensitive area in the video image frame, performing fuzzy processing on the sensitive area based on the fuzzy electronic seal associated with the monitoring object, and recording an object identifier of the monitoring object associated with the sensitive area in the mark information, so that the target terminal performs fuzzy reduction processing on the sensitive area based on a face image associated with the locally stored object identifier.
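One way to realize a "fuzzy electronic seal" that the target terminal can invert from a locally stored face image is a deterministic keystream derived from the image bytes: XOR-applying it blurs the region, and applying it again restores the original. This construction is an illustrative assumption, not the patent's electronic seal generation algorithm.

```python
import hashlib

def fuzzy_seal(face_image_bytes, length):
    # deterministic "seal" keystream derived from the face image, so a
    # terminal holding the same image can regenerate it and invert the blur
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(face_image_bytes + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def apply_seal(region_bytes, seal):
    # XOR is self-inverse: applying the seal twice restores the region
    return bytes(r ^ s for r, s in zip(region_bytes, seal))
```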
Optionally, the preprocessing unit 64 includes:
the object type identification unit is used for identifying the object type corresponding to the sensitive area;
a non-face type processing unit, configured to extract, if the object type is a non-face type, an associated background region corresponding to the sensitive region from the video image frame; the associated background region is a region surrounding the sensitive region in the video image frame;
the image similarity calculation unit is used for extracting a region sub-image from the sensitive region through a preset sliding window and calculating the image similarity between the region sub-image and an adjacent associated background region of the region sub-image;
an extended replacement image replacing unit, configured to generate an extended replacement image that matches the size of the sliding window based on the associated background region if the image similarity is greater than a preset similarity threshold, and overlay the extended replacement image on the region sub-image to block the region sub-image;
a return execution unit, configured to, if any part of the sensitive area is not yet covered by an extended replacement image, return to the operation of extracting a region sub-image from the sensitive region through the preset sliding window, until the whole sensitive area is covered by the extended replacement images;
and the background region expansion unit is used for increasing the region area of the associated background region if the image similarity is less than or equal to a preset similarity threshold, and returning to execute the operation of extracting the associated background region corresponding to the sensitive region in the video image frame based on the increased region area.
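The sliding-window covering loop described by these units can be sketched as follows for a grayscale frame. The mean-based similarity metric, window size, similarity threshold, and background-growth cap are all hypothetical stand-ins for the patent's unspecified choices.

```python
def cover_with_background(frame, box, win=2, sim_threshold=0.8, max_margin=3):
    # frame: grayscale image as a list of rows; box: (x, y, w, h)
    x, y, w, h = box
    H, W = len(frame), len(frame[0])

    def background_mean(margin):
        # mean of the associated background region: a ring of `margin`
        # pixels around the sensitive box, excluding the box itself
        vals = [frame[r][c]
                for r in range(max(0, y - margin), min(H, y + h + margin))
                for c in range(max(0, x - margin), min(W, x + w + margin))
                if not (y <= r < y + h and x <= c < x + w)]
        return sum(vals) / len(vals)

    def similarity(a, b):
        return 1.0 / (1.0 + abs(a - b))   # hypothetical similarity metric

    for wy in range(y, y + h, win):
        for wx in range(x, x + w, win):
            cells = [(r, c) for r in range(wy, min(wy + win, y + h))
                            for c in range(wx, min(wx + win, x + w))]
            sub_mean = sum(frame[r][c] for r, c in cells) / len(cells)
            margin, bg = 1, background_mean(1)
            # if not similar enough, grow the associated background region
            while similarity(sub_mean, bg) <= sim_threshold and margin < max_margin:
                margin += 1
                bg = background_mean(margin)
            replacement = int(round(bg))   # the "extended replacement image"
            for r, c in cells:
                frame[r][c] = replacement
    return frame
```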
Optionally, the preprocessing unit 64 further includes:
the face type processing unit is used for selecting a replacement face matched with the sensitive object associated with the sensitive area from a preset face image library if the object type is the face type;
and the replaced face covering unit is used for covering the replaced face on the sensitive area so as to shield the sensitive area.
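A minimal sketch of the face-type branch, where "matching" a replacement face from the preset face image library is reduced to matching the region size; the patent's actual matching rule between the replacement face and the sensitive object is unspecified.

```python
def cover_face(frame_region, face_library, sensitive_object):
    # frame_region: the sensitive area as a list of pixel rows;
    # face_library: candidate replacement faces in the same layout.
    # sensitive_object is kept in the signature for the patent's matching
    # step, but this sketch matches only by region size (an assumption).
    h, w = len(frame_region), len(frame_region[0])
    for face in face_library:
        if len(face) == h and len(face[0]) == w:
            return [row[:] for row in face]   # replacement covers the region
    return frame_region                        # no match: leave unchanged
```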
Therefore, when the video sending condition is met, the electronic device provided by the embodiment of the invention acquires the original video data matching the condition, obtains the shooting attribute information associated with the video data, determines a sensitive object list based on that information, marks the sensitive area containing each sensitive object in every video image frame of the original video data, and records marking information containing the positions of all the sensitive areas. The sensitive areas in each video image frame are then processed to hide the sensitive objects they contain, target video data is generated from the processed video image frames, and the target video data together with the corresponding marking information is sent to a target terminal, so that after receiving the target video data the target terminal can restore the processed sensitive areas in each video image frame based on the marking information, thereby protecting the privacy information in the video data.

Compared with existing video data sending technology, video data obtained through generic coding is not sent directly to the target terminal; instead, the sensitive areas related to privacy information are preprocessed before the sending operation. On one hand, the privacy data are determined automatically based on the shooting attribute information of the video data, which improves the accuracy and automation of privacy data determination and the efficiency of obtaining the privacy information. On the other hand, sending the processed target video data together with the corresponding marking information allows the target terminal to restore the target video data, ensuring normal playback of the video while maintaining the confidentiality and security of the privacy information.
Fig. 7 is a schematic diagram of an electronic device according to another embodiment of the invention. As shown in fig. 7, the electronic device 7 of this embodiment includes: a processor 70, a memory 71, and a computer program 72, such as a video data transmission program, stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the above-described embodiments of the method of transmitting video data, such as S101 to S105 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of the units in the above-described device embodiments, such as the functions of the modules 61 to 65 shown in fig. 6.
Illustratively, the computer program 72 may be divided into one or more units, which are stored in the memory 71 and executed by the processor 70 to accomplish the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 72 in the electronic device 7.
The electronic device may include, but is not limited to, a processor 70 and a memory 71. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the electronic device 7 and does not constitute a limitation of the electronic device 7, which may include more or fewer components than those shown, combine certain components, or have different components; for example, the electronic device may also include input/output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the electronic device 7, such as a hard disk or a memory of the electronic device 7. The memory 71 may also be an external storage device of the electronic device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the electronic device 7. The memory 71 is used for storing the computer program and other programs and data required by the electronic device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A method for transmitting video data, comprising:
determining original video data, wherein the original video data meet preset video sending conditions;
acquiring shooting attribute information associated with the original video data, and generating a sensitive object list based on the shooting attribute information; at least one sensitive object is contained in the sensitive object list; the shooting attribute information is used for determining related attribute information in the process of shooting original video data; the sensitive object is a shooting object related to the privacy of a user;
respectively marking the sensitive area containing the sensitive object in each video image frame in the original video data, and generating marking information recording the positions of all the sensitive areas in the corresponding video image frames;
respectively processing each sensitive area through a preset preprocessing algorithm to obtain a plurality of processed video image frames, wherein sensitive objects in the processed video image frames are hidden;
generating target video data based on all the processed video image frames, and sending the target video data and the tag information to a target terminal so that the target terminal restores each processed sensitive area in the target video data based on the tag information;
the shooting attribute information comprises a shooting scene and a user identifier to which the original video data belongs;
the acquiring shooting attribute information associated with the original video data and generating a sensitive object list based on the shooting attribute information includes:
determining position information when the original video data is shot, marking the position information on a preset map application, and determining at least one candidate scene type corresponding to the position information;
analyzing any one of the video image frames in the original video data, determining a shooting object contained in the video image frame, and respectively calculating the matching degree between the candidate scene type and the video image frame based on the shooting object;
determining the shooting scene of the original video data based on the matching degree corresponding to the candidate scene type, and acquiring a first candidate object list corresponding to the shooting scene;
inquiring the security level and the user identity information associated with the user identification;
acquiring at least one feature sensitive content associated with the user identity information, and determining an extension sensitive content associated with the feature sensitive content according to the security level;
generating a second candidate object list according to all the feature sensitive contents and all the extension sensitive contents;
generating the sensitive object list based on the first candidate object list and the second candidate object list.
2. The sending method of claim 1, wherein the acquiring at least one feature sensitive content associated with the user identity information and determining an extension sensitive content associated with the feature sensitive content according to the security level comprises:
determining an effective extension range related to the security level based on a corresponding relation between a preset security level and the extension range;
calculating content association distances between each candidate sensitive content in the sensitive content dictionary and the feature sensitive content based on a preset sensitive content dictionary;
selecting the candidate sensitive content of which the content association distance is smaller than the effective extension range as the extension sensitive content associated with the feature sensitive content;
wherein, the content association distance is calculated as:

RelatedDist = γ · |Seman(Target − Option) − BaseSeman| + φ · |Pic(Target) − Pic(Option)|

Seman(Target − Option) = (Σ_{i=1..n} Target_i · Option_i) / (√(Σ_{i=1..n} Target_i²) · √(Σ_{i=1..n} Option_i²))

wherein RelatedDist is the content association distance; Target is the feature sensitive content; Option is the candidate sensitive content; Seman(Target − Option) is a semantic similarity function; Pic(Target) and Pic(Option) are values of the graph characterization function; BaseSeman is a preset reference semantic similarity; γ and φ are preset weighting coefficients; Target_i is the value corresponding to the feature sensitive content in the i-th semantic dimension; Option_i is the value corresponding to the candidate sensitive content in the i-th semantic dimension; and n is the total number of semantic dimensions.
3. The method according to claim 1, wherein the generating target video data based on all the processed video image frames and sending the target video data and the tag information to a target terminal comprises:
determining a routing path corresponding to the sending of the target video data based on the communication address of the target terminal;
identifying the number of routes passed by the routing path and the network environment of each routing device, and determining the risk level corresponding to the routing path;
generating an encryption key associated with the risk level, and encrypting the target video data through the encryption key to generate an encrypted video file; the number of bits of the encryption key is determined by the risk level;
and sending the encrypted video file and the mark information to the target terminal.
4. The transmission method according to any one of claims 1 to 3, wherein the determining original video data comprises:
receiving a video sending request sent by the target terminal; the video sending request comprises an object identifier of at least one monitored object;
if the video data associated with the object identifier is stored, identifying that the video sending condition is met, and determining the video data associated with the object identifier as the original video data;
correspondingly, the processing each sensitive area through a preset preprocessing algorithm to obtain a plurality of processed video image frames includes:
acquiring a face image associated with the object identifier, importing the face image into an electronic seal generation algorithm, and generating a fuzzy electronic seal corresponding to the monitored object;
and identifying a monitoring object associated with the sensitive area in the video image frame, performing fuzzy processing on the sensitive area based on a fuzzy electronic seal associated with the monitoring object, and recording an object identifier of the monitoring object associated with the sensitive area in the mark information, so that the target terminal performs fuzzy reduction processing on the sensitive area based on a face image associated with the locally stored object identifier.
5. The sending method according to any one of claims 1 to 3, wherein the processing each of the sensitive regions by a preset preprocessing algorithm to obtain a plurality of processed video image frames comprises:
identifying the object type corresponding to the sensitive area;
if the object type is a non-face type, extracting a related background area corresponding to the sensitive area from the video image frame; the associated background region is a region surrounding the sensitive region in the video image frame;
extracting a region subimage from the sensitive region through a preset sliding window, and calculating the image similarity between the region subimage and an adjacent associated background region of the region subimage;
if the image similarity is larger than a preset similarity threshold, generating an extended alternative image matched with the size of the sliding window based on the associated background area, and covering the extended alternative image on the area sub-image to shield the area sub-image;
if the sensitive area has an area which is not covered by the extension replacement image, returning to execute the operation of extracting the area sub-image from the sensitive area through a preset sliding window until all the sensitive areas are covered by the extension replacement images;
if the image similarity is smaller than or equal to a preset similarity threshold, increasing the area of the associated background area, and returning to execute the operation of extracting the associated background area corresponding to the sensitive area in the video image frame based on the increased area.
6. The sending method of claim 5, further comprising, after the identifying the object type corresponding to the sensitive region:
if the object type is a face type, selecting a replacement face matched with the sensitive object associated with the sensitive area from a preset face image library;
covering the replacement human face on the sensitive area so as to shield the sensitive area.
7. A control apparatus for transmission of video data, comprising:
the device comprises an original video data acquisition unit, a video sending unit and a video sending unit, wherein the original video data acquisition unit is used for determining original video data which meet preset video sending conditions;
the sensitive object list generating unit is used for acquiring shooting attribute information associated with the original video data and generating a sensitive object list based on the shooting attribute information; at least one sensitive object is contained in the sensitive object list; the shooting attribute information is used for determining related attribute information in the process of shooting original video data; the sensitive object is a shooting object related to the privacy of a user;
the marking information generating unit is used for respectively marking the sensitive areas containing the sensitive objects in each video image frame in the original video data, and generating marking information recording the positions of all the sensitive areas in the corresponding video image frames;
the preprocessing unit is used for processing each sensitive area through a preset preprocessing algorithm to obtain a plurality of processed video image frames, and sensitive objects in the processed video image frames are hidden;
a video data sending unit, configured to generate target video data based on all the processed video image frames, and send the target video data and the tag information to a target terminal, so that the target terminal restores each processed sensitive area in the target video data based on the tag information;
the shooting attribute information comprises a shooting scene and a user identifier to which the original video data belongs;
the sensitive object list generating unit includes:
the candidate scene type determining unit is used for determining position information when the original video data are shot, marking the position information on a preset map application, and determining at least one candidate scene type corresponding to the position information;
the matching degree calculation unit is used for analyzing any one of the video image frames in the original video data, determining a shooting object contained in the video image frame, and respectively calculating the matching degree between the candidate scene type and the video image frame based on the shooting object;
a first candidate object list determining unit, configured to determine the shooting scene of the original video data based on a matching degree corresponding to the candidate scene type, and obtain a first candidate object list corresponding to the shooting scene;
the user identification query unit is used for querying the security level and the user identity information associated with the user identification;
the extended sensitive content determining unit is used for acquiring at least one feature sensitive content associated with the user identity information and determining the extended sensitive content associated with the feature sensitive content according to the security level;
a second candidate object list generating unit, configured to generate a second candidate object list according to all the feature sensitive contents and all the extension sensitive contents;
and the candidate object list merging unit is used for generating the sensitive object list based on the first candidate object list and the second candidate object list.
8. An electronic device, characterized in that the electronic device comprises a memory, a processor and a computer program stored in the memory and executable on the processor, the processor executing the computer program with the steps of the method according to any of claims 1 to 6.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202110680442.5A 2021-06-18 2021-06-18 Video data sending method and electronic equipment Active CN113259721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110680442.5A CN113259721B (en) 2021-06-18 2021-06-18 Video data sending method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110680442.5A CN113259721B (en) 2021-06-18 2021-06-18 Video data sending method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113259721A CN113259721A (en) 2021-08-13
CN113259721B true CN113259721B (en) 2021-09-24

Family

ID=77188656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110680442.5A Active CN113259721B (en) 2021-06-18 2021-06-18 Video data sending method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113259721B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806002A (en) * 2021-09-24 2021-12-17 维沃移动通信有限公司 Image display method and device
CN113936202A (en) * 2021-10-14 2022-01-14 北京地平线信息技术有限公司 Image security processing method and device, electronic equipment and storage medium
CN114339049A (en) * 2021-12-31 2022-04-12 深圳市商汤科技有限公司 Video processing method and device, computer equipment and storage medium
CN114584787A (en) * 2022-05-05 2022-06-03 浙江智慧视频安防创新中心有限公司 Coding method and device based on digital retina sensitive video content hiding
CN115100209B (en) * 2022-08-28 2022-11-08 电子科技大学 Camera-based image quality correction method and correction system
CN115620214B (en) * 2022-12-20 2023-03-07 浙江奥鑫云科技有限公司 Safety processing method for network information data
CN116644476A (en) * 2023-07-21 2023-08-25 太平金融科技服务(上海)有限公司深圳分公司 Image shielding method and device, electronic equipment and storage medium
CN117235805B (en) * 2023-11-16 2024-02-23 中汽智联技术有限公司 Vehicle data processing system, processing method, device and medium
CN117455751B (en) * 2023-12-22 2024-03-26 新华三网络信息安全软件有限公司 Road section image processing system and method
CN117521159B (en) * 2024-01-05 2024-05-07 浙江大华技术股份有限公司 Sensitive data protection method, device and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102768715A (en) * 2011-04-30 2012-11-07 三星电子株式会社 Privacy and trends
CN112583833A (en) * 2020-12-14 2021-03-30 珠海格力电器股份有限公司 Data encryption processing method and device, electronic equipment and storage medium

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US10075618B2 (en) * 2013-04-22 2018-09-11 Sony Corporation Security feature for digital imaging
US11182781B2 (en) * 2014-06-16 2021-11-23 Bank Of America Corporation Block chain encryption tags
WO2016106383A2 (en) * 2014-12-22 2016-06-30 Robert Bosch Gmbh First-person camera based visual context aware system
KR102376962B1 (en) * 2015-12-15 2022-03-21 삼성전자주식회사 Server, electronic device, and method for image processing in electronic device
US10686765B2 (en) * 2017-04-19 2020-06-16 International Business Machines Corporation Data access levels
AU2018321357B2 (en) * 2017-08-22 2021-12-16 Alarm.Com Incorporated Preserving privacy in surveillance
CN109308449B (en) * 2018-08-06 2021-06-18 瑞芯微电子股份有限公司 Foreign matter filtering video coding chip and method based on deep learning
US20200250401A1 (en) * 2019-02-05 2020-08-06 Zenrin Co., Ltd. Computer system and computer-readable storage medium
US11425105B2 (en) * 2019-02-07 2022-08-23 Egress Software Technologies Ip Limited Method and system for processing data packages
CN111586361B (en) * 2020-05-19 2021-10-15 浙江大华技术股份有限公司 Image processing method and related device

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102768715A (en) * 2011-04-30 2012-11-07 三星电子株式会社 Privacy and trends
CN112583833A (en) * 2020-12-14 2021-03-30 珠海格力电器股份有限公司 Data encryption processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113259721A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN113259721B (en) Video data sending method and electronic equipment
Sajjad et al. Robust image hashing based efficient authentication for smart industrial environment
CN112949545B (en) Method, apparatus, computing device and medium for recognizing face image
Korshunov et al. Framework for objective evaluation of privacy filters
CN108848334A (en) A kind of method, apparatus, terminal and the storage medium of video processing
US11822698B2 (en) Privacy transformations in data analytics
CN111931145A (en) Face encryption method, face recognition method, face encryption device, face recognition device, electronic equipment and storage medium
CN111552984A (en) Display information encryption method, device, equipment and storage medium
CN113393471A (en) Image processing method and device
CN113486377A (en) Image encryption method and device, electronic equipment and readable storage medium
CN112802138A (en) Image processing method and device, storage medium and electronic equipment
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
KR20190066218A (en) Method, computing device and program for executing harmful object control
CN113688658A (en) Object identification method, device, equipment and medium
Luo et al. Anonymous subject identification and privacy information management in video surveillance
Vega et al. Image tampering detection by estimating interpolation patterns
CN111368128B (en) Target picture identification method, device and computer readable storage medium
CN112672102B (en) Video generation method and device
CN106529307B (en) Photograph encryption method and device
KR20150061470A (en) VDI service providing system and method
US10282633B2 (en) Cross-asset media analysis and processing
Lin et al. Moving object detection in the encrypted domain
CN113052044A (en) Method, apparatus, computing device, and medium for recognizing iris image
CN113139527A (en) Video privacy protection method, device, equipment and storage medium
JP2018142137A (en) Information processing device, information processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant