CN109831677A - Video desensitization method, device, computer equipment and storage medium - Google Patents


Info

Publication number: CN109831677A (granted publication: CN109831677B)
Application number: CN201811535162.XA
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 王振华 (Wang Zhenhua)
Original and current assignee: Ping An Technology (Shenzhen) Co., Ltd.
Legal status: Granted; active

Abstract

This application relates to a video desensitization method, device, computer device, and storage medium. The method relates to biometric recognition technology and comprises: extracting to-be-identified video data from a video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data; inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result; cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments; extracting non-desensitized video segments from the video segments according to preset business desensitization rules; and splicing the non-desensitized video segments to obtain the desensitized video stream. The method improves the processing efficiency of video desensitization.

Description

Video desensitization method, device, computer equipment and storage medium
Technical field
This application relates to the field of computer technology, and in particular to a video desensitization method, device, computer device, and storage medium.
Background
Data desensitization refers to transforming certain sensitive information according to desensitization rules so that privacy-sensitive data is reliably protected. Where customer security data or commercially sensitive data is involved, real data is transformed and made available for testing without violating system rules. For example, sensitive information on train tickets and in e-commerce consignee addresses commonly encountered in daily life is desensitized.
Currently, desensitizing the sensitive information in video data generally requires professionals to cut the video manually, remove the segments containing sensitive information, and splice the remainder back together, so the processing efficiency of video desensitization is low.
Summary of the invention
In view of the above technical problems, it is necessary to provide a video desensitization method, device, computer device, and storage medium that can improve the processing efficiency of video desensitization.
A video desensitization method, the method comprising:
extracting to-be-identified video data from a video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments;
extracting non-desensitized video segments from the video segments according to preset business desensitization rules;
splicing the non-desensitized video segments to obtain the desensitized video stream.
In one of the embodiments, inputting the image data into the preset marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio data into the preset marking speech recognition model to obtain the marking speech recognition result, comprises:
determining the identity information of the business person associated with the to-be-identified video data;
querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identity information;
extracting image feature data from the image data, and extracting audio feature data from the audio data;
inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
In one of the embodiments, before querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identity information, the method further comprises:
obtaining historical behavior image data and historical marking voice data from the business system;
classifying the historical behavior image data and the historical marking voice data by business person, to obtain the historical behavior image data and the historical marking voice data corresponding to each business person;
training on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model;
training on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
In one of the embodiments, cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain the video segments comprises:
querying preset marking trigger rules, the marking trigger rules comprising a behavior trigger rule and a speech trigger rule;
comparing the marking behavior recognition result with the behavior trigger rule to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rule to obtain a speech trigger result;
obtaining a marking recognition result according to the behavior trigger result and the speech trigger result;
when the marking recognition result is a marking operation, adding a cut point identifier to the to-be-identified video data;
cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
In one of the embodiments, the method further comprises:
when a marking cut instruction is received, determining the cut time value of the marking cut instruction;
determining the cut video frame corresponding to the cut time value in the to-be-identified video data;
adding a cut point identifier to the cut video frame;
returning to the step of cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
In one of the embodiments, extracting the non-desensitized video segments from the video segments according to the preset business desensitization rules comprises:
querying the preset business desensitization rules;
determining the non-desensitized video segments among the video segments according to the business desensitization rules;
extracting the non-desensitized video segments.
In one of the embodiments, determining the non-desensitized video segments among the video segments according to the business desensitization rules comprises:
determining non-desensitization marking types according to the business desensitization rules;
querying the marking type of the cut point identifier corresponding to each video segment;
matching the marking type against the non-desensitization marking types;
taking the video segments whose matching result is consistent as the non-desensitized video segments.
A video desensitization device, the device comprising:
an identification data extraction module, configured to extract to-be-identified video data from a video stream to be desensitized, and to extract image data and audio data from the to-be-identified video data;
a marking recognition processing module, configured to input the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and to input the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
a video cutting processing module, configured to cut the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments;
a video segment screening module, configured to extract non-desensitized video segments from the video segments according to preset business desensitization rules;
a video segment splicing module, configured to splice the non-desensitized video segments to obtain the desensitized video stream.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
extracting to-be-identified video data from a video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments;
extracting non-desensitized video segments from the video segments according to preset business desensitization rules;
splicing the non-desensitized video segments to obtain the desensitized video stream.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
extracting to-be-identified video data from a video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments;
extracting non-desensitized video segments from the video segments according to preset business desensitization rules;
splicing the non-desensitized video segments to obtain the desensitized video stream.
With the above video desensitization method, device, computer device, and storage medium, image data and audio data are extracted from the to-be-identified video data obtained from the video stream to be desensitized; the image data and the audio data are respectively input into the corresponding preset marking behavior recognition model and marking speech recognition model; the video stream to be desensitized is then cut according to the resulting marking behavior recognition result and marking speech recognition result; and finally the non-desensitized segments among the cut video segments are spliced together according to the business desensitization rules to obtain the desensitized video stream, thereby desensitizing the video data. During video desensitization, marking recognition and cutting can be performed according to the image data and audio data in the to-be-identified video data, and the non-desensitized video segments are then spliced according to the preset business desensitization rules, so that no one needs to take part directly in cutting and splicing the video, which improves the processing efficiency of video desensitization.
Brief description of the drawings
Fig. 1 is an application scenario diagram of the video desensitization method in one embodiment;
Fig. 2 is a schematic flowchart of the video desensitization method in one embodiment;
Fig. 3 is a schematic flowchart of the video cutting processing in one embodiment;
Fig. 4 is a schematic flowchart of the video desensitization method in another embodiment;
Fig. 5 is a structural block diagram of the video desensitization device in one embodiment;
Fig. 6 is an internal structure diagram of the computer device in one embodiment.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present application and are not intended to limit it.
The video desensitization method provided by the present application can be applied in the application environment shown in Fig. 1, where a recording device 102 communicates with a server 104 over a network. The recording device 102 records video and sends the video stream to be desensitized to the server 104. The server 104 extracts image data and audio data from the to-be-identified video data obtained from the video stream to be desensitized, inputs the image data and the audio data into the corresponding preset marking behavior recognition model and marking speech recognition model respectively, cuts the video stream to be desensitized according to the resulting marking behavior recognition result and marking speech recognition result, and finally splices the non-desensitized segments among the cut video segments according to the business desensitization rules to obtain the desensitized video stream, thereby desensitizing the video data.
The recording device 102 may be, but is not limited to, a video camera or a terminal with a video recording function, such as a personal computer, a laptop, a smartphone, a tablet computer, or a portable wearable device. The server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
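To make the flow described above concrete, the following is a minimal Python sketch of the processing the server 104 performs, under the assumption that the recognition models, the marking trigger rules, and the business desensitization rules are supplied by the caller as plain callables and a set of retained marking types; every name in it is illustrative rather than part of the patent.

```python
from typing import Callable, List, Optional, Sequence, Tuple

# A clip of to-be-identified video data: (start_s, end_s, image_frames, audio_bytes)
Clip = Tuple[float, float, list, bytes]
# A video segment produced by cutting: (start_s, end_s, marking_type)
Segment = Tuple[float, float, str]

def desensitize(clips: Sequence[Clip],
                recognize_behavior: Callable[[list], str],
                recognize_speech: Callable[[bytes], str],
                marking_type_of: Callable[[str, str], Optional[str]],
                non_desensitized_types: frozenset) -> List[Segment]:
    """Marking recognition -> cutting -> rule-based screening, in timeline order."""
    segments: List[Segment] = []
    seg_start = clips[0][0] if clips else 0.0
    for _start, end, frames, audio in clips:
        behavior = recognize_behavior(frames)     # marking behavior recognition result
        speech = recognize_speech(audio)          # marking speech recognition result
        # The trigger callable returns a marking type when marking is triggered,
        # and None otherwise (it folds trigger rules and segment typing together).
        mark = marking_type_of(behavior, speech)
        if mark is not None:                      # marking operation: add a cut point
            segments.append((seg_start, end, mark))
            seg_start = end
    # Business desensitization rules: keep only the non-desensitized segments,
    # which are then spliced back into one stream in this same order.
    return [s for s in segments if s[2] in non_desensitized_types]
```

Under these assumptions a call might look like desensitize(clips, behavior_model.predict, speech_model.predict, trigger_rules_match, frozenset({"identity_confirmation", "disclosure"})); the individual pieces are detailed in the steps below.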
In one embodiment, as shown in Fig. 2, a video desensitization method is provided. Taking its application to the server 104 in Fig. 1 as an example, the method comprises the following steps:
Step S201: extracting to-be-identified video data from the video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data.
The video stream to be desensitized is video data whose sensitive information needs to be desensitized, and can be recorded by the recording device 102. For example, it may be a dual-recording video from the financial industry, i.e. a synchronized video recording of the financial product sales process; dual recording makes sales behavior in financial services replayable, important information queryable, and responsibility for problems confirmable, and supports honest selling. In a specific implementation, business service windows are generally equipped with recording devices 102, from whose recordings the video stream to be desensitized can be obtained. Desensitizing the video stream requires cutting out and removing the video segments that involve sensitive information, so the stream needs to be cut and then spliced again to achieve the desensitization effect.
When cutting and splicing the video stream to be desensitized, video data of a certain length needs to be taken from it for marking recognition so as to trigger the video cutting processing. The to-be-identified video data is a piece of the video stream to be desensitized of a preset video stream identification length, which is set according to actual needs. By performing marking recognition on the to-be-identified video data, adding the corresponding cut point identifiers, and then cutting the video, the video stream to be desensitized can be cut in real time, ensuring the timeliness of the desensitization processing and thereby improving its efficiency.
Generally, video data consists of two parts, image and audio, and marking can be recognized from both. Specifically, when performing marking recognition on the to-be-identified video data, image data and audio data can be extracted from it separately and recognized in parallel, so as to detect whether a marking behavior appears in the video image or a marking utterance appears in the video audio. Performing marking recognition on both the image behavior and the audio speech improves the accuracy of the marking recognition.
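As an illustration of this extraction step only: the sketch below decodes the image frames of one fixed-length clip with OpenCV and cuts out the matching audio window with the ffmpeg command line. The choice of OpenCV/ffmpeg, the clip length, and the 16 kHz mono WAV format are assumptions for illustration, not requirements of the patent.

```python
import subprocess
import cv2  # OpenCV, assumed available

def extract_image_and_audio(video_path: str, start_s: float, length_s: float,
                            wav_out: str = "clip_audio.wav"):
    """Return (frames, wav_out): image data and audio data for one clip."""
    # Image data: decode the frames of the [start_s, start_s + length_s) window.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * fps))
    frames = []
    for _ in range(int(length_s * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    # Audio data: let ffmpeg cut the same window into a mono 16 kHz WAV file.
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(start_s), "-t", str(length_s),
        "-i", video_path, "-vn", "-ac", "1", "-ar", "16000", wav_out,
    ], check=True)
    return frames, wav_out
```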
Step S203: inputting the image data into the preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into the preset marking speech recognition model to obtain a marking speech recognition result.
After the image data and the audio data have been extracted from the to-be-identified video data, they are input into the corresponding marking behavior recognition model and marking speech recognition model for marking recognition. The marking behavior recognition model can be obtained, based on an artificial neural network algorithm, by training on the historical marking behavior data of the business personnel of the business system in the corresponding business scenario; the marking behaviors may be limb actions such as clapping, raising a hand, or a fist gesture, or various other hand gestures. The marking speech recognition model can be obtained by training on the historical marking voice data of the business personnel, for example marking keywords and phrases such as "question X", "good, next question", "begin/end", or "all right, thank you".
In this embodiment, when performing marking recognition on the to-be-identified video data, on the one hand the image data is input into the preset marking behavior recognition model for marking behavior recognition, yielding the marking behavior recognition result; on the other hand, the audio data is input into the preset marking speech recognition model for marking speech recognition, yielding the marking speech recognition result. Recognizing the image data and the audio data separately lets business personnel trigger marking through limb actions, gestures, voice keywords, and so on, which extends the variety of marking operations and avoids disrupting the flow of the business process while ensuring the accuracy of video desensitization.
Step S205: cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain video segments.
After the marking behavior recognition result and the marking speech recognition result are obtained, the video stream to be desensitized is cut according to them, yielding the individual video segments. Specifically, the marking behavior recognition result and the marking speech recognition result can be combined into a marking recognition result, on which a marking judgement is made; if the judgement is a marking operation, a cut point identifier is added and the video stream to be desensitized is cut at that cut point.
Step S207: extracting non-desensitized video segments from the video segments according to the preset business desensitization rules.
After the video segments are obtained by cutting the video stream to be desensitized, the segments involving sensitive information need to be removed and the remaining segments retained, so as to desensitize the stream. In this embodiment, the preset business desensitization rules are obtained; they are configured according to the actual desensitization requirements of the business. For example, a dual-recording video generally contains identity-confirmation segments, review-and-verification segments, and disclosure segments. The review-and-verification segments involve the anti-fraud review scripts of the financial service system, which are trade secrets and need to be desensitized, so the corresponding business desensitization rule may specify that review-and-verification video segments are segments to be desensitized and are to be removed. The non-desensitized video segments are extracted from the video segments according to the business desensitization rules, and splicing these non-desensitized segments yields the desensitized video stream.
Step S209: splicing the non-desensitized video segments to obtain the desensitized video stream.
After the non-desensitized video segments are obtained, they are spliced to produce the desensitized video stream. In a specific implementation, the splicing may follow the timeline of the non-desensitized video segments, i.e. the segments are joined in chronological order; the splicing may also follow actual business requirements, so as to obtain a desensitized video stream that meets given conditions.
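One possible realization of the splicing step, assuming the retained segments are given as (start, end) times on the source timeline and that ffmpeg is available: each piece is cut out with a re-encode so the cut lands on the requested times, then the pieces are joined with the concat demuxer. These tool choices are an assumption, not something the patent prescribes.

```python
import subprocess
from typing import Sequence, Tuple

def splice_segments(src: str, segments: Sequence[Tuple[float, float]],
                    out: str = "desensitized.mp4") -> str:
    """Cut (start_s, end_s) segments from src and join them in timeline order."""
    parts = []
    for i, (start, end) in enumerate(sorted(segments)):
        part = f"part_{i:03d}.mp4"
        # Re-encode each piece so the cut is frame-accurate at start/end.
        subprocess.run([
            "ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end), part,
        ], check=True)
        parts.append(part)
    with open("concat.txt", "w") as f:
        f.writelines(f"file '{p}'\n" for p in parts)
    # The concat demuxer joins the pieces without another re-encode.
    subprocess.run([
        "ffmpeg", "-y", "-f", "concat", "-safe", "0",
        "-i", "concat.txt", "-c", "copy", out,
    ], check=True)
    return out
```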
In the above video desensitization method, image data and audio data are extracted from the to-be-identified video data obtained from the video stream to be desensitized; the image data and the audio data are input into the corresponding preset marking behavior recognition model and marking speech recognition model respectively; the video stream to be desensitized is then cut according to the resulting marking behavior recognition result and marking speech recognition result; and finally the non-desensitized segments among the cut video segments are spliced according to the business desensitization rules to obtain the desensitized video stream, thereby desensitizing the video data. During video desensitization, marking recognition and cutting can be performed according to the image data and audio data in the to-be-identified video data, and the non-desensitized video segments are then spliced according to the preset business desensitization rules, so that no one needs to take part directly in cutting and splicing the video, which improves the processing efficiency of video desensitization.
In one embodiment, inputting the image data into the preset marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio data into the preset marking speech recognition model to obtain the marking speech recognition result, comprises: determining the identity information of the business person associated with the to-be-identified video data; querying the preset marking behavior recognition model and marking speech recognition model corresponding to the identity information; extracting image feature data from the image data and audio feature data from the audio data; inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
In this embodiment, the marking behavior recognition model and the marking speech recognition model are trained on the historical marking data of each business person in the business system. Generally, during face-to-face dual recording of business, different business systems have different marking operation requirements, and different business personnel have different marking habits.
Specifically, when the image data is input into the preset marking behavior recognition model and the audio data into the preset marking speech recognition model, the identity information of the business person associated with the to-be-identified video data is determined first. In a specific application, each business service window is equipped with its own recording device 102, so the associated business person can be determined from the recording device 102 that is the source of the to-be-identified video data, and the identity information of that business person can then be queried. The identity information may be, but is not limited to, an employee number, an employee name, or any other information that uniquely identifies the business person. After the identity information is determined, the preset marking behavior recognition model and marking speech recognition model corresponding to it are queried; since these models are trained on the historical marking behavior data and historical marking voice data of the corresponding business person respectively, the marking recognition is well targeted and its accuracy is high.
After the marking behavior recognition model and the marking speech recognition model are obtained, on the one hand image feature data are extracted from the image data and input into the marking behavior recognition model to obtain the marking behavior recognition result; on the other hand, audio feature data are extracted from the audio data and input into the marking speech recognition model to obtain the marking speech recognition result. The feature extraction performed on the image data and the audio data filters out useless redundant information, and the resulting image feature data and audio feature data are used for the subsequent marking recognition that produces the marking behavior recognition result and the marking speech recognition result.
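A minimal sketch of this embodiment, assuming the per-person models are held in dictionaries keyed by employee number and expose a scikit-learn-style predict method, and that the feature extractors are passed in by the caller; all names are illustrative.

```python
from typing import Callable, Dict, Tuple

def marking_recognition(image_data, audio_data, staff_id: str,
                        behavior_models: Dict[str, object],
                        speech_models: Dict[str, object],
                        image_features: Callable,
                        audio_features: Callable) -> Tuple[str, str]:
    """Return (marking_behavior_result, marking_speech_result) for one clip."""
    # Query the models trained on this business person's own history.
    behavior_model = behavior_models[staff_id]
    speech_model = speech_models[staff_id]
    # Feature extraction filters redundant information out of the raw data.
    img_feats = image_features(image_data)
    aud_feats = audio_features(audio_data)
    # Marking recognition on both modalities.
    return behavior_model.predict(img_feats), speech_model.predict(aud_feats)
```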
In one embodiment, before querying the preset marking behavior recognition model and marking speech recognition model corresponding to the identity information, the method further comprises: obtaining historical behavior image data and historical marking voice data from the business system; classifying the historical behavior image data and the historical marking voice data by business person to obtain the historical behavior image data and the historical marking voice data corresponding to each business person; training on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model; and training on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
When training the marking behavior recognition model and the marking speech recognition model, historical behavior image data and historical marking voice data are first obtained from the business system. The historical behavior image data may be marking image data captured while each business person in the business system conducts face-to-face dual recording, and may include marking behaviors such as clapping, raising a hand, crossing both hands, or nodding; the historical marking voice data is analogous, for example keyword statements such as "question X" or "good, thank you". In practical applications each business person has different personal habits, and the marking operations reflected in their historical behavior image data and historical marking voice data also differ, so the historical data are classified by business person and a dedicated marking behavior recognition model and marking speech recognition model are built for each business person.
Specifically, training on the historical behavior image data corresponding to each business person yields the marking behavior recognition model, and training on the historical marking voice data corresponding to each business person yields the marking speech recognition model. In a specific implementation, the historical behavior image data can be divided into a training sample set and a test sample set; the training sample set is trained with a supervised learning method to obtain a candidate marking behavior model, whose recognition accuracy is then tested on the test sample set; once the accuracy test is passed, the marking behavior recognition model is obtained. The training process of the marking speech recognition model is analogous.
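The patent specifies only supervised learning with a training/test split and an accuracy check; the sketch below fills in those blanks with a scikit-learn MLP classifier and an assumed accuracy threshold as one concrete, non-authoritative choice.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def train_marking_model(features, labels, min_accuracy: float = 0.9):
    """Train one business person's marking model from their historical data.

    `features` are extracted feature vectors and `labels` the marking classes
    (e.g. "clap", "raise_hand", "none"); min_accuracy is an assumed threshold.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    if accuracy < min_accuracy:
        raise ValueError(f"model failed the accuracy test: {accuracy:.2f}")
    return model

# The same routine would be run once per business person, separately for the
# behavior features and the speech features, e.g.:
# behavior_models[staff_id] = train_marking_model(behavior_feats, behavior_labels)
```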
In one embodiment, as shown in Fig. 3, the video cutting step, i.e. cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain the video segments, comprises:
Step S301: querying preset marking trigger rules, the marking trigger rules comprising a behavior trigger rule and a speech trigger rule.
In this embodiment, after the marking behavior recognition result and the marking speech recognition result are obtained, a marking recognition result is derived from them in combination with the marking trigger rules required by the actual business, and the video stream to be desensitized is cut accordingly to obtain the video segments. Specifically, the preset marking trigger rules are queried. These rules are set according to the actual business requirements, in particular according to the business type and the habits of the business personnel; for example, marking may be considered triggered when a clapping behavior of the business person is recognized in the image data, or when the key sentence "question X" is recognized in the audio data. The marking trigger rules comprise a behavior trigger rule and a speech trigger rule, corresponding respectively to the marking recognition of the image data and of the audio data.
Step S303: comparing the marking behavior recognition result with the behavior trigger rule to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rule to obtain a speech trigger result.
After the behavior trigger rule and the speech trigger rule are obtained, on the one hand the marking behavior recognition result is compared with the behavior trigger rule to obtain the behavior trigger result; on the other hand, the marking speech recognition result is compared with the speech trigger rule to obtain the speech trigger result.
Step S305: obtaining a marking recognition result according to the behavior trigger result and the speech trigger result.
The behavior trigger result and the speech trigger result are combined into the marking recognition result, for example by an OR operation: when either the behavior trigger result or the speech trigger result indicates a marking operation, the resulting marking recognition result is a marking operation, and a cut point identifier is added to the to-be-identified video data.
Step S307: when the marking recognition result is a marking operation, adding a cut point identifier to the to-be-identified video data.
After the marking recognition result is obtained, its type is judged. When the marking recognition result is a marking operation, the to-be-identified video data is a cut point, and marking processing is applied to it; specifically, a cut point identifier can be added to the to-be-identified video data. The cut point identifier marks the cut point for video cutting, so when cutting the video stream to be desensitized the cut point identifiers can be looked up directly.
Step S309: cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
When cutting the video stream to be desensitized, the cut point identifiers in the stream are looked up and the cutting is performed at them, so that the video stream to be desensitized is split into the individual video segments.
In this embodiment, the video stream to be desensitized undergoes marking recognition and cutting according to the marking trigger rules set for the actual business requirements, combined with the marking behavior recognition result and the marking speech recognition result; this extends the variety of marking operations and avoids disrupting the flow of the business process while ensuring the accuracy of video desensitization.
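A compact sketch of steps S301 through S309, under the assumption that the behavior trigger rule and the speech trigger rule are simple whitelists of recognized actions and key phrases; the OR combination of the two trigger results follows the description above, and all names are illustrative.

```python
from typing import List, Sequence, Tuple

BEHAVIOR_TRIGGERS = {"clap", "raise_hand"}                        # assumed behavior trigger rule
SPEECH_TRIGGERS = {"next question", "begin", "end", "thank you"}  # assumed speech trigger rule

def marking_recognition_result(behavior: str, speech: str) -> bool:
    """S301-S305: combine the two trigger results with OR."""
    behavior_triggered = behavior in BEHAVIOR_TRIGGERS              # behavior trigger result
    speech_triggered = any(k in speech for k in SPEECH_TRIGGERS)    # speech trigger result
    return behavior_triggered or speech_triggered

def cut_points(clips: Sequence[Tuple[float, str, str]]) -> List[float]:
    """S307: add a cut point identifier at the end of every marked clip.

    Each clip is (end_time_s, behavior_result, speech_result)."""
    return [end for end, behavior, speech in clips
            if marking_recognition_result(behavior, speech)]

def cut(stream_duration: float, points: Sequence[float]) -> List[Tuple[float, float]]:
    """S309: split the stream at the cut points into (start, end) segments."""
    bounds = [0.0] + sorted(points) + [stream_duration]
    return [(a, b) for a, b in zip(bounds, bounds[1:]) if b > a]
```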
In one embodiment, the video desensitization method further comprises a step of responding to a marking cut instruction: when a marking cut instruction is received, determining the cut time value of the marking cut instruction; determining the cut video frame corresponding to the cut time value in the to-be-identified video data; adding a cut point identifier to the cut video frame; and returning to the step of cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
In this embodiment, besides extracting to-be-identified video data from the video stream and performing marking recognition on it, an externally sent marking cut instruction can also be responded to, enabling manual marking. Specifically, when a marking cut instruction is received, its cut time value is determined. The marking cut instruction can be sent externally, for example by a business person clicking a marking button; the cut time value is the sending time of the marking cut instruction and reflects the position on the video stream's timeline at which the marking operation takes place.
After the cut time value of the marking cut instruction is determined, the corresponding cut video frame is determined from the to-be-identified video data. Generally, an externally sent marking cut instruction indicates that the video frame corresponding to that moment in the to-be-identified video data carries a marking operation, so the corresponding cut video frame can be located on the timeline of the to-be-identified video data according to the cut time value. A cut point identifier is then added to the cut video frame; the cut point identifier marks the cut point for video cutting, so it can be looked up directly when cutting the video stream data.
After the cut point identifier is added, the method returns to the step of cutting the video stream data according to the cut point identifiers: the cut point identifiers in the video stream data are looked up and the cutting is performed at them, splitting the video stream data into the individual video segments.
In this embodiment, in addition to the marking recognition of the image data and the audio data of the to-be-identified video data, externally sent marking cut instructions are received in real time and the video is cut according to them, realizing external control over video marking and cutting. This effectively extends the operational variety of video cutting and improves the efficiency of the cutting, thereby improving the efficiency of video desensitization.
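One way the manual path could be realized, sketched under the assumption that the cut time value arrives as a wall-clock timestamp and that the stream's recording start time and frame rate are known; the class and field names are illustrative, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ManualMarker:
    """Maps marking cut instructions to cut video frames / cut point identifiers."""
    recording_start: float            # wall-clock time the stream started, in seconds
    fps: float = 25.0
    cut_frames: List[int] = field(default_factory=list)

    def on_cut_instruction(self, instruction_time: float) -> int:
        """Handle one marking cut instruction (e.g. the agent pressed a button)."""
        offset = instruction_time - self.recording_start   # position on the timeline
        frame_index = int(round(offset * self.fps))        # the cut video frame
        self.cut_frames.append(frame_index)                # record the cut point identifier
        return frame_index
```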
In one embodiment, extracting the non-desensitized video segments from the video segments according to the preset business desensitization rules comprises: querying the preset business desensitization rules; determining the non-desensitized video segments among the video segments according to the business desensitization rules; and extracting the non-desensitized video segments.
After the video segments are obtained by cutting the video stream to be desensitized, the segments involving sensitive information need to be removed and the remaining segments retained, so as to desensitize the stream. In this embodiment, the non-desensitized video segments can be extracted from the video segments according to the preset business desensitization rules and spliced in the subsequent processing. Specifically, the preset business desensitization rules, configured according to the actual desensitization requirements of the business, are queried. The non-desensitized video segments among the video segments are then determined according to the business desensitization rules, in particular by dividing the video segments into segments to be desensitized and non-desensitized segments according to the marking type of each video segment. After the non-desensitized video segments are determined, they are extracted.
In one embodiment, determining the non-desensitized video segments among the video segments according to the business desensitization rules comprises: determining non-desensitization marking types according to the business desensitization rules; querying the marking type of the cut point identifier corresponding to each video segment; matching the marking type against the non-desensitization marking types; and taking the video segments whose matching result is consistent as the non-desensitized video segments.
In this embodiment, when determining the non-desensitized video segments among the video segments, the non-desensitization marking types are first determined according to the business desensitization rules. Generally, the business desensitization rules specify the desensitization marking types that require desensitization: for the identity-confirmation, review-and-verification, and disclosure segments mentioned above, the cut point identifier of a review-and-verification segment is a desensitization marking type, while identity confirmation and disclosure correspond to non-desensitization marking types. After the non-desensitization marking types are determined, the marking type of the cut point identifier corresponding to each video segment is queried and matched against the non-desensitization marking types; if the match is consistent, the video segment is a non-desensitized segment, otherwise it is a segment to be desensitized. Further, the segments to be desensitized can also be extracted from the video segments and spliced to obtain a separate desensitization video segment.
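As an illustration, a small sketch that screens the video segments by the marking type carried by their cut point identifiers; the three segment types follow the dual-recording example above, and representing the rule as a set of retained types is an assumption.

```python
from typing import Dict, List, Tuple

# Assumed business desensitization rule for the dual-recording example:
# review/verification segments carry trade-secret review scripts and are removed.
NON_DESENSITIZED_TYPES = {"identity_confirmation", "disclosure"}

Segment = Tuple[float, float, str]   # (start_s, end_s, marking_type)

def screen_segments(segments: List[Segment]) -> Dict[str, List[Segment]]:
    """Split segments into retained (non-desensitized) and removed ones."""
    kept = [s for s in segments if s[2] in NON_DESENSITIZED_TYPES]
    dropped = [s for s in segments if s[2] not in NON_DESENSITIZED_TYPES]
    return {"non_desensitized": kept, "desensitized": dropped}

# Example: only the identity-confirmation and disclosure segments survive.
demo = [(0.0, 40.0, "identity_confirmation"),
        (40.0, 95.0, "review_verification"),
        (95.0, 130.0, "disclosure")]
assert [s[2] for s in screen_segments(demo)["non_desensitized"]] == \
       ["identity_confirmation", "disclosure"]
```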
In one embodiment, as shown in Fig. 4, a video desensitization method is provided, comprising:
Step S401: extracting to-be-identified video data from the video stream to be desensitized, and extracting image data and audio data from the to-be-identified video data.
This embodiment is applied to the dual-recording video desensitization of the financial industry, where the video stream to be desensitized is the dual-recording video data recorded by the recording device 102 at a business service window. After the to-be-identified video data of the preset video stream identification length is extracted from the video stream to be desensitized, image data and audio data are further extracted from it and recognized separately at the same time, so as to detect whether a marking behavior appears in the video image or a marking utterance appears in the audio.
Step S402: determining the identity information of the business person associated with the to-be-identified video data;
Step S403: querying the preset marking behavior recognition model and marking speech recognition model corresponding to the identity information;
Step S404: extracting image feature data from the image data, and extracting audio feature data from the audio data;
Step S405: inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
In this embodiment, the marking behavior recognition model and the marking speech recognition model are trained on the historical marking data of each business person in the business system. The marking behavior recognition model and marking speech recognition model corresponding to the business person associated with the to-be-identified video data are queried and used for the marking recognition.
Step S406: cutting the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain the video segments.
In this embodiment, after the marking behavior recognition result and the marking speech recognition result are obtained, a marking recognition result is derived from them in combination with the marking trigger rules required by the actual business, and the video stream to be desensitized is cut accordingly to obtain the video segments. Specifically, this may comprise: querying the preset marking trigger rules, which include a behavior trigger rule and a speech trigger rule; comparing the marking behavior recognition result with the behavior trigger rule to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rule to obtain a speech trigger result; obtaining the marking recognition result according to the behavior trigger result and the speech trigger result; when the marking recognition result is a marking operation, adding a cut point identifier to the to-be-identified video data; and cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
Step S407: querying the preset business desensitization rules;
Step S408: determining the non-desensitization marking types according to the business desensitization rules;
Step S409: querying the marking type of the cut point identifier corresponding to each video segment;
Step S410: matching the marking type against the non-desensitization marking types;
Step S411: taking the video segments whose matching result is consistent as the non-desensitized video segments;
Step S412: extracting the non-desensitized video segments;
Step S413: splicing the non-desensitized video segments to obtain the desensitized video stream.
After the video segments are obtained by cutting the video stream to be desensitized, the segments involving sensitive information need to be removed and the remaining segments retained, so as to desensitize the stream. In this embodiment, the non-desensitized video segments can be extracted from the video segments according to the preset business desensitization rules, and the extracted non-desensitized video segments are then spliced to obtain the desensitized video stream.
It should be understood that although the steps in the flowcharts of Figs. 2-4 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, their execution is not strictly limited to that order and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 5, a video desensitization device is provided, comprising an identification data extraction module 501, a marking recognition processing module 503, a video cutting processing module 505, a video segment screening module 507, and a video segment splicing module 509, wherein:
the identification data extraction module 501 is configured to extract to-be-identified video data from the video stream to be desensitized, and to extract image data and audio data from the to-be-identified video data;
the marking recognition processing module 503 is configured to input the image data into the preset marking behavior recognition model to obtain the marking behavior recognition result, and to input the audio data into the preset marking speech recognition model to obtain the marking speech recognition result;
the video cutting processing module 505 is configured to cut the video stream to be desensitized according to the marking behavior recognition result and the marking speech recognition result to obtain the video segments;
the video segment screening module 507 is configured to extract the non-desensitized video segments from the video segments according to the preset business desensitization rules;
the video segment splicing module 509 is configured to splice the non-desensitized video segments to obtain the desensitized video stream.
In one embodiment, the marking recognition processing module 503 includes an identity determination unit, a marking model query unit, a feature extraction unit, and a marking recognition unit, wherein: the identity determination unit is configured to determine the identity information of the business person associated with the to-be-identified video data; the marking model query unit is configured to query the preset marking behavior recognition model and marking speech recognition model corresponding to the identity information; the feature extraction unit is configured to extract image feature data from the image data and audio feature data from the audio data; and the marking recognition unit is configured to input the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and to input the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
In one embodiment, the device further includes a historical data acquisition module, a historical data classification module, a behavior recognition model module, and a speech recognition model module, wherein: the historical data acquisition module is configured to obtain historical behavior image data and historical marking voice data from the business system; the historical data classification module is configured to classify the historical behavior image data and the historical marking voice data by business person, obtaining the historical behavior image data and the historical marking voice data corresponding to each business person; the behavior recognition model module is configured to train on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model; and the speech recognition model module is configured to train on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
In one embodiment, the video cutting processing module 505 includes a trigger rule query unit, a trigger comparison unit, a recognition result acquisition unit, a cut marking unit, and a cutting processing unit, wherein: the trigger rule query unit is configured to query the preset marking trigger rules, which include a behavior trigger rule and a speech trigger rule; the trigger comparison unit is configured to compare the marking behavior recognition result with the behavior trigger rule to obtain a behavior trigger result, and to compare the marking speech recognition result with the speech trigger rule to obtain a speech trigger result; the recognition result acquisition unit is configured to obtain the marking recognition result according to the behavior trigger result and the speech trigger result; the cut marking unit is configured to add a cut point identifier to the to-be-identified video data when the marking recognition result is a marking operation; and the cutting processing unit is configured to cut the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
In one embodiment, the device further includes a cut instruction receiving module, a cut frame determination module, a cut mark adding module, and a cut jump module, wherein: the cut instruction receiving module is configured to determine, when a marking cut instruction is received, the cut time value of the marking cut instruction; the cut frame determination module is configured to determine the cut video frame corresponding to the cut time value in the to-be-identified video data; the cut mark adding module is configured to add a cut point identifier to the cut video frame; and the cut jump module is configured to return to cutting the video stream to be desensitized according to the cut point identifiers to obtain the video segments.
In one embodiment, the video segment screening module 507 includes a desensitization rule query unit, a video segment screening unit, and a video segment extraction unit, wherein: the desensitization rule query unit is configured to query the preset business desensitization rules; the video segment screening unit is configured to determine the non-desensitized video segments among the video segments according to the business desensitization rules; and the video segment extraction unit is configured to extract the non-desensitized video segments.
In one embodiment, the video segment screening unit includes a marking type determination subunit, a marking type query subunit, a type matching subunit, and a video segment screening subunit, wherein: the marking type determination subunit is configured to determine the non-desensitization marking types according to the business desensitization rules; the marking type query subunit is configured to query the marking type of the cut point identifier corresponding to each video segment; the type matching subunit is configured to match the marking type against the non-desensitization marking types; and the video segment screening subunit is configured to take the video segments whose matching result is consistent as the non-desensitized video segments.
For the specific limitations of the video desensitization device, reference may be made to the limitations of the video desensitization method above, which are not repeated here. Each module of the above video desensitization device may be implemented wholly or partly by software, hardware, or a combination of the two. The above modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a kind of computer equipment is provided, which can be server, internal junction Composition can be as shown in Figure 6.The computer equipment includes processor, memory and the network interface connected by system bus. Wherein, the processor of the computer equipment is for providing calculating and control ability.The memory of the computer equipment includes non-easy The property lost storage medium, built-in storage.The non-volatile memory medium is stored with operating system and computer program.The built-in storage Operation for operating system and computer program in non-volatile memory medium provides environment.The network of the computer equipment connects Mouth with external terminal by network connection for being communicated.To realize that a kind of video is de- when the computer program is executed by processor Quick method.
Those skilled in the art will understand that the structure shown in Fig. 6 is only a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor. The memory stores a computer program, and the processor, when executing the computer program, performs the following steps (a minimal code sketch of this pipeline is given after the list):
extracting to-be-identified video data from a to-be-desensitized video stream, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the to-be-desensitized video stream according to the marking behavior recognition result and the marking speech recognition result to obtain video segment data;
extracting non-desensitized video segments from the video segment data according to preset business desensitization rules; and
splicing the non-desensitized video segments to obtain a desensitized video stream.
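A minimal end-to-end sketch of the five steps above is given below. It assumes a toy in-memory representation in which the to-be-desensitized stream is a list of per-frame records carrying image and audio data, and in which the marking models are plain callables returning a marking type or None; none of these names or structures are prescribed by the embodiment.

def desensitize_video(stream, behavior_model, speech_model, allowed_marking_types):
    # Step 1: extract the image data and audio data from the to-be-identified video data.
    images = [record["image"] for record in stream]
    audio = [record["audio"] for record in stream]

    # Step 2: obtain the marking behavior and marking speech recognition results.
    behavior_marks = [behavior_model(image) for image in images]
    speech_marks = [speech_model(sound) for sound in audio]

    # Step 3: cut the stream into video segment data wherever an operation marking occurs.
    segments, current = [], []
    for record, behavior_mark, speech_mark in zip(stream, behavior_marks, speech_marks):
        marking_type = behavior_mark or speech_mark
        if marking_type and current:          # cut point: close the running segment
            segments.append(current)
            current = []
        current.append(dict(record, marking_type=marking_type))
    if current:
        segments.append(current)

    # Step 4: keep the non-desensitized segments allowed by the business desensitization rules.
    kept = [seg for seg in segments if seg[0]["marking_type"] in allowed_marking_types]

    # Step 5: splice the kept segments into the desensitized video stream.
    return [record for segment in kept for record in segment]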
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining identification information of the business personnel to which the to-be-identified video data belongs; querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identification information; extracting image feature data from the image data and audio feature data from the audio data; and inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
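For illustration, selecting the marking models that were trained for the business person appearing in the video might be implemented with a simple registry keyed by the identification information; the registry layout and field names are assumptions of this sketch.

def query_marking_models(identification_info, model_registry):
    # Look up the preset marking behavior and marking speech recognition models
    # registered for this business person's identification information.
    entry = model_registry.get(identification_info)
    if entry is None:
        raise KeyError(f"no marking models registered for {identification_info}")
    return entry["behavior_model"], entry["speech_model"]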
In one embodiment, the processor, when executing the computer program, further performs the following steps: obtaining historical behavior image data and historical marking voice data from a business system; classifying the historical behavior image data and the historical marking voice data by business personnel to obtain the historical behavior image data and the historical marking voice data corresponding to each business person; training on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model; and training on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
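Grouping the historical training data by business person before training the per-person models could be done as sketched below; the record fields ("personnel_id", "behavior_image", "marking_voice") are assumed for illustration only.

from collections import defaultdict

def group_history_by_person(history_records):
    # Classify the historical behavior image data and historical marking voice data
    # according to business personnel.
    behavior_history = defaultdict(list)
    voice_history = defaultdict(list)
    for record in history_records:
        behavior_history[record["personnel_id"]].append(record["behavior_image"])
        voice_history[record["personnel_id"]].append(record["marking_voice"])
    # One marking behavior model and one marking speech model would then be trained
    # on each person's lists.
    return behavior_history, voice_history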
In one embodiment, the processor, when executing the computer program, further performs the following steps: querying preset marking trigger rules, the marking trigger rules including behavior trigger rules and speech trigger rules; comparing the marking behavior recognition result with the behavior trigger rules to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rules to obtain a speech trigger result; obtaining a marking recognition result according to the behavior trigger result and the speech trigger result; adding a cut-point identifier to the to-be-identified video data when the marking recognition result indicates an operation marking; and cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining, when a marking cut instruction is received, the cutting moment value of the marking cut instruction; determining the cut video frame corresponding to the cutting moment value in the to-be-identified video data; adding a cut-point identifier to the cut video frame; and returning to the step of cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
In one embodiment, the processor, when executing the computer program, further performs the following steps: querying the preset business desensitization rules; determining, according to the business desensitization rules, the non-desensitized video segments in the video segment data; and extracting the non-desensitized video segments.
In one embodiment, the processor, when executing the computer program, further performs the following steps: determining the non-desensitization marking types according to the business desensitization rules; querying the marking type of the cut-point identifier corresponding to each piece of video segment data; matching the marking type against the non-desensitization marking types; and taking the video segment data whose matching result is consistent as the non-desensitized video segments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, performs the following steps:
extracting to-be-identified video data from a to-be-desensitized video stream, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the to-be-desensitized video stream according to the marking behavior recognition result and the marking speech recognition result to obtain video segment data;
extracting non-desensitized video segments from the video segment data according to preset business desensitization rules; and
splicing the non-desensitized video segments to obtain a desensitized video stream.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: determining identification information of the business personnel to which the to-be-identified video data belongs; querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identification information; extracting image feature data from the image data and audio feature data from the audio data; and inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: obtaining historical behavior image data and historical marking voice data from a business system; classifying the historical behavior image data and the historical marking voice data by business personnel to obtain the historical behavior image data and the historical marking voice data corresponding to each business person; training on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model; and training on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: querying preset marking trigger rules, the marking trigger rules including behavior trigger rules and speech trigger rules; comparing the marking behavior recognition result with the behavior trigger rules to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rules to obtain a speech trigger result; obtaining a marking recognition result according to the behavior trigger result and the speech trigger result; adding a cut-point identifier to the to-be-identified video data when the marking recognition result indicates an operation marking; and cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: determining, when a marking cut instruction is received, the cutting moment value of the marking cut instruction; determining the cut video frame corresponding to the cutting moment value in the to-be-identified video data; adding a cut-point identifier to the cut video frame; and returning to the step of cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: querying the preset business desensitization rules; determining, according to the business desensitization rules, the non-desensitized video segments in the video segment data; and extracting the non-desensitized video segments.
In one embodiment, the computer program, when executed by the processor, further performs the following steps: determining the non-desensitization marking types according to the business desensitization rules; querying the marking type of the cut-point identifier corresponding to each piece of video segment data; matching the marking type against the non-desensitization marking types; and taking the video segment data whose matching result is consistent as the non-desensitized video segments.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as such combinations are not contradictory, they should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the scope of protection of the present application patent shall be subject to the appended claims.

Claims (10)

1. A video desensitization method, the method comprising:
extracting to-be-identified video data from a to-be-desensitized video stream, and extracting image data and audio data from the to-be-identified video data;
inputting the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and inputting the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
cutting the to-be-desensitized video stream according to the marking behavior recognition result and the marking speech recognition result to obtain video segment data;
extracting non-desensitized video segments from the video segment data according to preset business desensitization rules; and
splicing the non-desensitized video segments to obtain a desensitized video stream.
2. The method according to claim 1, wherein inputting the image data into the preset marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio data into the preset marking speech recognition model to obtain the marking speech recognition result comprises:
determining identification information of the business personnel to which the to-be-identified video data belongs;
querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identification information;
extracting image feature data from the image data, and extracting audio feature data from the audio data; and
inputting the image feature data into the marking behavior recognition model to obtain the marking behavior recognition result, and inputting the audio feature data into the marking speech recognition model to obtain the marking speech recognition result.
3. The method according to claim 2, wherein, before querying the preset marking behavior recognition model and the preset marking speech recognition model corresponding to the identification information, the method further comprises:
obtaining historical behavior image data and historical marking voice data from a business system;
classifying the historical behavior image data and the historical marking voice data by business personnel to obtain the historical behavior image data and the historical marking voice data corresponding to each business person;
training on the historical behavior image data corresponding to each business person to obtain the marking behavior recognition model; and
training on the historical marking voice data corresponding to each business person to obtain the marking speech recognition model.
4. The method according to claim 1, wherein cutting the to-be-desensitized video stream according to the marking behavior recognition result and the marking speech recognition result to obtain the video segment data comprises:
querying preset marking trigger rules, the marking trigger rules comprising behavior trigger rules and speech trigger rules;
comparing the marking behavior recognition result with the behavior trigger rules to obtain a behavior trigger result, and comparing the marking speech recognition result with the speech trigger rules to obtain a speech trigger result;
obtaining a marking recognition result according to the behavior trigger result and the speech trigger result;
adding a cut-point identifier to the to-be-identified video data when the marking recognition result indicates an operation marking; and
cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
5. The method according to claim 4, further comprising:
determining, when a marking cut instruction is received, the cutting moment value of the marking cut instruction;
determining the cut video frame corresponding to the cutting moment value in the to-be-identified video data;
adding a cut-point identifier to the cut video frame; and
returning to the step of cutting the to-be-desensitized video stream according to the cut-point identifiers to obtain the video segment data.
6. The method according to any one of claims 1 to 5, wherein extracting the non-desensitized video segments from the video segment data according to the preset business desensitization rules comprises:
querying the preset business desensitization rules;
determining, according to the business desensitization rules, the non-desensitized video segments in the video segment data; and
extracting the non-desensitized video segments.
7. The method according to claim 6, wherein determining, according to the business desensitization rules, the non-desensitized video segments in the video segment data comprises:
determining non-desensitization marking types according to the business desensitization rules;
querying the marking type of the cut-point identifier corresponding to each piece of video segment data;
matching the marking type against the non-desensitization marking types; and
taking the video segment data whose matching result is consistent as the non-desensitized video segments.
8. A video desensitization device, wherein the device comprises:
an identification data extraction module, configured to extract to-be-identified video data from a to-be-desensitized video stream and to extract image data and audio data from the to-be-identified video data;
a marking recognition processing module, configured to input the image data into a preset marking behavior recognition model to obtain a marking behavior recognition result, and to input the audio data into a preset marking speech recognition model to obtain a marking speech recognition result;
a video cutting processing module, configured to cut the to-be-desensitized video stream according to the marking behavior recognition result and the marking speech recognition result to obtain video segment data;
a video segment screening module, configured to extract non-desensitized video segments from the video segment data according to preset business desensitization rules; and
a video segment splicing module, configured to splice the non-desensitized video segments to obtain a desensitized video stream.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN201811535162.XA 2018-12-14 2018-12-14 Video desensitization method, device, computer equipment and storage medium Active CN109831677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811535162.XA CN109831677B (en) 2018-12-14 2018-12-14 Video desensitization method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811535162.XA CN109831677B (en) 2018-12-14 2018-12-14 Video desensitization method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109831677A true CN109831677A (en) 2019-05-31
CN109831677B CN109831677B (en) 2022-04-01

Family

ID=66858826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811535162.XA Active CN109831677B (en) 2018-12-14 2018-12-14 Video desensitization method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109831677B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781519A (en) * 2019-10-31 2020-02-11 东华大学 Safety desensitization method for voice data release
WO2020119508A1 (en) * 2018-12-14 2020-06-18 深圳壹账通智能科技有限公司 Video cutting method and apparatus, computer device and storage medium
CN112231748A (en) * 2020-10-13 2021-01-15 上海明略人工智能(集团)有限公司 Desensitization processing method and apparatus, storage medium, and electronic apparatus
CN113840109A (en) * 2021-09-23 2021-12-24 杭州海宴科技有限公司 Classroom audio and video intelligent note taking method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780388A (en) * 2015-03-31 2015-07-15 北京奇艺世纪科技有限公司 Video data partitioning method and device
CN105472470A (en) * 2015-12-29 2016-04-06 重庆安碧捷科技股份有限公司 Medical treatment live broadcast system based on patient privacy information filtering
CN108053838A (en) * 2017-12-01 2018-05-18 上海壹账通金融科技有限公司 With reference to audio analysis and fraud recognition methods, device and the storage medium of video analysis
US20180205926A1 (en) * 2017-01-17 2018-07-19 Seiko Epson Corporation Cleaning of Depth Data by Elimination of Artifacts Caused by Shadows and Parallax
CN108806668A (en) * 2018-06-08 2018-11-13 国家计算机网络与信息安全管理中心 A kind of audio and video various dimensions mark and model optimization method


Also Published As

Publication number Publication date
CN109831677B (en) 2022-04-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant