CN110458008A - Video processing method, apparatus, computer device and storage medium - Google Patents
Video processing method, apparatus, computer device and storage medium Download PDF Info
- Publication number
- CN110458008A CN110458008A CN201910599356.4A CN201910599356A CN110458008A CN 110458008 A CN110458008 A CN 110458008A CN 201910599356 A CN201910599356 A CN 201910599356A CN 110458008 A CN110458008 A CN 110458008A
- Authority
- CN
- China
- Prior art keywords
- image
- video
- service
- micro-expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/535—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/55—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The present application relates to the field of image processing, and in particular to a video processing method, apparatus, computer device and storage medium. The method includes: obtaining a surveillance video and intercepting, from the surveillance video, a service sub-video of a target monitored object; extracting, from the service sub-video, a first video image set containing the target monitored object, and extracting target object images from the first video image set; extracting, from the service sub-video, second video images containing a service object, and obtaining a service image set from the target object images and the second video images; performing micro-expression analysis on each set image in the service image set to obtain the preset micro-expression matching each set image; and generating a service information archive of the target monitored object according to the service image set and the matched preset micro-expressions. The method improves the efficiency of obtaining effective information from video.
Description
Technical field
The present application relates to the field of computer technology, and in particular to a video processing method, apparatus, computer device and storage medium.
Background technique
At present, society pays increasing attention to the attitude of the service industry: consumers expect to receive high-quality service everywhere. It is therefore necessary to monitor the service quality of service personnel, so that problems with service quality can be flagged and corrected in time.
However, the amount of information contained in a surveillance video is often very large. If quality-monitoring staff want to evaluate the service quality of service personnel from surveillance video, they must review the whole recording, which contains an enormous number of image frames and a great deal of redundant information. Retrieving from it the effective information that can be used to evaluate service quality therefore takes a long time, and the efficiency of obtaining effective information from video is very low.
Summary of the invention
In view of the above technical problems, it is necessary to provide a video processing method, apparatus, computer device and storage medium that can improve the efficiency of obtaining effective information from video.
A video processing method, the method comprising:
obtaining a surveillance video, and intercepting a service sub-video of a target monitored object from the surveillance video;
extracting, from the service sub-video, a first video image set containing the target monitored object, and extracting target object images from the first video image set;
extracting, from the service sub-video, second video images containing a service object, and obtaining a service image set from the target object images and the second video images;
performing micro-expression analysis on each set image in the service image set, to obtain the preset micro-expression matching each set image;
generating a service information archive of the target monitored object according to the service image set and the matched preset micro-expressions.
In one embodiment, intercepting the service sub-video of the target monitored object from the surveillance video comprises:
obtaining a service identifier of the target monitored object, and looking up the service time and target face image corresponding to the service identifier;
extracting, from the surveillance video, surveillance video segments whose shooting time matches the service time;
performing face detection on the surveillance video segments according to the target face image, and extracting from the surveillance video segments the video sub-segments in which no face matching the target face image is detected;
obtaining the segment duration of each video sub-segment, and comparing the segment duration with a preset absence threshold;
deleting from the video segments the first video sub-segments whose segment duration exceeds the preset absence threshold, to obtain the service sub-video.
In one embodiment, extracting target object images from the first video image set comprises:
extracting first image frames from the first video image set, and detecting the number of persons in each first image frame;
extracting, from the first image frames, multi-person image frames in which the number of persons is greater than 1;
performing face detection on the multi-person image frames according to the service face images prestored in a service face database, and detecting whether the multi-person image frames contain a face image matching none of the service face images;
if a face image matching none of the service face images is detected, extracting the face images that match the target face image in the corresponding multi-person image frames as the target object images.
In one embodiment, extracting the second video images containing the service object from the service sub-video comprises:
obtaining second video sub-segments whose segment duration is less than the preset absence threshold, and extracting from the second video sub-segments the first face images that match none of the service face images;
extracting from the multi-person image frames the second face images that match none of the service face images;
obtaining the second video images from the first face images and the second face images.
In one embodiment, performing micro-expression analysis on each set image in the service image set to obtain the preset micro-expression matching each set image comprises:
extracting facial feature points from each set image, and computing facial action features from the facial feature points;
inputting the facial action features into a micro-expression analysis model to obtain a matching probability value for each preset micro-expression;
selecting the preset micro-expression matching the set image according to the matching probability values.
In one embodiment, generating the service information archive of the target monitored object according to the service image set and the matched preset micro-expressions comprises:
associating each preset micro-expression with the corresponding set image in the service image set;
obtaining the object category corresponding to each set image;
looking up the expression label corresponding to each preset micro-expression, and determining from the label the emotion category corresponding to the preset micro-expression;
dividing the set images in the service image set into a plurality of image subsets according to the object category and the emotion category, and generating the service information archive from the image subsets.
In one embodiment, the method further comprises:
associating, within the service image set, first set images of the target-object category with second set images of the service-object category whose shooting times match;
judging whether the preset micro-expression associated with a first set image and the preset micro-expression associated with the matching second set image correspond to the same emotion category;
when they correspond to different emotion categories, splicing the associated first set image and second set image to obtain an expression comparison chart.
A video processing apparatus, the apparatus comprising:
a video interception module, configured to obtain a surveillance video and intercept a service sub-video of a target monitored object from the surveillance video;
a target image extraction module, configured to extract from the service sub-video a first video image set containing the target monitored object, and extract target object images from the first video image set;
an image set generation module, configured to extract from the service sub-video second video images containing a service object, and obtain a service image set from the target object images and the second video images;
an expression analysis module, configured to perform micro-expression analysis on each set image in the service image set, to obtain the preset micro-expression matching each set image;
an archive generation module, configured to generate a service information archive of the target monitored object according to the service image set and the matched preset micro-expressions.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method when executing the computer program.
A computer-readable storage medium on which a computer program is stored, wherein the computer program implements the steps of the above method when executed by a processor.
With the above video processing method, apparatus, computer device and storage medium, the service sub-video of the target monitored object at the service position can be intercepted from the surveillance video, and the images containing the service object and the images containing the target object can be detected in the service sub-video, so that useless redundant image information is filtered out automatically. Micro-expression analysis can further be performed on the filtered images to analyze the micro-expressions of the target object and the service object, so that the image information is further processed into effective information that helps evaluate the target monitored object, greatly improving the efficiency of obtaining effective information from video.
Detailed description of the invention
Fig. 1 is an application scenario diagram of the video processing method in one embodiment;
Fig. 2 is a flow diagram of the video processing method in one embodiment;
Fig. 3 is a flow diagram of a method for generating an expression comparison chart in one embodiment;
Fig. 4 is a structural block diagram of the video processing apparatus in one embodiment;
Fig. 5 is an internal structure diagram of the computer device in one embodiment.
Specific embodiment
In order to make the objects, technical solutions and advantages of the present application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the application, not to limit it.
The video processing method provided by the present application can be applied in the application environment shown in Fig. 1, in which a terminal 102 communicates with a server 104 over a network. The terminal 102 sends a surveillance video to the server 104. After receiving the surveillance video, the server 104 intercepts the service sub-video of the target monitored object from it; extracts from the service sub-video a first video image set containing the target monitored object, and extracts target object images from the first video image set; extracts from the service sub-video second video images containing a service object, and obtains a service image set from the target object images and the second video images; performs micro-expression analysis on each set image in the service image set to obtain the preset micro-expression matching each set image; and generates a service information archive of the target monitored object according to the service image set and the preset micro-expressions. The server 104 then returns the generated service information archive to the terminal 102.
The terminal 102 may be, but is not limited to, a personal computer, laptop, smartphone, tablet computer or portable wearable device; the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a video processing method is provided. Taking its application to the server 104 in Fig. 1 as an example, the method comprises the following steps:
Step 210: obtain a surveillance video, and intercept the service sub-video of the target monitored object from the surveillance video.
The target monitored object is the object whose service quality needs to be monitored and evaluated, such as a customer-service agent. The surveillance video is a recording of the service position where the target monitored object works; besides the target monitored object, the persons captured in it may include service objects such as customers, or other service personnel. The shooting duration of the surveillance video is generally a fixed period, such as 1 day or 1 week. The monitoring terminal can periodically send the recorded surveillance video to the server, and the server can process it immediately or periodically after receiving it. The server stores in advance the information of each target monitored object to be monitored, such as its face image and service time information. According to this information, the server intercepts from the obtained surveillance video the service sub-video corresponding to the target monitored object, i.e. the video in which the target monitored object is active at its service position.
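The time matching in step 210 amounts to intersecting the camera's recording window with the target's scheduled service times. The following sketch illustrates that intersection; the shift times and the two-window schedule are invented for illustration and are not values given by the patent:

```python
from datetime import datetime, timedelta

def overlap(a_start, a_end, b_start, b_end):
    """Return the overlapping interval of [a_start, a_end] and [b_start, b_end], or None."""
    start, end = max(a_start, b_start), min(a_end, b_end)
    return (start, end) if start < end else None

def match_segments(video_start, video_end, service_windows):
    """Keep only the parts of the recording that fall inside a service window."""
    segments = []
    for w_start, w_end in service_windows:
        seg = overlap(video_start, video_end, w_start, w_end)
        if seg:
            segments.append(seg)
    return segments

day = datetime(2019, 7, 1)
video = (day + timedelta(hours=8), day + timedelta(hours=20))    # camera ran 08:00-20:00
shift = [(day + timedelta(hours=9), day + timedelta(hours=12)),  # morning shift
         (day + timedelta(hours=13), day + timedelta(hours=18))] # afternoon shift (lunch excluded)
print(match_segments(*video, shift))
```

Splitting the schedule into two windows is one way to realize the removal of fixed rest times (such as meal times) that the embodiment below describes.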
Step 220: extract from the service sub-video a first video image set containing the target monitored object, and extract target object images from the first video image set.
The service sub-video is only a preliminary, coarse screening of the target monitored object at the service position, and may still contain information about other persons or redundant information. Besides images containing the target monitored object, the service sub-video may, owing to rotations and changes of the shooting angle, also contain images of the service object, images of other service personnel, or images in which several persons appear together. The server performs face detection on the service sub-video according to the face information of the target monitored object, and identifies from it the first video image set containing the target monitored object, i.e. the set of video images in which the face of the target monitored object is detected.
Further, since the service sub-video contains a very large number of video image frames, the server may first extract video image frames from the service sub-video at fixed time intervals before processing, so as to reduce the amount of image data to handle. The sampling interval, however, must be chosen with the amount of video information in mind: it cannot be set so large that too much effective information is lost.
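This fixed-interval sampling simply keeps one frame every few seconds. A minimal sketch, assuming a known frame rate (both the rate and the interval here are illustrative, not values from the patent):

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Indices of the frames kept when sampling one frame every `interval_s` seconds."""
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 25 fps, sampled every 2 seconds -> frames 0, 50, 100, 150, 200
print(sample_frame_indices(250, 25, 2.0))
```

The trade-off the text describes is visible in the parameters: a larger `interval_s` shrinks the processing load but discards more frames, and with them potentially effective expression information.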
After extracting the video image set containing the target monitored object, the server further screens the images in the set, judging whether the target monitored object in each image is in a service-behavior state, and extracts from the first video image set the target object images in which the target monitored object is in the service-behavior state. For example, the service-behavior state may be the state of a customer-service agent serving a customer. The target monitored object may also be in other states, such as talking with other service personnel, or idle without serving anyone; the server can judge whether the target monitored object is in the service-behavior state from information such as the number of persons in the image and the situation of the persons other than the target monitored object.
Step 230: extract from the service sub-video second video images containing the service object, and obtain a service image set from the target object images and the second video images.
The service object is the object served by the target monitored object, such as a customer. The server extracts from the service sub-video the second video images containing the service object. The server can store in advance the face information of all service personnel; by performing face detection on each image, it can determine that a service object is present in an image if a face matching none of the service personnel is detected. Further, whether the service object is in a served state can be judged from the number of persons in the image and similar cues, and the images that contain a service object in the served state are extracted as the second video images.
The server obtains the service image set from the extracted target object images and second video images. The server can label each image in the service image set with its object category, i.e. service object or target monitored object, and can sort and arrange the images according to the video time corresponding to each image.
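The assembly of the service image set described in step 230 could look like the following sketch. The record fields (`timestamp`, `category`, `frame_id`) are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class SetImage:
    timestamp: float   # video time in seconds
    category: str      # "target" or "service_object"
    frame_id: int

def build_service_image_set(target_images, second_images):
    """Merge both groups, tag each with its object category, and sort by video time."""
    tagged = ([SetImage(t, "target", f) for t, f in target_images] +
              [SetImage(t, "service_object", f) for t, f in second_images])
    return sorted(tagged, key=lambda img: img.timestamp)

imgs = build_service_image_set([(12.0, 3), (4.0, 1)], [(8.0, 2)])
print([(i.timestamp, i.category) for i in imgs])
```

Sorting by video time is what later allows the archive step to pair target and service-object images from the same period.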
Step 240: perform micro-expression analysis on each set image in the service image set, to obtain the preset micro-expression matching each set image.
After obtaining the service image set, the server performs micro-expression analysis on each set image in it. Specifically, the server can extract the face image of each object from the set image, extract facial features from the face image, and then look up the preset micro-expressions matching the facial features. Multiple preset micro-expressions are stored in advance in the server's database, and they can be configured per facial region; for example, the preset micro-expressions of the eye region may include squinting, staring, and so on. The number of preset micro-expressions matching one set image may therefore be more than one, e.g. the matched preset micro-expression of each of several facial regions. A preset micro-expression may also be a composite micro-expression obtained from the features of several regions: from the features of multiple facial regions the server matches one overall facial micro-expression, such as smiling or laughing.
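One plausible shape for this matching step is sketched below. The trained micro-expression analysis model of the patent is replaced by a stand-in scoring function, and the expression names and threshold are illustrative assumptions:

```python
def match_preset_expressions(action_features, model_scores, threshold=0.5):
    """
    model_scores: callable mapping facial action features to {expression: probability},
    standing in for the trained micro-expression analysis model.
    Returns every preset micro-expression whose probability clears the threshold,
    most probable first; more than one may match, as the description notes.
    """
    probs = model_scores(action_features)
    return sorted((name for name, p in probs.items() if p >= threshold),
                  key=lambda n: -probs[n])

# Stand-in "model": fixed probabilities for a single example frame.
fake_model = lambda feats: {"smile": 0.82, "squint": 0.61, "frown": 0.07}
print(match_preset_expressions([0.3, 0.9], fake_model))
```

Returning all above-threshold expressions, rather than a single argmax, reflects the possibility described above that several facial regions each contribute a matched preset micro-expression.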
Step 250: generate the service information archive of the target monitored object according to the service image set and the preset micro-expressions.
The server can classify and organize the set images according to their object category, shooting time and matched preset micro-expressions, and can comparatively analyze the preset micro-expressions matched for the target monitored object and for the service object within the same time period. From the preset micro-expressions of the two parties it obtains a service score for the target monitored object for that period, and it can comprehensively combine the service scores of all periods into an overall service evaluation score for the target monitored object over the time cycle. The server can also compare the service score of each period with a service warning threshold, to judge whether the score for that period is acceptable and whether service-quality warning information should be issued. The server can generate the service information archive from one or more of the service image set, the preset micro-expressions of the set images, the service scores of the periods, the warning information, and so on; the server can also process the set images and preset micro-expressions in other ways to obtain other analysis information for the archive.
Specifically, when computing the service score of each period, the server can assign a preset score value to each preset micro-expression; the preset micro-expressions matched for the service object and for the target monitored object can be given different score values, and different weights can be assigned to the two parties' preset micro-expressions. The service score of each period is then computed from both parties' score values and weights. In other embodiments, the server may compute the service score using other methods.
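The weighted scoring just described could be sketched as follows. The score table, the two weights and the expression names are invented for illustration; the patent does not fix any of these values:

```python
def period_service_score(target_exprs, service_exprs, score_table,
                         target_weight=0.4, service_weight=0.6):
    """Weighted combination of per-expression score values for the two parties in one period."""
    def avg(exprs):
        vals = [score_table.get(e, 0.0) for e in exprs]
        return sum(vals) / len(vals) if vals else 0.0
    return target_weight * avg(target_exprs) + service_weight * avg(service_exprs)

# Illustrative score table: positive expressions score high, negative ones low.
SCORES = {"smile": 1.0, "laugh": 1.0, "neutral": 0.5, "frown": 0.0}
score = period_service_score(["smile", "neutral"], ["smile"], SCORES)
print(round(score, 2))   # 0.4 * 0.75 + 0.6 * 1.0 = 0.9
```

Weighting the service object's expressions more heavily, as in this sketch, is one possible design choice: the customer's reaction may be a more direct signal of service quality than the agent's own expression.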
In this embodiment, the server can intercept from the surveillance video the service sub-video of the target monitored object at the service position, and detect in the service sub-video the images containing the service object and the images containing the target object, so that useless redundant image information is filtered out automatically. Micro-expression analysis can further be performed on the filtered images to analyze the micro-expressions of the target object and the service object in the images, so that the image information is further processed into effective information that helps evaluate the target monitored object, greatly improving the efficiency of obtaining effective information from video.
In one embodiment, the step of intercepting the service sub-video of the target monitored object from the surveillance video may include: obtaining the service identifier of the target monitored object, and looking up the service time and target face image corresponding to the service identifier; extracting from the surveillance video the surveillance video segments whose shooting time matches the service time; performing face detection on the surveillance video segments according to the target face image, and extracting from the surveillance video segments the video sub-segments in which no face matching the target face image is detected; obtaining the segment duration of each video sub-segment and comparing it with the preset absence threshold; and deleting from the video segments the first video sub-segments whose segment duration exceeds the preset absence threshold, to obtain the service sub-video.
The service identifier uniquely identifies a member of the service personnel and may be an employee code, name, job number, and so on. The server stores in advance the mapping between each service identifier and the basic information of the corresponding person, which may include the service time (e.g. on-duty customer-service hours), personal information such as gender and age, and face image information. The server obtains the service identifier of the target monitored object and looks up the corresponding service time and target face image. The server compares the service time with the shooting time of the surveillance video and extracts the surveillance video segments whose shooting time matches the service time; for example, it can intercept the corresponding segment from the surveillance video according to the start time of the service time, obtain the fixed rest times such as meal times, and remove the video clips corresponding to those fixed rest times from the surveillance video segments.
The server performs face detection on the surveillance video segments according to the found target face image. It can extract image frames from the surveillance video segments at fixed time intervals, detect whether each frame contains a face image matching the target face image, and extract the frames in which no matching face image is detected. Consecutive extracted frames without a matching face form a video sub-segment, and there may be multiple such sub-segments. The server obtains the start time and end time of each video sub-segment and computes its segment duration from them. The server then obtains the preset absence threshold, which is the time threshold for judging whether the service person has left the post: if the absence time of the service person in the surveillance video exceeds the preset absence threshold, the person is determined to have left the post. The server compares the segment duration of each video sub-segment with the preset absence threshold, and deletes from the video segments the first video sub-segments whose duration exceeds the threshold, obtaining the service sub-video. The face recognition and detection algorithm may be a recognition method based on template matching, principal component analysis, a method based on singular value features, subspace analysis, locality preserving projections, or a similar algorithm.
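The absence-threshold filtering above can be sketched compactly given per-frame match flags and a known frame rate. The 1 fps rate and 3-second threshold below are chosen only to keep the example readable:

```python
from itertools import groupby

def keep_short_absences(match_flags, fps, absence_threshold_s):
    """
    match_flags[i] is True if the target face was detected in frame i.
    Drop runs of unmatched frames longer than the threshold (long absences);
    keep matched frames and short absences. Returns the surviving frame indices.
    """
    kept, frame = [], 0
    for matched, run in groupby(match_flags):
        run = list(run)
        duration = len(run) / fps
        if matched or duration <= absence_threshold_s:
            kept.extend(range(frame, frame + len(run)))
        frame += len(run)
    return kept

flags = [True] * 3 + [False] * 2 + [True] + [False] * 5
print(keep_short_absences(flags, fps=1, absence_threshold_s=3))  # frames 0-5 survive
```

Note that short unmatched runs are kept rather than discarded, matching the embodiment's point that brief absences may still contain the service object's expression information.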
In this embodiment, the video segments containing the target monitored object can be preliminarily screened out by video matching and face detection, effectively reducing the redundant video segments unrelated to the target monitored object. By setting the preset absence threshold, it can be preliminarily judged whether the target monitored object has left the post for a long time (e.g. gone to the toilet, or gone out) or only briefly (e.g. to consult other service personnel or fetch data), and the video information of short absences, which may contain the expression information of the service object, is retained, reducing the loss of effective information.
In one embodiment, the step of extracting target object images from the first video image set may include: extracting first image frames from the first video image set, and detecting the number of persons in each first image frame; extracting from the first image frames the multi-person image frames in which the number of persons is greater than 1; performing face detection on the multi-person image frames according to the service face images prestored in the service face database, and detecting whether the multi-person image frames contain a face image matching none of the service face images; and, if such a face image is detected, extracting the face images matching the target face image in the corresponding multi-person image frames as the target object images.
The first video image set is the image set containing the face image of the target monitored object. The server can extract first image frames from the first video image set at fixed time intervals to reduce the amount of image processing, and then detects the number of persons in each extracted first image frame. Person detection differs from face detection: it only needs to count the persons present in each first image frame, not identify faces accurately; for example, the number of persons can be detected from body contours. The server extracts from the first image frames the multi-person frames in which the number of persons is greater than 1, thereby excluding the frames in which the target monitored object appears alone in the video, in a non-service state.
The service face database is a face information base in the server: the face images of all service personnel, including the target monitored object, are stored in it. The server performs face detection on each multi-person image frame, compares and matches the detected facial features against the prestored service face images of all service personnel in the service face database, and judges whether each detected face matches some service face image in the database. When it detects that a multi-person image frame contains a face image matching none of the service face images, it determines that a service object is present in that frame, and extracts the face images in the multi-person frame that match the target face image as the target object images.
In this embodiment, the multi-person image detection excludes the frames in which the target monitored object appears alone, and the detection and matching against the service face images of the service personnel excludes the frames containing only the target monitored object and other service personnel, which are not in a service state. This further narrows the range of video images and effectively reduces redundant image information.
In one embodiment, the step of extracting the second video images containing the service object from the service sub-video may include: obtaining the second video sub-segments whose segment duration is less than the preset missing threshold, and extracting from the second video sub-segments the first facial images that do not match the service facial images; extracting from the multi-person image frames the second facial images that do not match the service facial images; and obtaining the second video images from the first facial images and the second facial images.
The server obtains from the service sub-video the second video sub-segments whose segment duration is less than the preset missing threshold. A second video sub-segment is a video segment in which the target monitoring object does not appear and whose missing duration is less than the preset missing threshold. The server first identifies, in the second video sub-segments, the facial images that match the service facial images in the service face database, and then extracts the remaining facial images as the first facial images of the service object.
A multi-person image frame is a multi-person image containing the target monitoring object. Similarly, the server first identifies in the multi-person image frames the facial images that match the service facial images in the service face database, and then extracts the remaining facial images as the second facial images of the service object. The server generates the second video images from the first facial images and the second facial images together.
Further, the server may label each second video image with the image category of the service object classification, and may also mark the shooting time of each second video image.
In this embodiment, the server extracts the images of the service object whether or not they also contain the target monitoring object, which avoids losing the facial expression information of the service object.
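The partition of a multi-person frame into attendant faces and service-object faces can be sketched as follows. The vectors are toy data; this is an assumed illustration of the "match the database, keep the rest" logic, not the patent's implementation.

```python
import numpy as np

# Illustrative sketch: faces matching the service face database are
# attendants; the remaining faces are extracted as the service object's
# (customer's) facial images.

def partition_faces(faces, db_vecs, threshold=0.5):
    staff, customers = [], []
    for face in faces:
        if min(np.linalg.norm(face - v) for v in db_vecs) < threshold:
            staff.append(face)       # matches a prestored service face
        else:
            customers.append(face)   # candidate service-object face
    return staff, customers

db = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
frame_faces = [np.array([0.95, 0.05]), np.array([0.5, 0.5])]
staff, customers = partition_faces(frame_faces, db)
```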
In one embodiment, the step of performing micro-expression analysis on each set image in the service image set and obtaining the preset micro-expression matching each set image may include: extracting facial feature points from each set image and calculating facial action features from the facial feature points; inputting the facial action features into a micro-expression analysis model to obtain the matching probability value of each preset micro-expression; and selecting the preset micro-expression matching the set image according to the matching probability values.
Each set image is a facial image of the target monitoring object or of the service object. The server extracts facial feature points from the set image; the facial feature points are the feature points of the facial features and the face contour, such as the feature coordinates of the eyes, mouth, nose and eyebrows. Specifically, the server may extract the facial feature points from the current facial image through a pre-trained 3D face model or a deep learning neural network.
Based on the extracted facial feature points, the server may again extract facial action features from the set image through a pre-trained 3D face model or deep learning neural network model. Alternatively, the extracted facial feature points may be classified and then input into the corresponding facial action feature calculation models to obtain the corresponding facial action features. For example, inputting the facial feature points belonging to the eyes into an eye movement model yields the facial action features of the eyes, such as a blink feature, a squint feature, a stare feature and so on. The 3D face model, the deep learning neural network model and the facial action feature calculation models are all obtained in advance by deep learning training on multiple facial images.
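One concrete example of an eye action feature computed from landmark points is the eye aspect ratio (EAR), a common blink/squint proxy; the patent does not prescribe this formula, so it is offered only as an illustration, and the landmark coordinates below are made up.

```python
import math

# Illustrative sketch: eye aspect ratio from six eye landmarks p1..p6.
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values indicate a
# closed or narrowed eye (blink/squint features).

def eye_aspect_ratio(p):
    d = math.dist
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

ear_open = eye_aspect_ratio(open_eye)
ear_closed = eye_aspect_ratio(closed_eye)   # much lower ratio
```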
The server may calculate the value of each facial action feature according to the 3D face model, the deep learning neural network model or the facial action feature calculation models, and input the facial action features and their values into a pre-trained micro-expression classification model to obtain the probability values of the various preset micro-expressions. The micro-expression classification model may be any of a variety of classification models, such as an SVM classifier, a deep neural network learning model or a decision tree classification model, and is obtained in advance by training on the facial action features of multiple facial images. The server may select the preset micro-expression with the largest probability value according to the model output.
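The select-the-largest-probability step can be sketched independently of the chosen classifier. The expression names and scores below are assumptions for illustration; the patent leaves the model open (SVM, deep network, decision tree).

```python
import math

# Illustrative sketch: convert raw classifier scores for the preset
# micro-expressions into probabilities and pick the best match.

def softmax(scores):
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

scores = {"squint": 1.2, "mouth-corner raise": 2.5, "frown": -0.3}
probs = softmax(scores)
best = max(probs, key=probs.get)   # preset micro-expression with largest probability
```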
In this embodiment, extracting the facial features and training the feature classification yields a more accurate preset micro-expression, and the obtained preset micro-expression provides an important data reference for evaluating the attitude and quality of the service personnel and the satisfaction of the service object.
In one embodiment, the step of generating the service information archive of the target monitoring object from the service image set and the preset micro-expressions may include: associating each preset micro-expression with the corresponding set image in the service image set; obtaining the object type corresponding to each set image; looking up the expression label corresponding to each preset micro-expression, and determining the emotional category corresponding to the preset micro-expression according to the label; and dividing the set images in the service image set into multiple image subsets according to the object types and the emotional categories, and generating the service information archive from the image subsets.
The server associates each set image with its corresponding preset micro-expression, for example by marking the preset micro-expression on each set image or by recording the mapping relationship between the set image and the corresponding preset micro-expression. The server obtains the object type corresponding to each set image. In this embodiment, the object types are divided according to the facial object in the image and may include two classes, namely the target monitoring object type and the service object type; when face detection and matching are performed on the set images of the various object types, each set image is class-labeled according to the object of the detected facial image.
An expression label is the mood label corresponding to a preset micro-expression; expression labels may be happy, excited, contemptuous, angry, calm and so on. One expression label may correspond to multiple preset micro-expressions; for example, the preset micro-expressions corresponding to the happy expression label may include squinting, raised mouth corners and the like. The mapping relationships between the expression labels and the preset micro-expressions are stored in the server in advance, and the server looks up the expression label corresponding to each preset micro-expression.
The expression labels may be divided into multiple emotional categories, and one emotional category may correspond to multiple expression labels. For example, in one embodiment, the emotional categories of the expression labels may be divided into three kinds, namely a positive emotion category, a neutral emotion category and a negative emotion category: expression labels such as happy and excited belong to the positive emotion category, expression labels such as contemptuous and angry belong to the negative emotion category, and the calm expression label belongs to the neutral emotion category. In other embodiments, the emotional categories may be divided in other ways. The mapping relationships between the emotional categories and the expression labels may be stored in the server in advance; the server obtains the emotional category corresponding to the preset micro-expression of each set image and associates the found emotional category with the corresponding set image.
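The two prestored lookup tables just described can be sketched directly. The happy/contemptuous/angry/calm entries follow the examples in the text; the concrete micro-expression names ("sneer", "neutral gaze") are assumptions added for illustration.

```python
# Illustrative sketch of the prestored mappings: preset micro-expression
# -> expression label, and expression label -> emotional category.

LABEL_OF_EXPRESSION = {
    "squint": "happy",
    "mouth-corner raise": "happy",
    "sneer": "contemptuous",        # assumed entry
    "frown": "angry",               # assumed entry
    "neutral gaze": "calm",         # assumed entry
}

CATEGORY_OF_LABEL = {
    "happy": "positive", "excited": "positive",
    "contemptuous": "negative", "angry": "negative",
    "calm": "neutral",
}

def emotional_category(preset_expression):
    """Chain the two lookups to get the emotional category."""
    return CATEGORY_OF_LABEL[LABEL_OF_EXPRESSION[preset_expression]]
```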
The server may classify the set images according to the object type and the emotional category corresponding to each set image. For example, the set images may first be divided by object type into the service image subsets of the several object types, and the service image subset of each object type may then be further divided into multiple smaller image subsets according to the emotional category to which each set image belongs. At the same time, the mapping relationships among the shooting time, the preset micro-expression, the object type, the emotional category and other information corresponding to each set image are compiled into an image information table for each service image subset, and the service information archive is generated from the divided image subsets and the corresponding image information tables. The server may push the service information archive to the terminal, so that the terminal can reasonably evaluate the service quality of the target monitoring object according to the service information archive.
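The grouping into subsets keyed by object type and emotional category can be sketched with a dictionary. The records are toy data; real entries would also carry the preset micro-expression and any other archive fields.

```python
from collections import defaultdict

# Illustrative sketch: divide the set images into image subsets keyed by
# (object type, emotional category); each subset's entries serve as its
# image information table.

images = [
    {"id": 1, "object": "target", "category": "positive", "time": "10:00:05"},
    {"id": 2, "object": "service", "category": "positive", "time": "10:00:07"},
    {"id": 3, "object": "target", "category": "negative", "time": "10:02:11"},
]

archive = defaultdict(list)
for img in images:
    archive[(img["object"], img["category"])].append(
        {"id": img["id"], "time": img["time"]})
```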
In this embodiment, determining the emotional category of each preset micro-expression and sorting the set images by object type and emotional category makes it convenient to search the set images and to obtain the expression information of the objects in the set images.
In one embodiment, as shown in Fig. 3, which is a flow chart of the expression comparison chart generating method, the method may specifically include the following steps:
Step 310: in the service image set, associate the first set images of the target object type and the second set images of the service object type whose shooting times match.
In the above embodiment, the set images in the service image set on the server are divided into different image subsets according to object type, and each image subset has a corresponding image information table. The server obtains the shooting time of each set image from the image information tables of the image subsets of the target object type (i.e., the target monitoring object type) and of the service object type, finds the first set images of the target object type and the second set images of the service object type whose shooting times match, and associates the matched pairs of images. The matching shooting times need not be exactly identical; two shooting times may also be determined to match when they fall within the same time range, where the length of the time range may be set to, for example, 10 seconds, 20 seconds or 30 seconds.
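The within-a-time-window matching can be sketched as a simple pairing function. The 10-second window follows the example in the text; the image records are toy data.

```python
# Illustrative sketch: pair a target-type image with a service-type image
# when their shooting times fall within the same window (here 10 seconds),
# since matching shooting times need not be exactly identical.

def match_by_time(target_images, service_images, window=10):
    pairs = []
    for t_img in target_images:
        for s_img in service_images:
            if abs(t_img["time"] - s_img["time"]) <= window:
                pairs.append((t_img["id"], s_img["id"]))
    return pairs

targets = [{"id": "T1", "time": 100}, {"id": "T2", "time": 200}]
services = [{"id": "S1", "time": 104}, {"id": "S2", "time": 260}]
pairs = match_by_time(targets, services)   # only T1/S1 fall within 10 s
```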
Step 320: judge whether the preset micro-expression associated with the first set image and the preset micro-expression associated with the second set image correspond to the same emotional category.
The server obtains the preset micro-expressions of the matched first set image and second set image respectively, looks up the emotional category corresponding to each preset micro-expression, and determines whether the emotional categories corresponding to the two images are consistent, i.e., whether the expressed emotional states of the target monitoring object and the service object at that moment are consistent. During communication, the emotional changes of the target monitoring object and the service object are generally relatively synchronized, in which case the attitude of the target monitoring object is easy to evaluate; but when the two emotional states are inconsistent and conflict, the service state of the target monitoring object cannot be evaluated through customer analysis, and the actual service situation at that time generally needs to be determined manually.
Step 330: when the emotional categories are different, splice the associated first set image and second set image to obtain an expression comparison chart.
When the server determines that the two images correspond to different emotional categories, the server splices the associated first set image and second set image to obtain the expression comparison chart; the positions and form of the splice may be configured according to the needs of the monitoring personnel. Further, the server may record and mark the shooting time corresponding to the expression comparison chart. The server may also sort the expression comparison charts of multiple shooting times by shooting time to generate an expression comparison animation. The server may send the generated expression comparison chart or expression comparison animation to the terminal to warn the terminal of the conflicting expressions.
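The splice-on-conflict logic can be sketched with array concatenation. The tiny single-channel "images" below are placeholders for real frames, and side-by-side stacking is only one of the configurable splice forms the text mentions.

```python
import numpy as np

# Illustrative sketch: when the two emotional categories differ, splice the
# associated images side by side into an expression comparison chart.

def expression_comparison_chart(img_a, img_b, cat_a, cat_b):
    if cat_a == cat_b:
        return None                   # consistent emotions: no chart needed
    return np.hstack([img_a, img_b])  # side-by-side splice

target_img = np.zeros((2, 2), dtype=np.uint8)
service_img = np.full((2, 2), 255, dtype=np.uint8)
chart = expression_comparison_chart(target_img, service_img,
                                    "positive", "negative")
```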
In this embodiment, the images of the target monitoring object and the service object at the same shooting time are associated, the emotional states of the persons in the two images are automatically matched and detected, and the images whose emotional states conflict can be spliced, which makes comparison by the monitoring personnel convenient.
It should be understood that although the steps in the flow charts of Figs. 2-3 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential, as they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a video processing apparatus is provided, comprising a video intercepting module 410, a target image extraction module 420, an image collection generation module 430, an expression analysis module 440 and an archive generation module 450, wherein:
the video intercepting module 410 is configured to obtain a monitor video and intercept the service sub-video of the target monitoring object from the monitor video;
the target image extraction module 420 is configured to extract the first video image set containing the target monitoring object from the service sub-video and to extract the target object image from the first video image set;
the image collection generation module 430 is configured to extract the second video images containing the service object from the service sub-video and to obtain the service image set from the target object image and the second video images;
the expression analysis module 440 is configured to perform micro-expression analysis on each set image in the service image set and to obtain the preset micro-expression matching each set image;
the archive generation module 450 is configured to generate the service information archive of the target monitoring object from the service image set and the preset micro-expressions.
In one embodiment, the video intercepting module 410 may include:
an information searching unit, configured to obtain the service identifier of the target monitoring object and to look up the service time and the target facial image corresponding to the service identifier;
a segment extraction unit, configured to extract from the monitor video the monitor video segments whose shooting times match the service time;
a re-screening unit, configured to perform face detection on the monitor video segments according to the target facial image and to extract from the monitor video segments the video sub-segments in which no face matching the target facial image is detected;
a duration comparison unit, configured to obtain the segment duration of each video sub-segment and to compare the segment duration with the preset missing threshold;
a video deletion unit, configured to delete from the video segments the first video sub-segments whose segment duration is greater than the preset missing threshold, obtaining the service sub-video.
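The interception pipeline these units describe can be sketched end to end. This is an assumed illustration: segments are `(start, end, face_present)` tuples already screened for service time, and the 30-second missing threshold is an example value, not one the patent specifies.

```python
# Illustrative sketch: drop the sub-segments in which the target face is
# absent for longer than the preset missing threshold; what remains is the
# service sub-video.

MISSING_THRESHOLD = 30  # seconds; assumed example value

def service_sub_video(segments):
    kept = []
    for start, end, face_present in segments:
        duration = end - start
        if not face_present and duration > MISSING_THRESHOLD:
            continue            # first video sub-segment: delete
        kept.append((start, end, face_present))
    return kept

segments = [(0, 20, True), (20, 60, False), (60, 75, False), (75, 90, True)]
result = service_sub_video(segments)   # the 40 s faceless segment is dropped
```

Short face-absent gaps survive this filter, which is what later allows the second video sub-segments (brief absences of the target) to be mined for service-object faces.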
In one embodiment, the target image extraction module 420 may include:
a person detection unit, configured to extract the first image frames from the first video image set and to detect the person count in each first image frame;
a multi-person detection unit, configured to extract from the first image frames the multi-person image frames whose person count is greater than 1;
a face matching unit, configured to perform face detection on the multi-person image frames according to the service facial images prestored in the service face database and to detect whether a facial image that does not match the service facial images is present in the multi-person image frames;
a target object extraction unit, configured to, if a facial image that does not match the service facial images is detected, extract the facial image matching the target facial image in the corresponding multi-person image frame as the target object image.
In one embodiment, the image collection generation module 430 may include:
a first extraction unit, configured to obtain the second video sub-segments whose segment duration is less than the preset missing threshold and to extract from the second video sub-segments the first facial images that do not match the service facial images;
a second extraction unit, configured to extract from the multi-person image frames the second facial images that do not match the service facial images;
an image collection unit, configured to obtain the second video images from the first facial images and the second facial images.
In one embodiment, the expression analysis module 440 may include:
a feature extraction unit, configured to extract the facial feature points from each set image and to calculate the facial action features from the facial feature points;
a probability calculation unit, configured to input the facial action features into the micro-expression analysis model and to obtain the matching probability value of each preset micro-expression;
an expression selection unit, configured to select the preset micro-expression matching the set image according to the matching probability values.
In one embodiment, the archive generation module 450 may include:
an association unit, configured to associate each preset micro-expression with the corresponding set image in the service image set;
a classification acquiring unit, configured to obtain the object type corresponding to each set image;
an emotion judging unit, configured to look up the expression label corresponding to each preset micro-expression and to determine the emotional category corresponding to the preset micro-expression according to the label;
a subset division unit, configured to divide the set images in the service image set into multiple image subsets according to the object types and the emotional categories and to generate the service information archive from the image subsets.
In one embodiment, the apparatus may further include:
an image association module, configured to associate, in the service image set, the first set images of the target object type and the second set images of the service object type whose shooting times match;
a category matching module, configured to judge whether the preset micro-expression associated with the first set image and the preset micro-expression associated with the second set image correspond to the same emotional category;
an image splicing module, configured to splice the associated first set image and second set image to obtain the expression comparison chart when the emotional categories are different.
For the specific limitations of the video processing apparatus, reference may be made to the limitations of the video processing method above, which are not repeated here. Each module in the above video processing apparatus may be implemented in whole or in part by software, hardware or a combination thereof. Each module may be embedded in or independent of the processor of the computer device in the form of hardware, or stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store video processing data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a video processing method.
Those skilled in the art will understand that the structure shown in Fig. 5 is only a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program. The processor, when executing the computer program, performs the following steps: obtaining a monitor video and intercepting the service sub-video of the target monitoring object from the monitor video; extracting the first video image set containing the target monitoring object from the service sub-video and extracting the target object image from the first video image set; extracting the second video images containing the service object from the service sub-video and obtaining the service image set from the target object image and the second video images; performing micro-expression analysis on each set image in the service image set and obtaining the preset micro-expression matching each set image; and generating the service information archive of the target monitoring object from the service image set and the preset micro-expressions.
In one embodiment, when the processor executes the computer program to implement the step of intercepting the service sub-video of the target monitoring object from the monitor video, the processor is further configured to: obtain the service identifier of the target monitoring object and look up the service time and the target facial image corresponding to the service identifier; extract from the monitor video the monitor video segments whose shooting times match the service time; perform face detection on the monitor video segments according to the target facial image and extract from the monitor video segments the video sub-segments in which no face matching the target facial image is detected; obtain the segment duration of each video sub-segment and compare the segment duration with the preset missing threshold; and delete from the video segments the first video sub-segments whose segment duration is greater than the preset missing threshold, obtaining the service sub-video.
In one embodiment, when the processor executes the computer program to implement the step of extracting the target object image from the first video image set, the processor is further configured to: extract the first image frames from the first video image set and detect the person count in each first image frame; extract from the first image frames the multi-person image frames whose person count is greater than 1; perform face detection on the multi-person image frames according to the service facial images prestored in the service face database and detect whether a facial image that does not match the service facial images is present in the multi-person image frames; and, if a facial image that does not match the service facial images is detected, extract the facial image matching the target facial image in the corresponding multi-person image frame as the target object image.
In one embodiment, when the processor executes the computer program to implement the step of extracting the second video images containing the service object from the service sub-video, the processor is further configured to: obtain the second video sub-segments whose segment duration is less than the preset missing threshold and extract from the second video sub-segments the first facial images that do not match the service facial images; extract from the multi-person image frames the second facial images that do not match the service facial images; and obtain the second video images from the first facial images and the second facial images.
In one embodiment, when the processor executes the computer program to implement the step of performing micro-expression analysis on each set image in the service image set and obtaining the preset micro-expression matching each set image, the processor is further configured to: extract the facial feature points from each set image and calculate the facial action features from the facial feature points; input the facial action features into the micro-expression analysis model to obtain the matching probability value of each preset micro-expression; and select the preset micro-expression matching the set image according to the matching probability values.
In one embodiment, when the processor executes the computer program to implement the step of generating the service information archive of the target monitoring object from the service image set and the preset micro-expressions, the processor is further configured to: associate each preset micro-expression with the corresponding set image in the service image set; obtain the object type corresponding to each set image; look up the expression label corresponding to each preset micro-expression and determine the emotional category corresponding to the preset micro-expression according to the label; and divide the set images in the service image set into multiple image subsets according to the object types and the emotional categories, generating the service information archive from the image subsets.
In one embodiment, the processor, when executing the computer program, further performs the following steps: associating, in the service image set, the first set images of the target object type and the second set images of the service object type whose shooting times match; judging whether the preset micro-expression associated with the first set image and the preset micro-expression associated with the second set image correspond to the same emotional category; and, when the emotional categories are different, splicing the associated first set image and second set image to obtain the expression comparison chart.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored. The computer program, when executed by a processor, performs the following steps: obtaining a monitor video and intercepting the service sub-video of the target monitoring object from the monitor video; extracting the first video image set containing the target monitoring object from the service sub-video and extracting the target object image from the first video image set; extracting the second video images containing the service object from the service sub-video and obtaining the service image set from the target object image and the second video images; performing micro-expression analysis on each set image in the service image set and obtaining the preset micro-expression matching each set image; and generating the service information archive of the target monitoring object from the service image set and the preset micro-expressions.
In one embodiment, when the computer program is executed by the processor to implement the step of intercepting the service sub-video of the target monitoring object from the monitor video, the following is further performed: obtaining the service identifier of the target monitoring object and looking up the service time and the target facial image corresponding to the service identifier; extracting from the monitor video the monitor video segments whose shooting times match the service time; performing face detection on the monitor video segments according to the target facial image and extracting from the monitor video segments the video sub-segments in which no face matching the target facial image is detected; obtaining the segment duration of each video sub-segment and comparing the segment duration with the preset missing threshold; and deleting from the video segments the first video sub-segments whose segment duration is greater than the preset missing threshold, obtaining the service sub-video.
In one embodiment, when the computer program is executed by the processor to implement the step of extracting the target object image from the first video image set, the following is further performed: extracting the first image frames from the first video image set and detecting the person count in each first image frame; extracting from the first image frames the multi-person image frames whose person count is greater than 1; performing face detection on the multi-person image frames according to the service facial images prestored in the service face database and detecting whether a facial image that does not match the service facial images is present in the multi-person image frames; and, if a facial image that does not match the service facial images is detected, extracting the facial image matching the target facial image in the corresponding multi-person image frame as the target object image.
In one embodiment, when the computer program is executed by the processor to implement the step of extracting the second video images containing the service object from the service sub-video, the following is further performed: obtaining the second video sub-segments whose segment duration is less than the preset missing threshold and extracting from the second video sub-segments the first facial images that do not match the service facial images; extracting from the multi-person image frames the second facial images that do not match the service facial images; and obtaining the second video images from the first facial images and the second facial images.
In one embodiment, when the computer program is executed by the processor to implement the step of performing micro-expression analysis on each set image in the service image set and obtaining the preset micro-expression matching each set image, the following is further performed: extracting the facial feature points from each set image and calculating the facial action features from the facial feature points; inputting the facial action features into the micro-expression analysis model to obtain the matching probability value of each preset micro-expression; and selecting the preset micro-expression matching the set image according to the matching probability values.
In one embodiment, when the computer program is executed by the processor to implement the step of generating the service information archive of the target monitoring object according to the service image set and the preset micro-expressions, the processor further performs: associating the preset micro-expressions with the corresponding set images in the service image set; obtaining the object category corresponding to each set image; looking up the expression label corresponding to each preset micro-expression, and determining the emotion category corresponding to the preset micro-expression according to the label; and dividing the set images in the service image set into multiple image subsets according to the object categories and the emotion categories, and generating the service information archive according to the image subsets.
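The archive-generation step above amounts to a two-key partition, which the sketch below makes concrete. The label-to-emotion mapping and the dict-based image records are hypothetical; the patent leaves the concrete labels, categories, and archive format unspecified.

```python
from collections import defaultdict

# Hypothetical mapping from expression labels to emotion categories.
EXPRESSION_TO_EMOTION = {
    "smile": "positive",
    "frown": "negative",
    "surprise": "neutral",
}

def build_service_archive(set_images):
    # set_images: dicts with keys 'image_id', 'object_category' (e.g. 'target'
    # or 'service'), and 'expression' (the associated preset micro-expression).
    # Images are partitioned into subsets keyed by (object category, emotion
    # category); the resulting subsets form the service information archive.
    archive = defaultdict(list)
    for img in set_images:
        emotion = EXPRESSION_TO_EMOTION.get(img["expression"], "neutral")
        archive[(img["object_category"], emotion)].append(img["image_id"])
    return dict(archive)
```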
In one embodiment, when the computer program is executed by the processor, the following steps are further implemented: associating, in the service image set, first set images of the target object category with second set images of the service object category whose shooting times match; judging whether the preset micro-expression associated with the first set image and the preset micro-expression associated with the second set image correspond to the same emotion category; and, when they correspond to different emotion categories, splicing the associated first set image and second set image to obtain an expression comparison chart.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed it may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of this application, and these all fall within the protection scope of this application. Therefore, the protection scope of this application patent shall be subject to the appended claims.
Claims (10)
1. A video processing method, the method comprising:
obtaining a surveillance video, and intercepting a service sub-video of a target monitoring object from the surveillance video;
extracting a first video image set containing the target monitoring object from the service sub-video, and extracting a target object image from the first video image set;
extracting second video images containing a service object from the service sub-video, and obtaining a service image set according to the target object image and the second video images;
performing micro-expression analysis on each set image in the service image set, to obtain a preset micro-expression matching each set image; and
generating a service information archive of the target monitoring object according to the service image set and the preset micro-expressions.
2. The method according to claim 1, wherein intercepting the service sub-video of the target monitoring object from the surveillance video comprises:
obtaining a service identifier of the target monitoring object, and looking up a service time and a target facial image corresponding to the service identifier;
extracting, from the surveillance video, a surveillance video segment whose shooting time matches the service time;
performing face detection on the surveillance video segment according to the target facial image, and extracting, from the surveillance video segment, video sub-segments in which no face matching the target facial image is detected;
obtaining the clip duration of each video sub-segment, and comparing the clip duration with a preset missing threshold; and
deleting, from the surveillance video segment, first video sub-segments whose clip duration is greater than the preset missing threshold, to obtain the service sub-video.
3. The method according to claim 2, wherein extracting the target object image from the first video image set comprises:
extracting first image frames from the first video image set, and detecting the number of persons in each first image frame;
extracting, from the first image frames, multi-person image frames in which the number of persons is greater than 1;
performing face detection on the multi-person image frames according to service facial images pre-stored in a service face database, to detect whether a facial image that does not match the service facial images exists in the multi-person image frames; and
if a facial image that does not match the service facial images is detected, extracting the face image matching the target facial image from the corresponding multi-person image frames as the target object image.
4. The method according to claim 3, wherein extracting the second video images containing the service object from the service sub-video comprises:
obtaining second video sub-segments whose clip duration is less than the preset missing threshold, and extracting, from the second video sub-segments, first face images that do not match the service facial images;
extracting, from the multi-person image frames, second face images that do not match the service facial images; and
obtaining the second video images according to the first face images and the second face images.
5. The method according to claim 4, wherein performing micro-expression analysis on each set image in the service image set to obtain the preset micro-expression matching each set image comprises:
extracting facial feature points from each set image, and calculating facial action features according to the facial feature points;
inputting the facial action features into a micro-expression analysis model to obtain a matching probability value for each preset micro-expression; and
selecting the preset micro-expression matching the set image according to the matching probability values.
6. The method according to claim 1, wherein generating the service information archive of the target monitoring object according to the service image set and the preset micro-expressions comprises:
associating the preset micro-expressions with the corresponding set images in the service image set;
obtaining the object category corresponding to each set image;
looking up the expression label corresponding to each preset micro-expression, and determining the emotion category corresponding to the preset micro-expression according to the label; and
dividing the set images in the service image set into multiple image subsets according to the object categories and the emotion categories, and generating the service information archive according to the image subsets.
7. The method according to claim 6, further comprising:
associating, in the service image set, first set images of the target object category with second set images of the service object category whose shooting times match;
judging whether the preset micro-expression associated with the first set image and the preset micro-expression associated with the second set image correspond to the same emotion category; and
when they correspond to different emotion categories, splicing the associated first set image and the second set image to obtain an expression comparison chart.
8. A video processing apparatus, the apparatus comprising:
a video intercepting module, configured to obtain a surveillance video and intercept a service sub-video of a target monitoring object from the surveillance video;
a target image extraction module, configured to extract a first video image set containing the target monitoring object from the service sub-video, and extract a target object image from the first video image set;
an image set generation module, configured to extract second video images containing a service object from the service sub-video, and obtain a service image set according to the target object image and the second video images;
an expression analysis module, configured to perform micro-expression analysis on each set image in the service image set to obtain a preset micro-expression matching each set image; and
an archive generation module, configured to generate a service information archive of the target monitoring object according to the service image set and the preset micro-expressions.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910599356.4A CN110458008A (en) | 2019-07-04 | 2019-07-04 | Method for processing video frequency, device, computer equipment and storage medium |
PCT/CN2020/087694 WO2021000644A1 (en) | 2019-07-04 | 2020-04-29 | Video processing method and apparatus, computer device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910599356.4A CN110458008A (en) | 2019-07-04 | 2019-07-04 | Method for processing video frequency, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110458008A (en) | 2019-11-15 |
Family
ID=68482120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910599356.4A Pending CN110458008A (en) | 2019-07-04 | 2019-07-04 | Method for processing video frequency, device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110458008A (en) |
WO (1) | WO2021000644A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111935453A (en) * | 2020-07-27 | 2020-11-13 | 浙江大华技术股份有限公司 | Learning supervision method and device, electronic equipment and storage medium |
CN112017339A (en) * | 2020-09-24 | 2020-12-01 | 柳州柳工挖掘机有限公司 | Excavator control system |
CN112052357A (en) * | 2020-04-15 | 2020-12-08 | 上海摩象网络科技有限公司 | Video clip marking method and device and handheld camera |
WO2021000644A1 (en) * | 2019-07-04 | 2021-01-07 | 深圳壹账通智能科技有限公司 | Video processing method and apparatus, computer device and storage medium |
CN113392271A (en) * | 2021-05-25 | 2021-09-14 | 珠海格力电器股份有限公司 | Cat eye data processing method, module, electronic device and storage medium |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818960B (en) * | 2021-03-25 | 2023-09-05 | 平安科技(深圳)有限公司 | Waiting time processing method, device, equipment and medium based on face recognition |
CN113873191B (en) * | 2021-10-12 | 2023-11-28 | 苏州万店掌软件技术有限公司 | Video backtracking method, device and system based on voice |
CN113925511A (en) * | 2021-11-08 | 2022-01-14 | 北京九州安华信息安全技术有限公司 | Muscle nerve vibration time-frequency image processing method and device |
CN114445896B (en) * | 2022-01-28 | 2024-04-05 | 北京百度网讯科技有限公司 | Method and device for evaluating confidence of content of person statement in video |
CN114866843B (en) * | 2022-05-06 | 2023-08-11 | 杭州登虹科技有限公司 | Video data encryption system for network video monitoring |
CN115512427B (en) * | 2022-11-04 | 2023-04-25 | 北京城建设计发展集团股份有限公司 | User face registration method and system combined with matched biopsy |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830633A (en) * | 2018-04-26 | 2018-11-16 | 华慧视科技(天津)有限公司 | A kind of friendly service evaluation method based on smiling face's detection |
CN109766859A (en) * | 2019-01-17 | 2019-05-17 | 平安科技(深圳)有限公司 | Campus monitoring method, device, equipment and storage medium based on micro- expression |
CN109766766A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Employee work condition monitoring method, device, computer equipment and storage medium |
CN109766770A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | QoS evaluating method, device, computer equipment and storage medium |
CN109829388A (en) * | 2019-01-07 | 2019-05-31 | 平安科技(深圳)有限公司 | Video data handling procedure, device and computer equipment based on micro- expression |
CN109858410A (en) * | 2019-01-18 | 2019-06-07 | 深圳壹账通智能科技有限公司 | Service evaluation method, apparatus, equipment and storage medium based on Expression analysis |
CN109858949A (en) * | 2018-12-26 | 2019-06-07 | 秒针信息技术有限公司 | A kind of customer satisfaction appraisal procedure and assessment system based on monitoring camera |
CN109871751A (en) * | 2019-01-04 | 2019-06-11 | 平安科技(深圳)有限公司 | Attitude appraisal procedure, device and storage medium based on facial expression recognition |
CN109886111A (en) * | 2019-01-17 | 2019-06-14 | 深圳壹账通智能科技有限公司 | Match monitoring method, device, computer equipment and storage medium based on micro- expression |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108665147A (en) * | 2018-04-18 | 2018-10-16 | 深圳市云领天下科技有限公司 | A kind of method and device of children education credit early warning |
CN109190601A (en) * | 2018-10-19 | 2019-01-11 | 银河水滴科技(北京)有限公司 | Recongnition of objects method and device under a kind of monitoring scene |
CN109168052B (en) * | 2018-10-31 | 2021-04-27 | 杭州比智科技有限公司 | Method and device for determining service satisfaction degree and computing equipment |
CN110458008A (en) * | 2019-07-04 | 2019-11-15 | 深圳壹账通智能科技有限公司 | Method for processing video frequency, device, computer equipment and storage medium |
- 2019-07-04: CN application CN201910599356.4A filed, published as CN110458008A/en, status Pending
- 2020-04-29: PCT application PCT/CN2020/087694 filed, published as WO2021000644A1/en, Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021000644A1 (en) * | 2019-07-04 | 2021-01-07 | 深圳壹账通智能科技有限公司 | Video processing method and apparatus, computer device and storage medium |
CN112052357A (en) * | 2020-04-15 | 2020-12-08 | 上海摩象网络科技有限公司 | Video clip marking method and device and handheld camera |
CN112052357B (en) * | 2020-04-15 | 2022-04-01 | 上海摩象网络科技有限公司 | Video clip marking method and device and handheld camera |
CN111935453A (en) * | 2020-07-27 | 2020-11-13 | 浙江大华技术股份有限公司 | Learning supervision method and device, electronic equipment and storage medium |
CN112017339A (en) * | 2020-09-24 | 2020-12-01 | 柳州柳工挖掘机有限公司 | Excavator control system |
CN113392271A (en) * | 2021-05-25 | 2021-09-14 | 珠海格力电器股份有限公司 | Cat eye data processing method, module, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2021000644A1 (en) | 2021-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110458008A (en) | Method for processing video frequency, device, computer equipment and storage medium | |
CN109729383B (en) | Double-recording video quality detection method and device, computer equipment and storage medium | |
Joo et al. | Automated facial trait judgment and election outcome prediction: Social dimensions of face | |
CN109766766A (en) | Employee work condition monitoring method, device, computer equipment and storage medium | |
Abd El Meguid et al. | Fully automated recognition of spontaneous facial expressions in videos using random forest classifiers | |
CN109767261A (en) | Products Show method, apparatus, computer equipment and storage medium | |
CN110427881B (en) | Cross-library micro-expression recognition method and device based on face local area feature learning | |
CN109766474A (en) | Inquest signal auditing method, device, computer equipment and storage medium | |
CN111160275B (en) | Pedestrian re-recognition model training method, device, computer equipment and storage medium | |
CN109886110A (en) | Micro- expression methods of marking, device, computer equipment and storage medium | |
Garain et al. | Identification of reader specific difficult words by analyzing eye gaze and document content | |
CN109376598A (en) | Facial expression image processing method, device, computer equipment and storage medium | |
Xia et al. | Cross-database micro-expression recognition with deep convolutional networks | |
Zhang et al. | Facial action unit detection with local key facial sub-region based multi-label classification for micro-expression analysis | |
CN109697421A (en) | Evaluation method, device, computer equipment and storage medium based on micro- expression | |
CN109766773A (en) | Match monitoring method, device, computer equipment and storage medium | |
Ray et al. | Design and implementation of affective e-learning strategy based on facial emotion recognition | |
Dadiz et al. | Analysis of depression based on facial cues on a captured motion picture | |
CN109241864A (en) | Emotion prediction technique, device, computer equipment and storage medium | |
Nguyen et al. | Towards recognizing facial expressions at deeper level: Discriminating genuine and fake smiles from a sequence of images | |
CN113920575A (en) | Facial expression recognition method and device and storage medium | |
Alattab et al. | Semantic features selection and representation for facial image retrieval system | |
Cao et al. | Outlier detection for spotting micro-expressions | |
CN112487980A (en) | Micro-expression-based treatment method, device, system and computer-readable storage medium | |
Nadeeshani et al. | Automated analysis of children emotion expression levels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||