CN109344770A - Resource allocation methods and device - Google Patents
- Publication number: CN109344770A
- Application number: CN201811154155.5A
- Authority
- CN
- China
- Prior art keywords
- identification model
- frame
- video
- multiple images
- continuous behaviour
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
This application relates to a resource allocation method and device. The resource allocation method includes: obtaining a first video captured by a video device, the first video comprising multiple image frames; inputting the multiple image frames into a first identification model, the first identification model being used to identify whether the same first object in the multiple image frames exhibits continuous behaviour; obtaining a first recognition result through the first identification model when the first object exhibits continuous behaviour in the multiple image frames; and, if the first recognition result indicates that the first object is a target object participating in an experiment, creating a data platform for that target object. The resource allocation method and device provided by the present application can allocate data platforms to objects participating in an experiment automatically, accurately and promptly, maximising the utilisation of data platform resources and saving the administrator's labour.
Description
Technical field
This application relates to the field of big data technology, and in particular to a resource allocation method and device.
Background technique
A big data training room platform is an experimental teaching platform released for universities to train big data personnel. Currently, when a course is taught on a data platform, the administrator must create data platforms for the teacher and the students in advance according to the number of participants in each course, and release the platform resources after the course ends so that subsequent classes can use them.

However, university teaching is flexible. Because course attendance is open, the number of participants in a course is not fixed, and the administrator cannot predict in advance how many people will take part in an experiment. For example, among the objects participating in a course, some are auditing students, some are students who bring their own mobile terminals (such as laptops or tablet computers) to take part in the experiment, and some are students who use the desktop computers provided in the classroom. It is therefore difficult for the administrator to allocate data platforms accurately to the objects participating in the experiment.
Summary of the invention
In view of this, the present application proposes a resource allocation method and device, to solve the problem in the related art that it is difficult to accurately allocate data platforms to objects participating in an experiment.
According to one aspect of the application, a resource allocation method is provided, the method comprising:

obtaining a first video captured by a video device, the first video comprising multiple image frames;

inputting the multiple image frames into a first identification model, the first identification model being used to identify whether the same first object in the multiple image frames exhibits continuous behaviour;

obtaining a first recognition result through the first identification model when the first object exhibits continuous behaviour in the multiple image frames; and

if the first recognition result indicates that the first object is a target object participating in an experiment, creating a data platform for that target object.
According to another aspect of the application, a resource allocation device is provided, the device comprising:

a first obtaining module, configured to obtain a first video captured by a video device, the first video comprising multiple image frames;

a first input module, configured to input the multiple image frames into a first identification model, the first identification model being used to identify whether the same first object in the multiple image frames exhibits continuous behaviour;

a first processing module, configured to obtain a first recognition result through the first identification model when the first object exhibits continuous behaviour in the multiple image frames; and

a creation module, configured to create a data platform for a target object participating in an experiment if the first recognition result indicates that the first object is that target object.
According to another aspect of the application, a resource allocation device is provided, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the above method.
According to another aspect of the application, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions implement the above method when executed by a processor.
With the resource allocation method and device provided by the present application, a first video captured by a video device is obtained, the first video comprising multiple image frames; the multiple image frames are input into a first identification model, which identifies whether the same first object in the frames exhibits continuous behaviour; when the first object exhibits continuous behaviour in the multiple image frames, a first recognition result is obtained through the first identification model; and if the first recognition result indicates that the first object is a target object participating in an experiment, a data platform is created for that target object. Data platforms can thereby be allocated to objects participating in an experiment automatically, accurately and promptly, maximising the utilisation of data platform resources and saving the administrator's labour.

Other features and aspects will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the application together with the specification, and serve to explain the principles of the application.
Fig. 1 shows a flow chart of a resource allocation method according to an embodiment of the application.

Fig. 2 shows a flow chart of a resource allocation method according to an embodiment of the application.

Fig. 3 shows a block diagram of a resource allocation device according to an embodiment of the application.

Fig. 4 shows a block diagram of a resource allocation device according to an embodiment of the application.

Fig. 5 shows a block diagram of a resource allocation device according to an embodiment of the application.
Detailed description of embodiments
Various exemplary embodiments, features and aspects of the application are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.

The word "exemplary" is used herein to mean "serving as an example, embodiment or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred or advantageous over other embodiments.
The terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit the application. The singular forms "a", "said" and "the" used in this application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.

It should be understood that although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
In addition, numerous specific details are given in the following detailed description to better illustrate the application. Those skilled in the art will appreciate that the application can equally be implemented without certain of these details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the application.
To help those skilled in the art better understand the technical solutions in this application, some of the technical terms involved in this application are briefly described first.
Data platform: a platform that provides massive storage and high-performance data computing and analysis capabilities for all industries and a variety of application scenarios.
Big data training room platform: a data platform released for the education sector; in other words, an experimental teaching platform released for universities to train big data personnel. The big data training room platform provides a data platform for each user (such as a teacher or student) by means of virtualisation or containers (Docker), for use in teaching and study.
In the related art, because data platform resources in a big data training room platform are scarce, the administrator must, in order to maximise resource utilisation during use of the platform, create data platforms for users in advance according to the number of users, and destroy the created platforms and release their resources for other users after use ends.

However, during use of the big data training room platform the number of users varies and is not fixed, and the administrator cannot predict it in advance. If the administrator creates more data platforms than there are users, data platform resources are wasted. If the administrator creates fewer data platforms than there are users, the supply of data platforms is insufficient; the administrator can be notified in time to create additional platforms, but this increases users' waiting time and degrades the user experience.
In the related art, target detection networks have achieved great breakthroughs. Currently, popular target detection networks mainly include networks of the R-CNN (Regions with Convolutional Neural Network) family based on candidate regions (region proposals), such as R-CNN, Fast R-CNN and Faster R-CNN, and the Yolo (You Only Look Once) algorithm.
R-CNN family networks: two-stage networks. Multiple candidate regions are first extracted from the image by a heuristic method (selective search) or by a CNN (Convolutional Neural Network); it is then judged whether each extracted candidate region contains an object and, if so, which object; finally, the positions of the candidate regions containing objects are refined.
Yolo algorithm: a one-stage network, i.e. the categories and positions of different targets are predicted directly by a single CNN. The Yolo algorithm divides the input image into an S*S grid; if the centre of an object falls within a grid cell, that cell is responsible for detecting the object.
RNN (Recurrent Neural Network): a neural network for processing sequence data, such as time-series data or word-sequence data. Sequence data has the characteristic that later data is related to earlier data. For example, time-series data is data collected at different points in time, and can reflect the state, or the degree of change over time, of a thing or phenomenon.
To address the technical problems present in the above related art, Fig. 1 shows a flow chart of a resource allocation method according to an embodiment of the application. The method is applicable to a big data training room platform. As shown in Fig. 1, the method comprises steps S11 to S14.
In step S11, a first video captured by a video device is obtained, the first video comprising multiple image frames.

Here, a video device is a device capable of shooting video, such as a camera.
In one implementation, the video device is installed in a classroom where a big data course is taught. The first video captured by the video device is obtained and cut into frames to obtain the frame images it contains. The frame images of the obtained first video may contain one or more people (such as teachers and students).
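The frame-cutting step would typically down-sample the video rather than keep every frame. The following is a minimal sketch of choosing which frame indices to keep; the application does not specify a sampling scheme, so the function name and the frame-rate parameters are illustrative assumptions.

```python
def sample_frame_indices(total_frames: int, video_fps: float, sample_fps: float) -> list:
    """Return the indices of the frames to keep when cutting a video into
    image frames at a reduced rate.

    E.g. a 30 fps video sampled at 2 fps keeps roughly every 15th frame.
    """
    if video_fps <= 0 or sample_fps <= 0:
        raise ValueError("frame rates must be positive")
    # Step between kept frames; at least 1 so we never skip everything.
    step = max(1, round(video_fps / sample_fps))
    return list(range(0, total_frames, step))
```

In practice the kept indices would be read with a video library (for instance OpenCV's `cv2.VideoCapture`) and each decoded frame passed on to the identification model.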
In step S12, the multiple image frames are input into the first identification model, the first identification model being used to identify whether the same first object in the multiple image frames exhibits continuous behaviour.
In one implementation, the identification model comprises a target detection network and a target tracking network.

Specifically, inputting the multiple image frames into the identification model, where the identification model identifies whether the same object in the multiple image frames exhibits continuous behaviour, comprises: identifying the objects in the image frames through the target detection network, and outputting the label corresponding to each object, the label representing the object's behavioural characteristics; determining, from the objects in the image frames and according to their labels, the tracking objects that meet a preset condition; and identifying the continuous behaviour of each tracking object through the target tracking network, and outputting the recognition result corresponding to that continuous behaviour.
In one implementation, the first identification model comprises a first target detection network and a first target tracking network. The first target detection network may be an R-CNN network or the Yolo algorithm, and the first target tracking network may be an RNN. Both the first target detection network and the first target tracking network can be obtained by training.
The input of the first target detection network is a video, and its output is the labels contained in the video's frame images. Specifically, for the frame images of the obtained first video, the first target detection network can identify the labels contained in each image. A label may be information describing a person's behaviour; for example, labels may include standing, walking, sitting down, opening a computer (such as lifting a laptop lid or switching on a desktop computer), operating a computer, closing a computer, tidying up belongings, entering the classroom, leaving the classroom, and so on, which the embodiments of the present application do not restrict.
The input of the first target tracking network is a video, and its output is a recognition result indicating that the first object participates, or does not participate, in the experiment. Specifically, for the first object in the first video, the first target tracking network tracks the first object and identifies its continuous actions; according to the identified continuous actions of the first object, the first target tracking network outputs a recognition result indicating whether the first object participates in the experiment.
In step S13, when the first object exhibits continuous behaviour in the multiple image frames, a first recognition result is obtained through the first identification model.
In one implementation, the first identification model comprises the first target detection network and the first target tracking network. The first object in the first video is determined through the first target detection network; the behaviour of the first object in the first video is then identified through the first target tracking network, which outputs the first recognition result corresponding to the first object, the first recognition result indicating that the first object participates, or does not participate, in the experiment.
Here, the first object is an object that meets a preset behaviour condition for participating in the experiment. For the frame images of the obtained first video, the labels contained in each image can be identified through the first target detection network. The first object is determined according to the contained labels, thereby narrowing the range of objects that need to be tracked. Once the first object has been determined, it is tracked through the first target tracking network, which identifies its continuous actions.
As an example, if the continuous action sequence "enter classroom - walk - sit down - open computer - operate computer" is the preset behaviour condition for participating in the experiment, the first object may be a person exhibiting the "enter classroom" behaviour. After the first object is determined, it is tracked through the first target tracking network, which identifies its continuous actions. If they match the sequence "enter classroom - walk - sit down - open computer - operate computer", the first target tracking network outputs a recognition result indicating that the first object participates in the experiment; if they do not match the sequence, it outputs a recognition result indicating that the first object does not participate in the experiment.
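Conceptually, deciding whether a tracked object's per-frame labels match the preset continuous action sequence amounts to an in-order subsequence test. The sketch below is a hypothetical simplification: the application's tracking network is a trained RNN, not a rule-based matcher, and the English label strings are illustrative.

```python
# Illustrative preset behaviour condition for participating in the experiment.
PARTICIPATION_PATTERN = ["enter classroom", "walk", "sit down",
                         "open computer", "operate computer"]

def matches_pattern(observed_actions, pattern=PARTICIPATION_PATTERN) -> bool:
    """Return True if `pattern` occurs as an in-order subsequence of the
    per-frame action labels observed for one tracked object.

    Repeated or interleaved labels (e.g. several "walk" frames) are allowed,
    as long as every step of the pattern appears in order.
    """
    it = iter(observed_actions)
    # `step in it` advances the iterator, so each step must be found
    # strictly after the previous one.
    return all(step in it for step in pattern)
```

A sequence such as `["enter classroom", "walk", "walk", "sit down", "open computer", "operate computer"]` would match, while one that stops after "walk" would not.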
In one implementation, before the multiple image frames are input into the first identification model (which identifies whether the same first object in the multiple image frames exhibits continuous behaviour), the method further comprises: annotating the objects in multiple training image frames to obtain the annotation label corresponding to each object; inputting the multiple training image frames into the first target detection network to be trained, to obtain the prediction label corresponding to each object; determining a first loss value according to the prediction labels and the annotation labels corresponding to the objects in the multiple training image frames; and adjusting the values of the parameters in the first target detection network to be trained according to the first loss value.
Here, the first loss value is the result obtained by feeding the prediction labels of the objects in the training image frames and the corresponding annotation labels into the loss function of the target detection network. The values of the parameters in the target detection network to be trained are adjusted according to the first loss value. When the first loss value stabilises, or falls below a preset threshold, the trained target detection network is obtained.
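The stopping rule just described, train until the loss stabilises or drops below a preset threshold, might be sketched as follows. The window size and stability tolerance are illustrative assumptions; the application does not specify them.

```python
def should_stop(loss_history, threshold=0.01, window=5, tolerance=1e-4) -> bool:
    """Decide whether training can stop: either the latest loss is below
    `threshold`, or the last `window` losses have stabilised (their
    max-min spread is within `tolerance`)."""
    if not loss_history:
        return False
    if loss_history[-1] < threshold:
        return True
    if len(loss_history) >= window:
        recent = loss_history[-window:]
        return max(recent) - min(recent) <= tolerance
    return False
```

The same criterion would apply to the second loss value when training the target tracking network.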
In one implementation, before the multiple image frames are input into the first identification model, the method further comprises: annotating the objects in the image frames of multiple training videos to obtain the annotated continuous behaviour corresponding to each object; inputting the image frames of the multiple training videos into the first target tracking network to be trained, to obtain the predicted continuous behaviour corresponding to each object; determining a second loss value according to the annotated continuous behaviours and the predicted continuous behaviours of the objects in the image frames of the multiple training videos; and adjusting the values of the parameters in the first target tracking network to be trained according to the second loss value.
Here, the second loss value is the result obtained by feeding the predicted labels of the objects in the training image frames and the corresponding annotation labels into the loss function of the target tracking network. The values of the parameters in the target tracking network to be trained are adjusted according to the second loss value. When the second loss value stabilises, or falls below a preset threshold, the trained target tracking network is obtained.
In step S14, if the first recognition result indicates that the first object is a target object participating in the experiment, a data platform is created for that target object.
With the resource allocation method provided by the embodiments of the present application, a first video captured by a video device is obtained, the first video comprising multiple image frames; the multiple image frames are input into a first identification model, which identifies whether the same first object in the frames exhibits continuous behaviour; when the first object exhibits continuous behaviour in the multiple image frames, a first recognition result is obtained through the first identification model; and if the first recognition result indicates that the first object is a target object participating in an experiment, a data platform is created for that target object. Data platforms can thereby be allocated to objects participating in an experiment automatically, accurately and promptly, maximising the utilisation of data platform resources and saving the administrator's labour.
Example one:

The continuous action sequence "enter classroom - walk - sit down - open computer - operate computer" is taken as the preset behaviour condition for participating in the experiment, and the first object is a person exhibiting the "enter classroom" behaviour. The first target detection network and the first target tracking network are obtained by training.

Video 1 captured by the video device is obtained and cut into frames, obtaining image 1 contained in video 1. Image 1 is identified through the first target detection network, which outputs the labels contained in it. For example, image 1 contains 5 objects, and the output labels indicate: object 1 corresponds to the label "enter classroom", object 2 to "standing", object 3 to "enter classroom", object 4 to "walking", and object 5 to "enter classroom". Objects 1, 3 and 5 are therefore first objects.
For objects 1, 3 and 5 in video 1, each is tracked separately through the first target tracking network, which identifies their continuous actions. If object 1 matches the continuous action sequence "enter classroom - walk - sit down - open computer - operate computer", the first target tracking network outputs a recognition result indicating that object 1 participates in the experiment, and data platform 1 is created. If object 3 does not match the sequence, the first target tracking network outputs a recognition result indicating that object 3 does not participate in the experiment. If object 5 matches the sequence, the first target tracking network outputs a recognition result indicating that object 5 participates in the experiment, and data platform 2 is created.
Object 5 logs in with user account a, so the created data platform 1 is pushed to user account a, and object 5 participates in the experiment through data platform 1. Object 1 logs in with user account b, so the created data platform 2 is pushed to user account b, and object 1 participates in the experiment through data platform 2. After the created data platforms are pushed to the user accounts, a relation table is established that records the correspondence between each user account, the terminal logging in with that account, and the data platform.
The relation table may include the user account, the IP (Internet Protocol) address and MAC (Media Access Control) address of the terminal logging in with the account, the number of the data platform, and so on. Table 1 shows a relation table according to an embodiment of the application. As shown in Table 1, object 5 logs in with user account usename a on the terminal whose IP address is ip 1 and MAC address is mac 1, and participates in the experiment through data platform 1; object 1 logs in with user account usename b on the terminal whose IP address is ip 2 and MAC address is mac 2, and participates in the experiment through data platform 2.
Table 1
| User account | Terminal MAC address | Terminal IP address | Data platform number |
| usename a | mac 1 | ip 1 | number 1 |
| usename b | mac 2 | ip 2 | number 2 |
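For illustration, the relation table above could be kept as a simple in-memory mapping from user account to terminal and platform details. The field names and the `build_relation_table` helper are assumptions for this sketch, not part of the application.

```python
def build_relation_table(entries):
    """Build the account -> (MAC address, IP address, platform number)
    relation table kept after each data platform is pushed to a user
    account, as in Table 1."""
    return {e["account"]: (e["mac"], e["ip"], e["platform"]) for e in entries}

# The two rows of Table 1.
table = build_relation_table([
    {"account": "usename a", "mac": "mac 1", "ip": "ip 1", "platform": "number 1"},
    {"account": "usename b", "mac": "mac 2", "ip": "ip 2", "platform": "number 2"},
])
```

Looking up an account then immediately yields the terminal that logged in with it and the platform it was assigned, which is what the later release step needs.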
Fig. 2 shows a flow chart of a resource allocation method according to an embodiment of the application. The method is applicable to a big data training room platform. As shown in Fig. 2, the method comprises steps S21 to S24.
In step S21, a second video captured by the video device is obtained, the second video comprising multiple image frames.
In one implementation, the video device is installed in a classroom where a big data course is taught. The second video captured by the video device is obtained and cut into frames to obtain the frame images it contains. The frame images of the obtained second video may contain one or more people (such as teachers and students).
In step S22, the multiple image frames are input into the second identification model, the second identification model being used to identify whether the same second object in the multiple image frames exhibits continuous behaviour.
In one implementation, the identification model comprises a target detection network and a target tracking network.

Specifically, inputting the multiple image frames into the identification model, where the identification model identifies whether the same object in the multiple image frames exhibits continuous behaviour, comprises: identifying the objects in the image frames through the target detection network, and outputting the label corresponding to each object, the label representing the object's behavioural characteristics; determining, from the objects in the image frames and according to their labels, the tracking objects that meet a preset condition; and identifying the continuous behaviour of each tracking object through the target tracking network, and outputting the recognition result corresponding to that continuous behaviour.
In one implementation, the second identification model comprises a second target detection network and a second target tracking network. The second target detection network may be an R-CNN network or the Yolo algorithm, and the second target tracking network may be an RNN. Both the second target detection network and the second target tracking network can be obtained by training.
The input of the second target detection network is a video, and its output is the labels contained in the video's frame images. Specifically, for the frame images of the obtained second video, the second target detection network can identify the labels contained in each image. A label may be information describing a person's behaviour; for example, labels may include standing, walking, sitting down, opening a computer (such as lifting a laptop lid or switching on a desktop computer), operating a computer, closing a computer, tidying up belongings, entering the classroom, leaving the classroom, and so on, which the embodiments of the present application do not restrict.
Wherein, the input of the second target following network is video, is exported to indicate that the second object terminates to test or indicate the
The recognition result of experiment is not finished in two objects.Specifically, for the second object in the second video, pass through the second target following net
Network tracks the second object, the continuity movement of the second object of identification.It is acted according to the continuity of the second object of identification,
The output of second target following network indicates that the second object terminates to test or indicate that the recognition result of experiment is not finished in the second object.
In step S23, if the second recognition result indicates that the second object is a target object that has finished the experiment, the running status of each terminal running each data platform is detected.
In one implementation, the second identification model comprises the second target detection network and the second target tracking network. The second object in the second video is determined through the second target detection network; the behaviour of the second object in the second video is then identified through the second target tracking network, which outputs the second recognition result corresponding to the second object, the second recognition result indicating that the second object has finished, or has not finished, the experiment.
Here, the second object refers to an object that meets the preset behavior condition for ending the experiment. For the frame images of the obtained second video, the labels contained in each image can be identified by the second target detection network. The second object is determined from the labels contained in the images, which narrows the range of objects that need to be tracked. After the second object is determined, the second target tracking network tracks the second object and identifies its continuity actions.
As an example, if the continuity action "close computer - tidy up articles - walk - leave classroom" is the preset behavior condition for ending the experiment, the second object may be a person with the "close computer" behavior. After the second object is determined, the second target tracking network tracks the second object and identifies its continuity actions. If the continuity action "close computer - tidy up articles - walk - leave classroom" is satisfied, the second target tracking network outputs a recognition result indicating that the second object has ended the experiment; otherwise, it outputs a recognition result indicating that the second object has not ended the experiment.
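The check against such a preset continuity-action condition can be sketched as an in-order subsequence match. Treating intermediate actions as permissible is an assumption here; the patent does not say whether other actions may occur between the required steps:

```python
# Preset behavior condition for ending the experiment, as in the example above.
END_CONDITION = ["close_computer", "tidy_articles", "walk", "leave_classroom"]

def ends_experiment(action_sequence, condition=END_CONDITION):
    """Return True if `condition` occurs as an in-order subsequence of the
    tracked object's recognized actions (other actions may come between)."""
    it = iter(action_sequence)
    # `step in it` advances the iterator, so matches must appear in order.
    return all(step in it for step in condition)

print(ends_experiment(["close_computer", "stand", "tidy_articles",
                       "walk", "leave_classroom"]))   # → True
print(ends_experiment(["close_computer", "sit_down"]))  # → False
```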
In one implementation, before the multiple image frames are input to the second identification model (which identifies whether a same second object in the multiple image frames has a continuity behavior), the method further includes: annotating the objects in multiple training image frames to obtain annotation labels corresponding to the objects in the multiple training image frames; inputting the multiple training image frames to the second target detection network to be trained to obtain prediction labels corresponding to the objects in the multiple training image frames; determining a first loss value according to the prediction labels and the annotation labels corresponding to the objects in the multiple training image frames; and adjusting the values of the parameters in the second target detection network to be trained according to the first loss value.
The first loss value is the result obtained by feeding the prediction labels and the corresponding annotation labels of the objects in the training image frames into the loss function of the target detection network. The values of the parameters in the target detection network to be trained are adjusted according to the first loss value. When the first loss value stabilizes, or falls below a preset threshold, the trained target detection network is obtained.
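The stopping rule described above (adjust the parameters from the loss value; stop once the loss stabilizes or drops below a preset threshold) can be illustrated with a toy one-parameter least-squares model standing in for the detection network; every name and value in the sketch is a hypothetical stand-in:

```python
def train(samples, lr=0.1, threshold=1e-4, patience=1e-6, max_steps=1000):
    """Gradient descent on mean squared error; stop when the loss falls
    below `threshold` (preset threshold) or changes by less than
    `patience` between steps (loss has stabilized)."""
    w = 0.0
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(max_steps):
        loss = sum((w * x - y) ** 2 for x, y in samples) / len(samples)
        if loss < threshold or prev_loss - loss < patience:
            break
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad  # adjust the parameter according to the loss value
        prev_loss = loss
    return w, loss

# Fitting y = 2x: w converges near 2.0 and the loop exits on the threshold.
w, loss = train([(1.0, 2.0), (2.0, 4.0)])
print(w, loss)
```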
In one implementation, before the multiple image frames are input to the second identification model (which identifies whether a same second object in the multiple image frames has a continuity behavior), the method further includes: annotating the objects in the image frames of multiple training videos to obtain annotated continuity behaviors corresponding to those objects; inputting the image frames of the multiple training videos to the second target tracking network to be trained to obtain predicted continuity behaviors corresponding to the objects; determining a second loss value according to the annotated continuity behaviors and the predicted continuity behaviors; and adjusting the values of the parameters in the second target tracking network to be trained according to the second loss value.
The second loss value is the result obtained by feeding the predicted continuity behaviors and the corresponding annotated continuity behaviors of the objects in the training image frames into the loss function of the target tracking network. The values of the parameters in the target tracking network to be trained are adjusted according to the second loss value. When the second loss value stabilizes, or falls below a preset threshold, the trained target tracking network is obtained.
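The patent does not name the loss function used for the tracking network; a common choice, shown here purely as an assumption, is a mean cross-entropy between the predicted and the annotated continuity behaviors across the frames of a training video:

```python
import math

def sequence_loss(predicted_probs, annotated):
    """Mean cross-entropy over frames. `predicted_probs[t]` maps each
    behavior name to its predicted probability at frame t; `annotated[t]`
    is the annotated behavior for that frame."""
    total = -sum(math.log(probs[label])
                 for probs, label in zip(predicted_probs, annotated))
    return total / len(annotated)

probs = [{"walk": 0.9, "stand": 0.1}, {"walk": 0.2, "stand": 0.8}]
labels = ["walk", "stand"]
print(round(sequence_loss(probs, labels), 3))  # → 0.164
```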
In step S24, the data platform matched to the terminal whose running state is shutdown is destroyed.
In the resource allocation method provided by the embodiments of the present application, the second video acquired by the video device is obtained, the second video including multiple image frames; the multiple image frames are input to the second identification model, which identifies whether a same second object in the multiple image frames has a continuity behavior; if the second recognition result indicates that the second object is a target object that has ended the experiment, the running state of each terminal running each data platform is detected, and the data platform matched to a terminal in the shutdown state is destroyed. In this way, the data platform whose experiment has ended can be determined automatically, accurately, and in a timely manner, and that platform can be destroyed so that its resources are released for other users, maximizing the utilization of data platform resources and saving the administrator's human resources.
Example 2:
Take the continuity action "close computer - tidy up articles - walk - leave classroom" as the preset behavior condition for ending the experiment; the second object is a person with the "close computer" behavior. The second target detection network and the second target tracking network are obtained by training.
Continuing Example 1, video 2 acquired by the video device is obtained and cut into frames, yielding image 2 contained in video 2. Image 2 is identified by the second target detection network, which outputs the labels contained in image 2. For example, image 2 contains five objects, and the output labels indicate: object 1 corresponds to the label "close computer", object 2 to "stand", object 3 to "sit down", object 4 to "tidy up articles", and object 5 to "sit down". Object 1 is therefore the second object.
For object 1 in video 2, the second target tracking network tracks object 1 and identifies its continuity actions. If object 1 satisfies the continuity action "close computer - tidy up articles - walk - leave classroom", the second target tracking network outputs a recognition result indicating that object 1 has ended the experiment. The running state of each data platform's terminal in Table 1 is then detected, i.e., the running states of the terminal running data platform 1 (IP address ip 1, MAC address mac 1) and the terminal running data platform 2 (IP address ip 2, MAC address mac 2).
By detecting the running state of each data platform's terminal in Table 1, it is found that the terminal with IP address ip 2 and MAC address mac 2 is in the shutdown state. From the correspondence in Table 1 between terminals and data platforms via user accounts and logged-in user accounts, it is determined that the experiment on data platform 2 has ended. Data platform 2 is destroyed, releasing its resources for other users.
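The selection-and-destruction step in this example can be sketched as follows; the terminal table and every name in it are hypothetical stand-ins for Table 1:

```python
# Stand-in for Table 1: each data platform's matched terminal and its state.
TERMINALS = {
    "data_platform_1": {"ip": "ip 1", "mac": "mac 1", "running": True},
    "data_platform_2": {"ip": "ip 2", "mac": "mac 2", "running": False},
}

def platforms_to_destroy(terminals):
    """Return the platforms whose matched terminal is in the shutdown
    state; their resources can then be released for other users."""
    return [name for name, info in terminals.items() if not info["running"]]

print(platforms_to_destroy(TERMINALS))  # → ['data_platform_2']
```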
Fig. 3 shows a block diagram of a resource allocation device according to an embodiment of the present application. The device is applicable to a big data practical training room platform. As shown in Fig. 3, the device includes:
a first obtaining module 31, configured to obtain a first video acquired by a video device, the first video including multiple image frames;
a first input module 32, configured to input the multiple image frames to a first identification model, the first identification model being used to identify whether a same first object in the multiple image frames has a continuity behavior;
a first processing module 33, configured to obtain, through the first identification model, a first recognition result when the first object has a continuity behavior in the multiple image frames;
a creation module 34, configured to create a data platform for a target object participating in the experiment if the first recognition result indicates that the first object is the target object participating in the experiment.
Fig. 4 shows a block diagram of a resource allocation device according to an embodiment of the present application. The device is applicable to a big data practical training room platform. As shown in Fig. 4, the device further includes:
a second obtaining module 41, configured to obtain a second video acquired by the video device, the second video including multiple image frames;
a second input module 42, configured to input the multiple image frames to a second identification model, the second identification model being used to identify whether a same second object in the multiple image frames has a continuity behavior;
a second processing module 43, configured to detect the running state of each terminal running each data platform if the second recognition result indicates that the second object is a target object that has ended the experiment;
a destruction module 44, configured to destroy the data platform matched to a terminal whose running state is shutdown.
In one implementation, an identification model includes a target detection network and a target tracking network;
the first input module 32 and the second input module 42 each include: a first identification module, configured to identify, through the target detection network, the objects in the image frame and output the labels corresponding to the objects in the image frame, the label corresponding to an object representing the object's behavioral feature;
a determination module, configured to determine, from the objects in the image frame, a tracked object meeting a preset condition according to the labels corresponding to the objects in the image frame;
a second identification module, configured to identify, through the target tracking network, the continuity behavior of the tracked object and output a recognition result corresponding to the continuity behavior of the tracked object.
In one implementation, if the identification model is the first identification model, the first recognition result indicates either that an object participates in the experiment or that an object does not participate in the experiment; if the identification model is the second identification model, the second recognition result indicates either that an object has ended the experiment or that an object has not ended the experiment.
In one implementation, the device further includes a first training module 51, configured to:
annotate the objects in multiple training image frames to obtain annotation labels corresponding to the objects in the multiple training image frames;
input the multiple training image frames to a target detection network to be trained, to obtain prediction labels corresponding to the objects in the multiple training image frames;
determine a first loss value according to the prediction labels and the annotation labels corresponding to the objects in the multiple training image frames;
adjust the values of the parameters in the target detection network to be trained according to the first loss value.
In one implementation, the device further includes a second training module 52, configured to:
annotate the objects in the image frames of multiple training videos to obtain annotated continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
input the image frames of the multiple training videos to a target tracking network to be trained, to obtain predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
determine a second loss value according to the annotated continuity behaviors and the predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
adjust the values of the parameters in the target tracking network to be trained according to the second loss value.
The resource allocation device provided by the embodiments of the present application obtains a first video acquired by a video device, the first video including multiple image frames; inputs the multiple image frames to a first identification model, which identifies whether a same first object in the multiple image frames has a continuity behavior; obtains, through the first identification model, a first recognition result when the first object has a continuity behavior in the multiple image frames; and, if the first recognition result indicates that the first object is a target object participating in the experiment, creates a data platform for that target object. In this way, a data platform can be allocated automatically, accurately, and in a timely manner to an object participating in the experiment, maximizing the utilization of data platform resources and saving the administrator's human resources.
Fig. 5 shows a block diagram of a resource allocation device according to an embodiment of the present application. Referring to Fig. 5, the device may include a processor 901 and a machine-readable storage medium 902 storing machine-executable instructions. The processor 901 and the machine-readable storage medium 902 may communicate via a system bus 903. The processor 901 executes the resource allocation method described above by reading, from the machine-readable storage medium 902, the machine-executable instructions corresponding to the resource allocation logic.
The machine-readable storage medium 902 mentioned herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state disk, any type of storage disc (such as a CD or DVD), a similar storage medium, or a combination thereof.
The embodiments of the present application have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical applications, or their technical improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A resource allocation method, characterized in that the method comprises:
obtaining a first video acquired by a video device, the first video comprising multiple image frames;
inputting the multiple image frames to a first identification model, the first identification model being used to identify whether a same first object in the multiple image frames has a continuity behavior;
obtaining, through the first identification model, a first recognition result when the first object has a continuity behavior in the multiple image frames;
if the first recognition result indicates that the first object is a target object participating in an experiment, creating a data platform for the target object participating in the experiment.
2. The method according to claim 1, characterized in that the method further comprises:
obtaining a second video acquired by the video device, the second video comprising multiple image frames;
inputting the multiple image frames to a second identification model, the second identification model being used to identify whether a same second object in the multiple image frames has a continuity behavior;
if the second recognition result indicates that the second object is a target object that has ended the experiment, detecting the running state of each terminal running each data platform;
destroying the data platform matched to a terminal whose running state is shutdown.
3. The method according to claim 1 or 2, characterized in that an identification model comprises a target detection network and a target tracking network;
inputting the multiple image frames to the identification model, the identification model being used to identify whether a same object in the multiple image frames has a continuity behavior, comprises:
identifying, through the target detection network, the objects in the image frame, and outputting the labels corresponding to the objects in the image frame, the label corresponding to an object representing the object's behavioral feature;
determining, from the objects in the image frame, a tracked object meeting a preset condition according to the labels corresponding to the objects in the image frame;
identifying, through the target tracking network, the continuity behavior of the tracked object, and outputting a recognition result corresponding to the continuity behavior of the tracked object.
4. The method according to claim 3, characterized in that:
if the identification model is the first identification model, the first recognition result indicates that an object participates in the experiment or that an object does not participate in the experiment;
if the identification model is the second identification model, the second recognition result indicates that an object has ended the experiment or that an object has not ended the experiment.
5. The method according to claim 3, characterized in that, before inputting the multiple image frames to the identification model, the identification model being used to identify whether a same object in the multiple image frames has a continuity behavior, the method further comprises:
annotating the objects in multiple training image frames to obtain annotation labels corresponding to the objects in the multiple training image frames;
inputting the multiple training image frames to a target detection network to be trained, to obtain prediction labels corresponding to the objects in the multiple training image frames;
determining a first loss value according to the prediction labels corresponding to the objects in the multiple training image frames and the annotation labels corresponding to the objects in the multiple training image frames;
adjusting the values of the parameters in the target detection network to be trained according to the first loss value.
6. The method according to claim 3, characterized in that, before inputting the multiple image frames to the identification model, the identification model being used to identify whether a same object in the multiple image frames has a continuity behavior, the method further comprises:
annotating the objects in the image frames of multiple training videos to obtain annotated continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
inputting the image frames of the multiple training videos to a target tracking network to be trained, to obtain predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
determining a second loss value according to the annotated continuity behaviors and the predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
adjusting the values of the parameters in the target tracking network to be trained according to the second loss value.
7. A resource allocation device, characterized in that the device comprises:
a first obtaining module, configured to obtain a first video acquired by a video device, the first video comprising multiple image frames;
a first input module, configured to input the multiple image frames to a first identification model, the first identification model being used to identify whether a same first object in the multiple image frames has a continuity behavior;
a first processing module, configured to obtain, through the first identification model, a first recognition result when the first object has a continuity behavior in the multiple image frames;
a creation module, configured to create a data platform for a target object participating in an experiment if the first recognition result indicates that the first object is the target object participating in the experiment.
8. The device according to claim 7, characterized in that the device further comprises:
a second obtaining module, configured to obtain a second video acquired by the video device, the second video comprising multiple image frames;
a second input module, configured to input the multiple image frames to a second identification model, the second identification model being used to identify whether a same second object in the multiple image frames has a continuity behavior;
a second processing module, configured to detect the running state of each terminal running each data platform if the second recognition result indicates that the second object is a target object that has ended the experiment;
a destruction module, configured to destroy the data platform matched to a terminal whose running state is shutdown.
9. The device according to claim 7 or 8, characterized in that an identification model comprises a target detection network and a target tracking network;
the first input module and the second input module each comprise:
a first identification module, configured to identify, through the target detection network, the objects in the image frame and output the labels corresponding to the objects in the image frame, the label corresponding to an object representing the object's behavioral feature;
a determination module, configured to determine, from the objects in the image frame, a tracked object meeting a preset condition according to the labels corresponding to the objects in the image frame;
a second identification module, configured to identify, through the target tracking network, the continuity behavior of the tracked object and output a recognition result corresponding to the continuity behavior of the tracked object.
10. The device according to claim 9, characterized in that:
if the identification model is the first identification model, the first recognition result indicates that an object participates in the experiment or that an object does not participate in the experiment;
if the identification model is the second identification model, the second recognition result indicates that an object has ended the experiment or that an object has not ended the experiment.
11. The device according to claim 9, characterized in that the device further comprises a first training module, configured to:
annotate the objects in multiple training image frames to obtain annotation labels corresponding to the objects in the multiple training image frames;
input the multiple training image frames to a target detection network to be trained, to obtain prediction labels corresponding to the objects in the multiple training image frames;
determine a first loss value according to the prediction labels corresponding to the objects in the multiple training image frames and the annotation labels corresponding to the objects in the multiple training image frames;
adjust the values of the parameters in the target detection network to be trained according to the first loss value.
12. The device according to claim 9, characterized in that the device further comprises a second training module, configured to:
annotate the objects in the image frames of multiple training videos to obtain annotated continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
input the image frames of the multiple training videos to a target tracking network to be trained, to obtain predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
determine a second loss value according to the annotated continuity behaviors and the predicted continuity behaviors corresponding to the objects in the image frames of the multiple training videos;
adjust the values of the parameters in the target tracking network to be trained according to the second loss value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811154155.5A CN109344770B (en) | 2018-09-30 | 2018-09-30 | Resource allocation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109344770A | 2019-02-15 |
CN109344770B | 2020-10-09 |
Family
ID=65307917
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811154155.5A Active CN109344770B (en) | 2018-09-30 | 2018-09-30 | Resource allocation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344770B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831683A (en) * | 2012-08-28 | 2012-12-19 | 华南理工大学 | Pedestrian flow counting-based intelligent detection method for indoor dynamic cold load |
CN105518734A (en) * | 2013-09-06 | 2016-04-20 | 日本电气株式会社 | Customer behavior analysis system, customer behavior analysis method, non-temporary computer-readable medium, and shelf system |
CN105791299A (en) * | 2016-03-11 | 2016-07-20 | 南通职业大学 | Unattended monitoring type intelligent on-line examination system |
CN105976659A (en) * | 2016-05-05 | 2016-09-28 | 成都世纪智慧科技有限公司 | Internet-based information safety on-line open practical training platform |
CN106941602A (en) * | 2017-03-07 | 2017-07-11 | 中国铁道科学研究院 | Trainman's Activity recognition method, apparatus and system |
CN107045623A (en) * | 2016-12-30 | 2017-08-15 | 厦门瑞为信息技术有限公司 | A kind of method of the indoor dangerous situation alarm based on human body attitude trace analysis |
CN107103503A (en) * | 2017-03-07 | 2017-08-29 | 阿里巴巴集团控股有限公司 | A kind of sequence information determines method and apparatus |
CN107480618A (en) * | 2017-08-02 | 2017-12-15 | 深圳微品时代网络技术有限公司 | A kind of data analysing method of big data platform |
WO2018033155A1 (en) * | 2016-08-19 | 2018-02-22 | 北京市商汤科技开发有限公司 | Video image processing method, apparatus and electronic device |
US20180124437A1 (en) * | 2016-10-31 | 2018-05-03 | Twenty Billion Neurons GmbH | System and method for video data collection |
US20180137362A1 (en) * | 2016-11-14 | 2018-05-17 | Axis Ab | Action recognition in a video sequence |
CN108198030A (en) * | 2017-12-29 | 2018-06-22 | 深圳正品创想科技有限公司 | A kind of trolley control method, device and electronic equipment |
Non-Patent Citations (5)
Title |
---|
GUANGMING ZHU et al.: "An Online Continuous Human Action Recognition Algorithm Based on the Kinect Sensor", Sensors * |
MIAO WANG et al.: "Human action recognition based on feature level fusion and random projection", 2016 5th International Conference on Computer Science and Network Technology (ICCSNT) * |
HONG Hong: "Research on Automatic Tracking Technology for Specific Objects in Video Analysis", Wanfang Data Knowledge Service Platform * |
FAN Shunliang: "Design Exploration and Research of a University Computer Room Management System Based on a Cloud Platform", Computer Knowledge and Technology * |
CHEN Zhanrong et al.: "Research on the Construction of a University Computer Experimental Teaching Platform and Resource Sharing", China Education Informatization * |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||