CN109344770B - Resource allocation method and device - Google Patents

Resource allocation method and device

Info

Publication number
CN109344770B
Authority
CN
China
Prior art keywords: image frames, objects, consistency, training, experiment
Prior art date
Legal status
Active
Application number
CN201811154155.5A
Other languages
Chinese (zh)
Other versions
CN109344770A (en)
Inventor
杜鑫
Current Assignee
New H3C Big Data Technologies Co Ltd
Original Assignee
New H3C Big Data Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Big Data Technologies Co Ltd
Priority to CN201811154155.5A
Publication of CN109344770A
Application granted
Publication of CN109344770B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The application relates to a resource allocation method and device. The resource allocation method includes: acquiring a first video collected by a video device, the first video including a plurality of image frames; inputting the plurality of image frames to a first recognition model, the first recognition model being used to recognize whether a same first object in the plurality of image frames has a consistency behavior; obtaining a first recognition result through the first recognition model when the first object has the consistency behavior in the plurality of image frames; and, if the first recognition result indicates that the first object is a target object participating in an experiment, creating a data platform for that target object. The resource allocation method and device can automatically, accurately and timely allocate data platforms to objects participating in an experiment, maximize utilization of data platform resources, and save the administrator's labor.

Description

Resource allocation method and device
Technical Field
The present application relates to the field of big data technologies, and in particular, to a resource allocation method and apparatus.
Background
The big data training room platform is an experimental teaching platform provided for colleges and universities to train big data talent. At present, when a data platform is used for teaching a course, an administrator must create data platforms for the teachers and students in advance according to the number of people participating in the experiment in each course, and release the data platforms' resources after the course ends so that subsequent classes can use them.
However, the teaching mode of colleges and universities is very flexible: the openness of course teaching means that the number of participants is not fixed, and administrators cannot predict the number of people participating in an experiment in advance. For example, some of the people attending a class are students who only listen to the lecture, some are students participating in the experiment with mobile terminals (e.g., laptops, tablets, etc.), and some are students participating with desktop computers provided in the classroom. It is therefore difficult for an administrator to accurately assign data platforms to the objects actually participating in the experiment.
Disclosure of Invention
In view of this, the present application provides a resource allocation method and device to solve the problem in the related art that it is difficult to accurately allocate a data platform to an object participating in an experiment.
According to an aspect of the present application, there is provided a resource allocation method, the method including:
acquiring a first video collected by video equipment, wherein the first video comprises a plurality of image frames;
inputting the plurality of image frames to a first recognition model, wherein the first recognition model is used for recognizing whether the same first object in the plurality of image frames has consistency behaviors;
when the first object has consistency behaviors in a plurality of image frames, obtaining a first recognition result through the first recognition model;
and if the first identification result shows that the first object is a target object participating in an experiment, creating a data platform for the target object participating in the experiment.
According to another aspect of the present application, there is provided a resource allocation apparatus, the apparatus including:
a first acquisition module, configured to acquire a first video collected by a video device, wherein the first video includes a plurality of image frames;
a first input module, configured to input the multiple image frames into a first recognition model, where the first recognition model is used to recognize whether a same first object in the multiple image frames has a consistency behavior;
the first processing module is used for obtaining a first recognition result through the first recognition model when the first object has consistency behaviors in a plurality of image frames;
and the creating module is used for creating a data platform for the target object participating in the experiment if the first identification result shows that the first object is the target object participating in the experiment.
According to another aspect of the present application, there is provided a resource allocation apparatus, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the application, a non-transitory computer-readable storage medium is provided, having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
With the resource allocation method and device of the present application, a first video collected by a video device is acquired, the first video including a plurality of image frames; the plurality of image frames are input to a first recognition model, which recognizes whether a same first object in the plurality of image frames has a consistency behavior; when the first object has the consistency behavior in the plurality of image frames, a first recognition result is obtained through the first recognition model; and if the first recognition result indicates that the first object is a target object participating in an experiment, a data platform is created for that target object. In this way, data platforms can be allocated to objects participating in an experiment automatically, accurately and in a timely manner, utilization of the data platform's resources is maximized, and the administrator's labor is saved.
Other features and aspects of the present application will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the application and, together with the description, serve to explain the principles of the application.
Fig. 1 shows a flow chart of a resource allocation method according to an embodiment of the present application.
Fig. 2 shows a flow chart of a resource allocation method according to an embodiment of the present application.
Fig. 3 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application.
Fig. 4 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application.
Fig. 5 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present application.
In order to make the technical solutions in the present application better understood by those skilled in the art, a brief description of some technical terms involved in the present application is provided below.
A data platform: the platform is used for providing large-capacity storage and high-performance data calculation and analysis capability aiming at various industries and application scenes.
Big data training room platform: a data platform for the education industry. In other words, the big data training room platform is an experimental teaching platform developed for training big data talent in colleges and universities. The big data training room platform provides a data platform to each user (such as a teacher or a student) in a virtualization or container (Docker) mode, for teaching use and learning use.
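For illustration only, the following is a minimal sketch of per-user platform provisioning in container (Docker) mode, using the Docker SDK for Python. The image name "bigdata-lab:latest" and the memory limit are assumptions made for the example; the patent does not specify an image or resource quota.

```python
# A minimal sketch of per-user data platform provisioning in container
# (Docker) mode, via the Docker SDK for Python. The image name and the
# resource limit below are illustrative assumptions.
import docker

client = docker.from_env()

def create_data_platform(user_account: str):
    """Start one containerized data platform for one user."""
    return client.containers.run(
        "bigdata-lab:latest",      # assumed image bundling the data platform
        detach=True,
        name=f"platform-{user_account}",
        mem_limit="4g",            # assumed per-user resource quota
    )

def destroy_data_platform(container) -> None:
    """Release the platform's resources for subsequent classes."""
    container.stop()
    container.remove()
```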
In the related art, because data platform resources in the big data training room platform are scarce, an administrator needs to create data platforms for users in advance according to the number of users in order to maximize utilization of those resources; after the users finish using them, the created data platforms are destroyed and their resources are released for other users.
However, during use of the big data training room platform, the number of users is not constant, and administrators cannot predict it in advance. If the administrator creates more data platforms than there are users, data platform resources are wasted. If the administrator creates fewer data platforms than there are users, the supply falls short; even if the administrator can be informed in time to create more, users' waiting time increases and their experience suffers.
In the related art, target detection networks have made great breakthroughs. The currently popular target detection networks mainly include: networks based on Region Proposals, i.e., the R-CNN (Regions with Convolutional Neural Network features) family (e.g., R-CNN, Fast R-CNN, or Faster R-CNN), and the YOLO (You Only Look Once) algorithm.
R-CNN-based networks: two-stage networks, in which a number of candidate regions are first extracted from an image by a heuristic method (selective search) or a Convolutional Neural Network (CNN), it is then determined whether each candidate region contains an object and which object it contains, and finally the positions of the candidate regions containing objects are refined.
The YOLO algorithm: a one-stage network, that is, the categories and positions of different targets are predicted directly by a single CNN. The YOLO algorithm divides the input image into an S × S grid; if the center of an object falls within a certain grid cell, that cell is responsible for detecting the object.
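To make the grid-responsibility rule concrete, the following is a minimal sketch (not from the patent) of mapping an object's center to the S × S grid cell responsible for detecting it; the image size and S = 7 are illustrative values.

```python
# A minimal sketch of YOLO's grid-responsibility rule: the grid cell
# containing an object's center is the cell responsible for detecting it.
def responsible_cell(cx: float, cy: float, img_w: int, img_h: int, s: int = 7):
    """Return the (row, col) of the S x S grid cell that contains (cx, cy)."""
    col = min(int(cx / img_w * s), s - 1)
    row = min(int(cy / img_h * s), s - 1)
    return row, col

# Example: an object centered at (320, 240) in a 640 x 480 image with S = 7
# falls into cell (3, 3), which predicts that object's class and box.
print(responsible_cell(320, 240, 640, 480))  # -> (3, 3)
```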
RNN (Recurrent Neural Network): a neural network for processing sequence data. Sequence data may be time-series data, text sequences, or the like; its defining characteristic is that later data depends on earlier data. For example, time-series data is data collected at different points in time, and can reflect how some thing or phenomenon changes over time.
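As a hedged illustration of how an RNN can turn a behavior sequence into a single decision, the following PyTorch sketch classifies a sequence of per-frame behavior-label IDs; all layer sizes, the label vocabulary, and the two-class output are assumptions, not details fixed by the patent.

```python
# A minimal PyTorch sketch of an RNN sequence classifier: per-frame
# behavior-label embeddings in, one participate/not-participate decision out.
import torch
import torch.nn as nn

class BehaviorRNN(nn.Module):
    def __init__(self, num_labels: int = 10, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(num_labels, 16)   # behavior label -> vector
        self.rnn = nn.RNN(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # participates / does not

    def forward(self, label_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(label_ids)         # (batch, seq_len, 16)
        _, h_n = self.rnn(x)              # final hidden state summarizes the sequence
        return self.head(h_n.squeeze(0))  # (batch, 2) logits

# One sequence of 5 per-frame behavior-label IDs:
logits = BehaviorRNN()(torch.tensor([[7, 1, 2, 3, 4]]))
```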
In view of the technical problems in the related art, fig. 1 shows a flowchart of a resource allocation method according to an embodiment of the present application. The method is suitable for a big data training room platform. As shown in fig. 1, the method includes steps S11 through S14.
In step S11, a first video captured by a video device is acquired, the first video including a plurality of image frames.
The video device refers to a device capable of capturing video, such as a camera.
In one implementation, the video device is disposed in a classroom where a big data course is taught. The first video collected by the video device is acquired, and frame cutting is performed on the first video to obtain the image frames it contains. Each resulting image frame may include one or more persons (e.g., teachers, students, etc.).
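A minimal sketch of this frame-cutting step with OpenCV follows; the sampling stride is an assumption, since the patent only states that image frames are obtained from the video.

```python
# A minimal sketch of the "frame cutting" step with OpenCV: read a captured
# video and collect its image frames. The stride is an illustrative choice.
import cv2

def cut_frames(video_path: str, stride: int = 5):
    """Return every `stride`-th frame of the video as a list of images."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```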
In step S12, the plurality of image frames are input to a first recognition model for recognizing whether the same first object in the plurality of image frames has a consistency behavior.
In one implementation, a recognition model includes a target detection network and a target tracking network.
Specifically, inputting the plurality of image frames into the recognition model (which is used for recognizing whether a same object in the plurality of image frames has a consistency behavior) includes the following steps: identifying the objects in the image frames through the target detection network, and outputting the labels corresponding to the objects in the image frames, wherein the label corresponding to an object represents the behavior characteristics of that object; determining, according to the labels corresponding to the objects in the image frames, the tracked objects meeting a preset condition from among the objects in the image frames; and identifying the consistency behaviors of the tracked objects through the target tracking network, and outputting the recognition results corresponding to the consistency behaviors of the tracked objects.
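The following sketch shows this overall flow; detect_labels and track_and_classify are hypothetical stand-ins for the trained target detection network and target tracking network, and the preset label is an assumption.

```python
# A minimal sketch of the recognition pipeline: per-frame detection, filtering
# by a preset label, then tracking-based classification of each candidate.
def recognize(frames, detect_labels, track_and_classify, preset_label):
    # Step 1: per-frame detection -> {object_id: behavior label}
    labels = detect_labels(frames)
    # Step 2: keep only objects whose label meets the preset condition,
    # narrowing the set of objects that must be tracked.
    tracked_ids = [oid for oid, lab in labels.items() if lab == preset_label]
    # Step 3: the tracking network follows each candidate across frames and
    # decides whether its actions form the required consistency behavior.
    return {oid: track_and_classify(frames, oid) for oid in tracked_ids}
```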
In one implementation, the first recognition model includes a first target detection network and a first target tracking network. The first target detection network may be an R-CNN network or a Yolo algorithm. The first target tracking network may be an RNN network. The first target detection network and the first target tracking network may be obtained through training.
The input of the first target detection network is a video, and the output is the labels included in each image frame of the video. Specifically, for each image frame of the obtained first video, the labels included in the image can be identified by the first target detection network. A label may be information describing a person's behavior. For example, the labels may include standing, walking, sitting, opening a computer (e.g., opening a laptop, opening a desktop, etc.), operating a computer, closing a computer, picking up items, entering a classroom, exiting a classroom, etc., which are not limited by the embodiments of the present application.
The input of the first target tracking network is a video, and the output of the first target tracking network is an identification result which indicates that the first object participates in the experiment or indicates that the first object does not participate in the experiment. Specifically, for a first object in a first video, the first object is tracked through a first target tracking network, and the consistency action of the first object is identified. According to the identified consistency action of the first object, the first target tracking network outputs an identification result indicating that the first object participates in the experiment or indicating that the first object does not participate in the experiment.
In step S13, when the first object has a coherent behavior in a plurality of image frames, a first recognition result is obtained by the first recognition model.
In one implementation, the first recognition model includes a first target detection network and a first target tracking network. Determining a first object in a first video through a first target detection network; identifying the behavior of a first object in a first video through a first target tracking network, and outputting a first identification result corresponding to the first object; wherein the first identification result indicates that the first object participates in the experiment or indicates that the first object does not participate in the experiment.
Wherein, the first object refers to an object satisfying a preset behavior condition participating in the experiment. For one frame of image included in the obtained first video, the tag included in the image can be identified by the first object detection network. The first object is determined according to the label included in the image, thereby reducing the range of the object to be tracked. After the first object is determined, the first object is tracked through a first target tracking network, and the consistency action of the first object is identified.
As an example, if the consistency action of "enter classroom-walk-sit-open computer-operate computer" is taken as the preset behavior condition for participating in the experiment, the first object may be a person having an "enter classroom" behavior. If the tracked first object satisfies this consistency action, the first target tracking network outputs a recognition result indicating that the first object participates in the experiment; if not, it outputs a recognition result indicating that the first object does not participate in the experiment.
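One simple way to test such a consistency action is an ordered-subsequence check over the recognized action stream, sketched below; treating the rule as "the required actions appear in order, possibly with other actions in between" is an assumption, since the patent leaves the decision to the trained tracking network.

```python
# A minimal sketch of checking the preset consistency behavior: the required
# actions must appear in order within the object's recognized action stream.
REQUIRED = ["enter classroom", "walk", "sit", "open computer", "operate computer"]

def participates(observed_actions: list[str], required=REQUIRED) -> bool:
    it = iter(observed_actions)
    return all(step in it for step in required)  # ordered-subsequence test

# participates(["enter classroom", "walk", "stand", "sit",
#               "open computer", "operate computer"])  -> True
```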
In one implementation, before the plurality of image frames are input to the first recognition model (the first recognition model being used for recognizing whether the same first object in the plurality of image frames has the consistency behavior), the method further includes: labeling the objects in a plurality of training image frames to obtain labeling labels corresponding to the objects in the training image frames; inputting the plurality of training image frames into a first target detection network to be trained to obtain prediction labels corresponding to the objects in the plurality of training image frames; determining a first loss value according to the prediction labels corresponding to the objects in the plurality of training image frames and the labeling labels corresponding to the objects in the plurality of training image frames; and adjusting the values of the parameters in the first target detection network to be trained according to the first loss value.
Specifically, the prediction labels corresponding to the objects in the training image frames and the labeling labels corresponding to those objects are input into the loss function corresponding to the target detection network to obtain the first loss value. The values of the parameters in the target detection network to be trained are adjusted according to the first loss value. When the first loss value stabilizes, or is smaller than a preset threshold, the trained target detection network is obtained.
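A minimal sketch of this training procedure follows in PyTorch style; the network, loss function, optimizer, and both stopping constants are placeholders, since the patent does not fix an architecture or hyperparameters. The same loop applies to the target tracking network with the second loss value.

```python
# A PyTorch-style sketch of the training procedure: compute a loss between
# the prediction labels and the labeling (annotation) labels, adjust the
# network's parameter values, and stop once the loss falls below a preset
# threshold or stops changing. All arguments are placeholders.
def train(network, loss_fn, optimizer, data_loader,
          threshold: float = 0.05, stable_eps: float = 1e-4):
    prev = float("inf")
    for epoch in range(1000):                      # upper bound on epochs
        total = 0.0
        for frames, annotated_labels in data_loader:
            predicted_labels = network(frames)
            loss = loss_fn(predicted_labels, annotated_labels)  # first loss value
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                       # adjust parameter values
            total += loss.item()
        if total < threshold or abs(prev - total) < stable_eps:
            break                                  # loss is small or has stabilized
        prev = total
    return network
```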
In one implementation, before the plurality of image frames are input to the first recognition model (the first recognition model being used for recognizing whether the same first object in the plurality of image frames has the consistency behavior), the method further includes: labeling the objects in the image frames of a plurality of training videos to obtain labeling consistency behaviors corresponding to the objects in those image frames; inputting the image frames of the plurality of training videos into a first target tracking network to be trained to obtain predicted consistency behaviors corresponding to the objects in those image frames; determining a second loss value according to the labeling consistency behaviors and the predicted consistency behaviors corresponding to the objects in the image frames of the plurality of training videos; and adjusting the values of the parameters in the first target tracking network to be trained according to the second loss value.
Specifically, the predicted consistency behaviors and the labeling consistency behaviors corresponding to the objects in the training videos' image frames are input into the loss function corresponding to the target tracking network to obtain the second loss value. The values of the parameters in the target tracking network to be trained are adjusted according to the second loss value. When the second loss value stabilizes, or is smaller than a preset threshold, the trained target tracking network is obtained.
In step S14, if the first recognition result indicates that the first object is a target object participating in an experiment, a data platform is created for the target object participating in the experiment.
According to the resource allocation method provided by the embodiments of the application, a first video collected by a video device is acquired, the first video including a plurality of image frames; the plurality of image frames are input to a first recognition model, which recognizes whether a same first object in the plurality of image frames has a consistency behavior; when the first object has the consistency behavior in the plurality of image frames, a first recognition result is obtained through the first recognition model; and if the first recognition result indicates that the first object is a target object participating in an experiment, a data platform is created for that target object. In this way, data platforms can be allocated to objects participating in an experiment automatically, accurately and in a timely manner, utilization of the data platform's resources is maximized, and the administrator's labor is saved.
Example one:
the coherence of 'entering classroom-walking-sitting-opening computer-operating' is taken as a preset behavior condition for participating in the experiment, and the first object is a person with 'entering classroom' behavior. And obtaining a first target detection network and a first target tracking network through training.
The method comprises the steps of obtaining a video 1 collected by video equipment, and carrying out frame cutting on the video 1 to obtain an image 1 included in the video 1. The image 1 is recognized through the first object detection network, and the tag included in the image 1 is output. For example, image 1 includes 5 objects, and the output label represents: object 1 corresponds to the tag entering the classroom, object 2 corresponds to the tag standing, object 3 corresponds to the tag entering the classroom, object 4 corresponds to the tag walking, and object 5 corresponds to the tag entering the classroom. Object 1, object 3 and object 5 are the first objects.
For object 1, object 3 and object 5 in video 1, each is tracked through the first target tracking network, and its consistency actions are identified. Object 1 satisfies the consistency action of "enter classroom-walk-sit-open computer-operate computer", so the first target tracking network outputs a recognition result indicating that object 1 participates in the experiment, and data platform 1 is created. Object 3 does not satisfy the consistency action, so the first target tracking network outputs a recognition result indicating that object 3 does not participate in the experiment. Object 5 satisfies the consistency action, so the first target tracking network outputs a recognition result indicating that object 5 participates in the experiment, and data platform 2 is created.
Object 5 logs in through user account a, and the created data platform 1 is pushed to user account a; object 5 participates in the experiment via data platform 1. Object 1 logs in through user account b, and the created data platform 2 is pushed to user account b; object 1 participates in the experiment via data platform 2. After the created data platforms are pushed to the user accounts, a relation table is established representing the correspondence among each user account, the terminal that logs in with it, and its data platform.
The relation table may include the user account, the IP (Internet Protocol) address and MAC (Media Access Control) address of the terminal that logs in with the user account, the number of the data platform, and the like. Table 1 shows a relation table according to an embodiment of the present application. As shown in Table 1, object 5 logs in through user account "username a" at a terminal with IP address ip 1 and MAC address mac 1, and participates in the experiment via data platform 1; object 1 logs in through user account "username b" at a terminal with IP address ip 2 and MAC address mac 2, and participates in the experiment via data platform 2.
TABLE 1
User account | MAC address of terminal | IP address of terminal | Number of data platform
username a | mac 1 | ip 1 | number 1
username b | mac 2 | ip 2 | number 2
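For illustration, the relation table of Table 1 could be held as a simple in-memory structure like the sketch below; the record layout mirrors the table's columns, and the lookup by MAC address anticipates how a powered-off terminal is later matched back to its data platform. All names are hypothetical.

```python
# A minimal sketch of the relation table in Table 1 as an in-memory mapping.
from dataclasses import dataclass

@dataclass
class PlatformRecord:
    user_account: str
    terminal_mac: str
    terminal_ip: str
    platform_number: str

relation_table = [
    PlatformRecord("username a", "mac 1", "ip 1", "number 1"),
    PlatformRecord("username b", "mac 2", "ip 2", "number 2"),
]

def platform_for_terminal(mac: str) -> str | None:
    """Find the data platform matched to the terminal with this MAC address."""
    for rec in relation_table:
        if rec.terminal_mac == mac:
            return rec.platform_number
    return None
```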
Fig. 2 shows a flow chart of a resource allocation method according to an embodiment of the present application. The method is suitable for a big data training room platform. As shown in fig. 2, the method includes steps S21 through S24.
In step S21, a second video captured by the video device is acquired, the second video including a plurality of image frames.
In one implementation, the video device is disposed in a classroom where a big data course is taught. The second video collected by the video device is acquired, and frame cutting is performed on the second video to obtain the image frames it contains. Each resulting image frame may include one or more persons (e.g., teachers, students, etc.).
In step S22, the plurality of image frames are input to a second recognition model for recognizing whether the same second object in the plurality of image frames has a consistency behavior.
In one implementation, a recognition model includes a target detection network and a target tracking network.
Specifically, inputting the plurality of image frames into the recognition model (which is used for recognizing whether a same object in the plurality of image frames has a consistency behavior) includes the following steps: identifying the objects in the image frames through the target detection network, and outputting the labels corresponding to the objects in the image frames, wherein the label corresponding to an object represents the behavior characteristics of that object; determining, according to the labels corresponding to the objects in the image frames, the tracked objects meeting a preset condition from among the objects in the image frames; and identifying the consistency behaviors of the tracked objects through the target tracking network, and outputting the recognition results corresponding to the consistency behaviors of the tracked objects.
In one implementation, the second recognition model includes a second target detection network and a second target tracking network. Wherein, the second target detection network can be an R-CNN network or a Yolo algorithm. The second target tracking network may be an RNN network. The second target detection network and the second target tracking network may be obtained through training.
The input of the second target detection network is a video, and the output is the labels included in each image frame of the video. Specifically, for each image frame of the obtained second video, the labels included in the image can be identified by the second target detection network. A label may be information describing a person's behavior. For example, the labels may include standing, walking, sitting, opening a computer (e.g., opening a laptop, opening a desktop, etc.), operating a computer, closing a computer, picking up items, entering a classroom, exiting a classroom, etc., which are not limited by the embodiments of the present application.
The input of the second target tracking network is a video, and the output of the second target tracking network is an identification result which represents that the second object finishes the experiment or represents that the second object does not finish the experiment. Specifically, for a second object in the second video, the second object is tracked through a second target tracking network, and the consistency action of the second object is identified. And according to the identified consistency action of the second object, the second target tracking network outputs an identification result which indicates that the second object finishes the experiment or indicates that the second object does not finish the experiment.
In step S23, if the second recognition result indicates that the second object is the target object for ending the experiment, the operation state of each terminal operating each data platform is detected.
In one implementation, the second recognition model includes a second target detection network and a second target tracking network. The second object in the second video is determined through the second target detection network; the behavior of the second object in the second video is identified through the second target tracking network, and a second recognition result corresponding to the second object is output; the second recognition result indicates that the second object has ended the experiment or has not ended the experiment.
Wherein, the second object refers to an object satisfying a preset behavior condition for ending the experiment. For one frame of image included in the obtained second video, the tag included in the image can be identified by the second object detection network. The second object is determined according to the label included in the image, thereby reducing the range of the object to be tracked. After the second object is determined, the second object is tracked through a second target tracking network, and the consistency action of the second object is identified.
As an example, if the consistency action of "computer off-pick-up-walk-leave classroom" is taken as the preset behavior condition for ending the experiment, the second object may be a person having a "computer off" behavior. If the tracked second object satisfies this consistency action, the second target tracking network outputs a recognition result indicating that the second object has ended the experiment; if not, it outputs a recognition result indicating that the second object has not ended the experiment.
In one implementation, before the plurality of image frames are input to the second recognition model (the second recognition model being used for recognizing whether the same second object in the plurality of image frames has the consistency behavior), the method further includes: labeling the objects in a plurality of training image frames to obtain labeling labels corresponding to the objects in the training image frames; inputting the plurality of training image frames into a second target detection network to be trained to obtain prediction labels corresponding to the objects in the plurality of training image frames; determining a first loss value according to the prediction labels corresponding to the objects in the plurality of training image frames and the labeling labels corresponding to the objects in the plurality of training image frames; and adjusting the values of the parameters in the second target detection network to be trained according to the first loss value.
Specifically, the prediction labels corresponding to the objects in the training image frames and the labeling labels corresponding to those objects are input into the loss function corresponding to the target detection network to obtain the first loss value. The values of the parameters in the target detection network to be trained are adjusted according to the first loss value. When the first loss value stabilizes, or is smaller than a preset threshold, the trained target detection network is obtained.
In one implementation, before the plurality of image frames are input to the second recognition model (the second recognition model being used for recognizing whether the same second object in the plurality of image frames has the consistency behavior), the method further includes: labeling the objects in the image frames of a plurality of training videos to obtain labeling consistency behaviors corresponding to the objects in those image frames; inputting the image frames of the plurality of training videos into a second target tracking network to be trained to obtain predicted consistency behaviors corresponding to the objects in those image frames; determining a second loss value according to the labeling consistency behaviors and the predicted consistency behaviors corresponding to the objects in the image frames of the plurality of training videos; and adjusting the values of the parameters in the second target tracking network to be trained according to the second loss value.
Specifically, the predicted consistency behaviors and the labeling consistency behaviors corresponding to the objects in the training videos' image frames are input into the loss function corresponding to the target tracking network to obtain the second loss value. The values of the parameters in the target tracking network to be trained are adjusted according to the second loss value. When the second loss value stabilizes, or is smaller than a preset threshold, the trained target tracking network is obtained.
In step S24, the data platform matching the terminal whose operation state is shutdown is destroyed.
According to the resource allocation method provided by the embodiments of the application, a second video collected by the video device is acquired, the second video including a plurality of image frames; the plurality of image frames are input to a second recognition model, which recognizes whether a same second object in the plurality of image frames has a consistency behavior; if the second recognition result indicates that the second object is a target object that has ended the experiment, the operation state of each terminal running each data platform is detected, and the data platform matched with a terminal whose operation state is powered off is destroyed. In this way, the data platform whose experiment has ended can be determined automatically, accurately and in a timely manner, that platform is destroyed, and its resources are released for other users, maximizing utilization of the data platform's resources and saving the administrator's labor.
Example two:
the consistency of 'computer off-pick-up-walk-leave classroom' is used as a preset behavior condition for ending the experiment, and the second object is a person with 'computer off' behavior. And obtaining a second target detection network and a second target tracking network through training.
Continuing from Example one, a video 2 collected by the video device is acquired, and frame cutting is performed on video 2 to obtain an image 2 included in video 2. Image 2 is recognized through the second target detection network, and the labels included in image 2 are output. For example, image 2 includes 5 objects, and the output labels represent: object 1 corresponds to the label of turning off the computer, object 2 to standing, object 3 to sitting, object 4 to picking up items, and object 5 to sitting. Object 1 is the second object.
For object 1 in video 2, object 1 is tracked through the second target tracking network, and its consistency actions are identified. Object 1 satisfies the consistency action of "computer off-pick-up-walk-leave classroom", so the second target tracking network outputs a recognition result indicating that object 1 has ended the experiment. The operation states of the terminals running the data platforms in Table 1 are then detected, i.e., the operation states of the terminal running data platform 1 (IP address ip 1, MAC address mac 1) and the terminal running data platform 2 (IP address ip 2, MAC address mac 2).
By detecting the operation state of each terminal in Table 1, it is found that the terminal with IP address ip 2 and MAC address mac 2 is powered off. According to the correspondence among user accounts, terminals and data platforms in Table 1, it is determined that data platform 2 has finished its experiment. Data platform 2 is destroyed, and its resources are released for other users.
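A minimal sketch of this release step follows; is_terminal_online and destroy_platform are hypothetical helpers (a real system might use an ICMP ping and the platform's management API), and the dict layout mirrors the columns of Table 1.

```python
# A minimal sketch of steps S23-S24: probe each terminal in the relation
# table and destroy the platform matched to a powered-off terminal.
def release_finished_platforms(relation_table, is_terminal_online, destroy_platform):
    """relation_table: list of dicts with 'terminal_ip' and 'platform_number' keys."""
    for rec in list(relation_table):
        if not is_terminal_online(rec["terminal_ip"]):   # e.g. terminal shut down
            destroy_platform(rec["platform_number"])     # free its resources
            relation_table.remove(rec)                   # drop the stale record
```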
Fig. 3 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application. The device is suitable for a big data training room platform. As shown in fig. 3, the apparatus comprises:
a first obtaining module 31, configured to obtain a first video collected by a video device, where the first video includes a plurality of image frames;
a first input module 32, configured to input the plurality of image frames into a first recognition model, where the first recognition model is used to recognize whether a same first object in the plurality of image frames has a consistency behavior;
a first processing module 33, configured to obtain a first recognition result through the first recognition model when the first object has a consistency behavior in a plurality of image frames;
a creating module 34, configured to create a data platform for the target object participating in the experiment if the first identification result indicates that the first object is the target object participating in the experiment.
Fig. 4 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application. The device is suitable for a big data training room platform. As shown in fig. 4, the apparatus further includes:
a second obtaining module 41, configured to obtain a second video collected by a video device, where the second video includes a plurality of image frames;
a second input module 42, configured to input the plurality of image frames into a second recognition model, where the second recognition model is used to recognize whether a same second object in the plurality of image frames has a consistency behavior;
a second processing module 43, configured to detect an operation state of each terminal operating each data platform if the second identification result indicates that the second object is a target object for ending an experiment;
and the destruction module 44 is configured to destroy the data platform matched with the terminal in the shutdown state.
In one implementation, a recognition model includes a target detection network and a target tracking network;
the first input module 32 and the second input module 42 respectively include: the first identification module is used for identifying the object in the image frame through the target detection network and outputting a label corresponding to the object in the image frame, wherein the label corresponding to the object represents the behavior characteristic of the object;
the determining module is used for determining a tracking object meeting a preset condition from the objects in the image frame according to the label corresponding to the object in the image frame;
and the second identification module is used for identifying the consistency behaviors of the tracked object through the target tracking network and outputting an identification result corresponding to the consistency behaviors of the tracked object.
In one implementation, if the recognition model is a first recognition model, the first recognition result includes that the subject participates in the experiment, or the subject does not participate in the experiment; and if the identification model is the second identification model, the second identification result comprises that the experiment of the object is ended or the experiment of the object is not ended.
In one implementation, the apparatus further comprises a first training module 51 for:
labeling objects in a plurality of training image frames to obtain labeling labels corresponding to the objects in the plurality of training image frames;
inputting the training image frames into a target detection network to be trained to obtain prediction labels corresponding to objects in the training image frames;
determining a first loss value according to the prediction labels corresponding to the objects in the plurality of training image frames and the labeling labels corresponding to the objects in the plurality of training image frames;
and adjusting the value of the parameter in the target detection network to be trained according to the first loss value.
In one implementation, the apparatus further includes a second training module 52 configured to:
labeling objects in image frames in a plurality of training videos to obtain labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
inputting image frames in the training videos into a target tracking network to be trained to obtain predicted consistency behaviors corresponding to objects in the image frames in the training videos;
determining a second loss value according to the labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos and the prediction consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
and adjusting the value of the parameter in the target tracking network to be trained according to the second loss value.
The resource allocation device provided by the embodiments of the application acquires a first video collected by a video device, the first video including a plurality of image frames; inputs the plurality of image frames to a first recognition model, which recognizes whether a same first object in the plurality of image frames has a consistency behavior; obtains a first recognition result through the first recognition model when the first object has the consistency behavior in the plurality of image frames; and, if the first recognition result indicates that the first object is a target object participating in an experiment, creates a data platform for that target object. In this way, data platforms can be allocated to objects participating in an experiment automatically, accurately and in a timely manner, utilization of the data platform's resources is maximized, and the administrator's labor is saved.
Fig. 5 shows a block diagram of a resource allocation apparatus according to an embodiment of the present application. Referring to fig. 5, the apparatus 900 may include a processor 901, a machine-readable storage medium 902 having stored thereon machine-executable instructions. The processor 901 and the machine-readable storage medium 902 may communicate via a system bus 903. Also, the processor 901 performs the resource allocation method described above by reading machine executable instructions in the machine readable storage medium 902 corresponding to the resource allocation logic.
The machine-readable storage medium 902 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disc (e.g., an optical disc, DVD, etc.), or a similar storage medium, or a combination thereof.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A resource allocation method for allocating resources of a big data training room platform, the method comprising:
acquiring a first video collected by video equipment, wherein the first video comprises a plurality of image frames;
inputting the plurality of image frames to a first recognition model, wherein the first recognition model is used for recognizing whether the same first object in the plurality of image frames has consistency behaviors;
when the first object has consistency behaviors in a plurality of image frames, obtaining a first recognition result through the first recognition model;
the consistency behavior comprises: a preset consistency action for indicating the first object to participate in the experiment;
and if the first identification result shows that the first object is a target object participating in an experiment, creating a data platform for the target object participating in the experiment.
2. The method of claim 1, further comprising:
acquiring a second video acquired by video equipment, wherein the second video comprises a plurality of image frames;
inputting the plurality of image frames to a second recognition model, wherein the second recognition model is used for recognizing whether the same second object in the plurality of image frames has consistency behaviors;
if the second identification result indicates that the second object is a target object for finishing the experiment, detecting the operation state of each terminal for operating each data platform;
and destroying the data platform matched with the terminal in the shutdown state.
3. The method according to claim 1 or 2, wherein the recognition model comprises an object detection network and an object tracking network;
inputting the plurality of image frames to a recognition model for recognizing whether a same object in the plurality of image frames has a consistency behavior, comprising:
identifying an object in the image frame through the target detection network, and outputting a label corresponding to the object in the image frame, wherein the label corresponding to the object represents a behavior characteristic of the object;
determining a tracking object meeting a preset condition from the objects in the image frame according to the label corresponding to the object in the image frame;
and identifying the consistency behaviors of the tracked object through the target tracking network, and outputting an identification result corresponding to the consistency behaviors of the tracked object.
4. The method of claim 3,
if the identification model is a first identification model, the first identification result comprises that the object participates in the experiment or the object does not participate in the experiment;
and if the identification model is the second identification model, the second identification result comprises that the experiment of the object is ended or the experiment of the object is not ended.
5. The method of claim 3, wherein prior to inputting the plurality of image frames to a recognition model for recognizing whether a same object in the plurality of image frames has a consistent behavior, the method further comprises:
labeling objects in a plurality of training image frames to obtain labeling labels corresponding to the objects in the plurality of training image frames;
inputting the training image frames into a target detection network to be trained to obtain prediction labels corresponding to objects in the training image frames;
determining a first loss value according to the prediction labels corresponding to the objects in the plurality of training image frames and the labeling labels corresponding to the objects in the plurality of training image frames;
and adjusting the value of the parameter in the target detection network to be trained according to the first loss value.
6. The method of claim 3, wherein prior to inputting the plurality of image frames to a recognition model for recognizing whether a same object in the plurality of image frames has a consistent behavior, the method further comprises:
labeling objects in image frames in a plurality of training videos to obtain labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
inputting image frames in the training videos into a target tracking network to be trained to obtain predicted consistency behaviors corresponding to objects in the image frames in the training videos;
determining a second loss value according to the labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos and the prediction consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
and adjusting the value of the parameter in the target tracking network to be trained according to the second loss value.
7. A resource allocation apparatus for allocating resources of a big data training room platform, the apparatus comprising:
a first acquisition module, configured to acquire a first video collected by a video device, wherein the first video comprises a plurality of image frames;
a first input module, configured to input the multiple image frames into a first recognition model, where the first recognition model is used to recognize whether a same first object in the multiple image frames has a consistency behavior; the consistency behavior comprises: a preset consistency action for indicating the first object to participate in the experiment;
the first processing module is used for obtaining a first recognition result through the first recognition model when the first object has consistency behaviors in a plurality of image frames;
and the creating module is used for creating a data platform for the target object participating in the experiment if the first identification result shows that the first object is the target object participating in the experiment.
8. The apparatus of claim 7, further comprising:
the second acquisition module is used for acquiring a second video acquired by video equipment, and the second video comprises a plurality of image frames;
the second input module is used for inputting the image frames into a second recognition model, and the second recognition model is used for recognizing whether the same second object in the image frames has the consistency behavior;
the second processing module is used for detecting the running state of each terminal running each data platform if the second identification result indicates that the second object is a target object for finishing the experiment;
and the destruction module is used for destroying the data platform matched with the terminal in the shutdown state.
9. The apparatus of claim 7 or 8, wherein the recognition model comprises a target detection network and a target tracking network;
and the first input module comprises:
a first recognition module, configured to recognize objects in the image frames through the target detection network and output labels corresponding to the objects in the image frames, wherein the label corresponding to an object represents a behavior characteristic of the object;
a determining module, configured to determine, according to the labels corresponding to the objects in the image frames, a tracking object meeting a preset condition from among the objects in the image frames;
and a second recognition module, configured to recognize the consistency behavior of the tracking object through the target tracking network and output a recognition result corresponding to the consistency behavior of the tracking object.
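A hedged sketch of the two-stage pipeline recited in claim 9 (and repeated in claim 10 below): per-frame detection produces labels, a preset condition selects the tracking object, and the tracking network classifies its consistency behavior. The helpers (crop, preset_condition, and the object attributes) are illustrative assumptions.

    def recognize(image_frames, detection_net, tracking_net, preset_condition):
        # Target detection network: label the objects found in the latest frame.
        detections = detection_net(image_frames[-1])
        # Determining module: keep only objects whose label meets the preset condition.
        tracked = [obj for obj in detections if preset_condition(obj.label)]
        results = {}
        for obj in tracked:
            # Target tracking network: follow the object across all frames and
            # output a recognition result for its consistency behavior.
            crops = [frame.crop(obj.box) for frame in image_frames]
            results[obj.id] = tracking_net(crops)
        return results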
10. The apparatus of claim 8, wherein the recognition model comprises a target detection network and a target tracking network;
and the second input module comprises:
a first recognition module, configured to recognize objects in the image frames through the target detection network and output labels corresponding to the objects in the image frames, wherein the label corresponding to an object represents a behavior characteristic of the object;
a determining module, configured to determine, according to the labels corresponding to the objects in the image frames, a tracking object meeting a preset condition from among the objects in the image frames;
and a second recognition module, configured to recognize the consistency behavior of the tracking object through the target tracking network and output a recognition result corresponding to the consistency behavior of the tracking object.
11. The apparatus of claim 9, wherein
if the recognition model is the first recognition model, the first recognition result comprises that the object participates in the experiment or that the object does not participate in the experiment;
and if the recognition model is the second recognition model, the second recognition result comprises that the object has finished the experiment or that the object has not finished the experiment.
12. The apparatus of claim 9, further comprising a first training module configured to:
labeling objects in a plurality of training image frames to obtain labeling labels corresponding to the objects in the plurality of training image frames;
inputting the plurality of training image frames into a target detection network to be trained to obtain prediction labels corresponding to the objects in the plurality of training image frames;
determining a first loss value according to the prediction labels corresponding to the objects in the plurality of training image frames and the labeling labels corresponding to the objects in the plurality of training image frames;
and adjusting the value of the parameter in the target detection network to be trained according to the first loss value.
13. The apparatus of claim 9, further comprising a second training module configured to:
labeling objects in image frames in a plurality of training videos to obtain labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
inputting image frames in the training videos into a target tracking network to be trained to obtain predicted consistency behaviors corresponding to objects in the image frames in the training videos;
determining a second loss value according to the labeling consistency behaviors corresponding to the objects in the image frames in the plurality of training videos and the prediction consistency behaviors corresponding to the objects in the image frames in the plurality of training videos;
and adjusting the value of the parameter in the target tracking network to be trained according to the second loss value.
CN201811154155.5A 2018-09-30 2018-09-30 Resource allocation method and device Active CN109344770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154155.5A CN109344770B (en) 2018-09-30 2018-09-30 Resource allocation method and device

Publications (2)

Publication Number Publication Date
CN109344770A CN109344770A (en) 2019-02-15
CN109344770B (en) 2020-10-09

Family

ID=65307917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154155.5A Active CN109344770B (en) 2018-09-30 2018-09-30 Resource allocation method and device

Country Status (1)

Country Link
CN (1) CN109344770B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831683A (en) * 2012-08-28 2012-12-19 华南理工大学 Pedestrian flow counting-based intelligent detection method for indoor dynamic cold load
CN105518734A (en) * 2013-09-06 2016-04-20 日本电气株式会社 Customer behavior analysis system, customer behavior analysis method, non-transitory computer-readable medium, and shelf system
CN105791299A (en) * 2016-03-11 2016-07-20 南通职业大学 Unattended-monitoring intelligent online examination system
CN107045623A (en) * 2016-12-30 2017-08-15 厦门瑞为信息技术有限公司 Indoor dangerous-situation alarm method based on human posture trajectory analysis
CN107103503A (en) * 2017-03-07 2017-08-29 阿里巴巴集团控股有限公司 Sequence information determination method and apparatus
CN107480618A (en) * 2017-08-02 2017-12-15 深圳微品时代网络技术有限公司 Data analysis method for a big data platform
WO2018033155A1 (en) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 Video image processing method, apparatus and electronic device
CN108198030A (en) * 2017-12-29 2018-06-22 深圳正品创想科技有限公司 Trolley control method, device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976659A (en) * 2016-05-05 2016-09-28 成都世纪智慧科技有限公司 Internet-based information-security online open practical training platform
US20180124437A1 (en) * 2016-10-31 2018-05-03 Twenty Billion Neurons GmbH System and method for video data collection
EP3321844B1 (en) * 2016-11-14 2021-04-14 Axis AB Action recognition in a video sequence
CN106941602B (en) * 2017-03-07 2020-10-13 中国铁路总公司 Locomotive driver behavior identification method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Miao Wang et al., "Human action recognition based on feature level fusion and random projection", 2016 5th International Conference on Computer Science and Network Technology (ICCSNT), 2017-10-19, pp. 767-770 *
Fan Shunliang, "Design exploration and research of a cloud-platform-based university computer room management system", Computer Knowledge and Technology (电脑知识与技术), 2018-04-15, Vol. 14, No. 11, pp. 240-241 *
Chen Zhanrong et al., "Research on the construction and resource sharing of a university computer experiment teaching platform", China Education Informatization (中国教育信息化), 2015-03-05, No. 05, pp. 41-43 *

Similar Documents

Publication Publication Date Title
KR102260553B1 (en) Method for recommending related problem based on meta data
De Geest et al. Online action detection
US10769496B2 (en) Logo detection
WO2018006727A1 (en) Method and apparatus for transferring from robot customer service to human customer service
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN111046819B (en) Behavior recognition processing method and device
US20170193286A1 (en) Method and device for face recognition in video
CN110674664A (en) Visual attention recognition method and system, storage medium and processor
CN110166789B (en) Method for monitoring video live broadcast sensitive information, computer equipment and readable storage medium
CN108228421A (en) data monitoring method, device, computer and storage medium
CN111814817A (en) Video classification method and device, storage medium and electronic equipment
CN110837586A (en) Question-answer matching method, system, server and storage medium
Ahmadi et al. Efficient and fast objects detection technique for intelligent video surveillance using transfer learning and fine-tuning
CN109963072B (en) Focusing method, focusing device, storage medium and electronic equipment
Sathayanarayana et al. Towards automated understanding of student-tutor interactions using visual deictic gestures
CN109344770B (en) Resource allocation method and device
CN113420763A (en) Text image processing method and device, electronic equipment and readable storage medium
WO2021185317A1 (en) Action recognition method and device, and storage medium
WO2023040233A1 (en) Service state analysis method and apparatus, and electronic device, storage medium and computer program product
CN109993165A (en) The identification of tablet plate medicine name and tablet plate information acquisition method, device and system
CN109960745A (en) Visual classification processing method and processing device, storage medium and electronic equipment
CN111491195B (en) Method and device for online video display
CN111310028A (en) Recommendation method and device based on psychological characteristics
CN111666786A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112102147B (en) Background blurring identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant