CN115552483A - Data collection method, device, equipment and storage medium - Google Patents

Data collection method, device, equipment and storage medium Download PDF

Info

Publication number
CN115552483A
CN115552483A
Authority
CN
China
Prior art keywords
image data
determining
collection rule
preset
recognition result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202180002736.0A
Other languages
Chinese (zh)
Inventor
吴佳成
刘智恒
张帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime International Pte Ltd
Original Assignee
Sensetime International Pte Ltd
Application filed by Sensetime International Pte Ltd filed Critical Sensetime International Pte Ltd
Priority claimed from PCT/IB2021/058763 external-priority patent/WO2023041970A1/en
Publication of CN115552483A publication Critical patent/CN115552483A/en
Withdrawn legal-status Critical Current

Classifications

    • G06V 10/96: Management of image or video recognition tasks
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 20/10: Terrestrial scenes
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/64: Three-dimensional objects
    • G07F 17/322: Casino tables, e.g. tables having integrated screens, chip detection means
    • G07F 17/3241: Security aspects of a gaming system, e.g. detecting cheating, device integrity, surveillance
    • G07F 17/3248: Payment aspects of a gaming system involving non-monetary media of fixed value, e.g. casino chips of fixed value

Abstract

A data collection method, apparatus, device and storage medium are provided, wherein the method includes the following steps: acquiring image data whose picture includes a preset scene; recognizing, in the image data, an object in the preset scene to obtain a recognition result; and collecting the image data and the recognition result in response to the recognition result and/or the image data satisfying a preset collection rule.

Description

Data collection method, device, equipment and storage medium
Cross Reference to Related Applications
The present application claims priority to Singapore patent application No. 10202110226V, filed with the Intellectual Property Office of Singapore on 16 September 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the technical field of image processing, and relate to, but are not limited to, a data collection method, apparatus, device and storage medium.
Background
During image recognition, the data distribution of the production environment differs from that of the test environment, which reduces recognition accuracy. In the related art, supplementary data is collected manually to improve the recognition effect, which is time-consuming and labor-intensive.
Disclosure of Invention
The embodiment of the application provides a technical scheme for data collection.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a data collection method, which comprises the following steps:
acquiring image data of a picture including a preset scene;
in the image data, identifying an object in the preset scene to obtain an identification result;
and collecting the image data and the recognition result in response to the recognition result and/or the image data meeting a preset collection rule.
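The three steps above can be sketched as a minimal pipeline. This is a hypothetical illustration, not the patented implementation: `Sample`, `collect` and the lambda rule are assumed names, and any recognition model could stand behind the `result` field.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    image: bytes   # raw image data whose picture includes the preset scene
    result: dict   # recognition result for objects in the scene

def collect(samples: List[Sample],
            rule: Callable[[dict, bytes], bool]) -> List[Sample]:
    """Keep only the samples whose recognition result and/or image data
    satisfy the preset collection rule."""
    return [s for s in samples if rule(s.result, s.image)]

# Example rule: collect frames whose recognition result contains objects.
samples = [Sample(b"frame0", {"objects": []}),
           Sample(b"frame1", {"objects": ["chip"]})]
kept = collect(samples, lambda result, image: len(result["objects"]) > 0)
```

The rule is passed in as a predicate so that the same collection loop serves any preset collection rule determined later from the task requirement.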
In some embodiments, after the object in the preset scene is recognized in the image data to obtain a recognition result, the method further includes: determining the task requirement corresponding to the image data; and determining the preset collection rule based on the task requirement. In this way, the collection rule for the image data is determined by analyzing the task requirement, which can better meet the user's needs.
In some embodiments, in the case that the image data is a single frame image, the determining the preset collection rule based on the task requirement includes: determining parameter information associated with the recognition result; determining a first collection rule based on the task requirements and the parameter information, wherein the preset collection rule comprises the first collection rule. In this way, for a single frame image, the collection rule is determined by combining the parameter information of the image recognition result and the application requirement of the image, and the image data meeting the requirement of the user can be selectively and automatically collected.
In some embodiments, the parameter information includes at least one of: confidence, object type, and data state. The determining a first collection rule based on the task requirement and the parameter information includes: determining a target parameter among at least one of the confidence, the object type and the data state based on the task requirement; and determining the first collection rule based on the target parameter. Thus, the first collection rule is determined according to the target parameter matched with the task requirement, so that the collected image data can meet the task requirement.
In some embodiments, the collecting the image data and the recognition result in response to the recognition result and/or the image data satisfying a preset collection rule includes: determining the value of the target parameter of the identification result; storing the image data and the recognition result in response to the value of the target parameter of the recognition result satisfying the first collection rule. Therefore, images meeting task requirements can be automatically and selectively collected by judging whether the data of the target parameters of the identification results meet the first collection rule or not.
In some embodiments, the determining the preset collection rule based on the task requirement includes: determining service information associated with the object in the preset scene; and determining a second collection rule based on the service information and the task requirement, wherein the preset collection rule includes the second collection rule. Therefore, target service information is determined in the service information according to the task requirement, a second collection rule related to the service information of the object is determined, and video data meeting the task requirement can be collected.
In some embodiments, in the case that the image data is video data, the acquiring of image data whose picture includes a preset scene includes: determining the service phases included in the running process of the object in the preset scene; and determining the video data generated from the initial service phase to the ending service phase of the object in the preset scene. In this way, the running process of the object is divided into a plurality of phases, and the complete video data from the starting phase to the ending phase is determined, so that the video data to be collected can be selected more logically from the acquired video data.
In some embodiments, said collecting said image data and said recognition result in response to said recognition result and/or said image data satisfying a preset collection rule comprises: determining service information of the video data; and in response to the service information of the video data meeting the second collection rule, storing the video data and the identification result. Therefore, by judging whether the service information in the video data meets the second collection rule or not, the video stream data meeting the task requirements can be automatically and selectively collected.
In some embodiments, when the preset scene is a game scene, the object in the preset scene is a game object, and the image data is video data of the game object in any game round; the determining the service information of the video data includes determining at least one of the following as the service information: the video duration of the video data, the type of game object included in the video data, and the alarm information in the video data. Therefore, in a game scene, the alarm information or game duration occurring in a game is taken as the service information, so that video data meeting the service information can be automatically selected according to the determined collection rule.
In some embodiments, after collecting the image data and the recognition result in response to the recognition result and/or the image data satisfying a preset collection rule, the method further comprises: determining a network to be trained for recognizing an object in the image data; updating a training data set of the network to be trained based on the collected image data and the recognition result to obtain production environment data; and training the network to be trained by adopting the production environment data to obtain a trained network capable of identifying the object in the image data. Therefore, the problem that the data of the production environment is inconsistent with the data of the test environment can be solved, and the network model is retrained on the basis of the collected image data to achieve a better recognition effect.
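The retraining step above can be sketched as a minimal data-set merge. This is a hedged illustration under the assumption that collected recognition results serve as labels; in practice they would be verified before training, and the tuple layout is assumed.

```python
def update_training_set(training_set, collected):
    """Merge collected production samples (image, recognition result)
    into the existing training set, skipping exact duplicates."""
    seen = {image for image, _ in training_set}
    merged = list(training_set)
    for image, label in collected:
        if image not in seen:
            merged.append((image, label))
            seen.add(image)
    return merged

# Collected production data extends the original (test-environment) set,
# giving the production environment data used to retrain the network.
production_data = update_training_set(
    [("img_a", "chip"), ("img_b", "player")],
    [("img_b", "player"), ("img_c", "table")])
```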
An embodiment of the present application provides a data collection device, the device includes:
the first acquisition module is used for acquiring image data of a picture including a preset scene;
the first identification module is used for identifying the object in the preset scene in the image data to obtain an identification result;
the first collection module is used for responding to the recognition result and/or the image data meeting a preset collection rule, and collecting the image data and the recognition result.
Correspondingly, embodiments of the present application provide a computer storage medium, where computer-executable instructions are stored, and when the computer-executable instructions are executed, the steps of the method can be implemented.
The embodiment of the present application provides a computer device, where the computer device includes a memory and a processor, where the memory stores computer-executable instructions, and the processor can implement the steps of the method when executing the computer-executable instructions on the memory.
The embodiment of the application provides a data collection method, a data collection device, data collection equipment and a storage medium, wherein objects in image data comprising a preset scene are identified; judging whether the recognition result and the image data meet a preset collection rule or not; and if the recognition result and the image data meet the preset collection rule, automatically storing the image data and the recognition result. In this manner, image data is selectively collected in a storage device by way of automated collection for subsequent processing after collection.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a data collection method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another implementation of a data collection method according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a data collection device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings. The following examples are intended to illustrate the present application, but are not intended to limit its scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Computer Vision is a science that studies how to make machines "see": using cameras and computers instead of human eyes to identify, track and measure targets, and to perform further image processing.
2) Image recognition is a technology that uses computers to process, analyze and understand images in order to recognize targets and objects of various patterns, and is a practical application of deep learning algorithms. The recognition process is divided into four steps: image acquisition → image preprocessing → feature extraction → image recognition.
An exemplary application of the data collection device provided in the embodiments of the present application is described below, and the device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer with an image capture function, a tablet computer, a desktop computer, a camera, a mobile device (e.g., a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. In the following, an exemplary application will be explained when the device is implemented as a terminal or a server.
The method can be applied to a computer device, and the functions realized by the method can be realized by calling a program code by a processor in the computer device, of course, the program code can be stored in a computer storage medium, and the computer device at least comprises the processor and the storage medium.
The embodiment of the present application provides a data collection method, as shown in fig. 1, which is described with reference to the steps shown in fig. 1:
step S101, acquiring image data of a picture including a preset scene.
In some embodiments, the preset scene may be a user-specified capture scene, such as the scene of a game place, an outdoor scene (e.g., a road with pedestrians), or an indoor scene (e.g., an indoor public place such as a mall or a hospital). The image data is a single-frame image or video data acquired in that scene. Taking a game place as the preset scene, the image data may be an image captured in the game place during the course of a game, or video data captured during the game.
Step S102, in the image data, identifying the object in the preset scene to obtain an identification result.
In some embodiments, a convolutional neural network is used to identify the object in the image data to obtain the recognition result. Taking a game place as the preset scene, the objects are those involved in the game, such as game chips, players, game managers and game tables. The game object is recognized in the image data to obtain the recognition result.
Step S103, in response to the recognition result and/or the image data meeting a preset collection rule, collecting the image data and the recognition result.
In some embodiments, the preset collection rule may be a rule related to the recognition result, a rule related to the image data, or a rule related to both. If the collection rule is related only to the recognition result, then the recognition result satisfying the collection rule means that the recognition result and/or the image data satisfy the preset collection rule; similarly, if the collection rule is related to both the recognition result and the image data, the image data and the recognition result are collected only when both satisfy the collection rule. Take a game place as the preset scene, a single-frame image as the image data, and "the image includes a preset object (e.g., a preset type of game chip)" as the collection rule: the recognition result of the image data is examined to determine whether it includes a game chip of the preset type; if so, the image data and the recognition result are stored.
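The game-chip example above can be sketched as follows. The object-type name and the layout of the recognition result are assumptions made for illustration only.

```python
PRESET_CHIP_TYPE = "chip_100"  # hypothetical preset game-chip type

def satisfies_rule(recognition_result):
    """Collection rule: the recognition result must include a game chip
    of the preset type."""
    return any(obj["type"] == PRESET_CHIP_TYPE
               for obj in recognition_result["objects"])

frame_result = {"objects": [{"type": "chip_100", "confidence": 0.91},
                            {"type": "player", "confidence": 0.88}]}
store = satisfies_rule(frame_result)  # True: store image and result
```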
In the embodiment of the application, an object in image data comprising a preset scene is identified; judging whether the recognition result and the image data meet a preset collection rule or not; and if the recognition result and the image data meet the preset collection rule, automatically storing the image data and the recognition result. In this manner, image data is selectively collected in a storage device by way of automated collection for subsequent processing after collection.
In some embodiments, the preset collection rule is determined by analyzing the application requirement of the image data, that is, before step S102, the following steps are further included, as shown in fig. 2, and the following description is made in conjunction with the steps shown in fig. 2:
step S201, determining a task requirement corresponding to the image data.
In some embodiments, the task requirements are requirements for applying the image data. For example, task requirements include: the image data is used for training an image recognition model, obtaining certain types of objects by recognizing the image data, searching abnormal images and the like.
Step S202, based on the task requirement, the preset collection rule is determined.
In some embodiments, to meet the task requirements, rules for collecting image data are determined. For example, if the task requirement is that the image data is adopted to train an image recognition model, the determined preset collection rule can collect the image data which is the same as the image data in the test environment; if the task requirement is that some types of objects are obtained by identifying the image data, the determined preset collection rule can be that the image data of the objects of the types are collected in the identification result; if the task requirement is to search for abnormal images, the determined preset collection rule can be to collect image data with abnormal images in the identification process. Therefore, the collection rule of the image data is determined by analyzing the task requirements, and the user requirements can be better met.
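The three example task requirements above can be sketched as a mapping from task to collection rule. The task names and result keys below are illustrative assumptions, not names from the patent.

```python
def rule_for_task(task):
    """Map a task requirement to a preset collection rule, expressed as
    a predicate over a recognition result (names are illustrative)."""
    if task == "train_model":
        return lambda result: True          # collect all production data
    if task == "find_object_type":
        return lambda result: "target_type" in result.get("types", [])
    if task == "find_anomalies":
        return lambda result: result.get("state") == "abnormal"
    raise ValueError(f"unknown task requirement: {task}")

rule = rule_for_task("find_anomalies")
```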
In the embodiment of the application, different collection rules are adopted for collecting single-frame images and video data respectively, wherein the process of collecting the single-frame images is shown in a mode one, and the process of collecting the video data is shown in a mode two.
The first method is as follows: in some embodiments, in the case that the image data is a single frame image, the collection rule is determined to be a rule related to the recognition result, i.e., the above step S202 may be implemented by the following steps S221 and 222 (not shown in the figure):
step S221, determining parameter information associated with the recognition result.
In one possible implementation manner, in the case that the image data is a single-frame image, the object in the single-frame image is identified, and the identification result is obtained. The parameter information associated with the recognition result is a parameter involved in the recognition result, and includes at least one of the following: confidence, number of objects, object type, data state, etc.; the confidence coefficient is the confidence coefficient of the recognition result, the object type is the type of the object included in the recognition result, and the data state is whether the recognition result is a normal recognition result or not; for example, whether an abnormality occurs in the process of identifying an object in a single frame image.
Step S222, determining a first collection rule based on the task requirement and the parameter information.
In one possible implementation, the preset collection rule includes the first collection rule. And combining the task requirement corresponding to the single-frame image with the parameter information, and determining a first collection rule related to the identification result. In this way, for a single frame image, the collection rule is determined by combining the parameter information of the image recognition result and the application requirement of the image, and the image data meeting the requirement of the user can be selectively and automatically collected.
In some possible implementations, the first collection rule is determined based on at least one of task requirement and confidence, the object type, and the data state, that is, the step S222 may be implemented by:
a first step of determining a target parameter in at least one of the confidence level, the object type, the data state based on the task requirement.
In one possible implementation, the task requirement is to obtain some types of objects by recognizing the image data, and then in the confidence level, the object type, and the data state, the target parameter is determined to be the object type in the parameter information. Or the task requirement is to search for an abnormal image, and the target parameter is a data state.
And secondly, determining the first collection rule based on the target parameter.
In a possible implementation manner, the first collection rule is determined according to the target parameter, for example, if the target parameter is an object type, according to a specific object type required in the task requirement, the first collection rule is determined to collect a single-frame image of which the recognition result includes the specific object type. And if the target parameter is the confidence coefficient, determining that the first collection rule is to collect the single-frame image of which the confidence coefficient of the recognition result meets the confidence coefficient threshold value according to the requirement on the confidence coefficient threshold value in the task requirements. Thus, the first collection rule is determined according to the target parameters matched with the task requirements, and the collected image data can meet the task requirements.
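The construction of the first collection rule from the target parameter can be sketched as follows. Parameter names are assumptions; the confidence branch here keeps low-confidence results, assuming the task is to gather hard examples, and the direction of that comparison depends on the actual task requirement.

```python
def first_collection_rule(target_parameter, requirement):
    """Build the first collection rule from the target parameter chosen
    for the task (confidence, object type, or data state)."""
    if target_parameter == "confidence":
        return lambda r: r["confidence"] <= requirement  # hard examples
    if target_parameter == "object_type":
        return lambda r: requirement in r["types"]
    if target_parameter == "data_state":
        return lambda r: r["state"] == requirement
    raise ValueError(f"unknown target parameter: {target_parameter}")

type_rule = first_collection_rule("object_type", "chip_100")
conf_rule = first_collection_rule("confidence", 0.5)
```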
In some embodiments, in the case that the image data is a single-frame image, after determining the first collection rule according to the parameter information of the identification result and the task requirement, determining whether to collect the single-frame image by judging whether the identification result of the single-frame image meets the first collection rule; that is, the above step S103 can be realized by the following steps S131 and 132 (not shown in the figure):
step S131, determining the value of the target parameter of the recognition result.
In one possible implementation, the data of each parameter in the recognition result is first determined, for example, how much the confidence of the recognition result is, whether the data state is abnormal, and the specific type and number of the objects included in the recognition result.
Step S132, in response to the value of the target parameter of the recognition result satisfying the first collection rule, storing the image data and the recognition result.
In a possible implementation manner, if the target parameter in the recognition result satisfies the first collection rule, the frame image is an image which needs to be collected in the task requirement, so the image data and the recognition result are stored in the storage system for realizing the task later. Therefore, images meeting task requirements can be automatically and selectively collected by judging whether the data of the target parameters of the identification result meet the first collection rule.
The second method is as follows: in the case that the image data is video data, the running process of the object is divided into service phases, and the video within a preset task phase is collected; that is, step S101 may be implemented by the following steps S111 and S112 (not shown in the figure):
step S111, determining a service phase included in the operation process of the object in the preset scene.
In some embodiments, the service phases of the object in the preset scene are the phases included in the object's whole implementation process in that scene. Taking a game scene as the preset scene and a game as the object, the service phases include each stage from the beginning to the end of a game round, for example: the game-preparation stage, the manager-joining stage, the player-joining stage, and the game-end and result-output stage.
Step S112, determining video data generated from an initial service stage to an end service stage of the object in the preset scene.
In some embodiments, the video data includes the video stream of the object from the initial service stage to the ending service stage, as well as other service information generated by the object during this process, such as alarm information throughout the process. Taking a game scene as the preset scene and a game as the object, the video data includes the video stream of one game round from the game-preparation stage to the game-end and result-output stage, the number and types of alarms raised during that round, and the like. In this way, the running process of the object is divided into a plurality of stages, and the complete video data from the starting stage to the ending stage is determined, so that the video data to be collected can be selected more logically from the acquired video data.
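Selecting the video from the initial to the ending service stage can be sketched as clipping a stage-annotated frame stream. The stage names and the `(stage, payload)` frame layout are assumptions for illustration.

```python
def clip_between_stages(frames, start_stage, end_stage):
    """Return the frames from the initial service stage through the
    ending service stage, inclusive. Each frame is (stage, payload)."""
    start = next(i for i, (stage, _) in enumerate(frames)
                 if stage == start_stage)
    end = max(i for i, (stage, _) in enumerate(frames)
              if stage == end_stage)
    return frames[start:end + 1]

stream = [("idle", 0), ("prepare", 1), ("player_join", 2),
          ("game_end", 3), ("idle", 4)]
clip = clip_between_stages(stream, "prepare", "game_end")
```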
In some embodiments, for the case that the image data is a video, the second collection rule is determined by analyzing the object service information in the preset scene in combination with the task requirement, that is, the step S202 may also be implemented by the following steps S21 and S22:
step S21, determining the business information associated with the object in the preset scene.
In a possible implementation, the service information associated with the object is the service information of objects that can exist in the preset scene, i.e., information describing the service in which the object is involved; for example, whether alarm information exists for the object. Taking a game as the object, the service information includes: the number or types of alarms occurring in a game round, the duration of a game round, and so on. That is, at least one of the video duration of the video data, the type of game object included in the video data, and the alarm information in the video data is determined as the service information. Therefore, in a game scene, the alarm information or game duration occurring in a game is taken as the service information, so that the determined collection rule can automatically select the video data meeting the service information.
And S22, determining a second collection rule based on the service information and the task requirement.
In one possible implementation, the preset collection rule includes the second collection rule. The service information is combined with the task requirement to determine a collection rule meeting the task requirement. In the case where the image data is video data, target service information matching the task requirement is determined within the service information according to the task requirement, and the second collection rule is then determined based on the target service information. For example, if the task requirement is to detect alarm information of a preset category appearing in the video, then the target service information is the alarm information of the preset category, and the second collection rule is to collect video data including alarm information of the preset category. In this way, the target service information is determined within the service information according to the task requirement, the second collection rule related to the service information of the object is determined, and video data meeting the task requirement can be collected.
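Steps S21 and S22 can be illustrated with a minimal sketch. This is not part of the claimed method; the function name, the dictionary fields, and the rule criteria (alarm category, duration) are hypothetical examples chosen to mirror the scenarios described above:

```python
# Hypothetical sketch: deriving a second collection rule from the service
# information associated with the object and a task requirement. A rule is
# represented as a predicate over a video's service-information metadata.

def determine_second_collection_rule(task_requirement, service_info_fields):
    """Pick the target service information matching the task requirement,
    then build the predicate that video data must satisfy to be collected."""
    if task_requirement["type"] == "alarm" and "alarm_info" in service_info_fields:
        target_category = task_requirement["category"]
        # Second collection rule: collect videos containing alarms of the
        # preset category.
        return lambda meta: target_category in meta.get("alarm_categories", [])
    if task_requirement["type"] == "duration" and "duration" in service_info_fields:
        max_seconds = task_requirement["max_seconds"]
        # Second collection rule: collect short game rounds.
        return lambda meta: meta.get("duration", 0) < max_seconds
    # No service information matches the task requirement: collect nothing.
    return lambda meta: False

rule = determine_second_collection_rule(
    {"type": "alarm", "category": "card_swap"},
    {"alarm_info", "duration", "game_object_type"},
)
print(rule({"alarm_categories": ["card_swap"], "duration": 95}))  # True
print(rule({"alarm_categories": [], "duration": 95}))             # False
```

The predicate form keeps the rule decoupled from the collection step: the cloud-issued configuration only has to describe the task requirement, and the edge device derives the executable check locally.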
In some embodiments, in the case where the image data is video data, after the second collection rule is determined according to the service information of the object and the task requirement, whether to collect the video data is determined by judging whether the video data satisfies the second collection rule; that is, the step S103 may be implemented by the following steps S141 and S142 (not shown in the figure):
step S141, determining service information of the video data.
In one possible implementation, the service information of the video data is determined according to the target service information for which the second collection rule was determined. For example, if the target service information used in determining the second collection rule is alarm information of a preset category, then the service information determined for the video data is the category of its alarm information.
Step S142, in response to the service information of the video data meeting the second collection rule, storing the video data and the identification result.
In a possible implementation manner, if the target service information in the service information of the video data meets the second collection rule, it indicates that the video data is video data that needs to be collected under the task requirement, so the video data and the identification result are stored in the storage system for subsequent execution of the task. For example, taking the preset scene being a game scene, the video data is the video stream collected for one game (for example, the video stream of the whole process from the beginning to the end of the game) together with the warning information generated during the game; if the second collection rule is to collect video data including alarm information of a preset category, whether the category of the alarm information in the video data is the preset category is judged to determine whether to collect the video data.
In the embodiment of the application, by judging whether the service information of the video data meets the second collection rule, video stream data meeting the task requirement can be collected automatically and selectively.
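Steps S141 and S142 reduce to a small check-and-store routine. The following is an illustrative sketch, not the patented implementation; the field names and the in-memory `storage` list stand in for whatever metadata schema and storage system a real deployment would use:

```python
# Hypothetical sketch of steps S141/S142: determine the service information
# of the video data, and store the data only if the second rule is satisfied.

def collect_video(video_data, recognition_result, second_rule, storage):
    # S141: determine the service information of the video data.
    service_info = video_data["service_info"]
    # S142: if the rule is satisfied, store the video and its recognition result.
    if second_rule(service_info):
        storage.append({"video": video_data, "result": recognition_result})
        return True
    return False

# Example second rule: collect videos containing a preset alarm category.
rule = lambda info: "card_swap" in info.get("alarm_categories", [])

store = []
collect_video({"service_info": {"alarm_categories": ["card_swap"]}},
              {"objects": ["hand", "card"]}, rule, store)
collect_video({"service_info": {"alarm_categories": []}},
              {"objects": []}, rule, store)
print(len(store))  # 1 — only the first video satisfied the rule
```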
In some embodiments, after the image data is collected, in order to improve the recognition effect of the image recognition network, the image data may be used as training data for the network model to be trained, which may be implemented by the following process:
first, a network to be trained for recognizing an object in the image data is determined.
In one possible implementation, the network to be trained may be any network to be trained for performing image recognition, such as a convolutional neural network to be trained.
Secondly, updating the training data set of the network to be trained based on the collected image data and the recognition result to obtain production environment data.
In one possible implementation, the collected image data and the recognition result are used as supplementary training data for the network to be trained and added to the training data set to obtain the production environment data.
And finally, training the network to be trained by adopting the production environment data to obtain a trained network capable of identifying the object in the image data.
In a possible implementation manner, the production environment data is used as the updated training data set to train the network to be trained, so as to obtain a trained network with a better image recognition effect. In this way, the problem that production environment data is inconsistent with test environment data can be addressed by retraining the network model on the basis of the collected image data, achieving a better recognition effect.
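The training-set update step can be sketched as follows. This is a simplified illustration, not the claimed procedure; in particular, using the recognition result's label directly as a training label assumes the collected results have been reviewed, and all names are hypothetical:

```python
# Hypothetical sketch: merging collected (image, recognition result) pairs
# into the existing training data set to obtain the production environment
# data, which is then fed to the usual training loop for retraining.

def build_production_environment_data(train_set, collected_pairs):
    """Return the updated training set (the 'production environment data')."""
    production_data = list(train_set)  # keep the original training samples
    for image, result in collected_pairs:
        # The stored recognition result supplies the (possibly human-reviewed)
        # label for the supplementary sample.
        production_data.append((image, result["label"]))
    return production_data

train_set = [("img_001", "person"), ("img_002", "card")]
collected = [("img_103", {"label": "masked_face", "confidence": 0.41})]
production = build_production_environment_data(train_set, collected)
print(len(production))  # 3
```

The resulting `production` list would then replace the original data set when retraining the network to be trained, so the retrained model sees the distribution actually observed in production.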
In the following, an exemplary application of the embodiment of the present application in a practical application scenario is described, taking a game venue as an example, namely the automatic recovery of Artificial Intelligence (AI) system data in a game venue.
Generally, image recognition suffers reduced accuracy in a real production environment. The reasons are: first, the data distribution of the production environment differs from that of the test environment (for example, a large number of faces wearing masks appear after an epidemic outbreak, while such faces account for only a small proportion of the test data); second, abnormal data may occur in the production environment, causing system errors.
To solve the problem of inconsistent data between the production environment and the test environment, in the related art, personnel are required to collect sufficient supplementary information (e.g., images) from the production environment and retrain the model based on the supplementary information to achieve a better recognition effect. Collecting data again in this manner after every change in the production environment is time-consuming and labor-intensive.
The embodiment of the application provides a data collection method, which selectively collects production environment data to a designated storage device in an automatic collection manner, so that subsequent processing can be performed after collection. In some embodiments, a data collection module is added to the edge AI device (this module integrates the processing results, time, device configuration parameters, service warnings and other relevant information of the AI system), judges whether the data should be recovered according to a determined collection rule, and automatically writes the data that needs to be recovered into a specified storage device. In the embodiment of the present application, data collection may be performed based on images or videos, wherein:
image-based data collection process steps:
in the first step, the recognition result of each image is collected.
Second, the determined data collection rules in the configuration are obtained.
And thirdly, judging whether to collect the image and the processing result into a storage system according to the data collection rule.
In one possible implementation, the data collection rules include: the number of labels in the processing result being less than a preset value, the confidence of the processing result being below a threshold, whether the result is erroneous, and the like.
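The third step's decision can be sketched as a predicate over a single image's recognition result. This is an illustrative sketch only; the thresholds and field names are assumptions, while the three criteria themselves (label count, confidence, error status) come from the rule examples above:

```python
# Hypothetical sketch of the image-based collection decision: an image and
# its processing result are written to the storage system when the result
# looks suspicious (too few labels, low confidence, or an outright error).

def should_collect(recognition_result, min_labels=3, min_confidence=0.6):
    """Return True if the image and its result should be collected."""
    if recognition_result.get("error"):
        return True  # abnormal data that caused a system error
    if len(recognition_result.get("labels", [])) < min_labels:
        return True  # fewer labels than expected for this scene
    if recognition_result.get("confidence", 1.0) < min_confidence:
        return True  # low-confidence result, worth recovering for retraining
    return False

print(should_collect({"labels": ["person"], "confidence": 0.9}))            # True (too few labels)
print(should_collect({"labels": ["a", "b", "c"], "confidence": 0.95}))      # False
print(should_collect({"labels": ["a", "b", "c"], "confidence": 0.4}))       # True (low confidence)
```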
Video-based data collection process steps:
In the first step, the relevant data in a game is grouped according to the game stage fed back by the business layer, and the game data is recovered with a game round as the unit. Different game rounds are stored in separate folders.
In the second step, the recovery data contains the video stream, the recognition result and the warning information generated during the program execution.
And thirdly, determining a collection rule through a configuration file issued by the cloud. Such as collecting video of a game that has less than 30 seconds of total play time, or collecting video of a game in which more than 3 warnings appear in the game.
And fourthly, judging whether the game video is automatically collected into the storage system or not according to the collection rule.
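The four video-based steps above can be sketched as a single routine. This is a hypothetical illustration: the folder layout, JSON metadata file, and the specific thresholds (duration under 30 seconds, more than 3 warnings) mirror the examples given, but nothing here is the claimed implementation:

```python
# Hypothetical sketch of the video-based process: recover data per game
# round into its own folder when the cloud-issued collection rule matches.
import json
import os
import tempfile

def collect_game(round_id, duration_s, warnings, payload, root):
    """Store one game round's recovered data if the collection rule matches.

    Rule (from the cloud-issued configuration example): collect rounds whose
    total duration is under 30 seconds, or with more than 3 warnings.
    Returns the round's folder path if collected, else None.
    """
    if duration_s < 30 or len(warnings) > 3:
        folder = os.path.join(root, f"game_{round_id}")  # one folder per round
        os.makedirs(folder, exist_ok=True)
        # Recovered data: recognition results and warnings alongside the
        # video stream reference, stored as round metadata.
        with open(os.path.join(folder, "round.json"), "w") as f:
            json.dump({"duration": duration_s, "warnings": warnings,
                       "data": payload}, f)
        return folder
    return None

root = tempfile.mkdtemp()
print(collect_game("0001", 25, [], {"video": "stream.bin"}, root) is not None)  # True
print(collect_game("0002", 120, ["w1", "w2"], {}, root))                        # None
```

Storing each round in its own folder, as the first step describes, keeps the recovered video stream, recognition results and warnings of one round together for later reproduction or retraining.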
After data collection is performed by the two methods above, the collected data can be applied in the following service directions:
first, the data is stored for the test team to reproduce problems that occur in the production environment.
Second, the data is stored for model training.
Third, the data is stored as a test data set for other new services.
In the embodiment of the application, the edge device acquires a data collection rule, automatically and selectively collects data according to the rule, and groups videos by game round according to the information fed back by the service layer while storing the game round information. In this way, production environment data is collected automatically, effective information is collected selectively according to the data collection rule, and game videos are stored in groups according to the feedback of the service layer, making the data easier to manage.
An embodiment of the present application provides a data collection device, fig. 3 is a schematic structural component diagram of the data collection device in the embodiment of the present application, and as shown in fig. 3, the data collection device 300 includes:
a first obtaining module 301, configured to obtain image data of a picture including a preset scene;
a first identification module 302, configured to identify an object in the preset scene in the image data to obtain an identification result;
a first collecting module 303, configured to collect the image data and the recognition result in response to the recognition result and/or the image data satisfying a preset collecting rule.
In some embodiments, the apparatus further comprises:
the first determining module is used for determining task requirements corresponding to the image data;
and the second determining module is used for determining the preset collection rule based on the task requirement.
In some embodiments, in a case where the image data is a single frame image, the second determining module includes:
a first determination submodule for determining parameter information associated with the recognition result;
and the second determining submodule is used for determining a first collecting rule based on the task requirement and the parameter information, and the preset collecting rule comprises the first collecting rule.
In some embodiments, the parameter information includes at least one of: confidence, object type, data state, the second determination submodule including:
a first determining unit, configured to determine a target parameter in at least one of the confidence, the object type, and the data state based on the task requirement;
a second determining unit configured to determine the first collection rule based on the target parameter.
In some embodiments, the first collection module 303 comprises:
a third determining submodule, configured to determine a numerical value of a target parameter of the recognition result;
a first storage sub-module for storing the image data and the recognition result in response to a value of a target parameter of the recognition result satisfying the first collection rule.
In some embodiments, the second determining module comprises:
a fourth determining submodule, configured to determine service information associated with the object in the preset scene;
and a fifth determining submodule, configured to determine a second collection rule based on the service information and the task requirement, where the preset collection rule includes the second collection rule.
In some embodiments, in the case that the image data is video data, the first obtaining module 301 includes:
a sixth determining submodule, configured to determine a service phase included in an operation process of an object in the preset scene;
and the seventh determining submodule is used for determining the video data generated from the initial service stage to the end service stage of the object in the preset scene.
In some embodiments, the first collection module 303 comprises:
an eighth determining sub-module, configured to determine service information of the video data;
and the second storage submodule is used for responding to the service information of the video data meeting the second collection rule and storing the video data and the identification result.
In some embodiments, in a case that the preset scene is a game scene, an object in the preset scene is a game object, the image data is video data of the game object in any game, and the eighth determining submodule includes:
a third determining unit, configured to determine at least one of the following as the service information: the video time of the video data, the type of a game object included in the video data, and alarm information in the video data.
In some embodiments, the apparatus further comprises:
a third determining module, configured to determine a network to be trained for identifying an object in the image data;
the first updating module is used for updating the training data set of the network to be trained based on the collected image data and the recognition result to obtain production environment data;
and the first training module is used for training the network to be trained by adopting the production environment data to obtain a trained network capable of identifying the object in the image data.
It should be noted that the above description of the embodiment of the apparatus, similar to the description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the data collection method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the portions thereof contributing to the prior art, may essentially be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present application further provides a computer program product, where the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, the steps in the data collection method provided by the embodiment of the present application can be implemented.
Accordingly, an embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and when executed by a processor, the computer-executable instructions implement the steps of the data collection method provided by the foregoing embodiment.
Accordingly, an embodiment of the present application provides a computer device; fig. 4 is a schematic structural diagram of the computer device in the embodiment of the present application. As shown in fig. 4, the computer device 400 includes: a processor 401, at least one communication bus, a communication interface 402, at least one external communication interface, and a memory 403. The communication bus is configured to enable connection and communication between these components. The communication interface 402 may include a display screen, and the external communication interface may include standard wired and wireless interfaces. The processor 401 is configured to execute the image processing program in the memory to implement the steps of the data collection method provided by the above embodiments.
The above descriptions of the embodiments of the data collection device, the computer device and the storage medium are similar to the descriptions of the method embodiments, have technical descriptions and beneficial effects similar to those of the corresponding method embodiments, and are not repeated here for brevity. For technical details not disclosed in the embodiments of the data collection device, the computer device and the storage medium of the present application, reference is made to the description of the method embodiments of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an execution order; the execution order of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus comprising the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that: all or part of the steps of implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer-readable storage medium, and when executed, executes the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated unit described above may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present application or portions thereof that contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code. The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method of data collection, the method comprising:
acquiring image data of a picture including a preset scene;
in the image data, identifying an object in the preset scene to obtain an identification result;
collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule.
2. The method according to claim 1, wherein after the identifying the object in the preset scene in the image data and obtaining the identification result, the method further comprises:
determining task requirements corresponding to the image data;
and determining the preset collection rule based on the task requirement.
3. The method according to claim 2, wherein in the case that the image data is a single frame image, the determining the preset collection rule based on the task requirement comprises:
determining parameter information associated with the recognition result;
determining a first collection rule based on the task requirements and the parameter information, wherein the preset collection rule comprises the first collection rule.
4. The method of claim 3, wherein the parameter information comprises at least one of: confidence, object type, data state; the determining a first collection rule based on the task requirements and the parameter information includes:
determining a target parameter in at least one of the confidence level, the object type, the data state based on the task requirement;
determining the first collection rule based on the target parameter.
5. The method according to claim 2 or 3, wherein collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule comprises:
determining the value of the target parameter of the identification result;
storing the image data and the recognition result in response to the value of the target parameter of the recognition result satisfying the first collection rule.
6. The method of claim 2, wherein determining the preset collection rule based on the task requirement comprises:
determining service information associated with the object in the preset scene;
and determining a second collection rule based on the business information and the task requirement, wherein the preset collection rule comprises the second collection rule.
7. The method according to claim 6, wherein, in a case where the image data is video data, the acquiring image data of a picture including a preset scene comprises:
determining a service phase included in the operation process of the object in the preset scene;
and determining video data generated from the initial service stage to the end service stage of the object in the preset scene.
8. The method according to claim 6 or 7, wherein collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule comprises:
determining service information of the video data;
and in response to the service information of the video data meeting the second collection rule, storing the video data and the identification result.
9. The method according to claim 8, wherein in a case that the preset scene is a game scene, an object in the preset scene is a game object, the image data is video data of the game object in any game, and the determining the service information of the video data includes:
determining at least one of the following as the service information: the video time of the video data, the type of the game object included in the video data, and the alarm information in the video data.
10. The method according to any one of claims 1 to 9, wherein after collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule, the method further comprises:
determining a network to be trained for recognizing an object in the image data;
updating the training data set of the network to be trained based on the collected image data and the recognition result to obtain production environment data;
and training the network to be trained by adopting the production environment data to obtain a trained network capable of identifying the object in the image data.
11. A computer device comprising a memory having computer-executable instructions stored thereon and a processor configured to, when the processor executes the computer-executable instructions on the memory:
acquiring image data of a picture including a preset scene;
in the image data, identifying an object in the preset scene to obtain an identification result;
collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule.
12. The computer device of claim 11, wherein after identifying the object in the preset scene in the image data and obtaining the identification result, the processor is further configured to:
determining task requirements corresponding to the image data;
and determining the preset collection rule based on the task requirement.
13. The computer device of claim 12, wherein when determining the preset collection rule based on the task requirement in the case that the image data is a single frame image, the processor is configured to:
determining parameter information associated with the recognition result;
determining a first collection rule based on the task requirements and the parameter information, wherein the preset collection rule comprises the first collection rule.
14. The computer device of claim 13, wherein the parameter information includes at least one of: confidence, object type, data state; in determining a first collection rule based on the task requirements and the parameter information, the processor is configured to:
determining a target parameter in at least one of the confidence level, the object type, the data state based on the task requirement;
determining the first collection rule based on the target parameter.
15. The computer device of claim 12 or 13, wherein, in collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule, the processor is configured to:
determining the value of the target parameter of the identification result;
storing the image data and the recognition result in response to the value of the target parameter of the recognition result satisfying the first collection rule.
16. The computer device of claim 12, wherein in determining the preset collection rule based on the task requirements, the processor is configured to:
determining service information associated with the object in the preset scene;
and determining a second collection rule based on the business information and the task requirement, wherein the preset collection rule comprises the second collection rule.
17. The computer device according to claim 16, wherein, in a case where the image data is video data, when acquiring image data of a picture including a preset scene, the processor is configured to:
determining a service phase included in the operation process of the object in the preset scene;
and determining video data generated from the initial service stage to the end service stage of the object in the preset scene.
18. The computer device of claim 16 or 17, wherein, in collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule, the processor is configured to:
determining service information of the video data;
and in response to the service information of the video data meeting the second collection rule, storing the video data and the identification result.
19. A computer storage medium having computer-executable instructions stored thereon that, when executed, are configured to:
acquiring image data of a picture including a preset scene;
in the image data, identifying an object in the preset scene to obtain an identification result;
collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule.
20. A computer program comprising computer instructions executable by an electronic device, wherein the computer instructions, when executed by a processor in the electronic device, are configured to:
acquiring image data of a picture including a preset scene;
in the image data, identifying an object in the preset scene to obtain an identification result;
collecting the image data and the recognition result in response to at least one of the recognition result and the image data satisfying a preset collection rule.
CN202180002736.0A 2021-09-16 2021-09-26 Data collection method, device, equipment and storage medium Withdrawn CN115552483A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SG10202110226V 2021-09-16
SG10202110226V 2021-09-16
PCT/IB2021/058763 WO2023041970A1 (en) 2021-09-16 2021-09-26 Data collection method and apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
CN115552483A true CN115552483A (en) 2022-12-30

Family

ID=80053725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180002736.0A Withdrawn CN115552483A (en) 2021-09-16 2021-09-26 Data collection method, device, equipment and storage medium

Country Status (3)

Country Link
KR (1) KR20220007703A (en)
CN (1) CN115552483A (en)
AU (1) AU2021240232A1 (en)

Also Published As

Publication number Publication date
AU2021240232A1 (en) 2023-03-30
KR20220007703A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
CN111723786B (en) Method and device for detecting wearing of safety helmet based on single model prediction
CN109697416A Video data processing method and related apparatus
CN108446681B (en) Pedestrian analysis method, device, terminal and storage medium
CN112016485A (en) Passenger flow statistical method and system based on face recognition
CN107133629B (en) Picture classification method and device and mobile terminal
CN112200081A (en) Abnormal behavior identification method and device, electronic equipment and storage medium
CN111860377A (en) Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium
CN113111838A (en) Behavior recognition method and device, equipment and storage medium
CN111626303B (en) Sex and age identification method, sex and age identification device, storage medium and server
WO2022222445A1 (en) Event detection output method, event policy determination method and apparatus, electronic device, and computer-readable storage medium
CN114639152A (en) Multi-modal voice interaction method, device, equipment and medium based on face recognition
CN114723652A (en) Cell density determination method, cell density determination device, electronic apparatus, and storage medium
WO2021051568A1 (en) Method and apparatus for constructing road network topological structure, and computer device and storage medium
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3
JP2021026744A (en) Information processing device, image recognition method, and learning model generation method
CN115552483A (en) Data collection method, device, equipment and storage medium
CN112906671B (en) Method and device for identifying false face-examination picture, electronic equipment and storage medium
CN114333065A (en) Behavior identification method, system and related device applied to monitoring video
CN114842411A (en) Group behavior identification method based on complementary space-time information modeling
CN114038044A (en) Face gender and age identification method and device, electronic equipment and storage medium
CN113837066A (en) Behavior recognition method and device, electronic equipment and computer storage medium
CN109448287B (en) Safety monitoring method and device, readable storage medium and terminal equipment
WO2023041970A1 (en) Data collection method and apparatus, device and storage medium
CN111694982A (en) Song recommendation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221230
