CN116206181A - Data processing method and device, nonvolatile storage medium and electronic equipment - Google Patents

Data processing method and device, nonvolatile storage medium and electronic equipment Download PDF

Info

Publication number
CN116206181A
CN116206181A CN202211698890.9A
Authority
CN
China
Prior art keywords
data
machine learning
training
sample data
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211698890.9A
Other languages
Chinese (zh)
Inventor
崔江鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202211698890.9A priority Critical patent/CN116206181A/en
Publication of CN116206181A publication Critical patent/CN116206181A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778Active pattern-learning, e.g. online learning of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a data processing method and apparatus, a nonvolatile storage medium, and an electronic device. The method comprises the following steps: acquiring sample data, where the sample data is surveillance video data of a target area; classifying the sample data according to the target area to obtain multiple groups of training data; training a machine learning model with each group of training data to obtain multiple trained machine learning models; and storing the trained machine learning models in a database for monitoring tasks of the target area to call. The method and apparatus solve the technical problems in the related art of a low match between the machine learning model and the monitoring scene, and the resulting low data-analysis accuracy, caused by using the same machine learning model to analyze monitoring data from different scenes.

Description

Data processing method and device, nonvolatile storage medium and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and apparatus, a nonvolatile storage medium, and an electronic device.
Background
In recent years, with the development of the security industry, artificial intelligence (AI) applications based on video surveillance have gradually matured. As video AI technology has spread, users' demand for personalization has grown explosively. In the related art, however, video AI generalizes poorly, and its accuracy is low when it is used to analyze monitoring data from different monitoring scenes.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present application provide a data processing method and apparatus, a nonvolatile storage medium, and an electronic device, which at least solve the technical problems in the related art of a low match between the machine learning model and the monitoring scene, and low data-analysis accuracy, caused by using the same machine learning model to analyze monitoring data from different scenes.
According to one aspect of the embodiments of the present application, a data processing method is provided, including: acquiring sample data, where the sample data is surveillance video data of a target area; classifying the sample data according to the target area to obtain multiple groups of training data; training a machine learning model with each group of training data to obtain multiple trained machine learning models; and storing the trained machine learning models in a database for monitoring tasks of the target area to call.
Optionally, classifying the sample data according to the target area to obtain multiple groups of training data includes: determining the actual monitoring area of the sample data, and determining the accuracy of the sample data from the actual monitoring area and the target area corresponding to the sample data; deleting sample data whose accuracy is below a preset threshold to obtain the remaining sample data; and taking the remaining sample data belonging to the same target area as one group of training data, thereby obtaining multiple groups of training data.
Optionally, after the multiple groups of training data are obtained, the data processing method further includes: marking the groups of training data with multiple algorithm identifiers, where the algorithm identifiers correspond to multiple machine learning models.
Optionally, after the machine learning model is trained with each group of training data, the data processing method includes: acquiring multiple algorithm packages corresponding to multiple target areas and storing them in an algorithm warehouse, where an algorithm package is obtained by training a machine learning model with training data; and, in the algorithm warehouse, replacing the original machine learning models of the multiple target areas with trained machine learning models by using the algorithm packages.
Optionally, the original machine learning model corresponding to each of the multiple target areas is the same.
Optionally, after the sample data is acquired, the data processing method further includes: determining the target area of the sample data and the accuracy of the sample data, and marking the sample data with the target area and the accuracy.
Optionally, after the multiple trained machine learning models are obtained, the data processing method further includes: processing the surveillance video data of multiple target areas with the corresponding trained machine learning models; obtaining marking information sent by a target object, where the marking information indicates that surveillance video data processed by a trained machine learning model is not target surveillance video data, the target surveillance video data being the surveillance video data of the target area corresponding to that model; and determining the accuracy of the trained machine learning model from the marking information.
According to another aspect of the embodiments of the present application, a data processing apparatus is also provided, including: an acquisition module for acquiring sample data, where the sample data is surveillance video data of a target area; a classification module for classifying the sample data according to the target area to obtain multiple groups of training data; a training module for training a machine learning model with each group of training data to obtain multiple trained machine learning models; and a calling module for storing the trained machine learning models in a database for monitoring tasks of the target area to call.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is also provided, in which a program is stored; when the program runs, a device on which the nonvolatile storage medium resides is controlled to execute the above data processing method.
According to another aspect of the embodiments of the present application, an electronic device is also provided, including a memory and a processor, where the processor is configured to run a program stored in the memory, and the program, when run, executes the above data processing method.
In the embodiments of the present application, sample data is acquired, where the sample data is surveillance video data of a target area; the sample data is classified according to the target area to obtain multiple groups of training data; a machine learning model is trained with each group of training data to obtain multiple trained machine learning models; and the trained models are stored in a database for monitoring tasks of the target area to call. By classifying the video data according to the monitoring scene, multiple groups of sample data are obtained; by marking the samples and reusing them as training data, separate training runs yield different versions of the same machine learning model, each serving as a scene-specific model. Training and optimization can therefore be targeted at a particular camera or class of video monitoring. This achieves the technical effect of converting the processing of multiple streams of surveillance video data by a single machine learning model into processing by multiple machine learning models, and thus solves the technical problems in the related art of a low match between the machine learning model and the monitoring scene, and low data-analysis accuracy, caused by analyzing monitoring data of different scenes with the same model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a hardware block diagram of a computer terminal (or mobile device) for implementing a data processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of processing data according to an embodiment of the present application;
FIG. 3 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic diagram of processing surveillance video data according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For better understanding of the embodiments of the present application, technical terms related in the embodiments of the present application are explained below:
Machine learning model: a model generated by a training algorithm that reveals the relationship between input data and output data. In the embodiments of the present application, the machine learning model is generated by training an AI algorithm and is used to analyze video data of a monitoring area.
In the related art, the same machine learning model is used to analyze surveillance video data of different scenes; such a model generalizes poorly, cannot accommodate multiple scenes, and cannot accurately analyze surveillance video captured at different positions, in different environments, and from different angles. The embodiments of the present application provide solutions to this problem, described in detail below.
According to the embodiments of the present application, an embodiment of a data processing method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one described herein.
The method embodiments provided by the embodiments of the present application may be performed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of the hardware structure of a computer terminal (or mobile device) for implementing the data processing method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n), which may include but are not limited to a microcontroller (MCU), a programmable logic device such as an FPGA, or another processing device; a memory 104 for storing data; and a transmission module 106 for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the I/O interface ports), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". A data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements of the computer terminal 10 (or mobile device). As used in the embodiments of the present application, the data processing circuit acts as a kind of processor control (for example, selecting the path of a variable resistance terminal connected to an interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the data processing method in the embodiments of the present application. The processor 102 executes the software programs and modules stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the above data processing method. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is configured to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In another example, the transmission device 106 may be a radio frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Fig. 2 is a flowchart of a data processing method according to an embodiment of the present application, as shown in fig. 2, including the following steps:
step S202, sample data is obtained, wherein the sample data is monitoring video data of a target area.
In the embodiments of the present application, surveillance video data is analyzed by machine learning models so as to realize scene-based processing of the target area. In step S202, the surveillance video data of the target area is acquired as sample data, including surveillance video data from outside the target area, from inside the target area, of the target area facing the light, of the target area in backlight, and other related data.
Step S204, classifying the sample data according to the target area to obtain a plurality of groups of training data.
In step S204, the sample data acquired in step S202 is classified according to the target area to which it belongs. Sample data of the same target area is then classified according to the scene of the target area: for example, sample data captured outside the target area forms one class, sample data captured inside the target area forms one class, sample data of the target area facing the light forms one class, and sample data of the target area in backlight forms one class, yielding multiple groups of training data. For sample data of different target areas, the data is first grouped by target area and then classified by the different scenes of each target area, again yielding multiple groups of training data.
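The two-level classification described in step S204 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the field names `area` and `scene`, and scene labels such as `"indoor"` and `"backlight"`, are assumptions chosen to mirror the examples in the text.

```python
from collections import defaultdict

def group_training_data(samples):
    """Group sample data first by target area, then by scene within
    each area, producing one training-data group per (area, scene)."""
    groups = defaultdict(list)
    for s in samples:
        # each sample is assumed to carry its area and scene labels
        groups[(s["area"], s["scene"])].append(s)
    return dict(groups)
```

A monitoring deployment with two target areas would then yield one group per distinct (area, scene) pair, each group feeding the training of its own model version.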
Step S206, training the machine learning model correspondingly by utilizing each group of training data in the plurality of groups of training data respectively to obtain a plurality of trained machine learning models.
In step S206, the multiple groups of training data obtained in step S204 are used to train, for each group, the machine learning model applied to the scene of the target area to which that group belongs.
Step S208, storing the plurality of trained machine learning models in a database for the monitoring task of the target area to call.
In step S208, the machine learning models trained in step S206 for the different scenes are stored in a database, so that they can be conveniently invoked when the scene-based processing of the target area is performed.
Through the above steps, the processing of surveillance video data of multiple scenes by one machine learning model is converted into processing by multiple corresponding machine learning models. This enables fine-grained scheduling of algorithms, scenes, and video monitoring, improves the match between the algorithm and the monitoring scene, improves data-analysis accuracy, and mitigates the poor generalization capability of the AI algorithm.
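Steps S206 and S208 taken together amount to training one model per group and registering it for later lookup. The sketch below uses a plain dictionary as a stand-in for the database and an injected `train_fn` as a stand-in for the actual training routine; both are assumptions for illustration.

```python
def train_per_scene(groups, train_fn):
    """Train one model per training-data group (step S206) and store
    it in a registry keyed by (area, scene) (step S208)."""
    registry = {}
    for key, data in groups.items():
        # one specialised model version per (area, scene) group
        registry[key] = train_fn(data)
    return registry

def lookup_model(registry, area, scene):
    """Retrieve the model that a monitoring task for this area and
    scene should call."""
    return registry[(area, scene)]
```

A real deployment would replace the dictionary with the model database mentioned in the text and `train_fn` with the AI training pipeline.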
According to an alternative embodiment of the present application, classifying sample data according to a target area to obtain a plurality of sets of training data includes the following steps: determining an actual monitoring area of the sample data, and determining the accuracy of the sample data according to the actual monitoring area and a target area corresponding to the sample data; deleting the sample data with the accuracy rate smaller than a preset threshold value to obtain residual sample data; and taking sample data belonging to the same target area in the residual sample data as one group of training data to obtain a plurality of groups of training data.
In this embodiment, the training data is obtained as follows. Each piece of sample data acquired in step S202 carries a region identifier, from which the scene of the target area it belongs to is determined, as is the monitoring area for which the sample data was actually used in the scene-based processing of the target area. If the target area of the sample data matches the actual monitoring area, the sample data is considered correctly used, i.e., its accuracy is one hundred percent; if they do not match, its accuracy is zero. For sample data used multiple times, the percentage of uses in which it was correctly used is taken as its accuracy. The accuracy of each classified group of sample data is computed in this way; sample data whose accuracy is below a preset threshold is deleted, and sample data whose accuracy exceeds the threshold is retained as training data. For example, sample data with an accuracy below 80% is deleted, and sample data with an accuracy above 80% is retained as training data.
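The accuracy computation and threshold filtering just described can be sketched as follows. This is an illustrative reading of the embodiment: the per-sample usage counters `correct` and `used` are assumed bookkeeping, and the text leaves the boundary case of exactly 80% ambiguous (here a sample meeting the threshold is kept).

```python
def sample_accuracy(times_correct, times_used):
    """Fraction of uses in which the sample's labelled target area
    matched the monitoring area it was actually used for."""
    if times_used == 0:
        return 0.0
    return times_correct / times_used

def filter_samples(samples, threshold=0.8):
    """Delete samples below the preset threshold (0.8 mirrors the 80%
    example in the text) and keep the rest as training data."""
    return [s for s in samples
            if sample_accuracy(s["correct"], s["used"]) >= threshold]
```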
According to another optional embodiment of the present application, after obtaining the plurality of sets of training data, the data processing method further includes: and marking the plurality of sets of training data by utilizing a plurality of algorithm identifications, wherein the plurality of algorithm identifications have corresponding relations with a plurality of machine learning models.
In this embodiment, after the multiple groups of training data are obtained in step S204, each group is marked according to the machine learning model it will train. For example, training data for the machine learning model of the outdoor scene of the target area is marked as outdoor, and that model is correspondingly marked as the outdoor scene model; training data for the model of the indoor scene is marked as indoor, and that model is marked as the indoor scene model; training data for the model of the toward-light scene is marked as toward-light, and that model is marked as the toward-light scene model; training data for the model of the backlight scene is marked as backlight, and that model is marked as the backlight scene model. By marking both the training data and the scene, a scene's machine learning model is prevented from being trained with sample data that does not belong to that scene, which improves the training effect of the machine learning model.
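The correspondence between algorithm identifiers and models described above can be sketched as a lookup that rejects unmarked data. The identifier strings and model names below mirror the scenes named in the text but are illustrative, not taken from the embodiment.

```python
# Assumed mapping from algorithm identifier to the model it trains.
ALGORITHM_MODELS = {
    "outdoor": "outdoor_scene_model",
    "indoor": "indoor_scene_model",
    "toward_light": "toward_light_scene_model",
    "backlight": "backlight_scene_model",
}

def tag_training_data(groups):
    """Attach the algorithm identifier's model to each training-data
    group, so samples can only train the model they correspond to."""
    tagged = {}
    for algo_id, data in groups.items():
        if algo_id not in ALGORITHM_MODELS:
            # data without a registered identifier must not be used
            raise ValueError(f"no model registered for {algo_id!r}")
        tagged[algo_id] = {"model": ALGORITHM_MODELS[algo_id], "data": data}
    return tagged
```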
According to some preferred embodiments of the present application, after training the machine learning model by using each set of training data in the plurality of sets of training data, the data processing method includes the following steps: acquiring a plurality of algorithm packages corresponding to a plurality of target areas, and storing the plurality of algorithm packages into an algorithm warehouse, wherein the algorithm packages are obtained by training a machine learning model by using training data; the original machine learning model of a plurality of target areas is replaced by a trained machine learning model in an algorithm warehouse by utilizing a plurality of algorithm packages.
In some preferred embodiments, when a machine learning model is trained with training data, the algorithms are classified according to the scene corresponding to the model; the algorithms used to train each scene's model are packaged to generate the algorithm package corresponding to that model, and the algorithm packages are stored in an algorithm warehouse. The algorithm warehouse is responsible for upgrading the algorithm of the corresponding scene/point position: within the warehouse, the machine learning model of the corresponding scene of the target area is trained with the algorithm package, and the original model of each scene is replaced with the model trained from the package, thereby upgrading the algorithm. For example, the algorithm for training the indoor scene model is packaged into a single algorithm package, the original indoor scene model is trained with that package to obtain a trained indoor scene model, and the original model is replaced with the trained one. Likewise, the algorithms for training the outdoor scene model, the toward-light scene model, and the backlight scene model are each packaged into a single algorithm package and uploaded to the algorithm warehouse, where the corresponding original model is trained with its package and then replaced with the trained model. Packaging the algorithm for each machine learning model speeds up model training and prevents algorithms from being misused.
When an algorithm is upgraded, for a front-end intelligent scene, the algorithm warehouse sends the algorithm package directly to the corresponding front-end intelligent camera to upgrade its algorithm; for a center-side intelligent scene, the algorithm warehouse sends the algorithm package to the corresponding front-end code stream to upgrade the algorithm.
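The package-and-replace workflow of the algorithm warehouse can be sketched as below. This is a minimal sketch under stated assumptions: models and packages are represented as plain dictionaries, and `train_fn` stands in for the actual training routine applied inside the warehouse.

```python
class AlgorithmWarehouse:
    """Stores one algorithm package per scene and, on upgrade,
    replaces the scene's original model with the model trained
    from its package."""

    def __init__(self, original_model):
        # every scene starts from the same original model
        self.original_model = original_model
        self.packages = {}
        self.models = {}

    def upload_package(self, scene, package):
        self.packages[scene] = package

    def upgrade(self, scene, train_fn):
        """Train the original model with the scene's package and swap
        the trained model in for that scene."""
        package = self.packages[scene]
        self.models[scene] = train_fn(self.original_model, package)
        return self.models[scene]
```

Dispatching the package to a front-end camera or center-side stream would then be a delivery step layered on top of `upgrade`.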
According to an alternative embodiment of the present application, the original machine learning model corresponding to each of the plurality of target areas is the same.
That is, the original machine learning models of the different scenes mentioned in the above embodiment (the original outdoor, indoor, front-lit, and backlit scene models) are one and the same model; the trained machine learning models differ, each corresponding to a different scene, because they are obtained through different algorithm packages and different training data.
According to another preferred embodiment of the present application, after the sample data is acquired, the data processing method further comprises: determining the target area of the sample data and the accuracy of the sample data; and marking the sample data with the target area and the accuracy.
In this embodiment, after the sample data are obtained in step S202 and before they are classified, the target area to which each sample belongs and the accuracy of each sample are determined and marked on the sample data. This allows the samples to be classified by target area and also makes them easy to screen: samples whose accuracy is below a preset threshold are deleted, which improves the accuracy of the machine learning model.
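The marking-and-screening step above can be sketched as follows: each sample carries the target area it belongs to and an accuracy mark, samples below a preset threshold are deleted, and the rest are grouped into per-area training sets. The field names and the 0.8 threshold are illustrative assumptions.

```python
def build_training_sets(samples, threshold=0.8):
    """Group samples by their marked target area, screening out
    samples whose marked accuracy is below the preset threshold."""
    groups = {}
    for s in samples:
        if s["accuracy"] < threshold:
            continue  # delete low-accuracy sample data
        groups.setdefault(s["area"], []).append(s)
    return groups


# Usage: one training set per target area, low-accuracy samples removed.
samples = [
    {"area": "A", "accuracy": 0.95},
    {"area": "A", "accuracy": 0.40},  # screened out: below threshold
    {"area": "B", "accuracy": 0.85},
]
sets_by_area = build_training_sets(samples)
```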
According to other preferred embodiments of the present application, after the plurality of trained machine learning models are obtained, the data processing method further comprises: processing the monitoring video data of the plurality of target areas with the corresponding trained machine learning models; obtaining marking information sent by a target object, where the marking information indicates that monitoring video data processed by a trained machine learning model is not target monitoring video data, the target monitoring video data being the monitoring video data of the target area corresponding to that model; and determining the accuracy of the trained machine learning model according to the marking information.
In other preferred embodiments, after the original model of each scene has been replaced by the trained one, the user marks false alarms while applying the trained machine learning model; the system receives the user's marking information and generates accuracy information for the trained model. By monitoring the accuracy of the data analysed by the trained model and repeatedly retraining the model with the higher-accuracy data, the accuracy of the model's analysis is further improved.
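A minimal sketch of deriving a trained model's accuracy from the user's false-alarm marks, under the assumption that each processed clip is either accepted or marked as "not target monitoring video data", and that accuracy is simply the accepted fraction (the patent does not specify the formula):

```python
def model_accuracy(marks):
    """marks: one boolean per processed clip,
    True = the user marked the model's output as a false alarm.
    Returns the fraction of clips that were NOT false alarms,
    or None if no clips have been processed yet."""
    if not marks:
        return None
    false_alarms = sum(marks)
    return 1.0 - false_alarms / len(marks)


# Usage: 1 false alarm out of 4 processed clips -> accuracy 0.75.
acc = model_accuracy([False, False, True, False])
```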
Fig. 3 is a block diagram of a data processing apparatus according to an embodiment of the present application. As shown in Fig. 3, the apparatus includes: an acquisition module 30, configured to acquire sample data, where the sample data is monitoring video data of a target area; a classification module 32, configured to classify the sample data according to the target area to obtain multiple sets of training data; a training module 34, configured to correspondingly train the machine learning models with each set of training data in the multiple sets, obtaining a plurality of trained machine learning models; and a call module 36, configured to store the plurality of trained machine learning models in a database to be called by the monitoring tasks of the target areas.
Fig. 4 is a schematic diagram of the apparatus processing surveillance video data. As shown in Fig. 4, when the apparatus starts to operate, the acquisition module 30 acquires monitoring video data from cameras A through F, each monitoring its respective target area A through F, and stores the acquired data in a sample library as sample data, each sample being marked with the target area/scene it belongs to and its accuracy. The classification module 32 classifies the samples according to their target area/scene and deletes the samples whose accuracy is below a preset threshold, yielding multiple sets of training data. The training module 34 trains the same original machine learning model A1.0 with the multiple sets of training data, obtaining machine learning models A-1-2.0, A-2-2.0, and A-3-2.0 corresponding to different scenes. Finally, the call module 36 stores the models A-1-2.0, A-2-2.0, and A-3-2.0 in a database to be called by the different monitoring tasks of the target areas.
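The Fig. 4 flow can be sketched end to end under the same illustrative assumptions: a single original model "A1.0" is trained once per scene group of screened samples, and the trained models are stored in a "database" keyed by scene for monitoring tasks to call. The `train` stand-in (which tags the model string) and the field names are assumptions, not the patent's training procedure.

```python
def train(original, scene, data):
    # Stand-in for real training: tag the model with its scene and a
    # bumped version, mirroring A1.0 -> A-1-2.0 etc. in Fig. 4.
    return f"{original}:{scene}-2.0"


def pipeline(samples, original_model="A1.0", threshold=0.8):
    """Classify screened samples by scene, train the same original
    model per scene group, and return the resulting model database."""
    groups = {}
    for s in samples:
        if s["accuracy"] >= threshold:
            groups.setdefault(s["scene"], []).append(s)
    return {scene: train(original_model, scene, data)
            for scene, data in groups.items()}


# Usage: two scene groups survive screening, so two trained models result.
db = pipeline([
    {"scene": "A-1", "accuracy": 0.9},
    {"scene": "A-2", "accuracy": 0.9},
    {"scene": "A-3", "accuracy": 0.3},  # screened out before training
])
```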
It should be noted that, the preferred implementation manner of the embodiment shown in fig. 3 may refer to the related description of the embodiment shown in fig. 2, which is not repeated herein.
According to the method provided by the embodiments of the present application, the training samples are classified by scene at the source. By optimizing the performance of the basic algorithm model, an algorithm better suited to the project and video-surveillance scene can be trained and matched to the appropriate surveillance scene, so that the performance of the AI algorithm is exploited to the fullest, usability and accuracy are improved, and the poor generalization capability of the AI algorithm is mitigated, without adding system resource consumption.
The embodiment of the application also provides a nonvolatile storage medium, wherein the nonvolatile storage medium stores a program, and the program is used for controlling a device where the nonvolatile storage medium is located to execute the data processing method.
The above-described nonvolatile storage medium is used to store a program that performs the following functions: acquiring sample data, wherein the sample data is monitoring video data of a target area; classifying the sample data according to the target area to obtain a plurality of groups of training data; respectively utilizing each group of training data in the plurality of groups of training data to correspondingly train the machine learning model to obtain a plurality of trained machine learning models; and storing the plurality of trained machine learning models into a database for the monitoring task of the target area to call.
The embodiment of the application also provides electronic equipment, which comprises: the device comprises a memory and a processor, wherein the processor is used for running a program stored in the memory, and the program runs to execute the data processing method.
The processor in the electronic device is configured to execute a program that performs the following functions: acquiring sample data, wherein the sample data is monitoring video data of a target area; classifying the sample data according to the target area to obtain a plurality of groups of training data; respectively utilizing each group of training data in the plurality of groups of training data to correspondingly train the machine learning model to obtain a plurality of trained machine learning models; and storing the plurality of trained machine learning models into a database for the monitoring task of the target area to call.
Each module in the data processing apparatus may be a program module (for example, a set of program instructions implementing a specific function) or a hardware module. A hardware module may take, but is not limited to, the following forms: each module is implemented as its own processor, or the functions of several modules are implemented by one processor.
The foregoing embodiment numbers of the present application are for description only and do not imply that one embodiment is better or worse than another.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any part not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations shall also fall within the scope of protection of the present application.

Claims (10)

1. A method of processing data, comprising:
acquiring sample data, wherein the sample data is monitoring video data of a target area;
classifying the sample data according to the target area to obtain a plurality of groups of training data;
respectively utilizing each group of training data in the plurality of groups of training data to correspondingly train the machine learning model to obtain a plurality of trained machine learning models;
and storing the plurality of trained machine learning models into a database for the monitoring task of the target area to call.
2. The method of claim 1, wherein classifying the sample data according to the target region results in a plurality of sets of training data, comprising:
determining an actual monitoring area of the sample data, and determining the accuracy of the sample data according to the actual monitoring area and a target area corresponding to the sample data;
deleting the sample data with the accuracy rate smaller than a preset threshold value to obtain residual sample data;
and taking sample data belonging to the same target area in the residual sample data as one group of training data to obtain the plurality of groups of training data.
3. The method of claim 1, wherein after obtaining the plurality of sets of training data, the method further comprises:
and marking the plurality of sets of training data by using a plurality of algorithm identifications, wherein the plurality of algorithm identifications have corresponding relations with the plurality of machine learning models.
4. The method of claim 1, wherein after the machine learning model is trained with each set of training data in the plurality of sets of training data, the method further comprises:
acquiring a plurality of algorithm packages corresponding to a plurality of target areas, and storing the plurality of algorithm packages into an algorithm warehouse, wherein the algorithm packages are obtained by training the machine learning model by using the training data;
and replacing the original machine learning model of a plurality of target areas with the trained machine learning model in the algorithm warehouse by using a plurality of algorithm packages.
5. The method of claim 4, wherein the original machine learning model corresponding to each of a plurality of the target regions is the same.
6. The method of claim 1, wherein after obtaining the sample data, the method further comprises:
determining a target area of the sample data and an accuracy of the sample data;
and marking the sample data by utilizing the target area and the accuracy.
7. The method of claim 1, wherein after deriving a plurality of trained machine learning models, the method further comprises:
correspondingly processing the monitoring video data of a plurality of target areas by using a plurality of trained machine learning models;
obtaining marking information sent by a target object, wherein the marking information is used for indicating that monitoring video data processed by a trained machine learning model is not target monitoring video data, and the target monitoring video data is the monitoring video data of the target area corresponding to the machine learning model;
and determining the accuracy of the machine learning model after training according to the marking information.
8. A data processing apparatus, comprising:
the acquisition module is used for acquiring sample data, wherein the sample data is monitoring video data of a target area;
the classification module is used for classifying the sample data according to the target area to obtain a plurality of groups of training data;
the training module is used for correspondingly training the machine learning model by utilizing each group of training data in the plurality of groups of training data respectively to obtain a plurality of trained machine learning models;
and the calling module is used for storing the plurality of trained machine learning models into a database so as to be called by the monitoring task of the target area.
9. A non-volatile storage medium, wherein a program is stored in the non-volatile storage medium, and wherein the program, when executed, controls a device in which the non-volatile storage medium is located to perform the method of processing data according to any one of claims 1 to 7.
10. An electronic device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program is executed to perform the method of processing data according to any one of claims 1 to 7.
CN202211698890.9A 2022-12-28 2022-12-28 Data processing method and device, nonvolatile storage medium and electronic equipment Pending CN116206181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211698890.9A CN116206181A (en) 2022-12-28 2022-12-28 Data processing method and device, nonvolatile storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116206181A true CN116206181A (en) 2023-06-02

Family

ID=86512088

Country Status (1)

Country Link
CN (1) CN116206181A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination