CN114120192A - Multi-working-condition model automatic selection method and device based on video signal - Google Patents

Multi-working-condition model automatic selection method and device based on video signal

Info

Publication number
CN114120192A
Authority
CN
China
Prior art keywords
working condition
model
video signal
analyzed
pixel point
Prior art date
Legal status
Pending
Application number
CN202111409112.9A
Other languages
Chinese (zh)
Inventor
杨明明
李佳鹤
阮志坚
刘贤康
Current Assignee
Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Original Assignee
Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd filed Critical Zhejiang Lanzhuo Industrial Internet Information Technology Co ltd
Priority to CN202111409112.9A priority Critical patent/CN114120192A/en
Publication of CN114120192A publication Critical patent/CN114120192A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/04: Manufacturing
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing


Abstract

The invention provides a method and a device for automatically selecting a multi-working-condition model based on a video signal. A shot video signal to be analyzed is acquired in real time in the process of executing a service containing multiple working conditions; the video signal to be analyzed is analyzed based on a pre-established visual model to obtain the working condition model of the working condition corresponding to the video signal to be analyzed; and the working condition model is called for prediction to obtain predicted working condition parameters. According to the scheme, after a visual model is established in advance, the visual model is used to analyze the video signal to be analyzed, and after the working condition model of the working condition corresponding to the video signal to be analyzed is determined, the determined working condition model is called for working condition parameter prediction, so that the generalization and reliability of each working condition model are improved.

Description

Multi-working-condition model automatic selection method and device based on video signal
Technical Field
The invention relates to the technical field of signal processing, in particular to a multi-working-condition model automatic selection method and device based on video signals.
Background
Multi-working-condition scenes commonly exist in various industries. In simulation modeling, most approaches fuse the variations of the different working conditions into a single model in order to reduce model complexity.
In the prior art, under the characteristics of multiple working conditions, multiple environments and multiple stages, different models are fused to obtain one model. However, for scenes whose working conditions differ and whose stages vary greatly, reliability and precision are difficult to guarantee with only a single model, and such a method cannot be widely applied.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for automatically selecting a multi-condition model based on a video signal, so as to achieve the purpose of improving the generalization and reliability of each condition model.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
the first aspect of the embodiment of the invention discloses a method for automatically selecting a multi-working-condition model based on a video signal, which comprises the following steps:
acquiring a shot video signal to be analyzed in real time in the process of executing a service containing multiple working conditions;
analyzing the video signal to be analyzed based on a pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition;
and calling the working condition model for prediction to obtain predicted working condition parameters.
Preferably, the analyzing the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to the working condition includes:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image;
calculating the total pixel point area of the color image;
determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area; and the total pixel point area and the working condition model have a preset corresponding relation.
Preferably, the calling the operating condition model to predict to obtain the predicted operating condition parameter includes:
and calling the working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
Preferably, the process of pre-establishing a visual model includes:
adding marks to the monitoring areas in the video signals under different working conditions, and acquiring color images in the monitoring areas;
calculating three-component brightness in the color image, and taking the maximum value of the three-component brightness as the gray value of a gray scale image to obtain a gray scale image represented by pixel points;
performing mean filtering processing on the gray level image by using Kernel data to obtain a filtered gray level image;
based on a preset threshold value, carrying out image binarization processing on the filtered gray level image to obtain a gray level value of a pixel point of the filtered gray level image;
calculating the total pixel point area of all the gray values, and dividing the total pixel point area to obtain pixel point area ranges corresponding to the different types of working conditions;
determining working condition models corresponding to the different types of working conditions based on the pixel point area ranges corresponding to the different types of working conditions;
and establishing a visual model based on scheduling logic among the working condition models.
Preferably, if the working conditions include a no-load working condition, a continuous change working condition and a full-load working condition, the analyzing the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the working condition corresponding to the video signal to be analyzed includes:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image;
calculating the total pixel point area of the color image;
if the total pixel point area is smaller than R1, determining a working condition model of the video signal to be analyzed corresponding to a working condition as the no-load working condition model;
if the total pixel point area is larger than R1 and smaller than R2, determining the working condition model of the video signal to be analyzed corresponding to the working condition as the continuous change working condition model;
if the total pixel point area is larger than R2, determining the working condition model of the video signal to be analyzed corresponding to the working condition as the full-load working condition model; r1, R2 are positive integers, and R1 is less than R2.
Preferably, if the currently executed service including multiple working conditions is a rubber blanking amount monitoring service, where the working conditions include a no-load working condition, a continuous change working condition and a full-load working condition, the calling the working condition model to predict to obtain predicted working condition parameters includes:
if the working condition model of the video signal to be analyzed corresponding to the working condition is the no-load working condition model, calling a scheduling logic corresponding to the no-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the no-load working condition to obtain a predicted blanking amount A;
if the working condition model of the video signal to be analyzed corresponding to the working condition is the continuous change working condition model, calling a scheduling logic corresponding to the continuous change working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the continuous change working condition to obtain a predicted blanking amount B;
if the working condition model of the video signal to be analyzed corresponding to the working condition is the full-load working condition model, calling the scheduling logic corresponding to the full-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the full-load working condition and the working state parameters of a downstream extruder to obtain the predicted blanking amount C; A, B and C are positive integers.
The second aspect of the embodiment of the invention discloses a multi-working-condition model automatic selection device based on video signals, which comprises:
the acquisition module is used for acquiring a shot video signal to be analyzed in real time in the process of executing a service containing multiple working conditions;
the analysis module is used for analyzing the video signal to be analyzed based on a pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition;
and the prediction module is used for calling the working condition model to predict to obtain predicted working condition parameters.
Preferably, the analysis module is specifically configured to:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image; calculating the total pixel point area of the color image; determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area; and the total pixel point area and the working condition model have a preset corresponding relation.
Preferably, the prediction module is specifically configured to:
and calling the working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
Preferably, the method further comprises the following steps: building a module;
the building module comprises: the device comprises an acquisition unit, a graying unit, a mean filtering unit, an image binarization processing unit, a dividing unit, a determining unit and an establishing unit;
the acquisition unit is used for adding marks to the monitored areas in the video signals under different working conditions and acquiring color images in the monitored areas;
the graying unit is used for calculating three-component brightness in the color image, and taking the maximum value of the three-component brightness as the gray value of a gray image to obtain a gray image represented by pixel points;
the mean filtering unit is used for carrying out mean filtering processing on the gray level image by using Kernel data to obtain a filtered gray level image;
the image binarization processing unit is used for carrying out image binarization processing on the filtered gray level image based on a preset threshold value to obtain a gray level value of a pixel point of the filtered gray level image;
the dividing unit is used for calculating the total pixel point area of all the gray values and dividing the total pixel point area to obtain pixel point area ranges corresponding to the different types of working conditions;
the determining unit is used for determining the working condition models corresponding to the different types of working conditions based on the pixel point area ranges corresponding to the different types of working conditions;
and the establishing unit is used for establishing a visual model based on the scheduling logic between the working condition models.
Based on the above method and device for automatically selecting a multi-working-condition model based on a video signal provided by the embodiments of the present invention, the method includes: acquiring a shot video signal to be analyzed in real time in the process of executing a service containing multiple working conditions; analyzing the video signal to be analyzed based on a pre-established visual model to obtain a working condition model of the working condition corresponding to the video signal to be analyzed; and calling the working condition model for prediction to obtain predicted working condition parameters. According to the scheme, after a visual model is established in advance, the visual model is used to analyze the video signal to be analyzed, and after the working condition model of the working condition corresponding to the video signal to be analyzed is determined, the determined working condition model is called for working condition parameter prediction, so that the generalization and reliability of each working condition model are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for automatically selecting a multi-condition model based on a video signal according to an embodiment of the present invention;
fig. 2 is a schematic flowchart illustrating an analysis process of a video signal to be analyzed according to an embodiment of the present invention;
fig. 3 is an interaction diagram of automatic selection of a multi-condition model based on a video signal according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of pre-building a visual model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a calculation formula of Kernel data according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a working condition model for determining a working condition corresponding to a video signal to be analyzed according to an embodiment of the present invention;
fig. 7 is a schematic flow chart illustrating a process of calling a working condition model for prediction in a rubber blanking amount monitoring service according to an embodiment of the present invention;
FIG. 8 is a schematic view of a rubber process flow provided in an embodiment of the present invention;
FIG. 9 is a characteristic diagram of a neural network modeling under a full-load condition according to an embodiment of the present invention;
FIGS. 10(a) and 10(b) are comparative diagrams of a rubber blanking conveyor belt under no-load condition and full-load condition according to an embodiment of the present invention;
FIGS. 11(a) and 11(b) are schematic diagrams of the effects of a rubber blanking conveyor belt before and after machine vision processing under the no-load working condition and the full-load working condition according to an embodiment of the invention;
fig. 12 is a schematic structural diagram of an automatic selection apparatus for a multi-condition model based on a video signal according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of another automatic selection apparatus for a multi-condition model based on a video signal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
As known from the background art, in the prior art, for scenes with different working conditions and large differences between stages, only a single model is used for prediction; its reliability and precision are difficult to guarantee, and such a method cannot be widely applied.
In the scheme, after a visual model is established in advance, the visual model is used to analyze the video signal to be analyzed, and after the working condition model of the working condition corresponding to the video signal to be analyzed is determined, the determined working condition model is called for working condition parameter prediction, so that the generalization and reliability of each working condition model are improved.
As shown in fig. 1, a schematic flow chart of a method for automatically selecting a multi-condition model based on a video signal according to an embodiment of the present invention is provided, and the method mainly includes the following steps:
step S101: and acquiring the shot video signal to be analyzed in real time in the process of executing the business containing various working conditions.
In the process of implementing step S101 specifically, when the service processing is performed, the service may include multiple working conditions, and in the process of executing the service including the multiple working conditions, the image acquisition device is used to acquire the shot video signal to be analyzed in real time.
It should be noted that the industrial personal computer can be used to execute services including various working conditions, and is connected with the image acquisition device.
Preferably, the image acquisition device is connected to the industrial personal computer through a network, corresponding drivers are installed and debugged, and after the debugging is successful, the image acquisition device normally displays a monitoring picture (a shot video signal to be analyzed) and stores the monitoring picture in the local industrial personal computer in real time.
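For illustration, the real-time acquisition and local storage described above can be sketched as follows; this is a minimal Python sketch assuming OpenCV is available on the industrial personal computer, and the RTSP address and save path are hypothetical placeholders rather than values from this disclosure:

```python
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.64:554/stream1"  # hypothetical camera address
SAVE_PATH = "/data/monitor/latest_frame.jpg"                 # hypothetical local storage path

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("failed to open the camera stream")

while True:
    ok, frame = cap.read()         # frame: BGR color image, i.e. the video signal to be analyzed
    if not ok:
        break
    cv2.imwrite(SAVE_PATH, frame)  # store the monitoring picture on the industrial personal computer
cap.release()
```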
Optionally, the hardware configuration of the industrial personal computer is shown in table 1.
Table 1:
CPU: i5-7500
Memory: 8 GB
Hard disk: 1 TB
Network card: dual network card
Operating system: Ubuntu 16.04
In the embodiment of the present invention, the image capturing apparatus includes, but is not limited to, a camera device.
Note that the image pickup apparatus includes, but is not limited to, an apparatus having a photographing function, such as a video camera.
Optionally, in the embodiment of the present invention, the camera is a network camera, and the selected model is a warrior great wall camera.
Step S102: and analyzing the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to the working condition.
In the process of implementing step S102 specifically, a visual model is established in advance, the video signal to be analyzed is input to the visual model established in advance for analysis, a working condition model of the video signal to be analyzed corresponding to the working condition is obtained, and the working condition model of the video signal to be analyzed corresponding to the working condition is output.
Optionally, the step S102 is executed to analyze the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the working condition corresponding to the video signal to be analyzed, as shown in fig. 2, a schematic flow diagram for analyzing the video signal to be analyzed provided in the embodiment of the present invention mainly includes the following steps:
step S201: and carrying out image processing on the video signal to be analyzed based on the pre-established visual model to obtain a color image.
In the process of implementing step S201 specifically, a visual model is established in advance, and a video signal to be analyzed is input to the visual model established in advance for image processing, so as to obtain a color image.
Step S202: and calculating the total pixel point area of the color image.
In the process of implementing step S202, the three-component brightness in the color image is calculated first, and the maximum value of the three-component brightness is used as the gray value of the gray scale image, so as to obtain the gray scale image represented by the pixel points.
Next, the Kernel data is used to perform an average filtering process on the grayscale image, so as to obtain a filtered grayscale image.
Then, based on a preset threshold value, carrying out image binarization processing on the filtered gray level image to obtain the gray level value of the pixel point of the filtered gray level image.
And finally, calculating the total pixel area of all gray values to further obtain the total pixel area of the color image.
Step S203: and determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area.
In step S203, a preset corresponding relationship exists between the total pixel area and the operating condition model.
In the process of implementing step S203, the operating condition model of the video signal to be analyzed corresponding to the operating condition is determined according to the preset corresponding relationship between the total pixel area of the color image and the operating condition model.
Step S103: and calling the working condition model for prediction to obtain predicted working condition parameters.
In the process of implementing step S103 specifically, the output working condition model of the video signal to be analyzed corresponding to the working condition is called, and the working condition parameter of the working condition model of the video signal to be analyzed corresponding to the working condition is predicted to obtain the predicted working condition parameter.
Optionally, the step S103 is executed to call the operating condition model to perform prediction to obtain a process of the predicted operating condition parameter, including:
and calling a working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
For better understanding of the above description, fig. 3 is an interactive diagram for automatic selection of a multi-condition model based on a video signal according to an embodiment of the present invention.
In fig. 3, firstly, in the process of executing a service including multiple working conditions, a shot video signal to be analyzed is obtained in real time, then, a pre-established visual model is used to analyze the video signal to be analyzed, a working condition model of the video signal to be analyzed corresponding to the working conditions is obtained, then, the working condition model is called to predict, a predicted working condition parameter is obtained, and the predicted working condition parameter is output.
Based on the method for automatically selecting the multi-working-condition model based on the video signal provided by the embodiment of the invention, the shot video signal to be analyzed is acquired in real time in the process of executing the service containing multiple working conditions; the video signal to be analyzed is analyzed based on a pre-established visual model to obtain a working condition model of the working condition corresponding to the video signal to be analyzed; and the working condition model is called for prediction to obtain predicted working condition parameters. According to the scheme, after a visual model is established in advance, the visual model is used to analyze the video signal to be analyzed, and after the working condition model of the working condition corresponding to the video signal to be analyzed is determined, the determined working condition model is called for working condition parameter prediction, so that the generalization and reliability of each working condition model are improved.
Based on the method for automatically selecting a multi-condition model based on a video signal provided by the embodiment of the present invention, a process of analyzing the video signal to be analyzed based on a pre-established visual model in step S102 to obtain a condition model of the video signal to be analyzed corresponding to a condition is executed, as shown in fig. 4, a schematic flow diagram of the pre-established visual model provided by the embodiment of the present invention mainly includes the following steps:
step S401: and adding marks to monitoring areas in the video signals under different working conditions, and acquiring color images in the monitoring areas.
In step S401, the monitoring area refers to a designated area in the video signal, and may refer to a carrier.
In the specific implementation process of step S401, video signals of different working conditions are acquired, and monitored areas in the video signals of different working conditions are marked, so as to eliminate other environmental interference factors, and further acquire color images in the monitored areas.
Step S402: and calculating the three-component brightness in the color image, and taking the maximum value of the three-component brightness as the gray value of the gray image to obtain the gray image represented by the pixel points.
In the embodiment of the present invention, the grayscale image is an image whose pixel values lie in [0, 255], obtained by converting the original image (color image).
In the process of specifically implementing step S402, the three-component brightness in the color image is calculated, the obtained values of all the three-component brightness are compared to obtain the maximum value of the three-component brightness, and the maximum value of the three-component brightness is used as the gray value of the gray scale image, so as to enhance the contrast between the target element and the surface of the bearing object, and obtain the gray scale image represented by the pixel points.
Specifically, the calculation formula of the three-component luminance in the color image is as follows:
Gray(i,j) = max{R(i,j), G(i,j), B(i,j)},    (1),
wherein Gray(i, j) represents the calculated gray value of the pixel point; R(i, j) represents the red brightness value of the original pixel point; G(i, j) represents the green brightness value of the original pixel point; and B(i, j) represents the blue brightness value of the original pixel point.
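For illustration, formula (1) can be applied to a whole frame with a minimal Python sketch, assuming the color image is held as a NumPy array of shape (H, W, 3); the sketch is not part of the original disclosure:

```python
import numpy as np

def to_gray_max(color_img: np.ndarray) -> np.ndarray:
    """Formula (1): Gray(i, j) = max{R(i, j), G(i, j), B(i, j)} for every pixel."""
    # taking the channel-wise maximum enhances the contrast between the target
    # element and the surface of the bearing object, as described above
    return color_img.max(axis=2).astype(np.uint8)
```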
Step S403: and performing mean filtering processing on the gray level image by using Kernel data to obtain a filtered gray level image.
In step S403, an average filtering process is performed to remove image noise.
The Kernel data is a 3 × 3 matrix, and the calculation is given by formula (2) and formula (3) (a schematic of the Kernel calculation is shown in fig. 5):
g(i,j) = (1/9) · Σ_{s=-1..1} Σ_{t=-1..1} f(i+s, j+t),    (2),
Kernel = (1/9) · [1 1 1; 1 1 1; 1 1 1],    (3),
wherein f(i, j) is the original (pre-filtering) image and g(i, j) is the filtered gray image.
in the process of implementing step S403 specifically, each pixel point of the grayscale image is calculated from left to right and from top to bottom by using Kernel data, and the filtered grayscale image is obtained according to each pixel point of all the grayscale images.
Step S404: and based on a preset threshold value, carrying out image binarization processing on the filtered gray level image to obtain the gray level value of the pixel point of the filtered gray level image.
In the embodiment of the present invention, the threshold may be represented by threshold, and the preset threshold is 160, but is not limited thereto.
In the process of specifically implementing step S404, a threshold is set, and image binarization processing is performed on the filtered grayscale image by using the set threshold, so as to further distinguish the target elements from the surface of the support object and other interference and to effectively screen out the pixel points of the target elements in the grayscale image; the gray value of these pixel points in the binarized image is set to 255.
The calculation formula for performing image binarization processing on the filtered grayscale image by using the set threshold is as follows:
B(i,j) = 255 if g(i,j) > threshold, otherwise B(i,j) = 0,    (4),
wherein g(i, j) is the filtered gray image and B(i, j) is the binarized image.
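A minimal sketch of the binarization of formula (4), assuming the preset threshold of 160; illustrative only:

```python
import numpy as np

def binarize(filtered: np.ndarray, threshold: int = 160) -> np.ndarray:
    """Formula (4): pixels brighter than the threshold become 255 (target element), the rest 0."""
    return np.where(filtered > threshold, 255, 0).astype(np.uint8)
```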
step S405: and calculating the total pixel point area of all the gray values, and dividing the total pixel point area to obtain pixel point area ranges corresponding to different types of working conditions.
In the process of implementing step S405 specifically, the total pixel area of all gray values of 255 is calculated, and the total pixel area is divided to obtain pixel area ranges corresponding to different types of operating conditions.
Alternatively, the different types of operating conditions include, but are not limited to, no-load conditions, continuously variable conditions, and full-load conditions.
Optionally, in the embodiment of the present invention, the pixel point area ranges corresponding to the no-load working condition, the continuous change working condition and the full-load working condition are (0, R1), (R1, R2) and (R2, ∞), respectively.
Step S406: and determining the working condition models corresponding to the different types of working conditions respectively based on the pixel point area ranges corresponding to the different types of working conditions.
In the process of implementing step S406 specifically, if the total pixel point area of the color image in the video signal to be analyzed currently lies in (0, R1), the working condition model of the working condition corresponding to the video signal to be analyzed is determined to be the no-load working condition model; if the total pixel point area of the color image in the video signal to be analyzed lies in (R1, R2), the working condition model of the working condition corresponding to the video signal to be analyzed is determined to be the continuous change working condition model; and if the total pixel point area of the color image in the video signal to be analyzed lies in (R2, ∞), the working condition model of the working condition corresponding to the video signal to be analyzed is determined to be the full-load working condition model.
Step S407: and establishing a visual model based on scheduling logic between the working condition models.
In step S407, the scheduling logic of the working condition model refers to calling the working condition model corresponding to the working condition in the pixel area range according to the pixel area range where the total pixel area of the color image in the video signal to be analyzed is located.
In the process of implementing step S407, if the total pixel point area of the color image in the video signal to be analyzed falls in the range (0, R1), and the working condition corresponding to (0, R1) is the no-load working condition, the working condition model of the no-load working condition (no-load working condition model) is called; if the total pixel point area falls in (R1, R2), and the working condition corresponding to (R1, R2) is the continuous change working condition, the working condition model of the continuous change working condition (continuous change working condition model) is called; if the total pixel point area falls in (R2, ∞), and the working condition corresponding to (R2, ∞) is the full-load working condition, the working condition model of the full-load working condition (full-load working condition model) is called; the visual model is built according to the above.
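The scheduling logic can be sketched as follows, assuming R1 and R2 were obtained when the total pixel point area was divided in step S405; the concrete threshold values in the usage comment are illustrative only:

```python
import numpy as np

def total_pixel_area(binary: np.ndarray) -> int:
    # total pixel point area = number of pixels whose gray value is 255
    return int(np.count_nonzero(binary == 255))

def select_condition_model(area: int, r1: int, r2: int) -> str:
    # (0, R1) -> no-load model, (R1, R2) -> continuous change model, (R2, inf) -> full-load model
    if area < r1:
        return "no_load_model"
    if area < r2:
        return "continuous_change_model"
    return "full_load_model"

# usage with illustrative thresholds:
# model_name = select_condition_model(total_pixel_area(binary_image), r1=50_000, r2=400_000)
```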
According to the method for automatically selecting the multi-working-condition model based on the video signal provided by the embodiment of the invention, the visual model is obtained by performing visual model training on the video signals under different working conditions, so that the video signal to be analyzed can subsequently be analyzed by using the visual model, and the generalization and reliability of each working condition model are improved.
Based on the above-mentioned multi-condition model automatic selection method based on video signals provided by the embodiment of the present invention, if the working conditions include no-load working conditions, continuously changing working conditions and full-load working conditions, a process of analyzing the video signals to be analyzed based on a pre-established visual model in step S102 to obtain a working condition model of the video signals to be analyzed corresponding to the working conditions is performed, as shown in fig. 6, a flow diagram of determining the working condition model of the video signals to be analyzed corresponding to the working conditions provided by the embodiment of the present invention is mainly provided, which mainly includes the following steps:
step S601: and carrying out image processing on the video signal to be analyzed based on the pre-established visual model to obtain a color image.
In the process of implementing step S601 specifically, a video signal to be analyzed is input to the established visual model for image processing, so as to obtain a color image.
Step S602: and calculating the total pixel point area of the color image.
In the process of implementing step S602 specifically, the three-component luminance in the color image is calculated first, and the maximum value of the three-component luminance is taken as the gray value of the gray scale map, so as to obtain the gray scale image represented by the pixel points.
Next, the Kernel data is used to perform an average filtering process on the grayscale image, so as to obtain a filtered grayscale image.
Then, based on a preset threshold value, carrying out image binarization processing on the filtered gray level image to obtain the gray level value of the pixel point of the filtered gray level image.
And finally, calculating the total pixel area of all gray values to further obtain the total pixel area of the color image.
Step S603: and judging whether the total pixel area is smaller than R1, if so, executing step S604, and if not, executing step S605.
In step S603, R1 is a positive integer.
In the process of implementing step S603 specifically, it is determined whether the total pixel area is smaller than R1, if so, it is determined that the total pixel area of the color image is smaller than R1, step S604 is executed, and if not, it is determined that the total pixel area of the color image is larger than R1 but the total pixel area of the color image may be smaller than R2, step S605 is executed.
Step S604: and determining the working condition model of the video signal to be analyzed corresponding to the working condition as an idle working condition model.
In the process of implementing step S604, the total pixel point area of the color image obtained by calculation is compared with R1, and it is determined that the total pixel point area of the color image is smaller than R1. It can be known from the above visual model that the total pixel point area of the color image lies in (0, R1), so the working condition corresponding to the video signal to be analyzed can be determined to be the no-load working condition, and the working condition model of the working condition corresponding to the video signal to be analyzed can further be determined to be the no-load working condition model.
Step S605: whether the total pixel area is larger than R1 and smaller than R2 is determined, if yes, step S606 is executed, and if not, step S607 is executed.
In step S605, R2 is a positive integer, and R1 is less than R2.
In the process of implementing step S605 specifically, it is determined whether the total pixel area is greater than R1 and less than R2, if so, it indicates that the total pixel area of the color image is greater than R1 and less than R2, step S606 is executed, otherwise, it indicates that the total pixel area of the color image is greater than R2, and step S607 is executed.
Step S606: and determining the working condition model of the video signal to be analyzed corresponding to the working condition as a continuous change working condition model.
In the process of implementing step S606, the total pixel point area of the color image obtained by calculation is compared with R1 and R2, and it is determined that the total pixel point area of the color image is larger than R1 and smaller than R2. It can be known from the above visual model that the total pixel point area of the color image lies in (R1, R2), so the working condition corresponding to the video signal to be analyzed can be determined to be the continuous change working condition, and the working condition model of the working condition corresponding to the video signal to be analyzed can further be determined to be the continuous change working condition model.
Step S607: whether the total pixel area is larger than R2 is determined, if yes, step S608 is executed, and if no, step S603 is executed.
In the process of implementing step S607, it is determined whether the total pixel area is greater than R2, if so, it indicates that the total pixel area of the color image is greater than R2, step S608 is executed, otherwise, it indicates that the total pixel area of the color image is less than R2, and step S603 is executed.
Step S608: and determining the working condition model of the video signal to be analyzed corresponding to the working condition as a full-load working condition model.
In the process of implementing step S608 specifically, the total pixel point area of the color image obtained by calculation is compared with R1 and R2, and it is determined that the total pixel point area of the color image is greater than R2. It can be known from the above visual model that the total pixel point area of the color image lies in (R2, ∞), so the working condition corresponding to the video signal to be analyzed can be determined to be the full-load working condition, and the working condition model of the working condition corresponding to the video signal to be analyzed can further be determined to be the full-load working condition model.
According to the method for automatically selecting the multi-working-condition model based on the video signal provided by the embodiment of the invention, the video signal to be analyzed is analyzed by using the visual model, the working condition of the video signal to be analyzed is determined, and then the working condition model corresponding to that working condition is determined, so that the generalization and reliability of each working condition model are improved.
Based on the method for automatically selecting a multi-working-condition model based on a video signal provided by the embodiment of the invention, if a currently executed service containing multiple working conditions is a rubber blanking amount monitoring service, and the working conditions comprise an idle working condition, a continuous change working condition and a full load working condition, a process of calling a working condition model to predict in step S103 to obtain a predicted working condition parameter is executed, as shown in fig. 7, a flow diagram for calling the working condition model to predict in the rubber blanking amount monitoring service provided by the embodiment of the invention mainly comprises the following steps:
step S701: and judging a working condition model of the video signal to be analyzed corresponding to a working condition, if the working condition model is an idle working condition model, executing step S702, if the working condition model is a continuous change working condition model, executing step S703, and if the working condition model is a full load working condition model, executing step S704.
Step S702: and calling a scheduling logic corresponding to the no-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the no-load working condition to obtain the predicted blanking amount A.
In step S702, a is a positive integer.
In the examples of the present invention, A was 0 kg/h.
It should be noted that, under the no-load working condition, no raw material exists on the conveyor belt, at this time, the blanking rate approaches 0kg/h, at this time, only the black surface of the conveyor belt can be seen in the video signal, and almost no rubber raw material can be monitored.
In the process of implementing the step S702 specifically, it is known that the determined operating condition model corresponding to the operating condition of the video signal to be analyzed is the no-load operating condition model, and the operating condition corresponding to the video signal to be analyzed is the no-load operating condition, and after obtaining the total pixel point area corresponding to the no-load operating condition, the no-load operating condition model is called, and the total pixel point area corresponding to the no-load operating condition is used to predict the blanking amount of the rubber blanking conveyor belt, so as to obtain the predicted blanking amount of 0 kg/h.
Step S703: and calling a scheduling logic corresponding to the continuous change working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the continuous change working condition to obtain a predicted blanking amount B.
In step S703, B is a positive integer.
In the embodiment of the invention, B is 0-6000 kg/h.
It should be noted that the continuous change working condition occurs in continuous rubber production: because upstream production is unstable, the blanking amount on the conveyor belt fluctuates continuously, sometimes increasing and sometimes decreasing, and the blanking speed ranges from 0 to 6000 kg/h. The conveyor belt has a vibration function, so the raw material is spread evenly on the conveyor belt but does not completely cover the belt surface, and the bare belt surface is in sharp contrast with the white rubber raw material.
In the process of specifically implementing the step S703, it is known that the determined operating condition model corresponding to the operating condition of the video signal to be analyzed is the continuously-changing operating condition model, and the operating condition corresponding to the video signal to be analyzed is the continuously-changing operating condition, and after the total pixel point area corresponding to the continuously-changing operating condition is obtained, the continuously-changing operating condition model is called, and the total pixel point area corresponding to the continuously-changing operating condition is used for predicting the blanking amount of the rubber blanking conveyor belt, so that the predicted blanking amount is 0-6000 kg/h.
Optionally, the total pixel point area corresponding to the continuous change working condition is utilized, the blanking amount of the rubber blanking conveyor belt is predicted according to a pre-established linear model, and the predicted blanking amount is 0-6000 kg/h.
The formula for the linear model is as follows:
f(x_i) = ω·x_i + b,    (5),
wherein x_i is the calculated pixel point area and f(x_i) is the predicted blanking amount.
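A minimal sketch of formula (5), assuming the coefficients ω and b have been fitted offline from labelled samples; the numerical values are illustrative only:

```python
def predict_blanking_rate(pixel_area: float, omega: float, b: float) -> float:
    """Formula (5): f(x_i) = omega * x_i + b, mapping pixel point area to blanking amount (kg/h)."""
    return omega * pixel_area + b

# illustrative coefficients only
rate_kg_per_h = predict_blanking_rate(pixel_area=120_000, omega=0.045, b=50.0)
```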
Step S704: and calling a scheduling logic corresponding to the full-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the full-load working condition and the working state parameters of the downstream pressurizing machine to obtain the predicted blanking amount C.
In step S704, C is a positive integer.
In the embodiment of the invention, C is 6000-8000 kg/h.
It should be noted that the full-load working condition occurs in continuous rubber production: after upstream production enters a stable state, the blanking amount on the conveyor belt also enters the full-load state and stabilizes, and the blanking speed ranges from 6000 to 8000 kg/h. The raw material completely covers the surface of the conveyor belt, the bare belt surface can barely be seen, and the material thickness keeps accumulating; the process is stopped after a certain degree of accumulation is reached.
When material is discharged onto the conveyor belt, the blanking amount ranges from 0 to 7 t/h; the material starts to pile up when the blanking amount exceeds about 5.5 t/h, and the maximum degree of piling is reached at 7 t/h.
In the process of implementing step S704 specifically, it is known that the determined working condition model corresponding to the working condition of the video signal to be analyzed is the full-load working condition model, and the working condition corresponding to the video signal to be analyzed is the full-load working condition. After the total pixel point area corresponding to the full-load working condition is obtained, the full-load working condition model is called, and the total pixel point area corresponding to the full-load working condition and the working state parameters of the downstream extruder are used to predict the blanking amount of the rubber blanking conveyor belt, so as to obtain a predicted blanking amount of 6000 to 8000 kg/h.
It should be noted that the extruder operating condition parameters include, but are not limited to, extruder current level, grinding head pressure, and inner wall temperature.
And establishing a mapping relation between the working state parameters of the extruder and the blanking amount on the conveying belt through a neural network training model.
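A minimal sketch of such a mapping, assuming scikit-learn is used for the neural network training model; the feature values, labels and network size below are illustrative and not data from this disclosure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# each row: [extruder current, grinding head pressure, inner wall temperature]
X_train = np.array([
    [310.0, 1.8, 85.0],
    [325.0, 2.1, 88.0],
    [340.0, 2.4, 90.0],
])
y_train = np.array([6200.0, 7100.0, 7900.0])  # blanking amount on the conveyor belt, kg/h

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

predicted = model.predict(np.array([[330.0, 2.2, 89.0]]))  # predicted blanking amount, kg/h
```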
It should be noted that, in order to better understand the above description, the rubber process flow is explained below.
FIG. 8 is a schematic view of a rubber process flow provided by an embodiment of the present invention.
In fig. 8, the rubber process mainly includes a feed opening, a vibrating screen and an extruder, wherein the feed opening is used for feeding, the raw material can enter the vibrating screen, and then the raw material on the vibrating screen can directly enter the extruder for compression and water filtration.
Fig. 9 is a characteristic diagram of a neural network modeling under a full-load condition according to an embodiment of the present invention.
In fig. 9, after the characteristics of the process mechanism and the equipment operation analysis are extracted, the extruder current, the grinding head pressure, the inner wall temperature and the like are taken as key parameters, the real-time blanking amount is fitted and predicted through a neural network algorithm, a corresponding model is established, and the online monitoring under the full load condition is realized.
Fig. 10(a) and 10(b) are comparative diagrams of a rubber blanking conveyor belt under an empty working condition and a full working condition according to an embodiment of the present invention, and fig. 11(a) and 11(b) are schematic diagrams of effects before and after machine vision processing of the rubber blanking conveyor belt under the empty working condition and the full working condition according to the embodiment of the present invention.
Fig. 10(a) is a state diagram of the rubber blanking conveyor belt under no-load condition, in no-load condition, there is no material on the conveyor belt, at this time, the blanking rate is close to 0kg/h, at this time, only the black surface of the conveyor belt can be seen in the video signal, almost no rubber material can be monitored, and the effect is as shown in fig. 11 (a).
FIG. 10(b) is a state diagram of the rubber blanking conveyor belt under the full-load working condition. Under the full-load working condition, after upstream production enters a stable state, the blanking amount on the conveyor belt also enters the full-load state and stabilizes, and the blanking speed ranges from 6000 to 8000 kg/h. The raw material completely covers the surface of the belt, the bare belt surface is barely visible, and the thickness continues to build up, with the effect shown in fig. 11(b).
According to the method for automatically selecting the multi-working-condition model based on the video signal provided by the embodiment of the invention, the video signal to be analyzed of the rubber blanking amount monitoring service under the no-load working condition, the continuous change working condition or the full-load working condition is processed, and the working condition model of the working condition corresponding to the determined video signal to be analyzed is called to predict the working condition parameters, so that the generalization and reliability of each working condition model are improved.
Corresponding to the method for automatically selecting a multi-condition model based on a video signal shown in fig. 1 in the embodiment of the present invention, an embodiment of the present invention further provides an apparatus for automatically selecting a multi-condition model based on a video signal, as shown in fig. 12, the apparatus for automatically selecting a multi-condition model based on a video signal includes: an acquisition module 1201, an analysis module 1202, and a prediction module 1203.
The obtaining module 1201 is configured to obtain a captured video signal to be analyzed in real time during a service process including multiple working conditions.
The analysis module 1202 is configured to analyze the video signal to be analyzed based on a pre-established visual model, so as to obtain a working condition model corresponding to a working condition of the video signal to be analyzed.
And a prediction module 1203, configured to invoke the operating condition model to perform prediction, so as to obtain a predicted operating condition parameter.
Optionally, based on the analysis module 1202 shown in fig. 12, the analysis module 1202 is specifically configured to:
performing image processing on a video signal to be analyzed based on a pre-established visual model to obtain a color image; calculating the total pixel point area of the color image; determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area; the total pixel point area and the working condition model have a preset corresponding relation.
Optionally, based on the prediction module 1203 shown in fig. 12, the prediction module 1203 is specifically configured to:
and calling a working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
It should be noted that, the specific principle and the implementation process of each module or each unit in the video signal-based multi-condition model automatic selection apparatus disclosed in the above embodiment of the present invention are the same as the method for implementing the video signal-based multi-condition model automatic selection according to the above embodiment of the present invention, and reference may be made to the corresponding parts in the video signal-based multi-condition model automatic selection method disclosed in the above embodiment of the present invention, which are not described herein again.
The apparatus for automatically selecting a multi-working-condition model based on a video signal provided by the embodiment of the invention acquires, in real time, a captured video signal to be analyzed during execution of a service including multiple working conditions; analyzes the video signal to be analyzed based on a pre-established visual model to obtain the working condition model corresponding to the working condition of the video signal to be analyzed; and invokes the working condition model for prediction to obtain predicted working condition parameters. According to this scheme, after the visual model is established in advance, it is used to analyze the video signal to be analyzed; once the working condition model corresponding to the working condition of the video signal to be analyzed is determined, that model is invoked for working condition parameter prediction, thereby improving the generalizability and reliability of each working condition model.
Optionally, in combination with the apparatus shown in fig. 12, as shown in fig. 13, the apparatus for automatically selecting a multi-working-condition model based on a video signal further includes a building module 1304.
The building module 1304 includes an acquisition unit, a graying unit, a mean filtering unit, an image binarization processing unit, a dividing unit, a determining unit, and an establishing unit; a code sketch of this building flow is given after the unit descriptions.
The acquisition unit is configured to add marks to the monitored areas in the video signals under different working conditions and to acquire color images of the monitored areas.
The graying unit is configured to calculate the three-component brightness of the color image and to take the maximum of the three component brightnesses as the gray value, so as to obtain a grayscale image represented by pixel points.
The mean filtering unit is configured to perform mean filtering on the grayscale image by using Kernel data, so as to obtain a filtered grayscale image.
The image binarization processing unit is configured to perform image binarization on the filtered grayscale image based on a preset threshold, so as to obtain the gray values of the pixel points of the filtered grayscale image.
The dividing unit is configured to calculate the total pixel point area of all the gray values and to divide the total pixel point area, so as to obtain pixel point area ranges corresponding to the different types of working conditions.
The determining unit is configured to determine the working condition models corresponding to the different types of working conditions based on the pixel point area ranges corresponding to those working conditions.
The establishing unit is configured to establish the visual model based on the scheduling logic among the working condition models.
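The building flow of these units can be sketched as follows; the area computation reuses a preprocessing function such as the one sketched earlier, the range-widening margin is an assumption, and the scheduling logic between the working condition models is left abstract:

import numpy as np

def build_area_ranges(labelled_frames, compute_area, margin=0.1):
    # labelled_frames: {condition name: list of sample frames of the monitored area}
    # compute_area: function returning the total pixel point area of a frame
    ranges = {}
    for condition, frames in labelled_frames.items():
        areas = np.array([compute_area(f) for f in frames], dtype=float)
        low, high = areas.min(), areas.max()
        span = max(high - low, 1.0)
        # Widen each range slightly so borderline frames still fall inside it.
        ranges[condition] = (low - margin * span, high + margin * span)
    return ranges

def build_visual_model(area_ranges, condition_models):
    # The "visual model" here is simply the pairing of per-condition area ranges
    # with the corresponding working condition models and their scheduling logic.
    return {"ranges": area_ranges, "models": condition_models}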
According to the apparatus for automatically selecting a multi-working-condition model based on a video signal provided by the embodiment of the invention, the visual model is obtained by training on video signals under different working conditions, and the video signal to be analyzed is subsequently analyzed by using this visual model, thereby improving the generalizability and reliability of each working condition model.
Optionally, if the working conditions include a no-load working condition, a continuously changing working condition, and a full-load working condition, the analysis module 1202 shown in fig. 12 is specifically configured to:
perform image processing on the video signal to be analyzed based on the pre-established visual model to obtain a color image; calculate the total pixel point area of the color image; if the total pixel point area is smaller than R1, determine that the working condition model corresponding to the working condition of the video signal to be analyzed is the no-load working condition model; if the total pixel point area is larger than R1 and smaller than R2, determine that it is the continuously changing working condition model; and if the total pixel point area is larger than R2, determine that it is the full-load working condition model, where R1 and R2 are positive integers and R1 is less than R2.
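A minimal sketch of this threshold logic follows; R1 and R2 are preset positive integers with R1 less than R2, and how a total area exactly equal to R1 or R2 is handled is not specified in the embodiment, so the boundary handling below is an assumption:

def select_condition_by_area(total_area, R1, R2):
    # Map the total pixel point area to a working condition name; the
    # corresponding working condition model is then looked up by this name.
    if total_area < R1:
        return "no_load"
    elif total_area < R2:
        return "continuously_changing"
    else:
        return "full_load"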
According to the apparatus for automatically selecting a multi-working-condition model based on a video signal provided by the embodiment of the invention, the video signal to be analyzed is analyzed by using the visual model, the working condition of the video signal to be analyzed is determined, and the working condition model corresponding to that working condition is further determined, thereby improving the generalizability and reliability of each working condition model.
Optionally, if the currently executed service including multiple working conditions is a rubber blanking amount monitoring service, where the working conditions include a no-load working condition, a continuously changing working condition, and a full-load working condition, the prediction module 1203 shown in fig. 12 is specifically configured to:
if the working condition model corresponding to the working condition of the video signal to be analyzed is the no-load working condition model, invoke the scheduling logic corresponding to the no-load working condition model and predict the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the no-load working condition, to obtain a predicted blanking amount A; if it is the continuously changing working condition model, invoke the scheduling logic corresponding to the continuously changing working condition model and predict the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the continuously changing working condition, to obtain a predicted blanking amount B; and if it is the full-load working condition model, invoke the scheduling logic corresponding to the full-load working condition model and predict the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the full-load working condition and the working state parameters of a downstream extruder, to obtain a predicted blanking amount C; A, B, and C are positive integers.
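The per-condition scheduling described above can be sketched as a simple dispatch; the model interfaces and the way the downstream extruder state enters the full-load prediction are assumptions, since the embodiment only states that that state is used:

def predict_blanking_amount(condition, total_area, models, extruder_state=None):
    if condition == "no_load":
        # scheduling logic of the no-load working condition model -> amount A
        return models["no_load"].predict(total_area)
    if condition == "continuously_changing":
        # scheduling logic of the continuously changing working condition model -> amount B
        return models["continuously_changing"].predict(total_area)
    if condition == "full_load":
        # scheduling logic of the full-load working condition model, which also
        # uses the working state parameters of the downstream extruder -> amount C
        return models["full_load"].predict(total_area, extruder_state)
    raise ValueError("unknown working condition: " + str(condition))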
According to the apparatus for automatically selecting a multi-working-condition model based on a video signal provided by the embodiment of the invention, the video signal to be analyzed of the rubber blanking amount monitoring service under the no-load working condition, the continuously changing working condition, or the full-load working condition is processed, and the working condition model corresponding to the determined working condition of the video signal to be analyzed is invoked to predict the working condition parameters, thereby improving the generalizability and reliability of each working condition model.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for multi-condition model automatic selection based on video signals, the method comprising:
acquiring a shot video signal to be analyzed in real time in the process of executing a service containing multiple working conditions;
analyzing the video signal to be analyzed based on a pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition;
and calling the working condition model for prediction to obtain predicted working condition parameters.
2. The method according to claim 1, wherein the analyzing the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition comprises:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image;
calculating the total pixel point area of the color image;
determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area; and the total pixel point area and the working condition model have a preset corresponding relation.
3. The method of claim 2, wherein the calling the working condition model for prediction to obtain predicted working condition parameters comprises:
and calling the working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
4. The method of claim 1, wherein the pre-establishing of the visual model comprises:
adding marks to the monitoring areas in the video signals under different working conditions, and acquiring color images in the monitoring areas;
calculating three-component brightness in the color image, and taking the maximum value of the three-component brightness as the gray value of a gray scale image to obtain a gray scale image represented by pixel points;
performing mean filtering processing on the gray level image by using Kernel data to obtain a filtered gray level image;
based on a preset threshold value, carrying out image binarization processing on the filtered gray level image to obtain a gray level value of a pixel point of the filtered gray level image;
calculating the total pixel point area of all the gray values, and dividing the total pixel point area to obtain pixel point area ranges corresponding to the different types of working conditions;
determining working condition models corresponding to the different types of working conditions based on the pixel point area ranges corresponding to the different types of working conditions;
and establishing a visual model based on scheduling logic among the working condition models.
5. The method according to claim 1, wherein if the working conditions include a no-load working condition, a continuously changing working condition, and a full-load working condition, the analyzing the video signal to be analyzed based on the pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition comprises:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image;
calculating the total pixel point area of the color image;
if the total pixel point area is smaller than R1, determining a working condition model of the video signal to be analyzed corresponding to a working condition as the no-load working condition model;
if the total pixel point area is larger than R1 and smaller than R2, determining the working condition model of the video signal to be analyzed corresponding to the working condition as the continuously changing working condition model;
if the total pixel point area is larger than R2, determining the working condition model of the video signal to be analyzed corresponding to the working condition as the full-load working condition model; r1, R2 are positive integers, and R1 is less than R2.
6. The method according to claim 3 or 5, wherein if the currently executed service including multiple working conditions is a rubber blanking amount monitoring service, the working conditions including a no-load working condition, a continuously changing working condition and a full-load working condition, the calling the working condition model for prediction to obtain predicted working condition parameters comprises:
if the working condition model of the video signal to be analyzed corresponding to the working condition is the no-load working condition model, calling a scheduling logic corresponding to the no-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the no-load working condition to obtain a predicted blanking amount A;
if the working condition model of the video signal to be analyzed corresponding to the working condition is the continuously changing working condition model, calling a scheduling logic corresponding to the continuously changing working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the continuously changing working condition to obtain a predicted blanking amount B;
if the working condition model of the video signal to be analyzed corresponding to the working condition is the full-load working condition model, calling the scheduling logic corresponding to the full-load working condition model, and predicting the blanking amount of the rubber blanking conveyor belt by using the total pixel point area corresponding to the full-load working condition and the working state parameters of a downstream extruder to obtain the predicted blanking amount C; A, B and C are positive integers.
7. An apparatus for automatically selecting a multi-condition model based on a video signal, the apparatus comprising:
the acquisition module is used for acquiring a shot video signal to be analyzed in real time in the process of executing a service containing multiple working conditions;
the analysis module is used for analyzing the video signal to be analyzed based on a pre-established visual model to obtain a working condition model of the video signal to be analyzed corresponding to a working condition;
and the prediction module is used for calling the working condition model to predict to obtain predicted working condition parameters.
8. The apparatus of claim 7, wherein the analysis module is specifically configured to:
performing image processing on the video signal to be analyzed based on a pre-established visual model to obtain a color image; calculating the total pixel point area of the color image; determining a working condition model of the video signal to be analyzed corresponding to the working condition by using the total pixel point area; and the total pixel point area and the working condition model have a preset corresponding relation.
9. The apparatus of claim 8, wherein the prediction module is specifically configured to:
and calling the working condition model, and predicting by using the total pixel point area to obtain a predicted working condition parameter.
10. The apparatus of claim 7, further comprising: building a module;
the building module comprises: an acquisition unit, a graying unit, a mean filtering unit, an image binarization processing unit, a dividing unit, a determining unit and an establishing unit;
the acquisition unit is used for adding marks to the monitored areas in the video signals under different working conditions and acquiring color images in the monitored areas;
the graying unit is used for calculating three-component brightness in the color image, and taking the maximum value of the three-component brightness as the gray value of a gray image to obtain a gray image represented by pixel points;
the mean filtering unit is used for carrying out mean filtering processing on the gray level image by using Kernel data to obtain a filtered gray level image;
the image binarization processing unit is used for carrying out image binarization processing on the filtered gray level image based on a preset threshold value to obtain a gray level value of a pixel point of the filtered gray level image;
the dividing unit is used for calculating the total pixel point area of all the gray values and dividing the total pixel point area to obtain pixel point area ranges corresponding to the different types of working conditions;
the determining unit is used for determining the working condition models corresponding to the different types of working conditions based on the pixel point area ranges corresponding to the different types of working conditions;
and the establishing unit is used for establishing a visual model based on the scheduling logic between the working condition models.
CN202111409112.9A 2021-11-19 2021-11-19 Multi-working-condition model automatic selection method and device based on video signal Pending CN114120192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111409112.9A CN114120192A (en) 2021-11-19 2021-11-19 Multi-working-condition model automatic selection method and device based on video signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111409112.9A CN114120192A (en) 2021-11-19 2021-11-19 Multi-working-condition model automatic selection method and device based on video signal

Publications (1)

Publication Number Publication Date
CN114120192A true CN114120192A (en) 2022-03-01

Family

ID=80372545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111409112.9A Pending CN114120192A (en) 2021-11-19 2021-11-19 Multi-working-condition model automatic selection method and device based on video signal

Country Status (1)

Country Link
CN (1) CN114120192A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116522758A (en) * 2023-03-29 2023-08-01 三一重工股份有限公司 Engineering machinery power consumption optimization method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination