CN117372920A - Pool boiling process identification method, model training method, device and equipment - Google Patents

Pool boiling process identification method, model training method, device and equipment

Info

Publication number
CN117372920A
CN117372920A (Application No. CN202311231934.1A)
Authority
CN
China
Prior art keywords
image
model
images
pool boiling
boiling process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311231934.1A
Other languages
Chinese (zh)
Inventor
衡益
黄明鸣
罗玖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202311231934.1A
Publication of CN117372920A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/09 Supervised learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The embodiments of the disclosure relate to a pool boiling process identification method, a model training method, a device and equipment. The main steps of the method are: acquiring a target image of fluid in a container, wherein the target image comprises a plurality of region images corresponding to target objects, and the target objects comprise at least one type of object among bubbles, a gas film and a liquid film generated by heating the pool boiling fluid; determining geometric attribute values of the target objects corresponding to the plurality of region images; grouping the plurality of region images according to the geometric attribute values; extracting feature data of each group of region images; and inputting the feature data of each group of region images into a pre-trained recognition model and outputting a recognition result of the pool boiling process, where the recognition model is a machine learning model trained on a training data set obtained from a plurality of pool boiling sample data. By adopting the method, both the accuracy and the efficiency of pool boiling process identification can be improved.

Description

Pool boiling process identification method, model training method, device and equipment
Technical Field
The disclosure relates to the technical field of computer data processing, and in particular to a pool boiling process identification method, a model training method, a device and equipment.
Background
During pool boiling, the fluid within the vessel typically passes through several boiling states, such as single-phase boiling, nucleate boiling, transition boiling and film boiling, each of which exhibits different heat transfer characteristics and behavior. In scenarios such as electronics cooling, aircraft safety design and industrial processes, accurate identification of the pool boiling process plays a key role in determining the thermal control strategy.
At present, identification of the pool boiling process cannot be achieved with a single sensor measurement technique. Researchers usually have to combine various collected state data of the fluid in the container, such as temperature, pressure and conductivity, with personal experience to make a judgment, and this approach suffers from low identification accuracy.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a pool boiling process identification method, a model training method, a device and equipment capable of accurately identifying the pool boiling process.
In a first aspect, embodiments of the present disclosure provide a pool boiling process identification method, the method comprising:
acquiring a target image of fluid in a container, wherein the target image comprises a plurality of region images corresponding to target objects, and the target objects comprise at least one type of object among bubbles, a gas film and a liquid film generated by heating the pool boiling fluid;
determining geometric attribute values of the target objects corresponding to the plurality of region images;
grouping the plurality of region images according to the geometric attribute values;
extracting feature data of each group of region images; and
inputting the feature data of each group of region images into a pre-trained recognition model and outputting a recognition result of the pool boiling process, wherein the recognition model is a machine learning model trained on a training data set obtained from a plurality of pool boiling sample data.
In some embodiments, the pool boiling process identification method further comprises: dividing the target image by using a cellular automaton algorithm to obtain a plurality of region images corresponding to at least one type of target object;
the geometric property values include the area and/or size of the region image.
In some embodiments, grouping the plurality of region images according to the geometric property value comprises:
sequencing the plurality of region images according to the geometric attribute values;
dividing the plurality of region images into N groups according to the sorted order, such that the geometric attribute values of the region images in the M-th group of the N groups are smaller than, or larger than, those of the region images in the (M+1)-th group, where M is smaller than N and N is a positive integer greater than 1.
In some embodiments, extracting feature data of each group of region images includes: acquiring a binary image corresponding to each group of region images and extracting feature data of the binary image as the feature data of that group of region images; and/or,
extracting feature data of each group of region images includes: extracting attribute feature data of the target objects of each group of region images, and/or performing morphological processing on each group of region images to obtain image pixel value data corresponding to different filter elements; wherein the attribute feature data includes at least one of the number, area, volume, diameter, filled area, perimeter, maximum Feret angle, minimum Feret angle, roundness and eccentricity of the target objects, and the image pixel value data includes at least one of the total number of local maxima along the vertical axis and the total number of local maxima along the horizontal axis.
In some embodiments, acquiring a target image of a fluid within a container includes:
carrying out foreground and background segmentation on an original image of fluid in a container to obtain a target image corresponding to a foreground region of the original image;
the original image is a pool boiling process image of the fluid in the container acquired by the sensor, or the original image is a video frame image of the pool boiling process of the fluid in the container shot by the camera.
In some embodiments, the pool boiling process identification method further comprises:
controlling a display to display the identification result of the pool boiling process;
and controlling, according to the identification result and a preset control parameter, the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container, or controlling the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container in response to a heat flux density adjusting instruction input through the instruction input assembly.
In a second aspect, embodiments of the present disclosure provide a pool boiling process identification model training method, the identification model being a machine learning model, the method comprising:
acquiring a sample image of fluid in a container, wherein the sample image comprises a plurality of area images corresponding to a target object, and the target object comprises at least one kind of object in a bubble, a gas film and a liquid film generated by heating pool boiling fluid;
determining geometrical attribute values of the target objects corresponding to the plurality of area images;
grouping the plurality of region images according to the geometric attribute values;
extracting characteristic data of each group of regional images as sample data;
and generating a training data set according to the sample data, and training the initial model to obtain the identification model.
In some embodiments, the initial model is a plurality of machine learning models, and the pool boiling process identification model training method further comprises:
respectively inputting the training data set into each initial model for training to obtain a plurality of candidate recognition models;
performing performance evaluation on the candidate recognition model to obtain a target recognition model;
the initial model is a supervised machine learning model and comprises at least one of a linear model, a decision tree model, a neural network model, a support vector machine model, a Bayesian classifier model and an integrated learning model.
In a third aspect, embodiments of the present disclosure provide a pool boiling process identification apparatus, the apparatus comprising:
a processor for performing the steps of the pool boiling process identification method in any of the embodiments of the present disclosure in the first aspect;
and a display for displaying the identification result of the pool boiling process of the fluid in the container identified by the processor.
In a fourth aspect, embodiments of the present disclosure provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the pool boiling process identification method in any of the embodiments of the first aspect when the computer program is executed.
Embodiments of the present disclosure provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the pool boiling process identification model training method in any of the embodiments of the second aspect of the present disclosure when the computer program is executed.
According to the pool boiling process identification method, model training method, device and equipment described above, region images corresponding to target objects are segmented from the target image, the geometric attribute values of the target objects in the region images are extracted, the region images are sorted and grouped, the feature data of each group of region images are extracted separately, and the feature data are input into the recognition model to obtain the pool boiling process identification result. The geometric attribute values used for grouping reflect characteristics such as the shape and size of the target objects in the target image, so grouping the region images by geometric attribute value before feature extraction improves identification accuracy, while extracting feature data group by group and feeding each group into the model improves identification speed.
Drawings
FIG. 1 is a diagram of an application environment for a pool boiling process identification method in some embodiments;
FIG. 2 is a flow diagram of a pool boiling process identification method in some embodiments;
FIG. 3 is a flow diagram of yet another pool boiling process identification method in some embodiments;
FIG. 4 is a flow chart of a cellular automaton algorithm in some embodiments;
FIG. 5 is a flow diagram of a pool boiling process identification model training method in some embodiments;
FIG. 6 is a flow diagram of a method involving sample image acquisition in some embodiments;
FIG. 7 is a flow diagram of yet another example pool boiling process identification model training method;
FIG. 8 is a flow diagram of a feature extraction method according to some embodiments;
FIG. 9 is a flow chart of steps for object recognition model acquisition in a pool boiling process recognition model training method in some embodiments;
FIG. 10 is a block diagram of a pool boiling process identification device in some embodiments;
FIG. 11 is an internal block diagram of a computer device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present disclosure.
The pool boiling process identification method provided by the embodiments of the disclosure can be applied in an application environment as shown in fig. 1. The processor 101 may communicate with the sensor 102 by wire or wirelessly to obtain video or images of the pool boiling process acquired by the sensor 102. The processor 101 may be implemented in at least one hardware form among a Programmable Logic Array (PLA), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a general-purpose processor, or other programmable logic device. In some alternative embodiments, the sensor 102 may be positioned outside the vessel, and the angle at which the pool boiling process video or images are acquired may be perpendicular to the bottom wall of the vessel. In still other alternative embodiments, the sensor 102 may also be positioned inside the vessel, such as on the vessel side wall or bottom wall, and the acquisition angle may be perpendicular or parallel to the vessel bottom wall, or some other angle. The sensor 102 may include, but is not limited to, a camera, an optical sensor, an infrared sensor, etc., and the chip type, resolution and output signal speed of the sensor are not specifically limited. The camera may include at least one of a high-speed video camera, a normal-speed video camera, a high-speed camera, a normal-speed camera, a high-resolution camera, or a normal-resolution camera.
In a first aspect, an embodiment of the present disclosure provides a method for identifying a pool boiling process, which is described by taking an example that the method is applied to the processor 101 in fig. 1, as shown in fig. 2, and the method for identifying a pool boiling process includes steps S201 to S205 that can be executed by the processor 101, and each step is described below.
Step S201: a target image of the fluid within the container is acquired. The target image comprises a plurality of area images corresponding to target objects, and the target objects comprise at least one kind of objects in bubbles, an air film and a liquid film generated by heating pool boiling fluid.
The processor 101 acquires a target image of the fluid in the container acquired by the sensor 102, where the target image is the pool boiling process image to be identified and comprises region images corresponding to target objects such as bubbles, a gas film or a liquid film generated by heating the pool boiling fluid.
The container may be made of metal, glass or other materials, and the fluid in the container may include water, liquid metal or other liquid materials, and in some embodiments, the fluid may further include gas or a mixture of gas and liquid.
A pool boiling process passes through several boiling states: single-phase boiling, nucleate boiling, transition boiling and film boiling. Each state exhibits different heat transfer characteristics and behavior, and by analyzing image features of the pool boiling process the corresponding boiling state can be identified.
The region image refers to imaging of a target object such as a bubble, a gas film and the like, wherein the target image comprises imaging of one or more types of target objects, and one such imaging can be regarded as one region image. For example, imaging of one bubble may be regarded as one area image, and imaging of a plurality of bubbles may be regarded as a plurality of area images. For example, imaging of one bubble and imaging of one gas film can be regarded as two area images.
In general, target images acquired in different pool boiling states may differ in the distribution and morphology of the target objects. The target image is segmented according to the target objects, so that it comprises a plurality of region images, each corresponding to one target object. The region images may be segmented in various ways. For example, the target objects may be identified and the corresponding region images segmented by machine learning; alternatively, a region-segmentation instruction input by a researcher may be acquired, so that the target objects are identified and the corresponding region images delimited based on the researcher's experience. Of course, the region images may also be acquired or segmented in other ways mentioned below, or in further ways not mentioned here, which are not specifically limited.
In some embodiments, step S201 includes performing foreground and background segmentation on an original image of the fluid in the container to obtain a target image corresponding to the foreground region of the original image. The original image may be a pool boiling process image of the fluid in the container acquired by the sensor; in particular, it may be a video frame image of the pool boiling process of the fluid in the container captured by a camera. In some alternative embodiments, before the foreground and background segmentation, the original image may be cropped, contrast-enhanced or otherwise preprocessed; the foreground region segmentation may then be performed with a watershed method or another image processing technique such as thresholding or region growing, or, of course, with a deep learning technique using a pre-trained convolutional neural network.
The original image can be a pool boiling image acquired by a sensor, or can be a pool boiling image obtained by extracting video frames by utilizing video shot by the sensor. The acquired pool boiling image content range can be global pool boiling images or local images.
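Purely as an illustrative sketch (not the patented implementation), the foreground extraction described above could be prototyped as follows; the file name, the use of OpenCV, and the Otsu thresholding stand-in for watershed or region growing are assumptions.

```python
# Illustrative sketch of the foreground/background segmentation step described above.
# File name, library choice and Otsu thresholding are assumptions, not the patented method.
import cv2
import numpy as np

def extract_foreground(original_path: str) -> np.ndarray:
    """Return the target image restricted to the foreground region of the original image."""
    original = cv2.imread(original_path, cv2.IMREAD_GRAYSCALE)
    # Optional preprocessing: contrast enhancement by histogram equalization.
    enhanced = cv2.equalizeHist(original)
    # Simple global (Otsu) thresholding as a stand-in for watershed / region growing.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Keep only foreground pixels; background is zeroed out.
    return cv2.bitwise_and(original, original, mask=mask)

target_image = extract_foreground("pool_boiling_frame.png")  # hypothetical file name
```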
Step S202: and determining the geometric attribute values of the target objects corresponding to the plurality of area images.
The processor 101 analyzes the obtained target image to obtain a geometric attribute value of the segmented region image corresponding to the target object, where the geometric attribute value may be an area of the region image, or may be a perimeter of the region image or size data of other geometric dimensions.
In some embodiments, the geometric attribute value of the area image may be obtained by converting the area image into a black-and-white binary image by using a binarization method, so as to extract the geometric attribute value corresponding to the target object in the area image. The regional image can be subjected to grid division by adopting a regional grid statistical method, geometrical attribute values corresponding to the target object in the regional image can be extracted, and other commonly used image feature extraction modes can be adopted, such as a contour extraction method, a gray scale method and the like.
Step S203: the plurality of region images are grouped according to the geometric attribute values.
The processor 101 groups the plurality of area images according to the geometric attribute value size of each area image.
The grouping mode can group the region images according to a preset attribute value interval, or can group the region images according to a preset ordering sequence interval by ordering the region images according to the geometric attribute value.
In some embodiments, step S203 may include sorting the plurality of region images according to the geometric attribute values, and sorting the plurality of region images into N groups according to the sorted order relationship, so that the geometric attribute values of the region images in the M-th group of the N sorted groups of region images are smaller than or larger than the geometric attribute values of the region images in the m+1th group, where M is smaller than N and N is a positive integer greater than 1.
The region images are divided into N groups, and N can be 2, 3, 4, 6 or other values. The ordering of the plurality of region images may be a big-to-small ordering or a small-to-big ordering. Under the condition of sorting from large to small, the geometric property values of the region images in the M-th group are larger than those of the region images in the M+1th group, and correspondingly, the geometric property values of the region images in the M+1th group are larger than those of the region images in the M+2th group. Wherein the region images within the mth group may be presented in ascending, descending, or other ordered sequence according to geometric attribute values. It will be appreciated that in the case of a small to large order of geometric property values according to the region image, the geometric property values in the M groups are smaller than those in the m+1 groups.
In some alternative embodiments, the region images may be sorted by area from large to small and then grouped sequentially in that order, and the number of region images in each group after grouping may be determined as a specific proportion of the total number of region images. For example, with 3 groups, the region images are sorted by area from large to small and grouped by rank: a first interval of ranks forms the large-area group, a second interval forms the medium-area group, and a third interval forms the small-area group. Specifically, the first interval may be the top 0%-5% of the ranking, the second interval the 6%-50% band, and the third interval the 51%-100% band; the first, second and third intervals may be contiguous or non-contiguous.
The manner of sorting packets, the number of packets, and the proportion of packets may be set by those skilled in the art according to actual needs.
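The following is a minimal sketch of such percentile-based grouping, assuming the region areas are already available as a flat list; the 5%/50% split points follow the example above and are not fixed by the method.

```python
# Sketch of grouping region images by sorted area into three interval groups
# (large / medium / small). The data structure (a list of areas) is an assumption.
import numpy as np

def group_regions_by_area(areas, splits=(0.05, 0.50)):
    """Return three lists of region indices: large-, medium- and small-area groups."""
    order = np.argsort(areas)[::-1]          # indices sorted from largest to smallest area
    n = len(order)
    k1 = max(1, int(round(splits[0] * n)))   # top 0%-5%  -> large-area group
    k2 = max(k1, int(round(splits[1] * n)))  # 6%-50% band -> medium-area group
    return order[:k1].tolist(), order[k1:k2].tolist(), order[k2:].tolist()

large, medium, small = group_regions_by_area([120.0, 8.5, 45.2, 300.1, 2.3, 60.7])
```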
Step S204: feature data of each set of region images is extracted.
The processor 101 further extracts feature data from the grouped region images, where the feature data may be attribute data of the target objects in the region images. A binarization method may be used to convert each group of region images into a black-and-white binary image from which the attribute features of the target objects are extracted; alternatively, a gray-scale method, a gradient method or another image feature extraction method may be used, which is not specifically limited here.
A person skilled in the art can select a feature extraction mode according to the data format required by the recognition model, so as to obtain feature data that can be input directly into the recognition model.
In some embodiments, the feature data may be extracted from each group of area images, which may be respectively extracted from each area image in the group, or may be extracted from each area image in the group after merging the area images into a new image to be extracted.
In some embodiments, combining each group of region images into a new image to be extracted may mean pasting the region images of the group, at equal scale, onto a new background image to generate one image to be extracted, or discarding the other region images and elements in the target image that do not belong to the group, thereby obtaining one image to be extracted.
In some implementations, the images to be extracted corresponding to each set of region images are binarized to obtain binary images, i.e., if the region images in the target image are divided into 3 sets, 3 binary images are correspondingly generated.
In some embodiments, geometric attribute features may be obtained for binary images using attribute extraction, and pixel value data may also be obtained by morphological processing. The manner in which the feature data is obtained for the attribute extraction and morphological processing is detailed below.
Step S205: and inputting the characteristic data of each group of regional images into a pre-trained recognition model, and outputting a recognition result of the pool boiling process. The recognition model is a machine learning model obtained through training according to a training data set obtained by a plurality of pool boiling sample data.
The recognition result of the pool boiling process refers to the boiling state recognized for the pool boiling process. In some classification schemes, the boiling states can be divided into: a nucleate Boiling phase (LN) corresponding to low heat flux density, a nucleate Boiling phase (HN) corresponding to high heat flux density, and a Film Boiling phase (Fi) corresponding to the transition boiling and film boiling states. The LN phase is the initial stage of heating: bubbles start to form in the liquid, but they are few and small, and their number and size gradually increase as heating proceeds. In the HN phase, a large number of bubbles are generated inside the liquid, and the bubbles multiply and collapse rapidly. In the Fi phase, a liquid film forms at the surface of the liquid, and the liquid below the film evaporates rapidly, generating a large number of bubbles; as heating proceeds, the liquid film is continuously broken and reformed.
The representation of the recognition result of the pool boiling process may be varied and in some alternative embodiments the recognition result may be a marker for characterizing the boiling state phase corresponding to the target image acquisition instant. For example, different markers may be configured for the LN stage, HN stage, or Fi stage, which may be text, patterns, or sounds, and outputting the recognition result means outputting the corresponding marker so that the researcher can understand the pool boiling state stage.
In still other alternative embodiments, the identification result may be characteristic information of the boiling state corresponding to the fluid, for example, the characteristic information of the boiling state such as the number of bubbles, the area of the air film and the like may be displayed, so that a person skilled in the art can determine the corresponding boiling state according to the characteristic display information; in still other alternative embodiments, the recognition result may also be a predicted result of the boiling state of the next stage after the target image acquisition time. Of course the recognition result may also comprise other aspects, which are capable of characterizing the boiling state.
The recognition model is a machine learning model trained from a training data set obtained from a plurality of pool boiling sample data.
From a broad classification perspective, machine learning includes supervised learning and unsupervised learning. Supervised learning learns from labelled data so that unlabelled data can be judged accurately, and is used for problems such as regression, classification with class labels and ranking. Unsupervised learning learns from unlabelled data to discover patterns and structure in the data, and is used for problems such as clustering and anomaly detection. Supervised machine learning models include: support vector machine models (SVM), naive Bayes models, decision tree models, k-nearest neighbor models (KNN), k-means models, Gaussian mixture models (GMM), neural network models (NN), and the like.
In some embodiments, the pool boiling process identification model is a supervised machine learning model trained from a training data set obtained from a plurality of pool boiling sample data.
In some embodiments, the pool boiling process identification model is an unsupervised machine learning model trained from a training data set obtained from a plurality of pool boiling sample data.
The plurality of pool boiling sample data includes feature data sets extracted from pool boiling process images acquired at different boiling states. In some alternative embodiments, the plurality of sample data may further include pool boiling process images acquired for the same boiling state, and the plurality of feature data sets obtained by selecting different feature extraction modes.
The above is a description of several main contents of steps S201 to S205.
In steps S201 to S205, region images corresponding to target objects are segmented from the target image, the geometric attribute values of the target objects in the region images are extracted, the region images are sorted and grouped, the feature data of each group of region images are extracted separately, and the feature data are input into the recognition model to obtain the pool boiling process identification result. The geometric attribute values used for grouping reflect characteristics such as the shape and size of the target objects in the target image, so grouping the region images by geometric attribute value before feature extraction improves identification accuracy, while extracting feature data group by group and feeding each group into the model improves identification speed.
In some embodiments, as shown in fig. 3, the method for identifying a pool boiling process may further include step S301: and dividing the target image by adopting a cellular automaton algorithm to obtain a plurality of region images corresponding to at least one type of target object.
The cellular automaton algorithm is one of the intelligent optimization algorithms, which include genetic algorithms, cellular automata, simulated annealing, tabu search, particle swarm optimization, ant colony optimization, and the like. Compared with other intelligent optimization algorithms, the cellular automaton algorithm has simple constituent units, low initial cost and high computation speed, as well as good adaptability, robustness and scalability.
Each cell in the cellular automaton has its own state, and all cell states are updated continuously according to the cell rule. Specifically, as shown in fig. 4, the cellular automaton algorithm may include the following steps.
Step S401: initial state selection, the analyzed image is discretized into units, i.e., cells, each assigned an initial state.
Step S402: the state of the current cell updates and the update depends only on the states of the neighboring cells.
Step S403: in each iteration, each cell is automatically updated by a preset step 402 while specifying boundary conditions.
Step S404: the updated state of the cell is saved and used as an output for further processing.
In some alternative embodiments, step S401 may treat each pixel of the target image as a cell, and the state of each cell is the category it belongs to, i.e. target object or background. The local rule applied in step S402 may be defined as follows: if the states of a cell's neighboring cells match a preset rule, the cell updates its state to the category associated with that rule; at each time step the states of each cell's neighbors are compared against the preset rule and the cell state is updated accordingly, where the preset rule may be set based on the color, gray level or binary data of the neighboring cells. The boundary conditions specified in step S403 may be periodic, fixed, reflective or absorbing boundary conditions, and can be customized by those skilled in the art as required. After steps S401 to S403 are executed, the state of the cellular automaton is updated iteratively, and the updated cell states are saved, yielding a plurality of segmented region images.
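A minimal sketch of the cellular-automaton segmentation loop (steps S401 to S404) follows; the gray-level seeding, the majority-vote local rule, the replicated-edge boundary condition and the iteration count are all assumptions rather than the patented rule.

```python
# Minimal sketch of a cellular-automaton segmentation loop (steps S401-S404).
# Seeding rule, majority-vote local rule and iteration count are assumptions.
import numpy as np

def ca_segment(gray: np.ndarray, seed_threshold: int = 128, iterations: int = 10) -> np.ndarray:
    """Each pixel is a cell whose state (1 = target object, 0 = background) is updated
    from the states of its 8 neighbours under a simple majority rule."""
    state = (gray > seed_threshold).astype(np.uint8)           # S401: initial cell states
    for _ in range(iterations):                                # S403: iterate
        padded = np.pad(state, 1, mode="edge")                 # fixed (replicated-edge) boundary condition
        neighbour_sum = sum(
            padded[1 + dy:1 + dy + state.shape[0], 1 + dx:1 + dx + state.shape[1]]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
        )
        state = (neighbour_sum >= 5).astype(np.uint8)          # S402: local rule on neighbour states
    return state                                               # S404: final cell states = segmentation mask
```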
In some embodiments, the plurality of acquired region images respectively correspond to the target object in the target image, i.e., the region images may be segmented for a target object such as a bubble, gas film, or other liquid film.
The cellular automaton algorithm is adopted for image segmentation, and the method has the effects of simple constituent units, low initial cost, high calculation speed, strong self-adaptability and the like.
Step S204 may include a variety of ways of extracting the feature data. In some alternative embodiments, step S204 may include: and acquiring a binary image corresponding to each group of area images, and extracting characteristic data of the binary image as characteristic data of each group of area images.
The binary image acquisition mode corresponding to each group of area images is mentioned above, and will not be repeated. In some alternative embodiments, the area images are divided into 3 groups, 1 image to be extracted is obtained according to each group of area images, each image to be extracted is binarized, 3 binary images are correspondingly generated, and feature data in the binary images, namely, feature data of a target object in the area images are extracted respectively. The binary image is a two-dimensional logic array with the same size as a single channel of the target image, and pixels with 1 and 0 in the binary image respectively represent a foreground and a background. The pixel point of 1 in the binary image represents the target object.
Wherein the feature data may be geometric feature data, spatial relationship feature data, shape feature data, etc.
In still other optional embodiments, step S204 may further include extracting attribute feature data of the target object of each set of area images, and/or performing morphological processing on each set of area images to obtain image pixel value data corresponding to different filter elements; wherein the attribute characteristic data includes at least one of a number, an area, a volume, a diameter, a filling area, a perimeter, a maximum Feret angle, a minimum Feret angle, a roundness, and an eccentricity of the target object, and the image pixel value data includes at least one of a total number of local maxima along a vertical axis and a total number of local maxima along a horizontal axis.
In some embodiments, each group of area images correspondingly generates an image to be extracted, the image to be extracted comprises a plurality of target objects, and the feature extraction is carried out on the target objects identified in the image to be extracted after the image preprocessing and the target identification. The preprocessing mode can be graying, binarization, filtering and the like so as to improve the image quality and the accuracy of feature extraction.
The extracted features may include the number, area, volume, diameter, major axis length, minor axis length, perimeter, maximum or minimum Feret diameter (the maximum or minimum distance between any two points on the convex hull of the object boundary), maximum or minimum Feret angle (the angle of the maximum or minimum Feret diameter relative to the horizontal axis of the image), roundness, eccentricity (the ratio of the distance between the foci of the ellipse to its major axis length), extent (the ratio of pixels in the region to pixels in the bounding box), filled area (the area of the region with any holes filled), and the like.
The extracting of the attribute feature data of the target object in each group of regional images can be performed by using a threshold-based extracting method to obtain the attribute data of the target object, or by using an image retrieval method, and of course, a person skilled in the art can select other suitable extracting methods according to actual needs to obtain the attribute feature data of the target object.
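Purely for illustration of the attribute (PEA-style) extraction described above, the sketch below uses scikit-image region properties on a binary image; the library choice and the particular subset of properties are assumptions, not the patented feature set.

```python
# Illustrative property-extraction sketch using scikit-image region properties
# on a binary image; the selected properties are a subset chosen for illustration.
import numpy as np
from skimage.measure import label, regionprops

def extract_object_features(binary: np.ndarray) -> np.ndarray:
    """Return an (n_objects x 6) matrix: area, perimeter, equivalent diameter,
    major/minor axis length and eccentricity for every connected target object."""
    labelled = label(binary)                     # connected-component labelling of the foreground
    rows = []
    for region in regionprops(labelled):
        rows.append([
            region.area,
            region.perimeter,
            region.equivalent_diameter,
            region.major_axis_length,
            region.minor_axis_length,
            region.eccentricity,
        ])
    return np.asarray(rows)
```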
In some embodiments, each group of area images correspondingly generates an image to be extracted, the image to be extracted comprises a plurality of target objects, the image to be extracted is processed by adopting a morphological reconstruction method, and the target objects are filtered through a custom component, so that specific characteristics of the target objects are enhanced or weakened and reflected in the generated numerical characteristic data. In some alternative embodiments, the image to be extracted may be morphologically processed with 4 different filter elements, yielding 4 new morphologically processed images, respectively, for the different filter elements. The pixel values of 5 images in total of the image to be extracted and the morphologically processed image are summed along the coordinate axes in a Cartesian coordinate system, and the total number of local maxima of each image along one of the axes is recorded.
Wherein the 4 different filter elements may be circular, narrow rectangular, wide rectangular and square, the recorded local maxima may be local maxima along the vertical axis and/or along the horizontal axis. The 10 features that can be obtained by the morphological reconstruction method can be: for the horizontal and vertical axes, the number of local maxima corresponds to round, narrow rectangular, wide rectangular, square filter and no filter.
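As a hedged sketch of the filtering-and-counting idea above, the code below applies four structuring elements plus the unfiltered image, sums pixel values along each axis, and counts local maxima; the use of morphological opening as a stand-in for the full morphological reconstruction, and the element sizes, are assumptions.

```python
# Sketch: four structuring elements plus the unfiltered image, column/row pixel sums,
# and counts of local maxima along each axis (10 features in total).
import cv2
import numpy as np

def count_local_maxima(profile: np.ndarray) -> int:
    """Number of strict local maxima in a 1-D profile of summed pixel values."""
    return int(np.sum((profile[1:-1] > profile[:-2]) & (profile[1:-1] > profile[2:])))

def morphology_features(binary: np.ndarray) -> list:
    """binary: uint8 image (0/255). Element shapes/sizes below are assumptions."""
    elements = [
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9)),   # circular
        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 15)),     # narrow rectangle
        cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5)),     # wide rectangle
        cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9)),      # square
    ]
    images = [binary] + [cv2.morphologyEx(binary, cv2.MORPH_OPEN, e) for e in elements]
    features = []
    for img in images:                                         # 5 images -> 10 features
        features.append(count_local_maxima(img.sum(axis=0)))   # profile along the horizontal axis
        features.append(count_local_maxima(img.sum(axis=1)))   # profile along the vertical axis
    return features
```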
In still other optional embodiments, step S204 may further include obtaining a binary image corresponding to each set of area images, extracting feature data of the binary image as feature data of each set of area images, where a manner of extracting feature data of the binary image may be to extract attribute feature data of a target object in the binary image, and/or performing morphological processing on the binary image to obtain image pixel value data corresponding to different filter elements.
Accordingly, the manner of acquiring the binary image of each group of area images, extracting the attribute feature data and performing morphological processing to acquire the pixel value data has been described above, and will not be described herein.
In some embodiments, the method of identifying a pool boiling process further comprises controlling a display to display an identification of the pool boiling process; and according to the identification result and the preset control parameter, controlling the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container.
The display comprises a display interface, and can be a display in electronic equipment such as a computer, a mobile terminal and the like which can be controlled by researchers. The processor 101 displays the identified pool boiling process result to prompt the researcher of the pool boiling process identification result, and for different identification results, the processor 101 obtains preset control parameters, wherein the control parameters can be preset by the researcher for maintaining the heat flux density of the pool boiling fluid and correspond to different pool boiling process results, and the processor 101 controls the heat flux density adjusting component to adjust part or all of the heat flux density in the container according to the control parameters. By adjusting the heat flux density, heat flux management can be achieved, thereby regulating and controlling the temperature of the pool boiling fluid.
The heat flux density adjusting assembly adjusts the heat flux density in the container, which can be the bottom area of the container, the side wall area, or other areas in the container.
The heat flux density adjustment assembly may include at least one of a heat pipe, a heat pump, a radiator, and a cooler.
In some embodiments, the pool boiling process identification method may further include displaying the identification result of the pool boiling process on the display interface, and controlling the heat flux density adjusting assembly to adjust the heat flux density of the fluid in the container in response to a heat flux density adjustment instruction input through the instruction input assembly. That is, the processor 101 sends the identification result to the display, which shows it on the display interface so that a researcher can follow the state of the pool boiling process; the processor 101 then obtains the heat flux density adjustment instruction input through the instruction input assembly and controls the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container.
The instruction input component is used for acquiring a heat flux density adjustment instruction input by an operator, and can adopt modes such as touch input of a touch screen control, click input of a mouse, rotation of a mechanical component or press input.
By accurately identifying the pool boiling process and carrying out heat flow density regulation and control, precise and accurate temperature control can be realized. In a pool boiling theory research scene, the heat flux density regulation and control can be convenient for researchers to adjust the heat flux density to a specific pool boiling stage according to the needs; in the performance test and optimization scene of the electronic equipment, precise temperature control can be realized through the regulation and control of the heat flux density, so that the electronic equipment can be protected from overheat damage, the designated temperature can be quickly reached, and the temperature can be stably controlled in a smaller fluctuation range.
In a second aspect, an embodiment of the present disclosure provides a pool boiling process identification model training method, where the identification model is a machine learning model, and the method is applied to the processor 101 in fig. 1, and as shown in fig. 5, the pool boiling process identification model training method includes steps S501 to S505 that can be executed by the processor 101.
Step S501: and obtaining a sample image of the fluid in the container, wherein the sample image comprises a plurality of area images corresponding to a target object, and the target object comprises at least one kind of object in a bubble, a gas film and a liquid film generated by heating pool boiling fluid.
Step S502: and determining the geometric attribute values of the target objects corresponding to the plurality of area images.
Step S503: the plurality of region images are grouped according to the geometric attribute values.
Step S504: feature data of each group of region images is extracted as sample data.
Step S505: and generating a training data set according to the sample data, training an initial model to obtain an identification model, wherein the initial model is a machine learning model.
In fact, the sample image is subjected to a series of processes to obtain sample data for input into the initial model, and the target image is processed in the identification method to obtain feature data for input into the trained recognition model; the two pipelines involve steps based on the same principles. The pool boiling process identification method can be understood as an application of the recognition model. Those skilled in the art will therefore understand that the training of the recognition model is related to its application, i.e. the pool boiling process identification model training method and the pool boiling process identification method are correlated at the level of principles and steps; terms already explained above are not explained again, and principles already described above are not elaborated further.
In some embodiments, the initial model may be a supervised machine learning model, or an unsupervised machine learning model.
In some embodiments, as shown in fig. 6, step S501 may further include step S601 and step S602.
Step S601: an original image of the fluid within the container is acquired.
Step S602: and carrying out foreground and background segmentation on the original image to obtain a sample image corresponding to the foreground region of the original image.
In some embodiments, step S503 may include sorting the plurality of area images according to the geometric attribute values, and sorting the plurality of area images into N groups according to the sorted order relationship, so that the geometric attribute values of the area images in the M-th group of the N sorted groups of area images are smaller than or larger than the geometric attribute values of the area images in the m+1th group, where M is smaller than N and N is a positive integer greater than 1.
In some alternative embodiments, the area of each area image may be sorted from large to small, the area images may be sequentially grouped according to the sorting order, and the number of the area images in each group after grouping may be determined according to a specific proportion of the total number of the area images. For example, in the case where the number of groups is 3, the area images are sorted from large to small in area, and the sorted first 0% -5%, first 6% -50% and first 51% -100% area images are respectively allocated to 3 groups, that is, to a large area image group, a medium area image group and a small area image group.
Sorting the region images by area and then grouping the sorted images sequentially according to preset proportions improves the adaptability of the region image grouping algorithm compared with grouping by fixed numerical area thresholds.
In some embodiments, as shown in fig. 7, the pool boiling process identification model training method may further include, step S701: and dividing the target image by adopting a cellular automaton algorithm to obtain a plurality of region images corresponding to at least one type of target object.
In some embodiments, as shown in fig. 8, step S504 may include,
step S801: and acquiring binary images corresponding to each group of region images.
Step S802: and extracting attribute characteristic data of the target object in the binary image.
Step S803: and carrying out morphological processing on the binary image to obtain image pixel value data corresponding to different filter elements.
Step S804: and taking the extracted attribute characteristic data and the extracted image pixel value data as sample data.
Wherein the attribute characteristic data includes at least one of a number, an area, a volume, a diameter, a filling area, a perimeter, a maximum Feret angle, a minimum Feret angle, a roundness, and an eccentricity of the target object, and the image pixel value data includes at least one of a total number of local maxima along a vertical axis and a total number of local maxima along a horizontal axis.
In some alternative embodiments, the feature data of the target objects in the binary image may be extracted using an attribute extraction approach (the properties extraction approach, PEA) and a morphological reconstruction approach (the morphological reconstruction approach, MRA). The target objects in each group of region images are processed by PEA and MRA to generate 16-dimensional and 10-dimensional feature vectors for each target object, respectively. For each image, the arithmetic average of each feature over the same group is computed, which suppresses the adverse effect of outliers. For example, when the region images are divided into 3 groups (large-area, medium-area and small-area groups), if one sample image has 256 target objects in the large-area group, PEA generates a 256×16 feature matrix; since all of these objects belong to the same image, the arithmetic average of each column of the matrix is computed, resulting in a 1×16 vector, which becomes the final feature data of the large-area group of that image. Since each sample image is divided into three groups and the features of each group form a 1×16 vector, a 1×48 feature vector is generated for each sample image. Adding the total number of target objects in the sample image as a further feature gives a 1×49 feature vector per image. Similarly, for MRA a 1×10 feature vector is generated for each group of region images, and each sample image with its 3 groups (plus the total-count feature) yields a 1×31 feature vector.
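A minimal sketch of the per-group averaging and concatenation just described, assuming the per-object PEA feature matrices of the three groups have already been computed.

```python
# Sketch: average each group's (n_objects x 16) PEA matrix column-wise, concatenate the
# three group vectors, and append the total object count, giving a 1 x 49 sample vector.
import numpy as np

def build_sample_vector(group_matrices):
    """group_matrices: list of three (n_objects_in_group x 16) arrays."""
    parts = [m.mean(axis=0) for m in group_matrices]              # arithmetic mean per group
    total_objects = sum(m.shape[0] for m in group_matrices)       # extra count feature
    return np.concatenate(parts + [np.array([total_objects])])    # shape (49,)
```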
In some alternative embodiments, step S504 may include normalizing the extracted feature data. The feature data normalization processing can reduce the value range of the features, quicken the convergence rate of the model, make the model more stable and improve the generalization capability of the model.
In some embodiments, the pool boiling process identification model training method further comprises: and selecting part of sample data as a training data set according to a preset proportion from the sample data extracted after the sample image is processed, and taking the rest of sample data as a test data set. In some alternative embodiments, the training data set may be selected by randomly selecting 90% of the sample data, with the remaining 10% of the sample data being the test data set. Of course, the proportion of the training data set in the sample data may be set to other proportions.
In some embodiments, the trained candidate recognition models are cross-validated multiple times. In some alternative embodiments, the number of cross-validations may be 5.
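An illustrative sketch, under stated assumptions (pre-extracted feature and label arrays stored as hypothetical .npy files, and a support-vector classifier chosen only as an example), of the 90%/10% split and 5-fold cross-validation mentioned above.

```python
# Sketch of a 90 % / 10 % train-test split and 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = np.load("features.npy"), np.load("labels.npy")      # assumed pre-extracted sample data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

model = SVC()                                               # example classifier, not the patented model
cv_scores = cross_val_score(model, X_train, y_train, cv=5)  # 5-fold cross-validation
model.fit(X_train, y_train)
test_accuracy = model.score(X_test, y_test)
```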
In some embodiments, the pool boiling process identification model training method further comprises: and performing sample data dimension reduction on the sample data by using a supervised machine learning algorithm, taking the sample data after dimension reduction as a training data set, and training the initial model to obtain the identification model.
The sample data dimension reduction can be performed by adopting a neighborhood component analysis (Neighborhood component analysis, NCA) algorithm, a principal component analysis (Principal component analysis approach, PCA) algorithm or other sample data characteristic optimization and data dimension reduction algorithms.
Wherein the Neighborhood Component Analysis (NCA) method comprises the following steps.
Step 1.1: the NCA model is fitted using specified regularization parameters and cross-validation, while the model updates feature weights.
Step 1.2: the loss values are calculated separately.
Step 1.3: step 1.1 and step 1.2 are repeated for all folds and all regularization parameters.
Step 1.4: an average loss value for each regularization parameter is calculated.
Step 1.5: an optimal regularization parameter corresponding to the minimum average loss is found.
Step 1.6: the NCA model is fitted using the optimal regularization parameters, while the model updates the feature weights.
Step 1.7: and selecting the features according to the updated feature weights.
And reducing the dimension of the sample data by adopting an NCA method, and setting regularization parameters according to actual needs by a person skilled in the art to obtain the sample data after the dimension reduction.
Principal Component Analysis (PCA) may include the following steps.
Step 2.1: the raw feature data is normalized.
Step 2.2: a covariance matrix of the normalized dataset is calculated.
Step 2.3: eigenvalues and eigenvectors of the covariance matrix are calculated.
Step 2.4: the first k features of the most importance (the largest feature value) are retained, k representing the dimension after dimension reduction.
Step 2.5: and finding the eigenvectors corresponding to the k eigenvalues.
Step 2.6: and multiplying the standardized data set by the k eigenvectors to obtain a dimension-reduced result.
By reducing the dimension of the sample data, redundant features can be discarded and only informative feature data retained for model construction, enabling redundancy-free feature modeling and a faster execution speed.
In some embodiments, the feature number in the sample data can be reduced from 49 to 7 by using the NCA method to reduce the dimension of the sample data extracted by using the PEA method.
In some embodiments, the number of features in the sample data extracted by the MRA method can be reduced from 31 to 7 by using the NCA method.
In some embodiments, the initial model is a plurality of supervised machine learning models, as shown in FIG. 9, and the pool boiling process identification model training method further includes the following steps.
Step S901: and respectively inputting the training data set into each initial model for training to obtain a plurality of candidate recognition models.
Step S902: and performing performance evaluation on the candidate recognition model to obtain the target recognition model.
The initial model is a supervised machine learning model and comprises at least one of a linear model, a decision tree model, a neural network model, a support vector machine model, a Bayesian classifier model and an ensemble learning model.
Of course, the initial models may be other supervised machine learning models, and the number of initial models may be more than one. In particular, the number of initial models and the type of supervised machine learning model employed can be determined by those skilled in the art based on actual requirements.
In some embodiments, the initial model may be a plurality of supervised machine learning models of the same type, or may be a plurality of supervised machine learning models of different types.
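For illustration, steps S901 and S902 can be sketched with scikit-learn as follows; the particular model pool, the synthetic data and the use of test accuracy as the sole evaluation metric are assumptions of this example rather than limitations of the method.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = rng.random((500, 7))                 # e.g. NCA-reduced feature vectors (illustrative)
y = rng.integers(0, 3, size=500)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.9, random_state=0, stratify=y)

# Illustrative pool of initial models, one per family mentioned above.
initial_models = {
    "linear_discriminant": LinearDiscriminantAnalysis(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "neural_network": MLPClassifier(max_iter=1000, random_state=0),
    "linear_svm": SVC(kernel="linear"),
    "gaussian_svm": SVC(kernel="rbf"),
    "naive_bayes": GaussianNB(),
    "ensemble": RandomForestClassifier(random_state=0),
}

# Step S901: train each initial model on the same training data set,
# obtaining one candidate recognition model per entry.
candidates = {name: model.fit(X_train, y_train) for name, model in initial_models.items()}

# Step S902: evaluate each candidate (test accuracy used here for simplicity)
# and keep the best-performing one as the target recognition model.
best_name = max(candidates, key=lambda name: candidates[name].score(X_test, y_test))
target_model = candidates[best_name]
print(best_name, target_model.score(X_test, y_test))
```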
In some embodiments, the training data set may be multiple sample data sets obtained from pool boiling process images of the same boiling state using different feature extraction modes, or may be a sample data set obtained with a single extraction mode from pool boiling process images of different boiling states.
The prediction results of the candidate models fall into four groups: True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN). TP, TN, FP and FN are the four quantities of a confusion matrix (Confusion Matrix) used to evaluate the performance of a classification model.
Where TP represents the number of samples that are actually positive and predicted by the model as positive; TN represents the number of samples that are actually negative and predicted by the model as negative; FP represents the number of samples that are actually negative but predicted by the model as positive; FN represents the number of samples that are actually positive examples but predicted by the model as negative examples.
In the recognition task, the confusion matrix serves as the decision criterion. In addition, training time, model size and other important performance-related parameters are also considered in model performance assessment. In some alternative embodiments, the metrics for candidate model performance evaluation may include accuracy, precision, recall and F1 score.
Specifically, accuracy (ACC) refers to the ratio of correctly classified samples to the total number of samples: ACC = (TP + TN)/(TP + TN + FP + FN);
precision (Precision) refers to the proportion of truly positive samples among the samples predicted as positive: Precision = TP/(TP + FP);
recall (Recall) refers to the proportion of samples predicted as positive among the samples that are actually positive: Recall = TP/(TP + FN);
the F1 value is the harmonic mean of precision and recall and is used to comprehensively evaluate the classification effect of the model: F1 = 2 × Precision × Recall/(Precision + Recall).
The closer each index is to 100%, the better the performance of the candidate model; an F1 value close to 100% indicates that the model performs well on both precision and recall.
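For illustration, the four indices can be computed directly from the confusion-matrix counts; the numerical example is purely illustrative.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from the confusion-matrix counts above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Example: 95 true positives, 90 true negatives, 5 false positives, 10 false negatives.
print(classification_metrics(tp=95, tn=90, fp=5, fn=10))
# -> (0.925, 0.95, 0.9047..., 0.9268...)
```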
In some embodiments, for pool boiling sample pictures, the acquired sample data may be divided into 4 sample data sets according to the feature extraction mode (PEA or MRA) and whether data dimension reduction (NCA) is performed, as shown in Table 1. Each sample data set may be used to train X initial models; if X is 10, 40 candidate models are obtained. By evaluating the performance of the 40 candidate models, the target recognition model, i.e. the model with the best performance evaluation result, can be obtained and used as the recognition model for pool boiling process identification.
Table 1:
Sample data set 1: PEA feature extraction, without NCA dimension reduction (49 features)
Sample data set 2: PEA feature extraction, with NCA dimension reduction (7 features)
Sample data set 3: MRA feature extraction, without NCA dimension reduction (31 features)
Sample data set 4: MRA feature extraction, with NCA dimension reduction (7 features)
In some embodiments, with the pool boiling process recognition model training method provided by the embodiments of the present disclosure, the training time of the obtained candidate recognition models ranges from 1.84 s to 18.95 s, the model size ranges from 23 kB to 281 kB, and the validation and test accuracy reach 98.4% and 99.10%, respectively. The top three recognition models after performance evaluation are an efficient linear support vector machine model, a medium Gaussian support vector machine model and a linear discriminant analysis model, whose test precision, recall and stage-specific F1 values all reach 100%. Models trained with sample data generated by PEA perform better overall than models trained with sample data generated by MRA, and models trained with all the sample data (without dimension reduction) perform better overall than models trained with the NCA-reduced sample data.
It should be understood that, although the steps in the flowcharts of FIG. 2 to FIG. 9 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps shown in FIG. 2 to FIG. 9, and of the steps involved in other embodiments, is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps of the foregoing embodiments may include multiple sub-steps or stages; these sub-steps or stages are not necessarily performed at the same moment, but may be performed at different moments, and they are not necessarily performed sequentially, but may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In a third aspect, the disclosed embodiments provide a pool boiling process identification apparatus. As shown in FIG. 10, a pool boiling process identification apparatus 1000 comprises a processor 1001 and a display 1002.
The processor 1001 may be configured to perform the steps of the pool boiling process identification method provided in any of the embodiments of the present disclosure in the first aspect.
The display 1002 is configured to display the identification result, obtained by the processor 1001, of the pool boiling process of the fluid in the container.
In some embodiments, the pool boiling process identification apparatus may further comprise a heat flux density adjustment assembly 1003 for adjusting the heat flux density of part or all of the interior of the container according to instructions from the processor 1001; the assembly may also be used to adjust the heat flux density of part or all of the interior of the container according to an adjustment instruction input by a user.
For specific limitations on the pool boiling process identification apparatus, reference is made to the above limitations on the pool boiling process identification method, which are not repeated here.
In a fourth aspect, embodiments of the present disclosure provide a computer device, which may be a server, whose internal structure may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of identifying pool boiling processes in any of the embodiments of the first aspect herein.
Embodiments of the present disclosure also provide a computer device, which may be a server; its internal structure may be as shown in FIG. 11 or may differ from FIG. 11. The computer device includes a processor, a memory and a network interface connected through a system bus, and the processor of the computer device may implement the pool boiling process identification model training method of any of the embodiments of the second aspect herein.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of a portion of the structure associated with an embodiment of the present disclosure and is not limiting of the computer device to which an embodiment of the present disclosure is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The disclosed embodiments provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of identifying pool boiling procedures in any of the previous embodiments.
The disclosed embodiments provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the pool boiling process identification model training method of any of the previous embodiments.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer readable storage medium, and the program, when executed, may include the procedures of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided by the present disclosure may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM) and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered to be within the scope of this specification.
The foregoing examples merely represent several embodiments of the present disclosure, and although they are described in considerable detail, they are not to be construed as limiting the scope of the disclosure. It should be noted that variations and modifications can be made by those skilled in the art without departing from the spirit of the disclosure, and such variations and modifications fall within the scope of the disclosure. Accordingly, the scope of the present disclosure should be determined by the appended claims.

Claims (10)

1. A pool boiling process identification method, the method comprising:
acquiring a target image of fluid in a container, wherein the target image comprises a plurality of area images corresponding to target objects, and the target objects comprise at least one type of object among bubbles, gas films and liquid films generated by heating of the pool boiling fluid;
determining geometrical attribute values of the target objects corresponding to the plurality of region images;
grouping the plurality of region images according to the geometric attribute values;
extracting characteristic data of each group of regional images;
and inputting the characteristic data of each group of regional images into a pre-trained recognition model, and outputting a recognition result of the pool boiling process, wherein the recognition model is a machine learning model obtained by training according to a training data set obtained by a plurality of pool boiling sample data.
2. The method according to claim 1, wherein the method further comprises: dividing the target image by adopting a cellular automaton algorithm to obtain a plurality of area images corresponding to at least one type of target object;
the geometric property values include the area and/or size of the region image.
3. The method of claim 1, wherein said grouping said plurality of region images according to said geometric property value comprises:
sequencing the plurality of region images according to the geometric attribute values;
and dividing the plurality of region images into N groups according to the ordered sequence relation, so that the geometric attribute values of the region images in the M-th group of the N groups of region images are smaller than or larger than those of the region images in the (M+1)-th group, wherein M is smaller than N, and N is a positive integer larger than 1.
4. The method of claim 1, wherein extracting feature data for each set of region images comprises:
acquiring a binary image corresponding to each group of region images, and extracting characteristic data of the binary image as characteristic data of each group of region images;
and/or,
the extracting the characteristic data of each group of regional images comprises the following steps: extracting attribute characteristic data of the target object of each group of region images, and/or carrying out morphological processing on each group of region images to obtain image pixel value data corresponding to different filter elements; wherein the attribute characteristic data includes at least one of a number, an area, a volume, a diameter, a filling area, a perimeter, a maximum Feret angle, a minimum Feret angle, a roundness, and an eccentricity of the target object, and the image pixel value data includes at least one of a total number of local maxima along a vertical axis and a total number of local maxima along a horizontal axis.
5. The method of claim 1, wherein acquiring the target image of the fluid in the container comprises:
carrying out foreground and background segmentation on an original image of fluid in a container to obtain a target image corresponding to a foreground region of the original image;
The original image is an image of a pool boiling process of the fluid in the container acquired by the sensor, or the original image is a video frame image of the pool boiling process of the fluid in the container shot by the camera.
6. The method according to claim 1, wherein the method further comprises:
controlling a display to display the identification result of the pool boiling process;
and, according to the identification result and a preset control parameter, controlling the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container, or controlling the heat flux density adjusting assembly to adjust part or all of the heat flux density in the container in response to a heat flux density adjusting instruction input by the instruction input assembly.
7. A pool boiling process identification model training method, wherein the identification model is a machine learning model, the method comprising:
acquiring a sample image of fluid in a container, wherein the sample image comprises a plurality of area images corresponding to a target object, and the target object comprises at least one type of object among bubbles, gas films and liquid films generated by heating of the pool boiling fluid;
determining geometrical attribute values of the target objects corresponding to the plurality of region images;
grouping the plurality of region images according to the geometric attribute values;
extracting characteristic data of each group of regional images as sample data;
and generating a training data set according to the sample data, and training an initial model to obtain the identification model.
8. The method of claim 7, wherein the initial model is a plurality of machine learning models, the method further comprising:
respectively inputting the training data set into each initial model for training to obtain a plurality of candidate recognition models;
performing performance evaluation on the candidate recognition model to obtain a target recognition model;
the initial model is a supervised machine learning model and comprises at least one of a linear model, a decision tree model, a neural network model, a support vector machine model, a Bayesian classifier model and an ensemble learning model.
9. A pool boiling process identification apparatus, the apparatus comprising:
a processor for performing the steps of the method of any one of claims 1 to 6;
and the display is used for displaying the identification result of the boiling process of the fluid pool in the container identified by the processor.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 8 when the computer program is executed by the processor.
CN202311231934.1A 2023-09-21 2023-09-21 Pool boiling process identification method, model training method, device and equipment Pending CN117372920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311231934.1A CN117372920A (en) 2023-09-21 2023-09-21 Pool boiling process identification method, model training method, device and equipment

Publications (1)

Publication Number Publication Date
CN117372920A true CN117372920A (en) 2024-01-09

Family

ID=89401249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311231934.1A Pending CN117372920A (en) 2023-09-21 2023-09-21 Pool boiling process identification method, model training method, device and equipment

Country Status (1)

Country Link
CN (1) CN117372920A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183563A (en) * 2019-07-01 2021-01-05 Tcl集团股份有限公司 Image recognition model generation method, storage medium and application server
CN110693332A (en) * 2019-09-19 2020-01-17 杭州九阳小家电有限公司 Rice cooking method and device
CN111310808A (en) * 2020-02-03 2020-06-19 平安科技(深圳)有限公司 Training method and device of picture recognition model, computer system and storage medium
CN115398153A (en) * 2020-04-14 2022-11-25 善肴控股株式会社 Heating state recognition device, heating control method, heating state recognition system, and heating control system
CN112101469A (en) * 2020-09-18 2020-12-18 童尚仁 Boiling phenomenon judgment device and method based on deep learning and optical reflection structure
KR102379855B1 (en) * 2021-05-31 2022-03-28 한국교통대학교산학협력단 Method and apparatus for generating object detection model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
肖清泰; 黄峻伟; 潘建新; 刘; 徐建新; 王华: "Evaluation of bubble mixing uniformity in a direct-contact boiling heat transfer process", 化工学报 (CIESC Journal), no. 08, 12 May 2017 (2017-05-12) *
贾涛; 刁彦华; 张辉亚: "Identification of the vaporization nucleus density in nucleate boiling using mathematical morphology", 华中科技大学学报(自然科学版) (Journal of Huazhong University of Science and Technology, Natural Science Edition), no. 11, 28 November 2006 (2006-11-28) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination