NO20220761A1 - Burner control - Google Patents

Burner control

Info

Publication number
NO20220761A1
NO20220761A1
Authority
NO
Norway
Prior art keywords
smoke
image data
flame
mask
combustion
Prior art date
Application number
NO20220761A
Inventor
Charles Toussaint
Salma Benslimane
Hugues Trifol
Francis Allouche
Original Assignee
Schlumberger Technology Bv
Priority date
Filing date
Publication date
Application filed by Schlumberger Technology Bv filed Critical Schlumberger Technology Bv
Publication of NO20220761A1 publication Critical patent/NO20220761A1/en


Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23G CREMATION FURNACES; CONSUMING WASTE PRODUCTS BY COMBUSTION
    • F23G5/00 Incineration of waste; Incinerator constructions; Details, accessories or control therefor
    • F23G5/08 Incineration of waste; Incinerator constructions; Details, accessories or control therefor having supplementary heating
    • F23G5/12 Incineration of waste; Incinerator constructions; Details, accessories or control therefor having supplementary heating using gaseous or liquid fuel
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N5/00 Systems for controlling combustion
    • F23N5/02 Systems for controlling combustion using devices responsive to thermal changes or to thermal expansion of a medium
    • F23N5/08 Systems for controlling combustion using devices responsive to thermal changes or to thermal expansion of a medium using light-sensitive elements
    • F23N5/082 Systems for controlling combustion using devices responsive to thermal changes or to thermal expansion of a medium using light-sensitive elements using electronic means
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23G CREMATION FURNACES; CONSUMING WASTE PRODUCTS BY COMBUSTION
    • F23G7/00 Incinerators or other apparatus for consuming industrial waste, e.g. chemicals
    • F23G7/06 Incinerators or other apparatus for consuming industrial waste, e.g. chemicals of waste gases or noxious gases, e.g. exhaust gases
    • F23G7/08 Incinerators or other apparatus for consuming industrial waste, e.g. chemicals of waste gases or noxious gases, e.g. exhaust gases using flares, e.g. in stacks
    • F23G7/085 Incinerators or other apparatus for consuming industrial waste, e.g. chemicals of waste gases or noxious gases, e.g. exhaust gases using flares, e.g. in stacks in stacks
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N1/00 Regulating fuel supply
    • F23N1/02 Regulating fuel supply conjointly with air supply
    • F23N1/022 Regulating fuel supply conjointly with air supply using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23J REMOVAL OR TREATMENT OF COMBUSTION PRODUCTS OR COMBUSTION RESIDUES; FLUES
    • F23J2900/00 Special arrangements for conducting or purifying combustion fumes; Treatment of fumes or ashes
    • F23J2900/15004 Preventing plume emission at chimney outlet
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N5/00 Systems for controlling combustion
    • F23N5/18 Systems for controlling combustion using detectors sensitive to rate of flow of air or fuel
    • F23N2005/181 Systems for controlling combustion using detectors sensitive to rate of flow of air or fuel using detectors sensitive to rate of flow of air
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N5/00 Systems for controlling combustion
    • F23N5/18 Systems for controlling combustion using detectors sensitive to rate of flow of air or fuel
    • F23N2005/185 Systems for controlling combustion using detectors sensitive to rate of flow of air or fuel using detectors sensitive to rate of flow of fuel
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N2229/00 Flame sensors
    • F23N2229/20 Camera viewing
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F23 COMBUSTION APPARATUS; COMBUSTION PROCESSES
    • F23N REGULATING OR CONTROLLING COMBUSTION
    • F23N2900/00 Special features of, or arrangements for controlling combustion
    • F23N2900/05006 Controlling systems using neuronal networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Description

BURNER CONTROL
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Application Serial No.: 62/957,648 filed 6 January 2020, which is herein incorporated by reference.
BACKGROUND
Field
[0002] Embodiments described herein generally relate to burners for excess hydrocarbon. Specifically, embodiments described herein relate to control of combustion in such burners.
Description of the Related Art
[0003] The global oil and gas industry is trending toward improved environmental safety and compliance throughout the various phases of a well lifecycle. Exploration and production involve dynamic well testing that can produce a large amount of hydrocarbons at the surface. Excess hydrocarbons cannot be stored, so the most economically viable option is often to dispose of them by flaring. This is even more relevant for offshore operations.
[0004] Combustion of hydrocarbon will typically result in some environmental impact, even for clean burner operation without visible fallout and smoke. Most of the environmental impact is created by spill and fallout, which can be due to incomplete combustion from a change in fluid, poor burner operating parameters, and/or poor monitoring. The startup and shutdown phases are critical and need to be monitored closely, which requires good human communication and interaction.
[0005] Even the best burner needs constant monitoring and air supply adjustment during such operations to maintain acceptable combustion through variation in fluid properties, flowrates, and weather conditions.
[0006] For the continuous burning phase, which can last for days, monitoring and regulation of air supply to the burner becomes difficult. Failing to monitor the combustion and adjust the air supply according to the flame or smoke appearance has an immediate impact on the combustion quality and emissions from the burner. Improved methods of monitoring and controlling hydrocarbon burners are needed.
SUMMARY
[0007] Embodiments described herein provide methods of controlling hydrocarbon burner systems, including acquiring image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke; processing the image data using at least one trained machine learning model to generate a flame mask and a smoke mask; characterizing the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and, based on the characterizing, controlling the operation of the burner system. Various other examples of methods, systems, computer-program products, etc., are also described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only exemplary embodiments and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
[0009] Fig. 1 is a series of diagrams of example environments and an example of a burner system.
[0010] Fig. 2 is a diagram of an example of a burner system.
[0011] Fig. 3 is a diagram of an example of a burner boom.
[0012] Fig. 4 is a diagram of an example of a burner system.
[0013] Fig. 5 is a diagram of an example of a method.
[0014] Fig. 6 is a diagram of an example of a color map.
[0015] Fig. 7 is a diagram of an example of a smoke chart.
[0016] Fig. 8 is a diagram of an example of a method.
[0017] Fig. 9 is a diagram of an example of a machine learning model.
[0018] Fig. 10 is a diagram of an example of a computational framework and some examples of models.
[0019] Fig. 11 is a diagram of an example of a method.
[0020] Fig. 12 is a diagram of an example of a method and an example of a system.
[0021] Fig. 13 is a diagram of an example of a computing system.
[0022] Fig. 14 is a diagram of example components of a system and a networked system.
[0023] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.
DETAILED DESCRIPTION
[0024] The following description includes the best mode presently contemplated for practicing the described implementations. This description is not to be taken in a limiting sense, but rather is made merely for the purpose of describing the general principles of the implementations. The scope of the described implementations should be ascertained with reference to the issued claims.
[0025] Fig. 1 shows examples of environments 101, including a marine environment 102 and a land environment 104, where the marine environment 102 includes various equipment and where the land environment 104 includes various equipment. As shown, each of the environments 101 can include one or more wellheads 106 (e.g., wellhead equipment). A wellhead can be a surface termination of a wellbore that can include a system of spools, valves and assorted adapters that, for example, can provide for pressure control of a production well. A wellhead may be at a land surface, a subsea surface (e.g., an ocean bottom, etc.), etc. As an example, conduits from multiple wellheads may be joined at one or more manifolds such that fluid from multiple wells can flow in a common conduit.
[0026] At various times, a well may be tested using a process referred to as well testing. Well testing can include one or more of a variety of well testing operations.
In various instances, fluid can flow from a well or wells to surface where the fluid is subjected to one or more well testing operations that generate scrap (e.g., waste fluid), which must be handled appropriately, for example, according to circumstances, regulations, etc. For example, consider loading waste fluid into a tanker for transport to a facility that can dispose of the waste fluid. Another manner of handling waste fluid can be through combustion, which can be referred to as burning. As an example, burning can be part of a well testing process, whether burning is for handling waste fluid and/or for analyzing one or more aspects of how one or more waste fluids burn. As to the latter, burning may optionally provide data as to one or more characteristics of well fluid (e.g., a component thereof, etc.).
[0027] As an example, well testing can be performed during one or more phases such as during exploration and appraisal where production of hydrocarbons is tested using a temporary production facility that can provide for fluid sampling, flow rate analysis and pressure information generation, for example, to help characterize a reservoir. Various decisions can be based on well testing such as, for example, decisions as to production methods, facilities and possible well productivity improvements.
[0028] As an example, disposal of produced hydrocarbons during well testing may be via burning, which can include on-site burning and/or off-site burning. Burning can be particularly suitable when facilities are not available for storage (e.g., consider mobile offshore drilling rigs, remote locations onshore, etc.).
[0029] As to the example environments 101 of Fig. 1, well testing may be performed, for example, using equipment shown in the marine environment 102 and/or using equipment shown in the land environment 104. As an example, an environment may be under exploration, development, appraisal, etc., where such an environment includes at least one well where well fluid can be produced (e.g., via natural pressure, via fracturing, via artificial lift, via pumping, via flooding, etc.). In such an environment, various types of equipment may be on-site, which may be operatively coupled to well testing equipment.
[0030] Fig. 1 shows an example of a system 110 that can be operatively coupled to one or more conduits that can transport well fluid, for example, from one or more wellheads. As shown, the system 110 can include a computational system 111 (CS), which can include one or more processors 112, memory 114 accessible to at least one of the one or more processors 112, instructions 116 that can be stored in the memory 114 and executable by at least one of the one or more processors 112, and one or more interfaces 118 (e.g., wired, wireless, etc.). In the example of Fig. 1, the system 110 is shown as including various communication symbols, which may be for transmission and/or reception of information (e.g., data, commands, etc.), for example, to and/or from the computational system 111. As an example, the computational system 111 can be a controller that can issue control instructions to one or more pieces of equipment in an environment such as, for example, the marine environment 102 and/or the land environment 104. As an example, the computational system 111 may be local, may be remote or may be distributed such that it is in part local and in part remote.
[0031] Referring again to the wellhead 106, it can include various types of wellhead equipment such as, for example, casing and tubing heads, a production tree, a blowout preventer, etc. Fluid produced from a well can be routed through the wellhead 106 and into the system 110, which can be configured with various features for well testing operations.
[0032] In the example of Fig. 1 , the system 110 is shown to include various segments, which may be categorized operationally. For example, consider a well control segment 120, a separation segment 122, a fluid management segment 124, and a burning segment 126.
[0033] As shown in the example of Fig. 1, the well control segment 120 is an assembly of various components such as a manifold 130, a choke manifold 132, a manifold 134, a heat exchanger 136 and a meter 138; the separation segment 122 includes a separator 142; the fluid management segment 124 is an assembly of various components such as manifolds and pumps 144, a manifold 146-1, a manifold 146-2, a tank 148-1 and a tank 148-2; and the burning segment 126 includes a burner 152 and one or more cameras 154.
[0034] As mentioned, in the example of Fig. 1, the system 110 includes various features for one or more aspects of well testing operations; noting that the system 110 may include fewer features, more features, alternative features, etc. For example, consider one or more of a gas specific gravity meter, a water-cut meter, a gas-to-oil ratio sensor, a carbon dioxide sensor, a hydrogen sulfide sensor, or a shrinkage measurement device. Various features may be upstream and/or downstream of a separator segment or a separator.
[0035] With respect to flow of fluid from a well or wells, such fluid may be received by the well control segment 120 and then routed via one or more conduits to the separation segment 122. In the example of Fig. 1, within the well control segment 120, the heat exchanger 136 may be provided as a steam-heat exchanger and the meter 138 may be provided for measuring flow of fluid through the well control segment 120.
[0036] As mentioned, the well control segment 120 can convey fluid received from one or more wells to the separator 142. As an example, the separator 142 can be a horizontal separator or a vertical separator, and can be a two-phase separator (e.g., for separating gas and liquids) or a three-phase separator (e.g., for separating gas, oil, and water). A separator may include various features for facilitating separation of components of incoming fluid (e.g., diffusers, mist extractors, vanes, baffles, precipitators, etc.).
[0037] As an example, fluid can be single phase or multiphase fluid where “phase” refers to an immiscible component (e.g., consider two or more of oil, water and gas).
[0038] As an example, the separator 142 can be used to substantially separate multiphase fluid into its oil, gas, and water phases, as appropriate and as present, where each phase emerging from the separator 142 may be referred to as a separated fluid. Such separated fluids may be routed away from the separator 142 to the fluid management segment 124. In various instances, the separated fluids may not be entirely homogenous. For example, separated gas exiting the separator 142 can include some residual amount of water or oil and separated water exiting the separator 142 can include some amount of oil or entrained gas. Similarly, separated oil leaving the separator 142 can include some amount of water or entrained gas.
[0039] As shown in the example of Fig. 1, the fluid management segment 124 includes flow control equipment, such as various manifolds and pumps (generally represented by the block 144) for receiving fluids from the separator 142 and conveying the fluids to other destinations, as well as additional manifolds 146-1 and 146-2 for routing fluid to and from fluid tanks 148-1 and 148-2. While two manifolds 146-1 and 146-2 and two tanks 148-1 and 148-2 are depicted in Fig. 1, it is noted that the number of manifolds and tanks can be varied. For instance, in one embodiment the fluid management segment 124 can include a single manifold and a single tank, while in other embodiments the fluid management segment 124 can include more than two manifolds and more than two tanks.
[0040] As to the manifolds and pumps 144, they can include a variety of manifolds and pumps, such as a gas manifold, an oil manifold, an oil transfer pump, a water manifold, and a water transfer pump. In at least some embodiments, the manifolds and pumps 144 can be used to route fluids received from the separator 142 to one or more of the fluid tanks 148-1 and 148-2 via one or more of the additional manifolds 146-1 and 146-2, and to route fluids between the tanks 148-1 and 148-2. As an example, the manifolds and pumps 144 can include features for routing fluids received from the separator 142 directly to the one or more burners 152 for burning gas and oil (e.g., bypassing the tanks 148-1 and 148-2) or for routing fluids from one or more of the tanks 148-1 and 148-2 to the one or more burners 152.
[0041] As noted above, components of the system 110 may vary between different applications. As an example, equipment within each functional group of the system 110 may also vary. For example, the heat exchanger 136 could be provided as part of the separation segment 122, rather than of the well control segment 120.
[0042] In certain embodiments, the system 110 can be a surface well testing system that can be monitored and controlled remotely. Remote monitoring may be effectuated with sensors installed on various components. In some instances, a monitoring system (e.g., sensors, communication systems, and human-machine interfaces) can enable monitoring of one or more of the segments 120, 122, 124 and 126. As shown in the example of Fig. 1 , the one or more cameras 154 can be used to monitor one or more burning operations of the one or more burners 152, which may aim to facilitate control of such one or more burning operations at least in part through analysis of image data acquired by at least one of the one or more cameras 154.
[0043] Fig. 2 shows an example of a system 250, which may be referred to as a surface well testing system. The system 250 can include various features of the system 110 of Fig. 1.
[0044] In Fig. 2, a multiphase fluid (represented here by arrow 252) enters a flowhead 254 and is routed to a separator 270 through a surface safety valve 256, a steam-heat exchanger 260, a choke manifold 262, a flow meter 264, and an additional manifold 266. In the example of Fig. 2, the system 250 includes a chemical injection pump 258 for injecting chemicals into the multiphase fluid flowing toward the separator 270.
[0045] In the depicted embodiment of Fig. 2, the separator 270 is a three-phase separator that generally separates the multiphase fluid 252 into gas, oil, and water components. The separated gas is routed downstream from the separator 270 through a gas manifold 274 to either of the burners 276-1 and 276-2 for flaring gas and burning oil. The gas manifold 274 includes valves that can be actuated to control flow of gas from the gas manifold 274 to one or the other of the burners 276-1 and 276-2. Although shown next to one another in Fig. 2 for sake of clarity, the burners 276-1 and 276-2 may be positioned apart from one another, such as on opposite sides of a rig, etc.
[0046] As shown, the separated oil from the separator 270 can be routed downstream to an oil manifold 280. Valves of the oil manifold 280 can be operated to permit flow of the oil to either of the burners 276-1 and 276-2 or either of the tanks 282 and 284. The tanks 282 and 284 can be of a suitable form, but are depicted in Fig. 2 as vertical surge tanks each having two fluid compartments. This allows each tank to simultaneously hold different fluids, such as water in one compartment and oil in the other compartment. An oil transfer pump 286 may be operated to pump oil through the well testing system 250 downstream of the separator 270. The separated water from the separator 270 can be similarly routed to a water manifold 290. Like the oil manifold 280, the water manifold 290 includes valves that can be opened or closed to permit water to flow to either of the tanks 282 and 284 or to a water treatment and disposal apparatus 294. A water transfer pump 292 may be used to pump the water through the system.
[0047] A well test area in which the well testing system 250 (or other embodiments of a well testing system) is installed may be classified as a hazardous area. In some embodiments, the well test area is classified as a Zone 1 hazardous area according to International Electrotechnical Commission (IEC) standard 60079-10-1:2015.
[0048] In the example of Fig. 2, a cabin 296 at a wellsite may include various types of equipment to acquire data from the well testing system 250. These acquired data may be used to monitor and control the well testing system 250. In at least some instances, the cabin 296 can be set apart from the well test area having the well testing system 250 in a non-hazardous area. This is represented by the dashed line 298 in Fig. 2, which generally serves as a demarcation between the hazardous area having the well testing system 250 and the non-hazardous area of the cabin 296.
[0049] The equipment of a well testing system can be monitored during a well testing process to verify proper operation and facilitate control of the process. Such monitoring can include taking numerous measurements during a well test, examples of which can include choke manifold temperature and pressures (upstream and downstream), heat exchanger temperature and pressure, separator temperature and pressures (static and differential), oil flow rate and volume from the separator, water flow rate and volume from the separator, and fluid levels in tanks of a system.
[0050] As an example, a mobile monitoring system may be provided with a surface well testing installation. In such an example, monitoring of a well test process can be performed on a mobile device (e.g., a mobile device suitable for use in Zone 1 hazardous area, like the well test area). Various types of information may be automatically acquired by sensors and then presented to an operator via the mobile device. The mobile monitoring system may provide various functions, such as a sensor data display, video display, sensor or video information interpretation for quality-assurance and quality-control purposes, and a manual entry screen (e.g., for a digital tally book for recording measurements taken by the operator).
[0051] Fig. 3 shows an example of a burner boom 300, which can be configured for horizontal mounting, mounting at an angle, vertical mounting, etc. For example, the burner boom 300 can be mounted on a rig with a rotating base plate and guy lines. In such an example, horizontal guy lines can help to orient the burner boom 300; vertical guy lines, which are fixed to the rig’s main structure, can support the burner boom 300. A rotating base can enable horizontal and vertical positioning of the burner boom 300 and its burner. As an example, the burner boom 300 may be positioned slightly above horizontal so that oil left in piping after flaring operations does not leak out.
[0052] As an example, a burner can be boom mounted or mounted on another type of support structure. As an example, a structure can support various conduits that provide fluid such as, for example, one or more of air, water, oil, and propane.
[0053] In the example of Fig. 3, conduits or lines include an additional gas line 310, pilot line cables 320, an oil line 330, a water line 340, a water wall screen line 350, an air line 360 and a main gas line 370.
[0054] As an example, a burner can be configured and controlled to perform in a desirable manner. For example, it may be desired to burn in a fallout-free and smokeless manner for combustion of liquid hydrocarbons produced during well testing. As an example, a burner geometry can utilize pneumatic atomization and enhanced air induction. As an example, a burner can be equipped with one or more pilots, a flame-front ignition system (BRFI), and a built-in water screen to reduce heat radiation. As an example, a burner can be fitted with an automatic shutoff valve that prevents oil spillage at the beginning and end of a burning run.
[0055] As an example, a burner can include a high turn-down feature (e.g., 1:5), which may be optionally further extended to a higher ratio (e.g., 1:30) using a multirate kit (BMRK) option, which allows for selecting the number of operating nozzles. For onshore operations, a skid may be utilized for skid-mounting.
[0056] As an example, a burner may be suited for high efficiency burning with one or more types of oil (e.g., including particularly heavy and waxy oils). As an example, a burner may operate effectively up to a water cut rating (e.g., up to 25 percent water cut), which may be desirable for various types of cleanup operations.
[0057] As a burner may be operational in a manner that provides for substantially no liquid fallout and substantially no visible smoke emissions, such a burner may be particularly suited for operations in environmentally sensitive areas.
[0058] Fig. 4 is a system diagram of an example of a burner control system 400 according to one embodiment. The system 400 includes at least one camera 407-1 and 407-2 positioned to capture one or more images 402-1 and 402-2 of a flare emitted by a burner 401. In the example of Fig. 4, two cameras 407-1 and 407-2 are shown capturing images 402-1 and 402-2 from different locations to acquire image data from more than one image plane of the flare. The burner 401 includes a fuel feed 403 that flows fuel to the burner 401 (see, e.g., the burners 276-1 and 276-2 of Fig. 2, the burner 300 of Fig. 3, etc.). The burner 401 also includes an air feed 405 that flows air to the burner 401. Flow rate of the air feed 405 is controlled by a control valve 408, where an air flow sensor 411 senses flow rate of air into the burner 401. A fuel flow sensor 413 senses flow rate of fuel to the burner 401. Other sensors 404, along with the at least one camera 407-1 and 407-2, are operatively coupled to local electronic equipment 420. The sensors 404 may sense, and produce signals representing, combustion effective parameters such as temperature, wind speed, and ambient humidity. The sensors 404, 411, and 413, and the cameras 407-1 and 407-2 send data, including data representing the images 402-1 and 402-2, along with data representing readings of the sensors 404, 411, and 413, directly and/or indirectly, to the local electronic equipment 420. The data sent to the local electronic equipment 420 can represent a state of combustion taking place at the burner 401.
[0059] As shown in the example of Fig. 4, the local electronic equipment 420 can be in communication with remote electronic equipment 440. For example, consider use of one or more wired and/or wireless interfaces that allow for communications between the local and remote electronic equipment 420 and 440. In such an example, various computational tasks may be executed locally and/or remotely. For example, consider a local computing device that can include an application that can render a graphical user interface (GUI) to a display. In such an example, the GUI can include control graphics that are selectable to issue instructions such as, for example, one or more application programming interface (API) calls, which may be directed to the local electronic equipment 420, the remote electronic equipment 440, etc., to cause one or more actions to occur such as formulation of a response to an API call.
[0060] As an example, the local and remote electronic equipment 420 and 440 may be configured in a client-server arrangement where the local electronic equipment 420 operates as a client and the remote electronic equipment 440 operates as a server. As an example, data acquired by the local electronic equipment 420 (e.g., as part of the system 400) may be processed for local control and/or transmitted (e.g., as raw and/or processed data) for processing, storage, etc., by the remote electronic equipment 440. As an example, the remote electronic equipment 440 can include one or more cloud-based resources. In such an example, the remote electronic equipment 440 may provide services such as, for example, software as a service. As an example, control effectuated by the system 400 to control the burner 401 can be based on local and/or remote computing (e.g., using the local electronic equipment 420, the remote electronic equipment 440, etc.).
[0061] As an example, control can be model-based, where such a model can be a physics-based model, a machine learning model, a hybrid model, etc. As an example, a model-based approach can allow for prediction of various parameters such as, for example, air control parameters based on the data from the sensors 404, 411, and 413 and at least one of the one or more cameras 407-1 and 407-2. As an example, one or more air control parameters can be applied to the control valve 408 to control air supply to the burner 401, which can be part of a combustion process that generates a flare that can be captured, as depicted in the one or more images 402-1 and 402-2.
[0062] “Camera,” as used herein, means an imaging device. A camera can capture an image of electromagnetic (EM) radiation in a medium that can be converted to data for use in digital processing. The conversion can take place within the camera or in a separate processor. The camera may capture images in one wavelength or across a spectrum (e.g., or spectra), which may encompass the ultraviolet (UV) spectrum, the visible spectrum, and/or the infrared spectrum. For example, a camera may capture an image of wavelengths from approximately 350 nm to approximately 1,500 nm. As an example, one or more of a broad spectrum imaging device such as a LIDAR detector, a narrower spectrum detector such as a charge-coupled device (CCD) array, and a short-wave infrared detector can be used as an imaging device or as imaging devices. Cameras can be monovision cameras or stereo cameras.
[0063] In the example of Fig. 4, the local electronic equipment 420 is shown as including an image processing unit 410, which may include and/or be operatively coupled to a model or models for purposes of processing one or more of the images 402-1 and 402-2 as captured by the at least one camera 407-1 and 407-2. As an example, a data set, along with sensor data representing oil flow rate, gas flow rate, water or steam flow rate, air flow rate, pressure, temperature, wind speed, ambient humidity, and other combustion effective parameters, can be considered different types of inputs. As an example, a model can receive input or inputs and can output one or more air control parameters, such as flow rate, pressure, and/or temperature, for the burner 401 (e.g., or burners) controlled by the system 400.
[0064] As to a machine learning model (ML model), such a model can be a neural network model (NN model). As an example, a trained ML model can be utilized to control one or more burners. As an example, a trained ML model can be trained with respect to a particular burner and/or type of burner. In such an approach, a trained ML model can be selected based at least in part on burner specifications (e.g., manufacturer, model, features, history, etc.).
[0065] Various types of data may be acquired and optionally stored, which may provide for training one or more ML models and/or for offline analysis, etc. For example, air control parameters output by a trained NN model can be stored in digital storage for later analysis, which may include further training, training a different ML model, etc. During control of an ongoing burning operation, one or more air control parameters can be transmitted to one or more control valves that control air supply to one or more burners as may be operatively coupled to the system controlled by the control system. Subsequent images and sensor data acquisitions can be captured, and the control cycle repeated as many times as desired. Frequency of repetition can depend on various time constants of the system 400. As an example, a cycle may be as short as a fraction of a second or as long as five to ten minutes. As an example, consider a 1 Hz operational frequency where several images are captured in a one second interval as in a video feed where computing air control parameters and applying the computed air control parameters to a control valve controlling air supply to the burner are based on the images in the one second interval. As an example, video may be live, with some amount of latency due to transmission and processing time, or video may be deliberately delayed by a delay amount.
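As a non-limiting illustration, the control cycle described above can be sketched in a few lines of Python; the helper names (capture_frames, compute_air_parameters, apply) are hypothetical placeholders and not part of this disclosure:

```python
import time

CYCLE_SECONDS = 1.0  # the 1 Hz operational frequency discussed above

def control_cycle(camera, model, air_valve):
    # Hypothetical objects: a camera source, a trained model, and an
    # actuator for the air control valve (see, e.g., valve 408 of Fig. 4).
    while True:
        start = time.monotonic()
        frames = camera.capture_frames()                # images from the interval
        params = model.compute_air_parameters(frames)   # e.g., target air flow rate
        air_valve.apply(params)                         # actuate the control valve
        # sleep out the remainder of the cycle before repeating
        time.sleep(max(0.0, CYCLE_SECONDS - (time.monotonic() - start)))
```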
[0066] The image processing unit 410 converts signals derived from photons received by the one or more cameras 407-1 and 407-2 into data. The image processing unit 410 may be within or separate from a camera. As an example, the image processing unit 410 can convert signals received from the one or more cameras 407-1 and 407-2 into digital data representing photointensity in defined areas of the image and can assign position information to each digital data value. As an example, photointensity may be deconvolved into constituent wavelengths to produce a spectrum for each pixel. Such a spectrum may be sampled in defined bins, and the data from such sampling structured into a data set representing spectral intensity of the received image, for example, as a function of x-y position in the image. As an example, a time-stamp can be added.
[0067] As an example, an image can be a pixel image with pixel position coordinate, a pixel depth and a time stamp. As to depth, various conventions may be utilized and depend on equipment and/or processing. Where color is utilized, a color depth can be referenced. For example, 8-bit color and 24-bit color can be the same where, in an RGB color space, 8-bits refers to each R, G and B (e.g., subpixel), while 24-bit is a sum of the three 8-bit channels (e.g., 3x8 = 24). Standards can include, for example, monochrome (e.g., 1-bit) to 4K (e.g., 12-bit color, which provides 4096 colors), etc.
[0068] As to color models, RGB can be mapped to a cube. For example, a horizontal x-axis can be a red axis (R) for red values, a y-axis can be a blue axis (B) for blue values, and a z-axis can be a green axis (G) for green values. The origin of such a cube (e.g., 0, 0, 0) can be black and an opposing point can be white (e.g., 1, 1, 1).
[0069] Another type of color model is the Y'UV model, which defines a color space in terms of one luma component (Y') and two chrominance components, called U (blue projection) and V (red projection), respectively. The Y'UV color model is used in the PAL composite color video (excluding PAL-N) standard.
[0070] Yet another type of color model is the HSV color model. The RGB color model can define a color as percentages of red, green, and blue hues (e.g., as mixed together) while the HSV color model can define color with respect to hue (H), saturation (S) and value (V). For the HSV color model, as hue varies from 0 to 1.0, corresponding colors vary from red through yellow, green, cyan, blue, magenta, and back to red (e.g., red values exist both at 0 and at 1.0); as saturation varies from 0 to 1.0, corresponding colors (hues) vary from unsaturated (e.g., shades of gray) to fully saturated (e.g., no white component); and as value, or brightness, varies from 0 to 1.0, corresponding colors become increasingly brighter.
[0071] Saturation may be described as, for example, representing purity of a color where colors with the highest saturation may have the highest values (e.g., represented as white in terms of saturation) and where mixtures of colors are represented as shades of gray (e.g., cyans, greens, and yellow shades are mixtures of true colors). As an example, saturation may be described as representing the “colorfulness” of a stimulus relative to its own brightness; where “colorfulness” is an attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic and where “brightness” is an attribute of a visual sensation according to which an area appears to emit more or less light.
[0072] As an example, a method can include accessing one or more resources as to color models (e.g., as a plug-in, external executable code, etc.). For example, consider a method that includes instructions to access an algorithm of a package, a computing environment, etc., such as, for example, the MATLAB® computing environment (marketed by MathWorks, Inc., Natick, MA). The MATLAB® computing environment includes an image processing toolbox, for example, with algorithms for color space (e.g., color model) conversions, transforms, etc. As an example, the MATLAB® computing environment includes functions “rgb2hsv” and “hsv2rgb” to convert images between the RGB and HSV color spaces (see, e.g., http://www.mathworks.com/help/images/converting-color~data-between-colorspaces.html).
[0073] Fig. 5 shows an example of a method 500 along with an example of an RGB color model 502 and an HSV color model 504 (e.g., where colors are represented by their first letter). In the example of Fig. 5, RGB data 522, 524 and 526 are provided from one or more sources. As shown, a transformation block 550 can transform the continuous RGB color model representation of the data 522, 524 and 526 into H band data 562, V band data 564 and/or S band data for HSV bands of the HSV color model 504. As an example, an algorithm that provides for conversion of RGB data to HSV data may be implemented, optionally with zero filling, etc., for example, where less than three sets of data are provided (e.g., two of data 522, 524 and 526).
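For illustration, RGB-to-HSV conversion of a single pixel can be performed with the standard-library colorsys module in Python, an equivalent of the MATLAB rgb2hsv function mentioned above; this is a minimal sketch, not the implementation of the transformation block 550:

```python
import colorsys

# colorsys expects R, G, B values in [0, 1] and returns H, S, V in [0, 1]
r, g, b = 1.0, 0.5, 0.0                  # a saturated orange, typical of a flame pixel
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                           # hue ~0.083 (orange), saturation 1.0, value 1.0
```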
[0074] Fig. 6 shows an example of a color map 610, which includes hue (H) and saturation (S) coordinates (e.g., of an HSV color model). As shown, the hue includes a range from red and transitions to red/orange, orange, orange/yellow and yellow.
[0075] Colors of a flame can depend on temperature of the flame and the material being burned. Red can be observed, for example, in a range of temperatures from approximately 525 C (977 F) to approximately 1000 C (1,830 F); orange can be observed in a range from approximately 1100 C (2,010 F) to approximately 1200 C (2,190 F); and white can be observed in a range from approximately 1300 C (2,370 F) to approximately 1500 C (2,730 F).
[0076] The orange in a campfire depends on temperature and sodium in firewood while blue streaks can come from carbon and hydrogen in the firewood. The colors of stars can also indicate their temperatures. The closest star to Earth, the sun, has a surface temperature of approximately 5538 C (10,000 F).
[0077] Fig. 7 shows an example of a Ringelmann smoke chart 700 as including four different graduated maps 710, 720, 730 and 740. The Ringelmann system can include graduated shades of gray, for example, five equal steps between white and black, which may be represented by the maps 710, 720, 730 and 740 as rectangular grilles of black lines of definite width and spacing on a white background; noting that white and black are not shown in Fig. 7.
[0078] In terms of numbered cards, Ringelmann charts may be reproduced as follows: Card 0, white; Card 1, black lines 1 mm thick, 10 mm apart, leaving white spaces 9 mm square (see map 710); Card 2, lines 2.3 mm thick, spaces 7.7 mm square (see map 720); Card 3, lines 3.7 mm thick, spaces 6.3 mm square (see map 730); Card 4, lines 5.5 mm thick, spaces 4.5 mm square (see map 740); and Card 5, black.
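As a worked check derived from the card geometry above (the percentages are computed here, not taken from the chart itself), treating each card as a square grid whose cell pitch equals line thickness plus white-square side gives black-coverage steps of roughly 20, 40, 60 and 80 percent:

```python
# black coverage of a square grille: 1 - (white square side / cell pitch)^2
cards = {1: (1.0, 9.0), 2: (2.3, 7.7), 3: (3.7, 6.3), 4: (5.5, 4.5)}  # (line mm, space mm)
for card, (line, space) in cards.items():
    coverage = 1.0 - (space / (space + line)) ** 2
    print(f"Card {card}: {coverage:.0%}")  # 19%, 41%, 60%, 80%
```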
[0079] The chart, as distributed by the Bureau of Mines, provides the shades of cards 1, 2, 3, and 4 on a single sheet, which are known as Ringelmann No. 1, 2, 3, and 4, respectively.
[0080] Many municipal, state, and federal regulations prescribe smoke-density limits based on the Ringelmann smoke chart, as published by the U.S. Bureau of Mines. While the chart still serves a useful purpose, it should be remembered that the data obtained by its use are empirical in nature. The apparent darkness or opacity of a stack plume can depend upon the concentration of the particulate matter in the effluent, the size of the particulate, the depth of the smoke column being viewed, natural lighting conditions such as the direction of the sun relative to the observer, and the color of the particles. Since unburned carbon is a principal coloring material in a smoke column from a furnace using coal or oil, the relative shade is a function of the combustion efficiency.
[0081] In various embodiments, a system, a method, etc., can provide for characterization of operational status of a well test burner. As an example, an approach can utilize one or more color maps. For example, consider use of an HSV color map for purposes of determining combustion quality based at least in part on hue (e.g., a hue histogram, etc.) and determining a smoke index (e.g., via a Ringelmann smoke chart, etc.). In such an example, one or more RGB color map images may be acquired and processed using one or more trained machine learning (ML) models to output masks, which can include a flare mask for assessing combustion quality and a smoke mask for assessing smoke. In such an example, the one or more RGB color map images may be transformed to one or more corresponding HSV color map images that can be assessed using one or more masks output by a trained ML model. As an example, operational status of a well test burner can be utilized for purposes of control of the well test burner. For example, consider a combustion quality indicator being utilized to adjust one or more operational parameters associated with flow of fluid to a burner (e.g., gas, oil, water, air, etc.). In such an example, a feedback loop can be established that aims to control how material is burned, for example, to achieve one or more goals (e.g., combustion, smoke, etc.).
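A minimal sketch of the mask-then-assess step described above, assuming a per-pixel flame probability map and an HSV frame as inputs; the function name and threshold are illustrative, not specified by this disclosure:

```python
import numpy as np

def flame_hue_histogram(hsv_frame, flame_mask, bins=32, threshold=0.5):
    # hsv_frame: (H, W, 3) float array with channels H, S, V in [0, 1]
    # flame_mask: (H, W) float array of per-pixel flame probabilities
    hues = hsv_frame[..., 0][flame_mask > threshold]   # hue of flame pixels only
    hist, edges = np.histogram(hues, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1), edges            # normalized hue distribution
```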
[0082] As an example, a method can be based on real-time video feed interpretation to generate status indicators and qualifiers of a burning process of well test equipment. As to a ML model, as an example, a convolutional neural network (CNN) can be trained and utilized to analyze, in real-time, RGB frames generated by a network of cameras placed in sight of one or more burners. In such an approach, the trained CNN can detect and delineate objects such as, for example, one or more flames, one or more regions of smoke and one or more protection water screens (e.g., sprinklers).
[0083] As an example, a method can include defining a set of parameters for characterization of a flare or flares. For example, consider a method that defines and determines via image interpretation the status of one or more of the following parameters: flame presence and size; flame quality indicator; white smoke presence and size; grey smoke presence and size; black smoke presence and size; and sprinkler presence. Such parameters can characterize the operational status of a burner, which can then optionally be controlled such that desired parameter values are achieved.
[0084] As to some examples of flare parameters, which may be additional and/or alternative to one or more other flare parameters, consider one or more of tip type, tip length, tip diameter, angle to horizon, angle to North, flowrate(s), caloric value, molecular weight(s), heat release, fraction radiated, temperature, wind speed, wind direction, solar radiation, transmissivity, etc. As an example, a computational framework such as the FLARESIM framework (Schlumberger Limited, Houston, Texas) may be utilized to determine values of one or more flare parameters. As an example, a framework may include or be operatively coupled to a ML model, which may be of a ML model framework that can output various types of data using a trained ML model or models. For example, a framework may access another framework via application programming interface (API) calls where information (e.g., data, etc.) is returned in response to such calls. As explained, a ML model can be trained for image analysis of image data for one or more flares, etc., which can be input to generate output (see, e.g., Fig. 8, Fig. 9, etc.).
[0085] Monitoring combustion of oil and/or gas in the field during early exploration days of a well can be quite informative. An operator in the field may be referred to as a “flame watcher” who observes a flame constantly to, on the one hand, make sure that the flame is on as the oil and/or gas are expelled and, on the other hand, to adjust combustion steering parameters in case the flame combustion is incomplete (e.g., causing soot particles, toxic particles, etc.).
[0086] As explained, a workflow can be implemented to detect a flame and characterize combustion based on video monitoring in real-time using one or more machine learning techniques, which can include deep learning on data acquired by computerized vision.
[0087] As an example, a workflow can output status of a burning process by detecting the presence of features such as: flame, smoke and protective water screen. Such a workflow can additionally characterize combustion quality, for example, by analyzing a color aspect and distribution within a flame using computerized vision. As to smoke, it may be classified into various categories, for example, consider classification using a Ringelmann smoke chart approach that can provide for a number of classes that can range from white to black or a number of classes that can be between white and black (e.g., a number of grey classes). A computerized approach can augment human observation or, for example, supplant human observation. A computerized approach can help to reduce subjectivity and increase objectivity of operational analysis of a burner or burners, which can, in turn, improve operational control of a burner or burners. As to subjectivity, in some instances, labeling for purposes of training a machine learning model (ML model) may be performed using human experts. Alternatively, or additionally, labeling may be performed using one or more computerized processes.
[0088] As explained, cameras can be utilized to monitor well test burners. A camera can generate frames of a burning process where each frame corresponds to an exposure time and a field of view. For example, a camera can include a lens with a depth of field to focus an image onto an imaging plane where the imaging plane can include a plurality of sensors that generate data that can be represented as pixels (e.g., picture elements) using one or more color maps. As explained, a camera can generate a frame that is a pixel image with a color depth, which may be indicated by a bit depth. As an example, a frame can be a pixel image with a corresponding bit depth for a particular color map such as, for example, RGB or other suitable color map. As to exposure time and/or frame rate, consider an exposure time that is less than the reciprocal of the frame rate. For example, a frame rate of 24 frames per second would correspond to a maximum exposure time per frame of approximately 42 milliseconds or less.
[0089] Actual camera operation can involve setting various parameters such as, for example, a shutter speed mode, a shutter speed, a shutter angle, etc. Various cameras can include compression circuitry, which may compress frames using lossy or lossless compression. A workflow can include using one or more types of image data, which can be represented as a frame, for example, for processing by a trained machine learning model (ML model).
[0090] As an example, a deep learning approach can utilize one or more convolutional neural networks (CNNs) for segmentation of a frame or frames into various categories (e.g., various masks). Segmentation via a CNN can provide for pixel-wise classification such that a pixel of a frame is identified as being part of a class (e.g., flame, smoke or water screen(s)). In such an approach, one or more trained CNNs can provide, for each frame of video, a probability map for each of a plurality of different object types (e.g., a flame object type, a smoke object type, a water screen object type). As explained, the output of a trained CNN can be a plurality of masks that provide, probabilistically, indications of a pixel belonging to a corresponding object type of a plurality of object types.
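For example, the per-pixel probability maps and class masks can be derived from raw CNN outputs as in the following sketch (pure NumPy; the class names and threshold are assumptions for illustration, not values from this disclosure):

```python
import numpy as np

CLASSES = ("flame", "smoke", "water_screen", "background")  # illustrative classes

def logits_to_masks(logits, threshold=0.5):
    # logits: (num_classes, H, W) array of raw segmentation outputs
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # numerically stable softmax
    probs = e / e.sum(axis=0, keepdims=True)                # per-pixel class probabilities
    return {name: probs[i] > threshold for i, name in enumerate(CLASSES)}
```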
[0091] As an example, a method can include post-processing using resulting masks via one or more image processing techniques along with applying one or more mathematical transforms of numerical values of an RGB matrix to derive indicators of flame visual aspect and smoke intensity (e.g., via a Ringelmann smoke chart scale, etc.).
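One plausible form of such a transform, sketched below, averages the luminance of pixels under the smoke mask and snaps the resulting darkness to the nearest step of a six-level Ringelmann-style scale; the disclosure does not specify this particular mapping:

```python
import numpy as np

def ringelmann_index(rgb_frame, smoke_mask, threshold=0.5):
    # rgb_frame: (H, W, 3) float array with R, G, B in [0, 1]
    smoke = rgb_frame[smoke_mask > threshold]             # (N, 3) smoke pixels
    if smoke.size == 0:
        return None                                       # no smoke detected
    luminance = smoke @ np.array([0.299, 0.587, 0.114])   # ITU-R BT.601 weights
    darkness = 1.0 - luminance.mean()                     # 0 = white, 1 = black
    return int(round(darkness * 5))                       # 0 (white) .. 5 (black)
```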
[0092] As an example, burner control can include video monitoring of a burner and evaluation of the status of the burner using various criteria where such evaluation can be performed using, for example, a trained machine learning model.
[0093] An embodiment can include using cameras to visualize a burner to generate RGB format frames as data of a burning process of the burner where the data are streamed in real-time to a computerized system that records and processes the data. The acquisition and analysis of such data can include data from the visible light spectrum and/or one or more other types of EM radiation. As an example, an embodiment may utilize one or more camera technologies such as thermal and/or multispectral camera technologies.
[0094] As to a burner operation, combustion of hydrocarbons in the presence of air can generate a flame and smoke. A burner operation can utilize a water screen, which can be a type of sprinkler that utilizes water for cooling and/or one or more other purposes. As an example, a burner controller can implement burner control where the burner controller can include features to characterize a burner operation by detecting flame, smoke and/or water screen presence and/or flame, smoke and/or water screen size.
[0095] As explained, a CNN can be generated and trained to derive one or more probability masks such as one or more of a flame mask, a smoke mask and a water screen mask, which can be classes of “objects” in an image (e.g., or images). As an example, a CNN can provide for image segmentation where a segmentation process allows for classification of individual pixels with respect to probability of membership in one or more classes. As an example, a CNN can provide for classification of pixels into one of a plurality of classes where the number of classes can be greater than or equal to two. As to a two class approach, consider a flame class and a smoke class. As to a three class approach, consider a flame class, a smoke class and a water screen class or, for example, a flame class, a smoke class and a background class. As to a four class approach, consider a flame class, a smoke class, a water screen class and a background class. As an example, a background class can be for objects not belonging to one of the flame, the smoke, or the water screen classes.
[0096] As explained, a burner controller can be computerized where circuitry (e.g., hardware and/or software) operates to characterize a burner operation or burner operations (e.g., for control of multiple burners). As to some examples of characteristics, consider indicators of flame, smoke and water screen presence directly derived from the presence of a blob of pixels where the pixels have corresponding probabilities that are higher than a given threshold. While individual pixels are mentioned, an approach may consider neighboring pixels and/or regional pixels. For example, a flame pixel may be unlikely to exist as a single pixel that is without one or more neighboring flame pixels. As an example, a trained ML model can output one or more probability values for individual pixels where a post-output process aims to improve segmentation (e.g., smoothing of a mask, neighborhood analysis, contiguity analysis, etc.). Such post-output processing can be based at least in part on physics such as, for example, flame physics, smoke physics, etc. Where post-output processing is implemented, a burner controller can be considered a hybrid controller that utilizes non-physics-based machine learning through image analysis to generate probabilities for pixels as to one or more classes and that utilizes a physics-based analysis to output masks (e.g., filters, etc.). As an example, whether non-physics-based or hybrid, an approach can output data structures that can be applied to camera acquired image data. For example, consider applying a mask to an RGB image frame such that a flame in the RGB image frame can be segmented for purposes of combustion analysis (e.g., via hue analysis, etc.). As another example, consider applying a mask to an RGB image frame such that smoke in the RGB image frame can be segmented for purposes of smoke analysis (e.g., via a Ringelmann smoke chart approach, etc.). As yet another example, consider applying a mask to an RGB image frame such that a water screen can be segmented for purposes of water screen analysis. As a further example, consider applying a mask to an RGB image frame such that non-flame, non-smoke, and non-water screen pixels can be segmented. Such an approach can be utilized to reduce computational demands as to additional processing as the number of pixels of the RGB image frame to be processed can be reduced by ignoring the non-object pixels (e.g., non-object image data).
[0097] As an example, a method can include processing an image via mask generation where individual masks have corresponding classes such that the image can be segmented into regions where each region is smaller than the entire image. In such an approach, the individual masks can be applied to reduce computational demands for particular purposes. For example, a flame-based combustion analysis can proceed with fewer pixels than the entire image when a flame mask is applied to the image to identify, at least in part probabilistically, which pixels of the image correspond to a flame (e.g., a flame class). Similarly, a smoke-based analysis can proceed with fewer pixels than the entire image when a smoke mask is applied to the image to identify, at least in part probabilistically, which pixels of the image correspond to smoke (e.g., a smoke class).
[0098] As to detection or determination of size, a method can determine a size of a flame and a size of smoke from the number of pixels having a probability higher than a defined threshold for flame and/or for smoke, once such objects are detected.
[0099] As an example, a method to derive a physical size of a flame may be based on a trigonometric calculation that converts the extent of a pixel into meters (e.g., or feet) given knowledge of a distance between a camera and the flame and, for example, where appropriate, a shooting angle of the camera (e.g., a lens axis in comparison to a flame axis). As an example, a method can be based on identifying an object of known dimensions in a vicinity of a flame in an image (e.g., consider a burner head, etc.) where such a method can deduce the area covered by individual pixels at a given depth of field.
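As a non-limiting illustration, such a conversion might be sketched as follows under a pinhole camera assumption, where the focal length in pixels and the viewing angle correction are illustrative rather than a prescribed formula:

```python
import math

def meters_per_pixel(distance_m, focal_length_px, view_angle_deg=0.0):
    """Approximate extent of one pixel at the flame plane for a
    pinhole camera model (an illustrative assumption).

    distance_m: camera-to-flame distance along the lens axis.
    focal_length_px: focal length expressed in pixels.
    view_angle_deg: angle between the lens axis and the flame plane
    normal; oblique viewing stretches the footprint by 1/cos(angle).
    """
    extent = distance_m / focal_length_px
    return extent / math.cos(math.radians(view_angle_deg))

def flame_area_m2(pixel_count, distance_m, focal_length_px):
    """Physical area covered by a blob of flame pixels."""
    mpp = meters_per_pixel(distance_m, focal_length_px)
    return pixel_count * mpp * mpp
```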
[00100] As an example, a burner can include one or more fiducials that can be utilized as references for determining size, angle, etc. For example, consider a burner component or components that are within a field of view of a camera where at least some of the fiducials can be captured in an image. In such an approach, the fiducials can be emitters (e.g., LEDs, etc.), reflectors, etc.
[00101] As an example, a trained ML model can include a fiducial class that can identify one or more fiducials in a pixel image or pixel images. For example, consider a two camera approach where an image from a first camera captures two fiducials and where an image from a second camera captures two fiducials. In such an approach, a trained ML model can output probabilities of an individual pixel being a fiducial (e.g., a member of a fiducial class). Such fiducial pixels can have corresponding coordinates where the coordinates can be combined to reconstruct one or more multi-dimensional reference systems. In such an example, a first reference system may provide for determining physical size of objects in the image from the first camera and a second reference system may provide for determining physical size of objects in the image from the second camera. Given known data as to the fiducials (e.g., spacing, size, orientation, etc.), the images may be compared and/or a three-dimensional model constructed.
[00102] As mentioned, a trained ML model can be a multi-class model where various examples can include one or more of a flame class, a smoke class, a water screen class, a background class and a fiducial class. As explained, a fiducial class can facilitate analysis, for example, as to size. As an example, the size of smoke may be expressed as a ratio of flame size, which may be an actual flame size.
[00103] As an example, a method can, once flame and smoke segments are detected, utilize temporal data of a sequence of images (e.g., video) to make the detected presence of various objects more robust. In such an approach, one or more computer vision techniques may be applied to derive flame quality and smoke classification (e.g., into multiple categories of smoke).
[00104] As mentioned with respect to Fig. 5, RGB color model images can be transformed to an HSV (Hue, Saturation, Value) color model format where data structures as to Hue and Value can be utilized.
[00105] Given visible flame movement over time in real-time video, as an example, optical flow may be computed to find moving objects in a frame with a high concentration of orange and red in Hue color space (see, e.g., Fig. 6). Such data can be merged with a machine learning approach to flame detection, which can allow for reduction of false detections and improved characterization accuracy.
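As a non-limiting illustration, dense optical flow merged with a red/orange hue criterion might be sketched with OpenCV as follows; the motion and hue thresholds are assumptions for illustration:

```python
import cv2
import numpy as np

def moving_flame_candidates(prev_bgr, curr_bgr, flow_thresh=2.0):
    """Flag pixels that both move between frames (dense optical flow)
    and fall in a red/orange hue band; such a map can be merged with
    an ML-derived flame mask to reduce false detections."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    hue = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)[..., 0]
    red_orange = hue < 25   # OpenCV hue spans 0-179; red also wraps
                            # near 179, ignored here for brevity
    return (speed > flow_thresh) & red_orange
```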
[00106] As to flame quality characterization, a flame mask can be applied to the Hue component of a flame in an image. In such an example, a method can utilize the results to derive a histogram (of Hue values) of the pixels belonging to the flame. The histogram may be further processed, for example, using a specialized filter with specifically designed coefficients where application of the filter to histogram data can be utilized to derive a flame quality indicator, for example, according to a dominant color of the flame.
[00107] Referring again to Fig. 6, for example, consider a method that defines three colors, red, orange and yellow, and two “mixed” color zones, red/orange and orange/yellow, in the Hue map (see, e.g., the color map 610). As to the HSV color model, Saturation data can be utilized to split data along the Saturation axis to capture one or more bright flame zones. A method can include, for example, counting the flame pixels that fall in each cell as a basis for color determination. In such an approach, when more than half of the pixels fall within one bin, the corresponding color (or colors) of that bin can be deemed the dominant color of the flame; otherwise, a final color may be determined as the average Hue value over the flame pixels.
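As a non-limiting illustration, binning flame-pixel hues and selecting a dominant color might be sketched as follows; the bin edges are hypothetical placeholders, as the actual zones are those of the color map 610 of Fig. 6:

```python
import numpy as np

# Hypothetical hue bin edges (OpenCV hue, 0-179); five cells covering
# red, red/orange, orange, orange/yellow and yellow.
HUE_BINS = [0, 8, 15, 22, 30, 40]
BIN_COLORS = ["red", "red/orange", "orange", "orange/yellow", "yellow"]

def dominant_flame_color(hue_values):
    """hue_values: 1D array of hue for pixels inside the flame mask."""
    counts, _ = np.histogram(hue_values, bins=HUE_BINS)
    top = int(np.argmax(counts))
    if counts[top] > 0.5 * len(hue_values):  # more than half in one bin
        return BIN_COLORS[top]
    # Fallback per paragraph [00107]: average hue over the flame pixels.
    return f"average hue {np.mean(hue_values):.1f}"
```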
[00108] As to a smoke colorimetry indicator, consider smoke characterization that utilizes a smoke mask to delineate boundaries of smoke detected in individual frames (e.g., individual acquired images).
[00109] As mentioned, classification of smoke can utilize various categories such as categories in the Ringelmann smoke chart index. Again, the first level of the Ringelmann scale corresponds to white smoke, the second and third levels correspond to grey smoke, and the fourth level corresponds to the black smoke category. The index is derived by averaging the Value (V) components of the image pixels belonging to the smoke mask inferred by a trained machine learning model (e.g., a trained CNN, etc.).
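As a non-limiting illustration, mapping the mean Value of masked smoke pixels to a Ringelmann-style category might be sketched as follows, where the cut points are illustrative placeholders rather than calibrated values:

```python
import numpy as np

def ringelmann_index(value_channel, smoke_mask):
    """value_channel: (H, W) V component of an HSV image, 0-255;
    smoke_mask: (H, W) boolean mask from the trained model."""
    mean_v = float(np.mean(value_channel[smoke_mask]))
    if mean_v > 190:
        return 1  # white smoke
    if mean_v > 130:
        return 2  # light grey smoke
    if mean_v > 70:
        return 3  # dark grey smoke
    return 4      # black smoke
```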
[00110] Fig. 8 shows an example of a method 800 that includes an acquisition block 810 for acquiring one or more images, an output block 814 for outputting image frame data (e.g., pixel image data in a color model space such as RGB, etc.), and a process block 818 for processing the image frame data using one or more trained machine learning models where the process block 818 outputs a flame mask 822, a smoke mask 824 and a sprinkler mask 826. As mentioned, a mask can correspond to a class. While the example of Fig. 8 shows three masks as corresponding to three separate classes (e.g., object classes), a method can include outputting a background mask, a fiducial mask and/or one or more other types of masks.
[00111] As shown in the example of Fig. 8, each of the masks 822, 824 and 826 can be applied to pixels, which may be pixels of the image frame data to make determinations as to whether a pixel belongs to a class or not, which is indicated in the blocks 823, 825 and 827. Such an approach can be referred to as segmentation as the image frame data are segmented into classes (e.g., on a pixel-by-pixel basis, etc.).
[00112] As shown in the example of Fig. 8, the method 800 can include a generation block 830 for generating HSV image frame data using the image frame data of the output block 814. In such an example, the generation block 830 can perform a transform that transforms image frame data from a first color model space to an HSV color model space. As explained, hue and saturation data may be utilized for binning and further classifying as to combustion while value data may be utilized to assess smoke.
[00113] In the example of Fig. 8, the method 800 can include a generation block 840 for generating a hue histogram where an identification block 845 can provide for identifying one or more colors as associated with combustion that generates a flame as present in at least a portion of the image frame data. As explained, such an approach can provide for determining combustion quality. As shown in Fig. 8, the method can include a generation block 860 for generating a combustion quality indicator via color identification per the identification block 845 as applied to a generated hue histogram per the generation block 840. For example, generation block 860 can generate a combustion quality indicator that accounts for flame color; noting that overall combustion quality can be determined by a combination of factors, which can depend on smoke characterization and flame characterization (e.g., operations using at least a flame mask and operations using at least a smoke mask, etc.).
[00114] As explained, value (V) of the HSV color model can be utilized to characterize smoke. As shown in the example of Fig. 8, the method 800 can include a generation block 850 for generating a smoke index, which can include, for example, white smoke 852, grey smoke 854 (e.g., one or more grey smoke indicators) and black smoke 856, for example, as explained with respect to the Ringelmann smoke chart (see, e.g., Fig. 7).
[00115] In the example of Fig. 8, the method 800 includes various outputs as represented by the blocks 823, 825, 827, 852, 854, 856 and 860. Such output can characterize a burner operation where such a characterization can be utilized for purposes of burner control. For example, consider utilizing a combustion quality indicator of the generation block 860 as a basis for adjusting one or more flow rates of fluid to a burner in an effort to improve combustion quality. As another example, consider utilizing one or more smoke indexes of the generation block 850 as a basis for adjusting one or more flow rates of fluid to a burner in an effort to control the type of smoke, amount of smoke, etc., generated during an ongoing burner operation.
[00116] As explained with respect to Fig. 3, a burner can be oriented via one or more mechanisms. As an example, where an environmental condition impacts one or more aspects of a burner operation, a burner may be adjusted to improve a burner operation of the burner. For example, consider wind direction where the direction of a burner (e.g., via a burner boom, etc.) is adjusted to improve combustion quality, etc.
[00117] Fig. 9 shows an example of a machine learning model 900 (ML model 900). In the example of Fig. 9, the ML model 900 includes a U-shaped architecture and may be referred to as a U-Net architecture. In the example of Fig. 9, various numbers are given for illustration of an embodiment; noting that such numbers can depend on one or more factors such as, for example, image frame data matrix size, etc. Specifically, in the example of Fig. 9, an input image (e.g., image frame data) is shown as being 256 x 256 where operations are performed for 16 channels. As shown, the number of channels increases down the left leg of the U-shaped architecture and decreases up the right leg of the U-shaped architecture.
[00118] The ML model 900 includes a U-Net architecture where max pooling operations are performed at transitions for four layers, starting from an upper layer, to reach a base layer of the U-Net architecture and where multi-dimensional 2x2 transpose convolution operations are performed at transitions for the four layers, starting from the base layer, to return to the upper layer.
[00119] In the example ML model 900, the first four layers involve concatenation across legs of the U-shaped architecture. At each of the five layers, for both legs of the U-shaped architecture as well as the base layer, multi-dimensional 3x3 convolutions are performed. Additionally, to output three classes (e.g., three masks, etc.), the ML model 900 includes a 1x1 convolution. In the ML model 900, skip connections act to improve accuracy compared to models like a fully convolutional neural network (FCN) without skip connections.
[00120] In the example ML model 900, for a 256 x 256 RGB image (e.g., 256 x 256 x 3), the ML model 900 can be a thin U-Net model with approximately 1.9 million weights. Training can utilize “ground truth” labeled pixel arrays for purposes of determining suitable values for the weights.
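As a non-limiting illustration, a thin U-Net of the general shape described above might be sketched with the tf.keras API as follows; the sketch uses “same” padding (a simplification relative to the unpadded convolutions of Ronneberger et al., discussed below) so that output masks match the 256 x 256 input, and the softmax output activation is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in a U-Net stage.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def thin_unet(input_shape=(256, 256, 3), base=16, classes=3):
    inputs = tf.keras.Input(shape=input_shape)
    skips, x = [], inputs
    for level in range(4):                         # contracting path
        x = conv_block(x, base * 2 ** level)
        skips.append(x)                            # saved for skip connection
        x = layers.MaxPooling2D(2)(x)              # 2x2 max pooling
    x = conv_block(x, base * 16)                   # base layer
    for level in reversed(range(4)):               # expansive path
        x = layers.Conv2DTranspose(base * 2 ** level, 2, strides=2)(x)
        x = layers.concatenate([x, skips[level]])  # skip connection
        x = conv_block(x, base * 2 ** level)
    outputs = layers.Conv2D(classes, 1, activation="softmax")(x)  # 1x1 conv
    return tf.keras.Model(inputs, outputs)

model = thin_unet()  # roughly 1.9 million weights with base=16
```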
[00121] In the example ML model 900, the output from the expansive path can be an array that is, for a 256 x 256 input image, a 256 x 256 x N array (e.g., a matrix), where N represents the number of classes, which can, for example, include one or more of a flame class, a smoke class, a water screen (e.g., sprinkler) class, a background class, a fiducial class, etc. In such an approach, the array can include probability values for each element (e.g., pixel) that may range, for example, from zero (not a member of a class) to unity (a member of a class). Such an array can be processed to generate binary masks, for example, consider using one or more thresholds to transform elements above a threshold to unity (1) and elements below the threshold to zero (0). A plurality of binary masks may be applied to an input image to segment the input image into classes where a rule can be applied such that each pixel in the input image is assigned to an appropriate class via application of the binary masks (e.g., including a background class, which may include pixels that are not segmented into one or more other classes).
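As a non-limiting illustration, converting such a probability array into binary masks and a per-pixel class assignment might be sketched as follows, where the threshold value is an assumption:

```python
import numpy as np

def masks_from_probabilities(prob_array, threshold=0.5):
    """prob_array: (H, W, N) per-class probabilities from the
    expansive path. Returns N boolean masks plus an assignment map
    in which each pixel goes to its most probable class."""
    binary_masks = prob_array > threshold        # (H, W, N)
    assignment = np.argmax(prob_array, axis=-1)  # one class per pixel
    return binary_masks, assignment
```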
[00122] As an example, a binary mask can be a probabilistic mask, for example, with zeros (0) to indicate, probabilistically, “not a flame”, “not smoke”, etc., and with ones (1) to indicate, probabilistically, “a flame”, “smoke”, etc.
[00123] As an example, a pixel image can include a color model bit depth and can include an additional depth value, which may be, for example, to code a pixel as belonging, probabilistically, to a particular class. For example, where four classes exist, numbers such as 0, 1, 2 and 3 may be utilized in the additional depth value. As an example, a method such as the method 800 of Fig. 8 may utilize the additional depth value for purposes of processing with respect to flame, smoke, etc. As an example, where an array includes class values directly, use of such class values can be considered as applying a corresponding mask. For example, the flame mask 822, the smoke mask 824 and the sprinkler mask 826 of the method 800 can be a single two-dimensional array of values.
[00124] As an example, consider a U-Net architecture as described in an article by Ronneberger et al., “U-Net: Convolutional Networks for Biomedical Image Segmentation,” arXiv:1505.04597 (2015), which is incorporated by reference herein. Such an architecture includes a contracting path (see the left side of the ML model 900) and an expansive path (see the right side of the ML model 900). The article by Ronneberger et al. describes a contracting path that follows an architecture of a convolutional network (e.g., a CNN) and that includes repeated application of convolutions and use of a rectified linear unit (ReLU) along with a max pooling operation with a specified stride for downsampling where, in an expansive path, upsampling of a feature map can be performed followed by convolution.
[00125] Specifically, Ronneberger et al. describe a network architecture that includes a contracting path (left side) and an expansive path (right side) where the contracting path follows an architecture type of a convolutional network and includes repeated applications of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step, the number of feature channels is doubled. In Ronneberger et al., each step in the expansive path includes an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping handles loss of border pixels in each convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network of Ronneberger et al. includes 23 convolutional layers.
[00126] As to training, consider an approach of Ronneberger et al., where input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of the CAFFE framework. Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of GPU memory, Ronneberger et al. favored large input tiles over a large batch size and hence reduced the batch to a single image where a high momentum (0.99) was utilized such that a large number of the previously seen training samples determined the update in a current optimization step.
[00127] The example ML model 900 of Fig. 9 can be utilized for mask generation, for example, to generate one or more masks from image data that can segment the image data, which may be in a raw format, a transformed format, a processed format, etc.
[00128] As an example, the TENSORFLOW framework (Google LLC, Mountain View, CA) may be implemented, which is an open source software library for dataflow programming that includes a symbolic math library, which can be implemented for machine learning applications that can include neural networks. As an example, the CAFFE framework may be implemented, which is a DL framework developed by Berkeley AI Research (BAIR) (University of California, Berkeley, California). As another example, consider the SCIKIT platform (e.g., scikit-learn), which utilizes the PYTHON programming language. As an example, a framework such as the APOLLO AI framework may be utilized (APOLLO.AI GmbH, Germany). As an example, a framework such as the PYTORCH framework may be utilized (Facebook AI Research Lab (FAIR), Facebook, Inc., Menlo Park, California).
[00129] As an example, a training method can include various actions that can operate on a dataset to train a ML model. As an example, a dataset can be split into training data and test data where the test data can provide for evaluation. A method can include cross-validation of parameters and selection of best parameters, which can be provided for model training.
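As a non-limiting illustration, such a split and a parameter search might be sketched as follows; the dataset shapes are placeholders and train_and_score is a hypothetical helper that would fit a model on training folds and return a validation score:

```python
import numpy as np
from sklearn.model_selection import train_test_split, ParameterGrid

# Placeholder dataset: 100 images with per-pixel class labels.
images = np.zeros((100, 256, 256, 3), dtype=np.float32)
labels = np.zeros((100, 256, 256), dtype=np.int32)

# Split into training data and test data; test data provides for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=42)

# Candidate parameters for cross-validation; the best parameters can
# then be provided for model training.
grid = ParameterGrid({"learning_rate": [1e-3, 1e-4], "batch_size": [1, 8]})
# best = max(grid, key=lambda p: train_and_score(X_train, y_train, p))
```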
[00130] The TENSORFLOW framework can run on multiple CPUs and GPUs (with optional CUDA (NVIDIA Corp., Santa Clara, California) and SYCL (The Khronos Group Inc., Beaverton, Oregon) extensions for general-purpose computing on graphics processing units (GPUs)). TENSORFLOW is available on 64-bit LINUX, MACOS (Apple Inc., Cupertino, California), WINDOWS (Microsoft Corp., Redmond, Washington), and mobile computing platforms including ANDROID (Google LLC, Mountain View, California) and IOS (Apple Inc.) operating system based platforms.
[00131] TENSORFLOW computations can be expressed as stateful dataflow graphs; noting that the name TENSORFLOW derives from the operations that such neural networks perform on multidimensional data arrays. Such arrays can be referred to as "tensors".
[00132] Fig. 10 shows an architecture 1000 of a framework such as the TENSORFLOW framework. As shown, the architecture 1000 includes various features. As an example, in the terminology of the architecture 1000, a client can define a computation as a dataflow graph and, for example, can initiate graph execution using a session. As an example, a distributed master can prune a specific subgraph from the graph, as defined by the arguments to “Session.run()”; partition the subgraph into multiple pieces that run in different processes and devices; distribute the graph pieces to worker services; and initiate graph piece execution by worker services. As to worker services (e.g., one per task), as an example, they may schedule the execution of graph operations using kernel implementations appropriate to the hardware available (CPUs, GPUs, etc.) and, for example, send and receive operation results to and from other worker services. As to kernel implementations, these may, for example, perform computations for individual graph operations.
[00133] Fig. 10 also shows some examples of types of machine learning models 1010, 1020, 1030 and 1040, one or more of which may be utilized. As explained, a ML model-based approach can include receiving image data that can be spatial image data, which can be 1D, 2D or 3D spatial image data. As an example, time can be a dimension such that image data can be spatial and temporal. As an example, a convolution neural network and/or one or more other types of neural networks can be utilized for spatial and/or spatial-temporal image analysis.
[00134] As an example, a ML model can address video (spatial-temporal image data) as sequences of short (e.g., N-frame) clips (e.g., consider N being greater than one and less than approximately 100). In such an approach, various aspects of changes with respect to time may be related to one or more types of physical phenomena associated with flaring hydrocarbons, which, once analyzed, can be utilized for burner control. In such an example, framerate (e.g., frames per second) and number of frames in a sequence (e.g., a series) may cover a desired amount of time, which can be for tracking dynamic behavior of one or more burner operations. For example, consider 8 frames at a frame rate of 24 frames per second, which can be a time period of approximately one-third of a second. As an example, analyses may be performed using different sequences, using different numbers of frames, using different framerates, using different processing techniques as to raw frame data, using different frame resolutions, etc. Such variables may be tailored to provide for processing efficiency, which may help to provide for real-time control (e.g., near real-time control with latency of approximately one-half hour or less for 2D spatial and 1D temporal processing).
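As a non-limiting illustration, slicing a frame sequence into overlapping N-frame clips might be sketched as follows:

```python
import numpy as np

def frames_to_clips(frames, clip_len=8):
    """Stack a (T, H, W, 3) frame sequence into overlapping clips of
    shape (T - clip_len + 1, clip_len, H, W, 3); at 24 frames per
    second, a clip_len of 8 spans roughly one-third of a second."""
    clips = [frames[i:i + clip_len]
             for i in range(len(frames) - clip_len + 1)]
    return np.stack(clips)
```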
[00135] As to 3D, defined by 2D in space and 1D in time, a separable convolution approach may be utilized, or a more standard 3D convolution. A 3D separable convolution approach can factorize a standard 3D convolution into a channel-wise 3D convolution and a point-wise 3D convolution with a 1x1x1 kernel. A channel-wise convolution can apply a specific filter to each input channel; point-wise convolution can then combine the outputs of the channel-wise convolution via a 1x1x1 convolution. Such factorization can reduce computation and model size.
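As a non-limiting illustration, such a factorization might be sketched with the tf.keras API as follows, assuming a TensorFlow version whose Conv3D layer supports the groups argument:

```python
import tensorflow as tf
from tensorflow.keras import layers

def separable_conv3d(x, out_channels):
    """Factorize a 3D convolution into a channel-wise 3x3x3
    convolution (one filter per input channel, via grouped
    convolution) followed by a point-wise 1x1x1 convolution that
    combines the per-channel outputs."""
    in_channels = x.shape[-1]
    x = layers.Conv3D(in_channels, 3, padding="same",
                      groups=in_channels)(x)  # channel-wise convolution
    return layers.Conv3D(out_channels, 1)(x)  # point-wise 1x1x1 convolution
```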
[00136] In Fig. 10, the ML models include a 2D convolution model 1010, a 3D convolution model 1020, a 3D spatial and temporal convolution model 1030 and a 3D channel-wise convolution model 1040. In Fig. 10, the model 1030 can be a R2plus1D model that factorizes 3D convolution into spatial and temporal convolutions and the model 1040 can be a 3D separable convolution that includes two convolution processes, a channel-wise convolution and a point-wise convolution.
[00137] As an example, a spatial-temporal architecture can include various features such as one or more features of an architecture that includes an encoder (feature extractor), a pyramid pooling module and a decoder to recover the spatial-temporal details gradually. In such an example, the encoder can receive pixel values as input, extract features layer by layer, and generate rich contextual features as output. In such an example, intermediate results can be merged with a decoder module. As an example, a pyramid pooling module can connect an encoder module and a decoder module. Pyramid pooling can vary a receptive field size, for example, by modifying spatial stride. As to a 3D separable convolution, a frame level features block can involve reduced average pooling and duplication to generate frame level features (e.g., with the same dimension as the other branches, etc.). An article by Hou et al., entitled “An efficient 3D CNN for action/object segmentation in video,” arXiv:1907.08895v1 (21 July 2019), is incorporated by reference herein.
[00138] Fig. 11 shows an example of a method 1100 that includes an operation block 1101 for operating a burner system 1110 that includes one or more cameras 1112-1 and 1112-2 that can acquire image data of various objects, which can include one or more of a flame 1114, smoke 1115, a water screen 1116 and fiducials 1117. As shown, the method 1100 includes an image block 1102 for imaging during operation of the burner system 1110 per the operation block 1101 where the image block 1102 can generate image data (e.g., image frame data) 1120-1 and 1120-2 for one or more time periods. As shown, the method 1100 includes a mask block 1103 for generating one or more masks 1122-1, 1122-2 to 1122-N and/or 1132-1, 1132-2 to 1132-M using the generated image data of the image block 1102. In such an example, N and M may differ or be the same. For example, a field of view of a first camera may capture fiducials while a field of view of a second camera does not; hence, a fiducial mask may be for the first camera, which may result in N being greater than M.
[00139] As explained, mask generation can be accomplished using a trained machine learning model (a trained ML model), which can segment image data into defined classes, which, for example, can correspond to one or more object types (e.g., flame, smoke, water screen, fiducial, etc.) and/or background (e.g., a class that does not include various defined objects). As explained, a mask can be specific to a particular physical phenomenon such as combustion or smoke. As shown, an analysis block 1104 can provide for one or more types of analysis 1142-1, 1142-2 to 1142-P. As an example, P may be equal to N and/or M or may differ. As explained, a separate type of analysis can be applied to masking results (e.g., segmentation results). For example, a method that applies a flame mask to generate flame results can include analyzing the flame results for combustion quality (e.g., via hue and saturation of flame image pixels, etc.); while applying a smoke mask to generate smoke results can include analyzing the smoke results for one or more smoke characteristics such as a Ringelmann smoke chart index value (e.g., via value (V) of an HSV color model for smoke image pixels, etc.). As explained, a fiducial mask can be applied to generate fiducial results, which can be for spatial analysis, for example, to determine size of a flame, size of smoke, etc. As mentioned, fiducial analysis can provide for image correlation, for example, to compare features (e.g., objects) in two or more images.
[00140] As shown in the example of Fig. 11, the method 1100 can include a control block 1105 for controlling the operation of the burner system 1110 where one or more controls 1152 can correspond to one or more operational parameters such as one or more fluid flow rates, one or more valve settings, one or more nozzle settings, one or more directions of a boom, etc. While the example of Fig. 11 shows a single burner system (the burner system 1110), the method 1100 may be utilized to characterize and/or control more than one burner system.
[00141] As an example, the method 1100 can be utilized to monitor burner operation through startup, shutdown, and continuous burning operations and can be replicated through behavior cloning. For example, when one model is trained and tested, and generates low errors when predicting air control, the model can be installed in a control loop and used to control a burner system. As an example, a trained ML model can apply tolerances to various inputs, noting that certain signatures in image data or sensor data may indicate poor or deteriorating combustion. Where one or more issues arise, a controller can take corrective action, such as increasing or decreasing air flow, fuel flow, or air-to-fuel ratio.
[00142] Fig. 12 shows an example of a method 1200 and an example of a system 1290. As shown, the method 1200 includes an acquisition block 1210 for acquiring image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke; a process block 1220 for processing the image data using at least one trained machine learning model to generate a flame mask and a smoke mask; a characterization block 1230 for characterizing the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and a control block 1240 for, based on the characterizing, controlling the operation of the burner system.
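As a non-limiting illustration, the loop of the method 1200 might be sketched as follows; the camera, model and burner interfaces (read_frame, operating, increase_air_rate) are hypothetical placeholders rather than a documented API, and ringelmann_index is the illustrative helper sketched earlier:

```python
import cv2

def control_loop(camera, model, burner):
    # Hypothetical interfaces standing in for site-specific hardware.
    while burner.operating():
        frame = camera.read_frame()                  # acquisition (1210)
        probs = model.predict(frame[None, ...])[0]   # processing (1220)
        flame_mask = probs[..., 0] > 0.5             # characterization (1230);
        smoke_mask = probs[..., 1] > 0.5             # flame_mask would feed a
                                                     # hue-based quality analysis
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        if smoke_mask.any():
            smoke_idx = ringelmann_index(hsv[..., 2], smoke_mask)
            if smoke_idx >= 3:                       # control (1240)
                burner.increase_air_rate()  # darker smoke: add combustion air
```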
[00143] The method 1200 is shown as including various computer-readable storage medium (CRM) blocks 1211, 1221, 1231 and 1241 that can include processor-executable instructions that can instruct a computing system, which can be a control system, to perform one or more of the actions described with respect to the method 1200.
[00144] In the example of Fig. 12, the system 1290 includes one or more information storage devices 1291, one or more computers 1292, one or more networks 1295 and instructions 1296. As to the one or more computers 1292, each computer may include one or more processors (e.g., or processing cores) 1293 and memory 1294 for storing the instructions 1296, for example, executable by at least one of the one or more processors 1293 (see, e.g., the blocks 1211, 1221, 1231 and 1241). As an example, a computer may include one or more network interfaces (e.g., wired or wireless), one or more graphics cards, a display interface (e.g., wired or wireless), etc.
[00145] As an example, the method 1200 may be a workflow that can be implemented using one or more frameworks that may be within a framework environment. As an example, the system 1290 can include local and/or remote resources. For example, consider a browser application executing on a client device as being a local resource with respect to a user of the browser application and a cloud-based computing device as being a remote resource with respect to the user. In such an example, the user may interact with the client device via the browser application where information is transmitted to the cloud-based computing device (or devices) and where information may be received in response and rendered to a display operatively coupled to the client device (e.g., via services, APIs, etc.).
[00146] As an example, a method can include acquiring image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke; processing the image data using at least one trained machine learning model to generate a flame mask and a smoke mask; characterizing the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and, based on the characterizing, controlling the operation of the burner system. In such an example, the at least one trained machine learning model can include a single trained machine learning model that generates at least the flame mask and the smoke mask.
[00147] As an example, a trained machine learning model can be or include a U-Net machine learning model architecture.
[00148] As an example, a trained machine learning model can include a contracting path that receives image data and an expansive path that outputs a flame mask and a smoke mask.
[00149] As an example, image data can include pixel data, where a flame mask is applied to the image data to identify, probabilistically, flame pixels in the image data, and where the smoke mask is applied to the image data to identify, probabilistically, smoke pixels in the image data.
[00150] As an example, a method can include characterizing combustion by applying a flame mask to image data via applying the flame mask to the image data in a hue, saturation and value (HSV) color model to generate at least flame masked image data for hue. In such an example, the method can include generating a combustion quality indicator to characterize the combustion using at least the flame masked image data for hue. As an example, a method can include generating a combustion quality indicator to characterize combustion using at least a flame masked image data for hue and for saturation.
[00151] As an example, a method can include characterizing combustion by applying a smoke mask to image data via applying the smoke mask to the image data in a hue, saturation and value (HSV) color model to generate at least smoke masked image data for value. In such an example, the method can include generating a smoke indicator to characterize the combustion using at least the smoke masked image data for value. In such an example, the smoke indicator can be a Ringelmann smoke chart indicator.
[00152] As an example, a method can include characterizing combustion by applying a smoke mask to image data via utilizing a Ringelmann smoke chart indicator.
[00153] As an example, a method can include processing image data using at least one trained machine learning model to generate a flame mask, a smoke mask and a sprinkler mask.
[00154] As an example, a method can include processing image data using at least one trained machine learning model to generate a flame mask, a smoke mask and a fiducial mask. In such an example, the method can include applying the fiducial mask to the image data to identify fiducials and spatially characterizing the flame using the identified fiducials. In such an example, spatially characterizing the flame can include determining a size of the flame (e.g., in one or more dimensions).
[00155] As an example, a method can include characterizing combustion via generating a flame to smoke ratio.
[00156] As an example, a method can include acquiring video image data in real-time during operation of a burner system and, for example, controlling operation of the burner system in real-time.
[00157] As an example, a method can include processing image data using at least one trained machine learning model where the image data comprises spatial-temporal image data (see, e.g., models 1020, 1030 and 1040 of Fig. 10). For example, a series of image frames acquired with respect to a period of time can be received as input and processed in a multi-dimensional manner with time as a dimension. In such an example, output can be, for example, time masks that can be “video masks” that are time-dependent. For example, a flame mask can be time-dependent and include a series of frames for individual points in time, a smoke mask can be time-dependent and include a series of frames for individual points in time, etc. As an example, temporal processing can provide for trending of one or more parameters, which may be indicative of flame character, smoke character, combustion character, etc. In such an example, a trend may be identified and utilized to control a burner system. For example, a trend indicator may provide for control as to a favorable trend or for control as to an unfavorable trend. For example, where a series of frames generates a temporal smoke mask that indicates an unfavorable smoke trend over the time associated with the series of frames, the trend can be related to flowrates, composition, environmental conditions, etc., for that period of time, which can provide for dynamic characterization over individual periods of time. Such an approach can characterize control dynamics, including, for example, lags that may exist between issuance of a control signal to adjust an operation and change or changes in imagery of flame, smoke, etc.
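As a non-limiting illustration, a trend over a per-frame smoke index series might be derived as follows, where the linear fit is an assumption for illustration:

```python
import numpy as np

def smoke_trend(smoke_indexes, fps=24.0):
    """Fit a line to a per-frame smoke index series and return the
    slope in index units per second; a positive slope can flag an
    unfavorable (darkening) smoke trend for the control loop."""
    t = np.arange(len(smoke_indexes)) / fps
    slope, _ = np.polyfit(t, np.asarray(smoke_indexes, dtype=float), 1)
    return slope
```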
[00158] As an example, a system can include a processor; memory accessible to the processor; processor-executable instructions stored in the memory and executable by the processor to instruct the system to: acquire image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke; process the image data using at least one trained machine learning model to generate a flame mask and a smoke mask; characterize the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and, based on the characterization of the combustion, control the operation of the burner system.
[00159] As an example, one or more computer-readable storage media can include computer-executable instructions executable to instruct a computing system to: acquire image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke; process the image data using at least one trained machine learning model to generate a flame mask and a smoke mask; characterize the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and, based on the characterization of the combustion, control the operation of the burner system.
[00160] As an example, a method may be implemented in part using computer-readable media (CRM), for example, as a module, a block, etc. that include information such as instructions suitable for execution by one or more processors (or processor cores) to instruct a computing device or system to perform one or more actions. As an example, a single medium may be configured with instructions to allow for, at least in part, performance of various actions of a method. As an example, a computer-readable medium (CRM) may be a computer-readable storage medium (e.g., a non-transitory medium) that is not a carrier wave.
[00161] According to an embodiment, one or more computer-readable media may include computer-executable instructions to instruct a computing system to output information for controlling a process. For example, such instructions may provide for output to a sensing process, an injection process, a drilling process, an extraction process, an extrusion process, a pumping process, a heating process, a burning process, an analysis process, etc.
[00162] In some embodiments, a method or methods may be executed by a computing system. Fig. 13 shows an example of a system 1300 that can include one or more computing systems 1301-1, 1301-2, 1301-3 and 1301-4, which may be operatively coupled via one or more networks 1309, which may include wired and/or wireless networks.
[00163] As an example, a system can include an individual computer system or an arrangement of distributed computer systems. In the example of Fig. 13, the computer system 1301-1 can include one or more modules 1302, which may be or include processor-executable instructions, for example, executable to perform various tasks (e.g., receiving information, requesting information, processing information, simulation, outputting information, etc.).
[00164] As an example, a module may be executed independently, or in coordination with, one or more processors 1304, which is (or are) operatively coupled to one or more storage media 1306 (e.g., via wire, wirelessly, etc.). As an example, one or more of the one or more processors 1304 can be operatively coupled to at least one of one or more network interfaces 1307. In such an example, the computer system 1301-1 can transmit and/or receive information, for example, via the one or more networks 1309 (e.g., consider one or more of the Internet, a private network, a cellular network, a satellite network, etc.).
[00165] As an example, the computer system 1301-1 may receive from and/or transmit information to one or more other devices, which may be or include, for example, one or more of the computer systems 1301-2, etc. A device may be located in a physical location that differs from that of the computer system 1301-1. As an example, a location may be, for example, a processing facility location, a data center location (e.g., server farm, etc.), a rig location, a wellsite location, a downhole location, etc.
[00166] As an example, a processor may be or include a microprocessor, microcontroller, processor module or subsystem, programmable integrated circuit, programmable gate array, or another control or computing device.
[00167] As an example, the storage media 1306 may be implemented as one or more computer-readable or machine-readable storage media. As an example, storage may be distributed within and/or across multiple internal and/or external enclosures of a computing system and/or additional computing systems.
[00168] As an example, a storage medium or storage media may include one or more different forms of memory including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories, magnetic disks such as fixed, floppy and removable disks, other magnetic media including tape, optical media such as compact disks (CDs) or digital video disks (DVDs), BLU-RAY disks, or other types of optical storage, or other types of storage devices.
[00169] As an example, a storage medium or media may be located in a machine running machine-readable instructions, or located at a remote site from which machine-readable instructions may be downloaded over a network for execution.
[00170] As an example, various components of a system such as, for example, a computer system, may be implemented in hardware, software, or a combination of both hardware and software (e.g., including firmware), including one or more signal processing and/or application specific integrated circuits.
[00171] As an example, a system may include a processing apparatus that may be or include one or more general purpose processors or application specific chips (e.g., or chipsets), such as ASICs, FPGAs, PLDs, or other appropriate devices.
[00172] Fig. 14 shows components of a computing system 1400 and a networked system 1410. The system 1400 includes one or more processors 1402, memory and/or storage components 1404, one or more input and/or output devices 1406 and a bus 1408. According to an embodiment, instructions may be stored in one or more computer-readable media (e.g., memory/storage components 1404). Such instructions may be read by one or more processors (e.g., the processor(s) 1402) via a communication bus (e.g., the bus 1408), which may be wired or wireless. The one or more processors may execute such instructions to implement (wholly or in part) one or more attributes (e.g., as part of a method). A user may view output from and interact with a process via an I/O device (e.g., the device 1406). According to an embodiment, a computer-readable medium may be a storage component such as a physical memory storage device, for example, a chip, a chip on a package, a memory card, etc.
[00173] According to an embodiment, components may be distributed, such as in the network system 1410. The network system 1410 includes components 1422-1, 1422-2, 1422-3, . . . 1422-N. For example, the components 1422-1 may include the processor(s) 1402 while the component(s) 1422-3 may include memory accessible by the processor(s) 1402. Further, the component(s) 1422-2 may include an I/O device for display and optionally interaction with a method. The network may be or include the Internet, an intranet, a cellular network, a satellite network, etc.
[00174] As an example, a device may be a mobile device that includes one or more network interfaces for communication of information. For example, a mobile device may include a wireless network interface (e.g., operable via IEEE 802.11, ETSI GSM, BLUETOOTH, satellite, etc.). As an example, a mobile device may include components such as a main processor, memory, a display, display graphics circuitry (e.g., optionally including touch and gesture circuitry), a SIM slot, audio/video circuitry, motion processing circuitry (e.g., accelerometer, gyroscope), wireless LAN circuitry, smart card circuitry, transmitter circuitry, GPS circuitry, and a battery. As an example, a mobile device may be configured as a cell phone, a tablet, etc. As an example, a method may be implemented (e.g., wholly or in part) using a mobile device. As an example, a system may include one or more mobile devices.
[00175] As an example, a system may be a distributed environment, for example, a so-called “cloud” environment where various devices, components, etc. interact for purposes of data storage, communications, computing, etc. As an example, a device or a system may include one or more components for communication of information via one or more of the Internet (e.g., where communication occurs via one or more Internet protocols), a cellular network, a satellite network, etc. As an example, a method may be implemented in a distributed environment (e.g., wholly or in part as a cloud-based service).
[00176] As an example, information may be input from a display (e.g., consider a touchscreen), output to a display or both. As an example, information may be output to a projector, a laser device, a printer, etc. such that the information may be viewed. As an example, information may be output stereographically or holographically. As to a printer, consider a 2D or a 3D printer. As an example, a 3D printer may include one or more substances that can be output to construct a 3D object. For example, data may be provided to a 3D printer to construct a 3D representation of a subterranean formation. As an example, layers may be constructed in 3D (e.g., horizons, etc.), geobodies constructed in 3D, etc. As an example, holes, fractures, etc., may be constructed in 3D (e.g., as positive structures, as negative structures, etc.).
[00177] Although only a few examples have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the examples. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus, although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures. It is the express intention of the applicant not to invoke 35 U.S.C. § 112, paragraph 6 for any limitations of any of the claims herein, except for those in which the claim expressly uses the words “means for” together with an associated function.

Claims (20)

What is claimed is:
1. A method, comprising:
acquiring image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke;
processing the image data using at least one trained machine learning model to generate a flame mask and a smoke mask;
characterizing the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and
based on the characterizing, controlling the operation of the burner system.
2. The method of claim 1, wherein the at least one trained machine learning model comprises a single trained machine learning model that generates at least the flame mask and the smoke mask.
3. The method of claim 1, wherein the at least one trained machine learning model comprises a U-Net machine learning model architecture.
4. The method of claim 1, wherein the at least one trained machine learning model comprises a trained machine learning model that comprises a contracting path that receives the image data and an expansive path that outputs the flame mask and the smoke mask.
5. The method of claim 1, wherein the image data comprise pixel data, wherein the flame mask is applied to the image data to identify, probabilistically, flame pixels in the image data, and wherein the smoke mask is applied to the image data to identify, probabilistically, smoke pixels in the image data.
6. The method of claim 1, wherein characterizing the combustion by applying the flame mask to the image data comprises applying the flame mask to the image data in a hue, saturation and value (HSV) color model to generate at least flame masked image data for hue.
7. The method of claim 6, comprising generating a combustion quality indicator to characterize the combustion using at least the flame masked image data for hue.
8. The method of claim 6, comprising generating a combustion quality indicator to characterize the combustion using at least the flame masked image data for hue and for saturation.
9. The method of claim 1, wherein characterizing the combustion by applying the smoke mask to the image data comprises applying the smoke mask to the image data in a hue, saturation and value (HSV) color model to generate at least smoke masked image data for value.
10. The method of claim 9, comprising generating a smoke indicator to characterize the combustion using at least the smoke masked image data for value.
11. The method of claim 10, wherein the smoke indicator comprises a Ringelmann smoke chart indicator.
12. The method of claim 1, wherein characterizing the combustion by applying the smoke mask to the image data comprises utilizing a Ringelmann smoke chart indicator.
13. The method of claim 1, wherein the processing the image data using at least one trained machine learning model generates a flame mask, a smoke mask and a sprinkler mask.
14. The method of claim 1, wherein the processing the image data using at least one trained machine learning model generates a flame mask, a smoke mask and a fiducial mask.
15. The method of claim 14, comprising applying the fiducial mask to the image data to identify fiducials and spatially characterizing the flame using the identified fiducials.
16. The method of claim 15, wherein spatially characterizing the flame comprises determining a size of the flame.
17. The method of claim 1, wherein characterizing the combustion comprises generating a flame to smoke ratio.
18. The method of claim 1, wherein the acquiring acquires video image data in real-time during operation of the burner system and wherein the controlling controls the operation of the burner system in real-time.
19. A system comprising:
a processor;
memory accessible to the processor;
processor-executable instructions stored in the memory and executable by the processor to instruct the system to:
acquire image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke;
process the image data using at least one trained machine learning model to generate a flame mask and a smoke mask;
characterize the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and
based on the characterization of the combustion, control the operation of the burner system.
20. One or more computer-readable storage media comprising computer-executable instructions executable to instruct a computing system to:
acquire image data during operation of a burner system that, via combustion of hydrocarbons, generates a flame and smoke;
process the image data using at least one trained machine learning model to generate a flame mask and a smoke mask;
characterize the combustion by applying the flame mask to the image data and by applying the smoke mask to the image data; and
based on the characterization of the combustion, control the operation of the burner system.