Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. It should be understood that this summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
In view of the above, the invention provides a system and a method for judging the production state of oil and gas field equipment based on big data monitoring, so as to at least solve the problems of long inspection time, high labor intensity, and untimely discovery that exist in conventional methods of judging the working state of an oil well by manual inspection.
The invention provides an oil and gas field equipment production state discrimination system based on big data monitoring, which comprises a storage unit, a video inspection control unit and a video analysis and identification unit, wherein the storage unit is used for storing a predetermined database; the video inspection control unit is used for setting video inspection preset points and adding a corresponding information frame to the database each time an inspection instruction is sent, wherein the information frame comprises attribute information of the preset point, and the attribute information at least comprises pumping unit information corresponding to the preset point; the video analysis and identification unit is used for acquiring the attribute information of the preset point from the database, analyzing and comparing images of the preset point to determine the position of the pumping unit in the video picture, judging the working state of the pumping unit according to the motion track of an observation point, and returning the processed recognition result to the database; and the video inspection control unit is also used for reading, from the database, the result returned by the video analysis and identification unit.
Furthermore, the attribute information of each preset point comprises an inspection point data table and a pumping unit data table; the inspection point data table comprises one or more of an inspection point identifier, pan/tilt information, a camera type, camera information, an observation range and a pumping unit identifier array; the pumping unit data table comprises one or more of a pumping unit identifier, the identifier of the inspection point where the pumping unit is located, a pumping unit type and a pumping unit bounding rectangle.
Further, the system also comprises a model obtaining unit, wherein the model obtaining unit is used for constructing an identification model for identifying the working state of the pumping unit.
Further, the working state of the pumping unit comprises: idle running and normal running.
Furthermore, in the process of training the recognition model, the model obtaining unit first identifies the relative position of the pumping unit in the image picture by using a database sample model, corrects the displacement offset by using a data algorithm, recognizes the corrected pumping unit state at a rate of 1 frame per second, and then judges whether the moving part of the pumping unit moves according to a frame-by-frame image comparison.
The invention also provides an oil and gas field equipment production state discrimination method based on big data monitoring, which is characterized by comprising the following steps: step one, setting video inspection preset points and adding a corresponding information frame to a database each time an inspection instruction is sent, wherein the information frame comprises attribute information of the preset point, and the attribute information at least comprises pumping unit information corresponding to the preset point; step two, acquiring the attribute information of the preset point from the database, analyzing and comparing images of the preset point to determine the position of the pumping unit in the video picture, and judging the working state of the pumping unit according to the motion track of an observation point; step three, processing the recognition result and returning it to the database; step four, reading the returned result from the database; and step five, repeating steps one to four when the next video inspection is carried out.
Further, the attribute information of each preset point comprises an inspection point data table and a pumping unit data table: the inspection point data table comprises one or more of an inspection point identifier, pan/tilt information, a camera type, camera information, an observation range and a pumping unit identifier array; the pumping unit data table comprises one or more of a pumping unit identifier, the identifier of the inspection point where the pumping unit is located, a pumping unit type and a pumping unit bounding rectangle.
Further, the method further comprises: and constructing an identification model for identifying the working state of the pumping unit.
Further, the working state of the pumping unit comprises: idle running and normal running.
Further, in the process of training the recognition model, the relative position of the pumping unit in the image picture is identified by using a database sample model, displacement offset correction is carried out by using a data algorithm, the corrected pumping unit state is recognized at a rate of 1 frame per second, and whether the moving part of the pumping unit moves is judged according to a frame-by-frame image comparison.
The invention provides an oil and gas field equipment production state discrimination system and method based on big data monitoring.
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention, taken in conjunction with the accompanying drawings.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
It should be noted that, in order to avoid obscuring the present invention with unnecessary details, only the device structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, and other details not so relevant to the present invention are omitted.
The embodiment of the invention provides an oil and gas field equipment production state discrimination system based on big data monitoring, which comprises a storage unit, a video inspection control unit and a video analysis and identification unit; the storage unit is used for storing a preset database; the video inspection control unit is used for setting video inspection preset points, adding corresponding information frames in the database after an inspection instruction is sent each time, wherein the information frames comprise attribute information of the preset points, and the attribute information at least comprises pumping unit information corresponding to the preset points; the video analysis and identification unit is used for acquiring the attribute information of the preset point in the database, analyzing and comparing the image of the preset point to determine the position of the pumping unit in a video picture, judging the working state of the pumping unit according to the motion track of the observation point, and returning the identified result to the database after processing; the video inspection control unit is also used for reading the result returned by the video analysis and identification unit in the database.
Fig. 1 shows a block diagram of an example of the production state discrimination system of oil and gas field equipment based on big data monitoring according to the invention.
As shown in FIG. 1, the oil and gas field equipment production state discrimination system based on big data monitoring comprises a storage unit 1, a video inspection control unit 2 and a video analysis and identification unit 3.
The storage unit 1 is used for storing a predetermined database. The database is used for storing attribute information obtained by routing inspection of each preset point, and the attribute information at least includes pumping unit information corresponding to the preset point, such as the name of a pumping unit, the angle of the pumping unit, the number of the pumping units, and the like.
In addition, the video inspection control unit 2 is configured to set a video inspection preset point location, and may add a corresponding information frame in the database after sending an inspection instruction each time, where the information frame includes attribute information of the preset point, such as a location name, a location of a focus observation target, and the like.
As an example, the attribute information of each preset point includes an inspection point data table and a pumping unit data table, where the same pumping unit has two records, corresponding to infrared and visible light respectively.
The inspection point data table includes one or more of the following items: an inspection point identifier; pan/tilt information; a camera type; camera information; a pumping unit identifier array; and an observation range.
For example, the inspection point identifier may be a number or a character string representing a separate warning zone, and differs under infrared and visible light.
The pan/tilt information includes, for example, a direction angle, a pitch angle, and the like.
The camera types include, for example, a visible light type camera (as denoted by 0) and an infrared type camera (as denoted by 1).
The camera information may include, for example, the focal length.
In addition, the observation range of each preset point refers to an effective observation area in the current field, and may be defined as a rectangular area, for example, and may record pixel coordinates of the upper left corner and the lower right corner.
The pumping unit identifier array corresponds, for example, to the corresponding records in the pumping unit data table, which record the pumping unit information within the observation range.
In addition, the pumping unit data table includes one or more of a pumping unit identifier, the identifier of the inspection point where the pumping unit is located, a pumping unit type and a pumping unit bounding rectangle.
The pumping unit types include, for example, a beam pumping unit (as indicated by 0) and a tower pumping unit (as indicated by 1).
In addition, the pumping unit bounding rectangle refers to the bounding rectangle of the pumping unit in the current field of view, for which the pixel coordinates of the upper left corner and the lower right corner may be recorded.
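As an illustrative, non-authoritative sketch, the two data tables described above can be modeled as simple records; all field names and example values here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class InspectionPoint:
    point_id: str                      # number or string; separate IDs under infrared/visible
    pan_angle: float                   # pan/tilt direction angle
    tilt_angle: float                  # pan/tilt pitch angle
    camera_type: int                   # 0: visible light, 1: infrared
    focal_length_mm: float             # camera information
    observation_range: Tuple[int, int, int, int]       # (x1, y1, x2, y2) pixel corners
    pump_ids: List[str] = field(default_factory=list)  # pumping units in the observation range

@dataclass
class PumpingUnit:
    pump_id: str
    point_id: str                      # inspection point where the pumping unit is located
    pump_type: int                     # 0: beam pumping unit, 1: tower pumping unit
    bounding_rect: Tuple[int, int, int, int]  # (x1, y1, x2, y2) pixel corners

# Example: one visible-light inspection point observing one beam pumping unit
pt = InspectionPoint("P001-vis", 35.0, -5.0, 0, 800.0, (0, 0, 1920, 1080), ["PU-01"])
pump = PumpingUnit("PU-01", pt.point_id, 0, (600, 300, 1300, 800))
```

The same pumping unit would appear twice in practice (one record per camera type), keyed by separate inspection point identifiers.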
The video analysis and identification unit 3 is configured to obtain the attribute information of the preset point from the database, perform image analysis and comparison on the preset point to determine the position of the pumping unit in the video picture (when the preset point is set, the target pumping unit may be located in the middle area of the video picture), determine the working state of the pumping unit according to the motion trajectory of an observation point (such as the horse head), and return the processed recognition result to the database.
As an example, when the working state of the pumping unit is determined, the position of an observation point (e.g., the horse head) may be located in the video picture and then located again after an interval, and the two positions are compared to check whether a displacement has occurred. After multiple determinations, if displacement does occur, the machine is judged to be working; if no displacement occurs over multiple determinations, the machine is judged to have stopped working.
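The multi-determination displacement check described above can be sketched as follows; the sampling positions, pixel threshold, and majority-vote rule are illustrative assumptions, not the claimed method itself:

```python
from typing import List, Tuple

def judge_working(positions: List[Tuple[float, float]], min_move_px: float = 2.0) -> bool:
    """Judge whether the machine is working from successive observation-point
    positions (e.g., the horse head), sampled at fixed intervals: working if a
    displacement beyond min_move_px occurs in a majority of comparisons."""
    moves = 0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > min_move_px:
            moves += 1
    checks = max(len(positions) - 1, 1)
    return moves * 2 > checks  # majority of comparisons show displacement

# A moving horse head vs. an essentially stationary one (coordinates invented)
moving = [(100.0, 200.0), (100.0, 230.0), (100.0, 205.0), (100.0, 240.0)]
still  = [(100.0, 200.0), (100.5, 200.2), (100.1, 199.9), (100.3, 200.1)]
```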
Further, as an example, the processing performed by the video analysis and identification unit 3 on the recognition result may be converting the video analysis result into a data state. For example, if the pumping unit is working normally, the state "work: 1" is recorded; if it is not working, the state "work: 0" is recorded.
In this way, the video inspection control unit 2 can read, from the database, the results returned by the video analysis and identification unit 3. For example, the video inspection control unit 2 may perform a verification once every fixed interval (e.g., 8 seconds), and perform status notification and recording on the basis of a large number of determination results.
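A minimal sketch of converting recognition results into the work: 1 / work: 0 data state and aggregating a run of periodic checks (e.g., one every 8 seconds) into a status notification; the majority-vote aggregation is an assumption:

```python
def to_data_state(is_working: bool) -> dict:
    # Convert a video-analysis result into the data state stored in the database
    return {"work": 1 if is_working else 0}

def verify(states: list) -> str:
    """Aggregate a large number of determination results (one per fixed
    interval) into a notified status, by majority vote."""
    working = sum(s["work"] for s in states)
    return "working" if working * 2 > len(states) else "stopped"

# Five periodic checks, four of which observed displacement
window = [to_data_state(b) for b in (True, True, False, True, True)]
```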
As an example, the system further comprises a model obtaining unit 4, as shown in fig. 2, the model obtaining unit 4 is used for constructing a recognition model for recognizing the working state of the pumping unit.
As an example, the working state of the pumping unit includes: idle running and normal running.
As an example, in the process of training the recognition model, the model obtaining unit 4 first identifies the relative position of the pumping unit in the image picture by using the database sample model (the database sample model itself is obtained from training samples), performs displacement offset correction by using a data algorithm, recognizes the corrected pumping unit state at a rate of 1 frame per second, and then determines whether the moving part of the pumping unit moves according to a frame-by-frame image comparison.
For example, a picture may be prestored for the video picture of each preset point. Because the positioning accuracy of the pan/tilt mechanism of the camera has a certain error, the captured picture has a slight offset, which may affect the recognition result when the pumping unit is detected. In this case, the relative positional relationship among a plurality of points in the original image may be compared with that among the corresponding points in the actual image, so as to determine the displacement between the original image and the actual image.
In this way, the calibration frame used for recognition is moved by the offset amount calculated in the previous step, so that the calibration frame matches the attention target in the actual image.
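A sketch of the offset estimation and calibration-frame shift described above, assuming point correspondences between the prestored and actual pictures are already available; the mean-shift estimate is only one simple choice of "data algorithm":

```python
from typing import List, Tuple

Point = Tuple[float, float]
Rect = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def estimate_offset(ref_pts: List[Point], actual_pts: List[Point]) -> Point:
    """Estimate the displacement between the prestored (reference) picture and
    the actual picture as the mean shift of corresponding points."""
    n = len(ref_pts)
    dx = sum(a[0] - r[0] for r, a in zip(ref_pts, actual_pts)) / n
    dy = sum(a[1] - r[1] for r, a in zip(ref_pts, actual_pts)) / n
    return (dx, dy)

def shift_calibration_frame(rect: Rect, offset: Point) -> Rect:
    # Move the recognition calibration frame by the estimated offset so that
    # it matches the attention target in the actual image
    dx, dy = offset
    return (rect[0] + dx, rect[1] + dy, rect[2] + dx, rect[3] + dy)

# Three corresponding points; the actual picture is shifted by (+4, -3) pixels
ref = [(100.0, 100.0), (500.0, 120.0), (300.0, 400.0)]
act = [(104.0, 97.0), (504.0, 117.0), (304.0, 397.0)]
off = estimate_offset(ref, act)
```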
In another aspect of the present invention, a method for discriminating the production state of oil and gas field equipment based on big data monitoring is further provided. As shown in fig. 3, the method includes: step one, setting video inspection preset points and adding a corresponding information frame to a database each time an inspection instruction is sent, wherein the information frame comprises attribute information of the preset point, and the attribute information at least comprises pumping unit information corresponding to the preset point; step two, acquiring the attribute information of the preset point from the database, analyzing and comparing images of the preset point to determine the position of the pumping unit in the video picture, and judging the working state of the pumping unit according to the motion track of an observation point; step three, processing the recognition result and returning it to the database; step four, reading the returned result from the database; and step five, repeating steps one to four when the next video inspection is carried out.
As an example, the attribute information of each preset point includes an inspection point data table and a pumping unit data table.
The inspection point data table includes one or more of the following items: an inspection point identifier; pan/tilt information; a camera type; camera information; a pumping unit identifier array; and an observation range.
In addition, the pumping unit data table includes one or more of a pumping unit identifier, the identifier of the inspection point where the pumping unit is located, a pumping unit type and a pumping unit bounding rectangle.
As an example, the method further comprises: and constructing an identification model for identifying the working state of the pumping unit.
As an example, the working state of the pumping unit includes: idle running and normal running.
As an example, in the process of training the recognition model, the database sample model is used to identify the relative position of the pumping unit in the image picture, displacement offset correction is performed by using a data algorithm, the corrected pumping unit state is recognized at a rate of 1 frame per second, and whether the moving part of the pumping unit moves is judged according to a frame-by-frame image comparison.
Referring to the recognition flowchart shown in fig. 4 and the processing principle shown in fig. 5, a preferred embodiment of the present invention is described below.
Conventional video monitoring cannot intelligently identify whether an intrusion behavior exists; line-crossing alarms and intrusion alarms cannot identify target information, and cannot judge whether an intrusion target is a person, a vehicle or an animal. Moreover, conventional video monitoring mostly adopts a one-to-one mode, i.e., one camera faces one pumping well. The invention adopts a 750 mm to 1000 mm telephoto lens to collect video images, which can cover a plurality of oil wells within a radius of 3 km. Through deep learning, the data server can extract learned targets such as people and vehicles from pictures with complex background environments and can accurately report the type of an intrusion target, thereby judging the target.
In this example, this can be realized by, for example, the following steps one to five.
In the first step, the video inspection control unit plans the video inspection preset points and inserts an information frame into the database after each inspection instruction is sent. The information frame contains the attribute information of the inspection point, such as the location name, the location of the focus observation target, and the like.
In the second step: the video analysis and identification unit receives the attribute information in the database and performs image analysis and comparison on the preset point. And identifying whether an intrusion target exists, and if so, what object enters the warning area.
In the third step, the video analysis and identification unit returns the identified result to the database after processing.
And in the fourth step, the video inspection control unit reads the result returned by the video analysis and identification unit in the database, and alarms if the result exceeds a preset alarm threshold value.
In the fifth step, when the next video inspection is carried out, the contents of the first step to the fourth step are repeated.
Thus, referring to the identification flowchart, the identification software can read the video file and select representative picture frames and target objects for calibration; human-computer interactive calibration is supported, and the calibration result can be directly used for training the deep learning model.
In order to improve the judgment precision, an inspection parameter definition method is designed, which comprises the following data tables:
The inspection point data table includes:
Inspection point identifier (a number or character string representing a separate warning zone, different under infrared and visible light)
Pan/tilt information (direction angle, pitch angle)
Camera type (0: visible light; 1: infrared)
Camera information (focal length)
Observation range (the effective observation area in the current field of view, a rectangular area; the pixel coordinates of the upper left corner and the lower right corner may be recorded)
Pumping unit identifier array (corresponding to the records in the pumping unit data table, which record the pumping unit information within the observation range)
In addition, the pumping unit data table (two records for the same pumping unit, corresponding to infrared and visible light) includes:
Pumping unit identifier
Identifier of the inspection point where the pumping unit is located
Pumping unit type (0: beam pumping unit; 1: tower pumping unit)
Pumping unit bounding rectangle (the bounding rectangle of the pumping unit in the current field of view; the pixel coordinates of the upper left corner and the lower right corner may be recorded)
In addition, when the deep learning sample library is built, representative images are selected from 300,000 frames of monitoring images, and the shape of the pumping unit under infrared and visible light is labeled using calibration software; meanwhile, some person and vehicle image samples are imported from open image libraries and labeled, so as to generate the sample library.
For the deep learning model, the PASCAL VOC data set provides a standardized, high-quality format for image identification and classification; a VOC-format data set suited to the user's specific detection purpose is constructed for network training, so that targets of that specific purpose can be identified.
The VOC data set mainly comprises three folders: Annotations, ImageSets and JPEGImages.
JPEGImages: all picture information is included, including training pictures and test pictures. These images are named in "number. jpg" format, where the numbers are all six digits. These images are the image data used for training and test validation.
The indications: and storing tag files in an xml format, wherein each xml records the label information of each picture and is consistent with the picture name.
By frame-selecting the pumping unit head in each frame of the video, information such as the bounding box position, the object type, and the picture name and size is written into an xml file corresponding to the picture name in JPEGImages. Taking 000001.jpg as an example, two pumping unit heads are frame-selected and the type is named "machine", producing the corresponding 000001.xml file.
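A hypothetical sketch of generating such an annotation with Python's standard library; the xml follows the usual PASCAL VOC layout, and the box coordinates are invented for illustration:

```python
import xml.etree.ElementTree as ET

def make_voc_annotation(filename: str, width: int, height: int,
                        boxes: list) -> ET.Element:
    """Build a PASCAL-VOC-style annotation element; boxes is a list of
    (name, xmin, ymin, xmax, ymax) tuples in pixel coordinates."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for name, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return root

# Two pumping-unit heads labeled "machine" in 000001.jpg (coordinates assumed)
ann = make_voc_annotation("000001.jpg", 1920, 1080,
                          [("machine", 100, 200, 400, 500),
                           ("machine", 900, 250, 1200, 560)])
```

Serializing `ann` with `ET.ElementTree(ann).write("000001.xml")` would produce the label file stored under Annotations.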
In order to prevent overfitting of the model and make the model more robust, data augmentation needs to be applied to expand the data. The following may be used: 1) noise addition: adding Gaussian noise to the image; 2) blurring; 3) flipping: including horizontal flipping and vertical flipping.
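The three augmentations can be sketched on a plain grayscale image (a list of rows of 0-255 values); a real pipeline would use an image library, so this is only a minimal illustration:

```python
import random

def add_gaussian_noise(img, sigma=5.0, seed=0):
    """Add Gaussian noise to a grayscale image, clipping back to 0-255."""
    rng = random.Random(seed)
    return [[min(255, max(0, int(round(p + rng.gauss(0, sigma))))) for p in row]
            for row in img]

def blur3(img):
    """Rough horizontal 1x3 mean blur (edge pixels kept as-is)."""
    out = []
    for row in img:
        new = row[:]
        for i in range(1, len(row) - 1):
            new[i] = (row[i - 1] + row[i] + row[i + 1]) // 3
        out.append(new)
    return out

def flip_h(img):
    # Horizontal flip: reverse each row
    return [row[::-1] for row in img]

def flip_v(img):
    # Vertical flip: reverse the row order
    return img[::-1]

img = [[0, 128, 255], [10, 20, 30]]
```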
ImageSets: the Main subfolder contains four txt files, each of which records a list of picture numbers (in the standard VOC layout: train.txt, val.txt, trainval.txt and test.txt).
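A sketch of producing those picture-number lists; the split ratios are assumptions, and the file names follow the standard VOC train/val/trainval/test convention:

```python
import random

def make_splits(numbers, trainval_ratio=0.8, train_ratio=0.75, seed=42):
    """Split six-digit picture numbers into the four Main/*.txt lists of a
    VOC-style data set: train, val, trainval and test."""
    nums = sorted(numbers)
    rng = random.Random(seed)
    rng.shuffle(nums)
    n_tv = int(len(nums) * trainval_ratio)
    trainval, test = nums[:n_tv], nums[n_tv:]
    n_tr = int(len(trainval) * train_ratio)
    train, val = trainval[:n_tr], trainval[n_tr:]
    return {"train": sorted(train), "val": sorted(val),
            "trainval": sorted(trainval), "test": sorted(test)}

# 100 pictures numbered 000001 .. 000100
splits = make_splits([f"{i:06d}" for i in range(1, 101)])
```

Each list would then be written one number per line into the corresponding txt file under ImageSets/Main.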
Faster R-CNN: training one's own data set
Faster R-CNN is an optimized and accelerated version of both R-CNN and Fast R-CNN. Its target detection is divided into four steps: 1) candidate region generation; 2) feature extraction; 3) target classification; 4) refinement of the bounding box regions. The figure shows a structural comparison of the R-CNN, Fast R-CNN and Faster R-CNN frameworks:
under an Ubuntu16.04 system, a Caffe frame is built, a GPU is used for accelerating calculation of a deep neural network, and a fast R-CNN method is used for training a VOC data set of the user. For example, a ZF model can be selected as a pre-training model, and an alternative training (alt _ opt) can be selected as a training mode. Finally, a preliminary model (ZF _ false _ rcnn _ final. ca ffemodel) is obtained
According to different pumping unit types and routing inspection parameters, the pumping unit state judgment process integrating two methods of deep learning object identification and dynamic target detection is designed in the embodiment, as shown in fig. 5.
First, the database sample model is used to identify the relative position of the pumping unit in the image picture; displacement offset correction is then carried out by using a data algorithm; the corrected pumping unit state is recognized at a rate of 1 frame per second; and whether the moving part of the pumping unit moves is determined according to a frame-by-frame image comparison, with one judgment made every 8 seconds.
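The 1-frame-per-second comparison with one judgment per 8-second window can be sketched as follows; the frame representation, moving-part region, and difference threshold are illustrative assumptions:

```python
def region_changed(frame_a, frame_b, rect, threshold=10):
    """Compare the moving-part region (x1, y1, x2, y2) of two grayscale frames
    (lists of rows): changed if the mean absolute pixel difference exceeds
    the threshold."""
    x1, y1, x2, y2 = rect
    total = count = 0
    for y in range(y1, y2):
        for x in range(x1, x2):
            total += abs(frame_a[y][x] - frame_b[y][x])
            count += 1
    return total / count > threshold

def decide(frames, rect, window=8):
    """Frames sampled at 1 frame per second; one decision per `window` frames:
    'working' if any adjacent pair in the window shows change, else 'stopped'."""
    decisions = []
    for start in range(0, len(frames) - window + 1, window):
        chunk = frames[start:start + window]
        moved = any(region_changed(a, b, rect) for a, b in zip(chunk, chunk[1:]))
        decisions.append("working" if moved else "stopped")
    return decisions

# Tiny 4x4 test frames: a constant scene vs. one alternating in brightness
still = [[50] * 4 for _ in range(4)]
mov = [[120] * 4 for _ in range(4)]
```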
Therefore, the training model, the algorithm and the inspection parameters are integrated, the real-time video stream and the database are accessed, and the alarm result is displayed and output by the video inspection unit on the existing big data platform.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention and the advantageous effects thereof have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.