CN117934729A - Real-time three-dimensional projection fusion method for oil-gas field video monitoring - Google Patents

Real-time three-dimensional projection fusion method for oil-gas field video monitoring

Info

Publication number
CN117934729A
Authority
CN
China
Prior art keywords
data
monitoring
real
time
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410323494.0A
Other languages
Chinese (zh)
Other versions
CN117934729B (en)
Inventor
薛江超
张文斌
王龙
方书锋
王莹伟
聂振举
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhongwang Energy Technology Co ltd
Original Assignee
Xi'an Zhongwang Energy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhongwang Energy Technology Co ltd filed Critical Xi'an Zhongwang Energy Technology Co ltd
Priority to CN202410323494.0A priority Critical patent/CN117934729B/en
Publication of CN117934729A publication Critical patent/CN117934729A/en
Application granted granted Critical
Publication of CN117934729B publication Critical patent/CN117934729B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image communication, and in particular to a real-time three-dimensional projection fusion method for video monitoring of an oil and gas field. The method comprises the following steps: updating a three-dimensional model of the oil and gas field in real time by using sensor equipment to generate a real-time three-dimensional model; presetting a renderable range of the real-time three-dimensional model to generate a rendering range calibration model; performing rendering area calibration on the rendering range calibration model to generate area data to be rendered; performing projection fusion processing on the area data to be rendered to generate projection fusion data; performing change fusion processing on the projection fusion data to generate real-time projection fusion data; performing fused fixed-object identification according to the projection fusion data and the real-time projection fusion data to generate object information data; and performing differential object update rendering based on the object information data to generate a re-rendered three-dimensional model. By monitoring the state of the equipment and establishing a monitoring coordinate axis, the invention performs projection fusion processing in a timely and accurate manner, improving monitoring efficiency.

Description

Real-time three-dimensional projection fusion method for oil-gas field video monitoring
Technical Field
The invention relates to the technical field of image communication, in particular to a real-time three-dimensional projection fusion method for video monitoring of an oil-gas field.
Background
The oil and gas production process involves complex and changeable conditions that require comprehensive, accurate monitoring and analysis. A projection fusion method is therefore needed that integrates real-time video monitoring with three-dimensional geological data to achieve highly visual, real-time monitoring of the oil and gas field production site. Such a method not only helps to discover potential problems in time and improve production efficiency, but also provides decision makers with intuitive, comprehensive information for responding to emergencies and optimizing production strategies, ultimately promoting the intelligent and refined management and operation of oil and gas fields. However, existing real-time three-dimensional projection fusion methods for oil and gas field video monitoring lack an alarm function and rely on fixed monitoring positions, resulting in poor projection fusion monitoring performance.
Disclosure of Invention
Based on this, it is necessary to provide a real-time three-dimensional projection fusion method for video monitoring of an oil-gas field, so as to solve at least one of the above technical problems.
In order to achieve the above purpose, the real-time three-dimensional projection fusion method for oil-gas field video monitoring comprises the following steps:
Step S1: acquiring a three-dimensional model of the oil and gas field; updating the three-dimensional model of the oil and gas field in real time by using sensor equipment to generate a real-time three-dimensional model;
Step S2: acquiring monitoring association parameters; establishing a coordinate axis based on the monitoring association parameters to generate a monitoring coordinate axis; presetting a renderable range of the real-time three-dimensional model based on the monitoring association parameters and the monitoring coordinate axis to generate a rendering range calibration model;
Step S3: acquiring real-time monitoring data; performing monitoring pitch angle calculation according to the real-time monitoring data to generate pitch angle data; performing rendering area calibration on the rendering range calibration model according to the pitch angle data to generate area data to be rendered;
Step S4: performing projection fusion processing on the area data to be rendered to generate projection fusion data; acquiring real-time coordinate data according to the rendering range calibration model and the monitoring coordinate axis; performing change fusion processing on the projection fusion data based on the real-time coordinate data to generate real-time projection fusion data;
Step S5: performing fused fixed-object identification according to the projection fusion data and the real-time projection fusion data to generate object information data; and performing differential object update rendering based on the object information data to generate a re-rendered three-dimensional model.
According to the invention, the sensor equipment effectively captures the dynamic changes of the oil and gas field, ensuring that the model accurately reflects the actual situation. This helps to detect changes in geological structure, equipment state and the like in time, improving the real-time performance and accuracy of monitoring. By establishing the monitoring coordinate axis, the monitoring data can be associated with actual geographic locations, and the generated rendering range calibration model provides a preset standard for the visualization of subsequent real-time data, ensuring that the monitoring picture presents clearer and more intuitive information. Calculating the monitoring pitch angle allows the angle of the monitoring camera to be determined more accurately, so that the rendering range calibration model better matches the actual monitoring area; such accurate calibration improves the accuracy and stability of the monitoring system and avoids blind spots and information loss. Through projection fusion, data from different sensors and monitoring devices are fused into a whole, reducing information redundancy and making the monitoring result more comprehensive and reliable. Acquiring real-time coordinate data according to the rendering range calibration model and the monitoring coordinate axis ensures the correspondence between the real-time monitoring data and the three-dimensional model, helping to map the monitoring data accurately into the model so that the monitoring picture matches the actual situation, improving the accuracy and credibility of the data.
Performing fused fixed-object identification according to the projection fusion data and the real-time projection fusion data improves the intelligence and fault detection capability of the monitoring system. Through differential analysis, the system automatically identifies and marks fixed objects that change in the projection fusion, providing operators with timely abnormality alarms and key information, helping to prevent and resolve potential problems in advance, and raising the automation level of the monitoring system. In this way, the real-time three-dimensional projection fusion method for oil and gas field video monitoring acquires equipment state data, extracts abnormal state data to automatically control the monitoring equipment, and performs projection fusion processing, thereby obtaining detailed equipment information and improving monitoring efficiency.
Preferably, step S1 comprises the steps of: step S11: acquiring a three-dimensional model of the oil and gas field;
Step S12: carrying out sensor position marking on the three-dimensional model of the oil and gas field to generate an initial three-dimensional model;
Step S13: monitoring the state of the oil and gas field facilities based on the sensor equipment to generate equipment real-time change data;
step S14: and updating the real-time data of the initial three-dimensional model according to the real-time change data of the equipment to generate a real-time three-dimensional model.
The invention establishes the digital representation of the geographical structure of the oil and gas field through the three-dimensional model of the oil and gas field. The method comprises the key information of geological structure, equipment layout and the like, and provides a visual basis for real-time monitoring and management of the whole system. And the sensor position of the three-dimensional model of the oil and gas field is marked, so that the accurate position of the sensor in the model is ensured, the subsequent real-time monitoring data can be accurately mapped to the corresponding geographic position, and the accuracy and the reliability of the monitoring system are improved. The state of the oil and gas field facilities is monitored through the sensor equipment, and the system can capture information such as the running condition and the health condition of the equipment in real time. The method is beneficial to realizing real-time monitoring and fault prediction of equipment and improving the reliability and stability of the production process. And updating the real-time data of the initial three-dimensional model according to the real-time change data of the equipment to generate a real-time three-dimensional model. The method is beneficial to keeping the actual production condition consistent with the theoretical model and providing real-time and accurate geographic information. The updating of the real-time three-dimensional model is helpful for accurately predicting possible problems, and more effective production monitoring and decision support are realized.
Preferably, step S2 comprises the steps of: step S21: acquiring a monitoring association parameter;
Step S22: calibrating a monitoring center according to the monitoring related parameters to generate coordinate data of the monitoring center; establishing a coordinate axis according to the coordinate data of the monitoring center to generate a monitoring coordinate axis;
Step S23: performing monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis to generate monitoring view field data;
Step S24: and presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model.
The invention obtains key parameters related to monitoring, such as the characteristics, position information, field angle and the like of the monitoring equipment through monitoring the related parameters. And calibrating the monitoring center according to the monitoring related parameters, generating coordinate data of the monitoring center, and establishing a monitoring coordinate axis according to the coordinate data. The accurate position of the monitoring center in the actual three-dimensional model is ensured, and a coordinate system related to the position of the monitoring equipment is established, so that a foundation is provided for subsequent visualization and space positioning. And carrying out monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis, and generating monitoring field data. The method and the device are helpful for determining the field of view range of the monitoring device, including the monitorable space range and the monitoring coverage condition. An effective working range of the monitoring device in a real scene is provided. And presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model. It is ensured that only the area of the real-time three-dimensional model that is within the field of view of the monitoring device will be rendered and displayed. The method effectively reduces the burden of calculation and rendering, improves the efficiency of the monitoring system, and ensures that the displayed information meets the monitoring requirement.
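As an illustrative sketch of the renderable-range preset in steps S23 and S24 (the patent prescribes no data structures, so the 2-D simplification, the function name `renderable_range`, and its parameters are assumptions), the monitoring field-of-view data could be used to filter model points against the camera's angle of view and monitoring radius:

```python
import math

def renderable_range(center, heading_deg, fov_deg, max_dist, points):
    """Return the subset of model points inside the camera's field of view.

    center: (x, y) of the monitoring center; heading_deg: camera azimuth;
    fov_deg: horizontal field angle; max_dist: monitoring radius.
    All names and the 2-D simplification are illustrative assumptions.
    """
    cx, cy = center
    visible = []
    for px, py in points:
        dx, dy = px - cx, py - cy
        dist = math.hypot(dx, dy)
        if dist > max_dist or dist == 0:
            continue
        # Angle of the point relative to the camera heading, wrapped to [-180, 180)
        ang = math.degrees(math.atan2(dy, dx)) - heading_deg
        ang = (ang + 180) % 360 - 180
        if abs(ang) <= fov_deg / 2:
            visible.append((px, py))
    return visible
```

Only points passing this filter would then be rendered, which is what limits the computation and rendering burden described above.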
Preferably, step S3 comprises the steps of: step S31: acquiring real-time monitoring data; performing data preprocessing on the real-time monitoring data to generate standard monitoring data;
Step S32: performing monitoring coordinate extraction on the standard monitoring data based on the monitoring coordinate axis to generate monitoring coordinate data;
Step S33: monitoring pitch angle calculation is carried out based on the monitoring coordinate data, and pitch angle data is generated;
step S34: performing monitoring range assessment according to the pitch angle data to generate monitoring range data;
Step S35: and carrying out rendering area calibration on the rendering range calibration model by using the monitoring range data, and generating the area data to be rendered.
According to the invention, acquiring real-time monitoring data provides real-time information from the actual oil and gas field production process, such as equipment status, fluid flow conditions and environmental parameters. Preprocessing the real-time monitoring data into standard monitoring data removes noise and corrects abnormal values, improving the stability and accuracy of the monitoring system. Extracting monitoring coordinates from the standard monitoring data generates monitoring coordinate data, providing key geolocation information for subsequent field-of-view analysis and visualization and accurately reflecting the position of the monitoring data in the actual scene. Calculating the pitch angle determines the angle of the monitoring device relative to the ground, positioning the display range of the monitoring picture more accurately; this helps the monitoring system better meet actual demands and improves the accuracy and visual effect of the monitoring data. Monitoring range assessment based on the pitch angle data determines the field-of-view range of the monitoring device, evaluating the area the device can cover and establishing the effective monitoring range; generating the monitoring range data helps to optimize the configuration of the monitoring system, ensures that changes in key areas are monitored, and improves the effectiveness of the monitoring system. Using the monitoring range data to calibrate the rendering area of the rendering range calibration model generates the area data to be rendered; rendering and displaying only the area within the monitoring range reduces the computation and rendering burden and improves the efficiency of the monitoring system.
By generating the area data to be rendered, the monitoring picture is effectively cropped, so that the displayed information is more focused and targeted and meets the monitoring requirements.
Preferably, in step S33, the monitoring pitch angle calculation is performed by a pitch angle calculation formula, where the pitch angle calculation formula is as follows:

θ = arctan( z′ / √(x′² + y′²) )

where θ is the changed pitch angle, (x_s, y_s, z_s) are the abscissa, ordinate and vertical coordinate values of the camera, (x_0, y_0, z_0) are the abscissa, ordinate and vertical coordinate values of the camera center, α is the horizontal rotation angle, β is the vertical pitch angle, (x_j, y_j, z_j) are the abscissa, ordinate and vertical coordinate values of the monitoring device, and (x′, y′, z′) are the coordinates of the monitoring device after rotation, obtained as described below.
The pitch angle calculation formula provided by the invention for monitoring pitch angle calculation based on the monitoring coordinate data fully considers the camera coordinates (x_s, y_s, z_s), the camera center coordinates (x_0, y_0, z_0), the horizontal rotation angle α, the vertical pitch angle β, the monitoring device coordinates (x_j, y_j, z_j), and the interactions between these variables.
The formula is obtained by rotation transformation and coordinate transformation in three-dimensional space. Let the position of the monitoring device be (x_j, y_j, z_j), the position of the camera center be (x_0, y_0, z_0), the vertical pitch angle be β, and the horizontal rotation angle be α. Taking the camera center as the origin, a coordinate system with the center point as its origin is established. The new coordinates of the monitoring device after the horizontal rotation and the pitch are (x′, y′, z′), obtained by the rotation matrix transformation

(x′, y′, z′)ᵀ = R_x(β) · R_z(α) · (x_j − x_0, y_j − y_0, z_j − z_0)ᵀ

where R_z(α) denotes rotation about the vertical axis by the horizontal rotation angle α and R_x(β) denotes rotation about the horizontal axis by the vertical pitch angle β. From this matrix, the position (x′, y′, z′) of the monitoring device in the new coordinate system is obtained, and combining the two formulas yields the changed pitch angle θ. The formula calculates the pitch angle of the monitoring device in real time, so that the monitoring system can respond quickly to changes in the equipment state. This is of practical importance for application scenarios in which the monitoring direction must be adjusted at any time, such as a monitoring camera or a robot vision system that automatically tracks a target. By taking into account the horizontal rotation angle and the pitch angle corresponding to the center, the formula calculates the pitch angle more accurately, improving the accuracy of the monitoring system. The formula also helps to avoid the effects of sensor drift by considering the initial position and rotation state of the monitoring device: sensor drift may cause deviations in the position of the monitoring device, and the formula can partially correct these deviations during calculation, improving the stability of the monitoring system.
Meanwhile, the coordinates of the monitoring equipment and the coordinates of the center of the camera in the formula can be adjusted according to actual monitoring information, and the method is applied to monitoring calculation of pitch angle data corresponding to monitoring coordinate data of different scenes, so that the flexibility and applicability of an algorithm are improved.
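The rotation-and-arctangent derivation described above can be sketched in code. In the following minimal illustration, the function name `changed_pitch_angle` and the rotation order (horizontal rotation about the vertical axis first, then pitch about the horizontal axis) are assumptions, since the patent's original matrix is not reproduced here:

```python
import math

def changed_pitch_angle(device, center, alpha_deg, beta_deg):
    """Compute the changed pitch angle of the monitoring device.

    device / center: (x, y, z) of the monitoring device and camera center;
    alpha_deg: horizontal rotation angle; beta_deg: vertical pitch angle.
    Rotation order (z axis first, then x axis) is an illustrative assumption.
    """
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    # Translate so the camera center is the origin of the coordinate system
    x, y, z = (d - c for d, c in zip(device, center))
    # Horizontal rotation about the vertical (z) axis
    x1 = x * math.cos(a) - y * math.sin(a)
    y1 = x * math.sin(a) + y * math.cos(a)
    z1 = z
    # Vertical pitch about the x axis
    y2 = y1 * math.cos(b) - z1 * math.sin(b)
    z2 = y1 * math.sin(b) + z1 * math.cos(b)
    x2 = x1
    # Pitch angle of the rotated position relative to the horizontal plane
    return math.degrees(math.atan2(z2, math.hypot(x2, y2)))
```

For example, a device at (10, 0, 10) seen from a camera center at the origin with no rotation sits 45 degrees above the horizontal plane.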
Preferably, step S4 comprises the steps of: step S41: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data;
step S42: coordinate record transmission is carried out according to the rendering range calibration model and the monitoring coordinate axis, and real-time coordinate data are generated;
step S43: carrying out coordinate difference calculation on the real-time coordinate data and the monitoring coordinate data to generate coordinate difference data;
Step S44: when the coordinate difference data is larger than the preset coordinate difference data, performing monitoring pitch angle calculation based on the real-time coordinate data to generate real-time pitch angle data; when the coordinate difference data is smaller than or equal to the preset coordinate difference data, leaving the projection fusion data unchanged;
step S45: performing real-time monitoring range assessment according to the real-time pitch angle data to generate changed monitoring range data;
Step S46: performing change region evaluation on the monitoring range data based on the change monitoring range data to generate change region data;
step S47: and performing re-fusion processing based on the changed region data to generate real-time projection fusion data.
By means of projection fusion, the system can integrate multi-source information, ensure that monitoring pictures are more comprehensive and consistent, and improve the comprehensive monitoring effect of the monitoring system on the production process of the oil-gas field. And carrying out coordinate record transmission according to the rendering range calibration model and the monitoring coordinate axis to generate real-time coordinate data. The real-time monitoring data is ensured to be consistent with the coordinate system of the monitoring system, so that the real-time data can be accurately mapped into the monitoring system, and the accuracy of the monitoring data is improved. And carrying out coordinate difference calculation on the real-time coordinate data and the monitoring coordinate data to generate coordinate difference data, which is helpful for judging whether the monitoring equipment has position change or not, and triggering corresponding processing. And when the coordinate difference data is larger than the preset coordinate difference data, performing monitoring pitch angle calculation based on the real-time coordinate data to generate real-time pitch angle data. The real-time adjustment of the pitch angle of the monitoring equipment is realized so as to maintain the stability of the monitoring picture. When the coordinate difference is large, it may indicate that the direction of the monitoring device has changed greatly, and the display angle of the monitoring picture needs to be adjusted accordingly, so as to maintain the accuracy of monitoring. Through real-time monitoring range assessment, the system can dynamically adjust the effective range of the monitoring equipment so as to adapt to the position change of the equipment and keep the stability and accuracy of the monitoring system. Generating change monitoring range data helps identify monitoring range changes caused by monitoring device position changes. 
And carrying out change area evaluation on the monitoring range data based on the change monitoring range data to generate change area data. And the monitoring range which is actually changed is determined, so that the area which is changed in the monitoring picture can be further accurately identified, and the sensitivity of the monitoring system is improved. And (3) re-fusion processing is carried out based on the changed region data, and the region with the changed monitoring range is re-fused with the original monitoring data to generate real-time projection fusion data. The method is beneficial to realizing the timely updating of the monitoring picture, ensures that the displayed information is more in line with the actual situation, and improves the response speed and accuracy of the monitoring system to the change.
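A minimal sketch of the threshold logic in steps S43 and S44 follows. The threshold value `PRESET_COORD_DIFF`, the function name `update_projection`, and the re-fusion callback `refuse_fn` are all hypothetical, since the patent does not fix concrete values or interfaces:

```python
import math

# Illustrative threshold; the patent leaves the preset coordinate difference unspecified.
PRESET_COORD_DIFF = 0.5

def update_projection(real_time_coord, monitor_coord, fused, refuse_fn):
    """Recompute the projection fusion only when the device has moved.

    real_time_coord / monitor_coord: (x, y, z) tuples; fused: the current
    projection fusion data; refuse_fn: callback performing re-fusion from
    the new coordinates (hypothetical interface).
    """
    diff = math.dist(real_time_coord, monitor_coord)  # coordinate difference data
    if diff > PRESET_COORD_DIFF:
        # Position changed beyond the preset: re-evaluate the range and re-fuse
        return refuse_fn(real_time_coord)
    # Within tolerance: leave the projection fusion data unchanged
    return fused
```

This gating keeps the expensive re-fusion path off until the coordinate difference actually exceeds the preset, which matches the efficiency argument made above.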
Preferably, step S42 comprises the steps of: step S421: performing rotation control on the monitoring equipment and recording the rotation suspension duration to generate suspension duration data; when the suspension duration data is greater than or equal to a preset suspension duration, transmitting the camera coordinates through the monitoring sensor equipment to generate manual control real-time coordinate data; when the suspension duration data is smaller than the preset suspension duration, taking the monitoring coordinate data as the manual control real-time coordinate data;
step S422: detecting and extracting abnormal states according to the rendering range calibration model to generate abnormal information data;
step S423: performing abnormal azimuth matching on the rendering range calibration model according to the abnormal information data to generate abnormal matching data;
Step S424: performing area monitoring control based on the abnormal matching data, and confirming coordinate points through a monitoring coordinate axis, so as to generate machine control real-time coordinate data;
Step S425: and carrying out time sequence combination on the manual control real-time coordinate data and the machine control real-time coordinate data to generate real-time coordinate data.
According to the invention, rotation control is performed on the monitoring equipment and the rotation suspension duration is recorded, in order to determine whether the rotation amplitude has changed the original monitoring range. When the suspension duration data is greater than or equal to the preset suspension duration, the change exceeds the preset range, so the changed coordinates are transmitted into the rendering range calibration model through the monitoring sensor to generate the manual control real-time coordinate data; when the suspension duration data is smaller than the preset suspension duration, the monitoring coordinate data is taken as the manual control real-time coordinate data. This intelligent switching mechanism allows the system to respond quickly when manual control is required while preserving the continuity of automatic control. Detecting and extracting abnormal states according to the rendering range calibration model helps the system to discover possible equipment faults or abnormal conditions in time. Performing abnormal azimuth matching on the rendering range calibration model according to the abnormal information data determines the specific position where the abnormality occurs, providing accurate positioning information for subsequent area monitoring control. Performing area monitoring control based on the abnormal matching data and confirming coordinate points through the monitoring coordinate axis generates the machine control real-time coordinate data; this realizes monitoring control of the abnormal area, while the coordinate confirmation through the monitoring coordinate axis ensures that the generated machine control real-time coordinate data is accurate.
The manual control real-time coordinate data and the machine control real-time coordinate data are combined in time sequence, the coordinate data of manual control and automatic control are orderly combined together, the continuity of the monitoring system is maintained, and meanwhile flexible control of the monitoring equipment is realized.
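The dwell-time switching of step S421 can be sketched as follows. The two-second preset `PRESET_PAUSE_S` and the function name are hypothetical, since the patent leaves the preset suspension duration unspecified:

```python
# Illustrative preset; the patent does not fix a concrete dwell duration.
PRESET_PAUSE_S = 2.0

def manual_realtime_coords(pause_s, sensor_coords, monitor_coords):
    """Step S421: choose the coordinate source from the rotation dwell time.

    pause_s: recorded rotation-suspension duration in seconds;
    sensor_coords: coordinates transmitted by the monitoring sensor equipment;
    monitor_coords: the previously established monitoring coordinate data.
    """
    if pause_s >= PRESET_PAUSE_S:
        # Rotation paused long enough: trust the freshly transmitted coordinates
        return sensor_coords
    # Short pause: keep the existing monitoring coordinates
    return monitor_coords
```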
Preferably, step S46 comprises the steps of: step S461: performing data differentiation processing on the changed monitoring range data and the monitoring range data to generate changed monitoring differential data and monitoring differential data;
step S462: performing horizontal difference analysis on the changed monitoring differential data and the monitoring differential data to generate a horizontal difference value;
step S463: performing vertical difference analysis on the changed monitoring differential data and the monitoring differential data to generate a vertical difference value;
step S464: and carrying out difference position combination on the change monitoring range data according to the horizontal difference value and the vertical difference value to generate change area data.
According to the invention, the change monitoring differential data and the monitoring differential data are generated by carrying out data differentiation processing on the change monitoring range data and the monitoring range data, so that the capture of local change of the monitoring range is facilitated, and the subsequent analysis is more detailed and accurate. And carrying out horizontal and vertical difference analysis on the changed monitoring differential data and the monitoring differential data to generate a horizontal difference value and a vertical difference value. The effect of these two steps is to analyze the differential data differences and determine the extent of change in the horizontal and vertical directions. The horizontal difference value and the vertical difference value provide information of changing positions and directions, and provide a basis for generating follow-up change area data. And carrying out difference position combination on the change monitoring range data according to the horizontal difference value and the vertical difference value to generate change area data. The generated change area data accurately reflects the specific position and shape of the change of the monitoring range.
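A minimal sketch of steps S461 to S464, under the simplifying assumption that a monitoring range is an axis-aligned box; the box representation and the function name `change_region` are illustrative, not taken from the patent:

```python
def change_region(old_range, new_range):
    """Compute horizontal/vertical difference values and the changed area.

    Each range is (x_min, y_min, x_max, y_max); the box form is an
    illustrative assumption about the monitoring-range data.
    Returns (horizontal_diff, vertical_diff, changed_box).
    """
    ox0, oy0, ox1, oy1 = old_range
    nx0, ny0, nx1, ny1 = new_range
    # Horizontal and vertical difference values (per edge of the range)
    h_diff = (nx0 - ox0, nx1 - ox1)
    v_diff = (ny0 - oy0, ny1 - oy1)
    # Combine the difference positions into the changed-area bounding box
    changed = (min(ox0, nx0), min(oy0, ny0), max(ox1, nx1), max(oy1, ny1))
    return h_diff, v_diff, changed
```

A range shifted two units to the right, for instance, yields a horizontal difference of (2, 2), no vertical difference, and a changed area spanning both positions.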
Preferably, step S5 comprises the steps of: step S51: carrying out fixed object identification according to the projection fusion data and the real-time projection fusion data to generate identified object data;
step S52: extracting object information from the identified object data to generate object information data;
step S53: performing differential object recognition on the rendering range calibration model based on the object information data to generate differential object information data;
Step S54: and carrying out model updating rendering on the rendering range calibration model according to the difference object information data to generate a re-rendering three-dimensional model.
By comparing the projection fusion data with the change of the real-time projection fusion data, the system can determine which objects are stationary. After generating the identified object data, the system can know which objects are stable in the monitoring screen, providing a basis for subsequent analysis. And extracting object information from the identified object data to generate object information data. The effect of this step is to extract detailed information from the identified object, possibly including the type, size, location, etc. of the object. Helping the system to more fully understand objects in the monitored scene. And carrying out differential object identification on the rendering range calibration model based on the object information data to generate differential object information data. By comparing the real-time data with the model data, the system can determine which objects are changed, and the perceptibility of the monitoring system to the changes is improved. And carrying out model updating rendering on the rendering range calibration model according to the difference object information data to generate a re-rendered three-dimensional model, so as to ensure that the displayed three-dimensional model more accurately reflects the change in the actual scene. The credibility and the instantaneity of the monitoring picture are improved.
Preferably, step S51 comprises the steps of: step S511: acquiring historical marked object data;
Step S512: fusion picture extraction is carried out according to the projection fusion data and the real-time projection fusion data, and head-to-tail frame data and real-time head-to-tail frame data are generated;
step S513: performing the same object identification based on the head-to-tail frame data and the real-time head-to-tail frame data to generate identifier data and real-time identifier data;
Step S514: carrying out the same data merging processing on the identifier data and the real-time identifier data to generate merged identifier data;
step S515: and screening unrecorded data of the combined identifier data by using the historical identified object data to generate identified object data.
The invention helps the system better understand changes in the monitored scene by obtaining object data identified during a previous monitoring period as a reference for comparison. Head-to-tail frame data are extracted from the fused monitoring picture to support the subsequent same-object identification and data merging. By comparing the head-to-tail frame data with the real-time head-to-tail frame data, the system can identify the same objects in the monitoring picture and generate the identifier data and real-time identifier data. The identifier data and real-time identifier data then undergo same-data merging to generate merged identifier data; this integrates the identified objects from the projection fusion data and the real-time projection fusion data, ensuring that the system records the identified objects across the whole fused scene comprehensively and accurately. Finally, by comparison with the historical identified object data, the newly added identified objects from the real-time monitoring are screened out, making the result more complete and accurate and generating the identified object data.
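The identifier merging and screening of steps S514–S515 can be expressed as simple set operations. The sketch below is illustrative only: the function name `screen_identified_objects` and the example identifiers are assumptions, not part of the patent.

```python
def screen_identified_objects(identifier_data, realtime_identifier_data,
                              historical_ids):
    """Merge identifiers found in the fused and real-time frame data
    (step S514), then keep only those not already recorded in the
    historical identified object data (step S515)."""
    merged = set(identifier_data) | set(realtime_identifier_data)
    newly_identified = merged - set(historical_ids)  # screen unrecorded data
    return sorted(newly_identified)

# Example: "pump_03" appears in both frame sets; "tank_01" is historical.
new_objects = screen_identified_objects(
    {"tank_01", "pump_03"}, {"pump_03", "derrick_07"}, {"tank_01"})
```

In this example the result contains only the objects not yet on record, so re-identification work is not repeated across monitoring periods.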
The method has the beneficial effects that an initial oil-gas field geospatial model is built; by updating the three-dimensional model in real time using the sensor devices, the system keeps the model highly consistent with the actual situation, providing an accurate, real-time geographic-information basis. The monitoring association parameters are acquired, a coordinate axis is established from them to generate the monitoring coordinate axis, and a renderable range is preset for the real-time three-dimensional model to generate the rendering range calibration model, thereby establishing a spatial coordinate system tied to the monitoring device. This coordinate system not only provides a reference frame that makes the relation between the monitoring device and the real-time three-dimensional model explicit, but also presets the visible area through the rendering range calibration model, improving the accuracy and consistency of subsequent rendering. Real-time monitoring data are acquired, the monitoring pitch angle is calculated to generate pitch angle data, and the rendering range calibration model is calibrated accordingly to generate the region data to be rendered, achieving dynamic adjustment to the pitch of the monitoring device. By calculating the pitch angle, the system can determine the viewable area of the monitoring picture more accurately and thereby optimize the rendering effect; generating the region data to be rendered lets the display focus only on the region of interest, improving rendering efficiency. Projection fusion processing is then performed on the region data to be rendered, mapping the information of the monitored scene into the rendering range and generating the projection fusion data.
By combining the rendering range calibration model with the monitoring coordinate axis, the system can effectively project the monitoring data into the designated display range. Change fusion processing is then applied to the projection fusion data using the real-time coordinate data to generate real-time projection fusion data, realizing dynamic projection adjustment; this ensures that information in the monitoring picture is projected correctly as conditions change, improving the timeliness and accuracy of the display. Fused fixed-object identification according to the projection fusion data and the real-time projection fusion data identifies the fixed objects in the monitoring picture: by comparing the two, the system can determine which objects are unchanged and generate object information data. Differential object update rendering based on the object information data then generates a re-rendered three-dimensional model, realizing real-time updating of dynamic objects. The monitoring system can thus efficiently distinguish fixed objects from dynamically changing ones, so that re-rendering concerns only the parts that need updating, improving the real-time performance and efficiency of rendering. Therefore, the real-time three-dimensional projection fusion method for oil-gas field video monitoring acquires device state data, extracts abnormal state data to control the monitoring device automatically, and performs projection fusion processing, thereby obtaining detailed device information and improving monitoring efficiency.
Drawings
FIG. 1 is a schematic flow chart of the steps of a method for real-time three-dimensional projection fusion of video monitoring of an oil-gas field;
FIG. 2 is a flowchart illustrating the detailed implementation of step S2 in FIG. 1;
FIG. 3 is a flowchart illustrating the detailed implementation of step S3 in FIG. 1;
FIG. 4 is a flowchart illustrating the detailed implementation of step S4 in FIG. 1;
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following is a clear and complete description of the technical method of the present patent in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
To achieve the above objective, please refer to fig. 1 to 4, a method for real-time three-dimensional projection fusion of video monitoring of an oil-gas field, the method comprising the following steps: step S1: acquiring a three-dimensional model of the oil and gas field; updating the three-dimensional model of the oil and gas field in real time by using sensor equipment to generate a real-time three-dimensional model;
step S2: acquiring a monitoring association parameter; establishing a coordinate axis based on the monitoring related parameters to generate a monitoring coordinate axis; performing renderable range presetting on the real-time three-dimensional model based on the monitoring related parameters and the monitoring coordinate axes to generate a rendering range calibration model;
step S3: acquiring real-time monitoring data; monitoring pitch angle calculation is carried out according to the real-time monitoring data, and pitch angle data are generated; and carrying out rendering area calibration on the rendering range calibration model according to the pitch angle data, and generating the area data to be rendered.
Step S4: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data; acquiring real-time coordinate data according to the rendering range calibration model and the monitoring coordinate axis; carrying out change fusion processing on the projection fusion data based on the real-time coordinate data to generate real-time projection fusion data;
Step S5: fusion fixed object identification is carried out according to the projection fusion data and the real-time projection fusion data, and object information data are generated; and performing differential object update rendering based on the object information data to generate a re-rendered three-dimensional model.
According to the invention, through the sensor equipment, the dynamic change of the oil-gas field is effectively captured, and the accurate reflection of the actual situation of the model is ensured. The method is beneficial to timely finding out the changes of geological structures, equipment states and the like, and improves the real-time performance and accuracy of monitoring. By establishing the monitoring coordinate axis, the monitoring data can be associated with the actual geographic location. And a rendering range calibration model is generated, so that a preset standard is provided for the visualization of subsequent real-time data, and the information presented by the monitoring picture is ensured to be clearer and more visual. Through calculation of the monitoring pitch angle, the angle of the monitoring camera can be determined more accurately, so that the rendering range calibration model is more consistent with the actual monitoring area. Such accurate calibration is helpful to improve accuracy and stability of the monitoring system, and avoids dead angles and information loss. Through projection fusion, data from different sensors and monitoring equipment can be fused into a whole, information redundancy is reduced, and the monitoring result is more comprehensive and reliable. The effect of acquiring real-time coordinate data according to the rendering range calibration model and the monitoring coordinate axis is to ensure the corresponding relation between the real-time monitoring data and the three-dimensional model. The method is beneficial to accurately mapping the monitoring data into the three-dimensional model, so that the monitoring picture is more in line with the actual situation, and the accuracy and the credibility of the data are improved. 
The effect of fusing the fixed object identification according to the projection fusion data and the real-time projection fusion data is to improve the intelligence and fault detection capability of the monitoring system. Through differential analysis, the system can automatically identify and mark the fixed object which changes in projection fusion, provides timely abnormal alarm and key information for operators, is beneficial to preventing and solving potential problems in advance, and improves the automation level of the monitoring system. Therefore, the real-time three-dimensional projection fusion method for the oil-gas field video monitoring extracts abnormal state data to automatically control the monitoring equipment by acquiring the equipment state data, and performs projection fusion processing, so that the equipment detailed information is obtained, and the monitoring efficiency is improved.
In the embodiment of the present invention, as described with reference to fig. 1, the step flow diagram of the method for real-time three-dimensional projection fusion of video monitoring of an oil and gas field of the present invention is provided, and in this example, the method for real-time three-dimensional projection fusion of video monitoring of an oil and gas field includes the following steps: step S1: acquiring a three-dimensional model of the oil and gas field; updating the three-dimensional model of the oil and gas field in real time by using sensor equipment to generate a real-time three-dimensional model;
In the embodiment of the invention, the related data such as the topography, the landform, the geology and the like of the oil and gas field are obtained through professional geographic measurement and exploration technology, and the three-dimensional model of the oil and gas field is constructed. Sensor equipment is arranged in an oil-gas field area, and key parameters such as the running state, the temperature, the pressure and the like of the facility are monitored. The sensor location is marked by Global Positioning System (GPS) or the like. And (3) correlating the sensor position information with the three-dimensional model of the oil and gas field to ensure that the subsequent real-time updating can correspond to the correct geographic position. And acquiring real-time state information of the oil and gas field facilities through real-time data acquisition, and generating equipment real-time change data. And dynamically updating the three-dimensional model of the oil and gas field according to the real-time change data of the equipment, so as to ensure that the model is kept synchronous with the actual facility state.
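The real-time model update described above can be sketched minimally as follows, assuming facility nodes are keyed by sensor ID in a dictionary; the `FacilityNode` structure and its field names are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field

@dataclass
class FacilityNode:
    """One facility in the model, at a GPS-derived (x, y, z) position."""
    position: tuple
    state: dict = field(default_factory=dict)

def update_model(model: dict, readings: dict) -> dict:
    """Apply real-time sensor readings to the matching facility nodes,
    keeping the three-dimensional model synchronous with field state.
    Readings for unknown sensors are ignored."""
    for sensor_id, values in readings.items():
        if sensor_id in model:
            model[sensor_id].state.update(values)
    return model

model = {"wellhead_A": FacilityNode((120.5, 36.1, 12.0))}
update_model(model, {"wellhead_A": {"pressure_MPa": 8.4, "temp_C": 61.0}})
```

The association between sensor position marks and model nodes (step S12) is what lets each reading land on the correct geographic location.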
Step S2: acquiring a monitoring association parameter; establishing a coordinate axis based on the monitoring related parameters to generate a monitoring coordinate axis; performing renderable range presetting on the real-time three-dimensional model based on the monitoring related parameters and the monitoring coordinate axes to generate a rendering range calibration model;
In the embodiment of the invention, the related parameters of the monitoring equipment are acquired through the document, the interface or the configuration file provided by the manufacturer of the monitoring equipment, and the related parameters may include information such as the position of the camera, the angle of view, the coordinates of the monitoring center and the like. And establishing a coordinate axis based on the monitoring center coordinate data in the monitoring related parameters. And the coordinate of the monitoring center is used as an origin, a corresponding coordinate axis is established according to the direction information in the associated parameters, and the coordinate axis is ensured to be consistent with the direction of the monitoring equipment. And analyzing the visual range of the monitoring equipment by using the monitoring related parameters, the coordinate axis information and the equipment characteristics to form monitoring field data. And presetting areas needing rendering in the real-time three-dimensional model by combining the monitoring view field data, and generating a rendering range calibration model. The model can guide the subsequent rendering operation, ensure that only the area in the monitoring view field is rendered, and improve the rendering efficiency.
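The renderable-range preset can be approximated by a horizontal field-of-view test against the monitoring coordinate axes. The sketch below treats the monitoring center as the origin; the function name, parameters, and the planar (2D) simplification are assumptions for illustration only.

```python
import math

def in_field_of_view(point, center, heading_deg, fov_deg, max_range):
    """Return True when a model point lies inside the monitoring field of
    view: within max_range of the monitoring center and within half the
    field angle of the device heading (horizontal plane only)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the bearing difference into [-180, 180) before comparing.
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

Filtering model regions through such a test before rendering is one way to realize the guidance in step S24 that only the area within the monitoring field of view is rendered.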
Step S3: acquiring real-time monitoring data; monitoring pitch angle calculation is carried out according to the real-time monitoring data, and pitch angle data are generated; and carrying out rendering area calibration on the rendering range calibration model according to the pitch angle data, and generating the area data to be rendered.
In the embodiment of the invention, the monitoring data at the current moment, including images, video streams, sensor measurement values and the like, are acquired through real-time data sources such as monitoring equipment and sensors. Preprocessing the real-time monitoring data, which may include denoising, image enhancement, data format conversion and the like, to generate standard monitoring data. And analyzing monitoring standard monitoring data based on the monitoring coordinate axis, and generating monitoring coordinate data according to the corresponding monitoring coordinate points. And based on the pitch angle data obtained by calculation, evaluating the monitoring range of the monitoring equipment to form monitoring range data, namely the space range of the visible area. Combining the monitoring range data with a previously established rendering range calibration model, determining a region to be rendered, and generating region data to be rendered.
Step S4: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data; acquiring real-time coordinate data according to the rendering range calibration model and the monitoring coordinate axis; carrying out change fusion processing on the projection fusion data based on the real-time coordinate data to generate real-time projection fusion data;
In the embodiment of the invention, the projection fusion processing is carried out on the region data to be rendered, and the monitoring data is mapped into the rendering range to generate the projection fusion data. And the accuracy and consistency of fusion are ensured by using a related fusion algorithm so as to maintain the visual effect of the real-time monitoring data. And combining the rendering range calibration model and the monitoring coordinate axis information to acquire real-time monitoring equipment coordinate data. These coordinate data are positional information of the imaging point in the monitoring coordinate axis. And calculating the coordinate difference between the real-time coordinate data and the monitoring coordinate data by comparing the two. Helping to determine the relative movement of the monitoring device. And judging whether the monitoring equipment generates larger motion according to the size of the coordinate difference data. If the large motion occurs, the pitch angle is recalculated based on the real-time coordinate data, and real-time pitch angle data is generated. If the motion is not large, the projection fusion data is kept unchanged. And evaluating the real-time monitoring range of the monitoring equipment by using the real-time pitch angle data, and generating changed monitoring range data. And evaluating the monitoring range data by using the change monitoring range data, determining the changed area, and generating change area data. Helping to distinguish between areas that need to be re-rendered and areas that remain unchanged. And combining the changed region data, and performing re-fusion processing on the projection fusion data to generate real-time projection fusion data. Only the changed area is rendered again, and the rendering efficiency is improved.
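The decision of whether the monitoring device has undergone a large motion can be sketched as a coordinate-difference threshold; the 0.5 m threshold below is illustrative, not a value from the patent.

```python
import math

def needs_refusion(prev_coord, curr_coord, threshold_m=0.5):
    """Step S4 sketch: compare the stored monitoring coordinates with the
    real-time coordinates. A difference beyond the threshold indicates a
    large motion, triggering pitch-angle recomputation and re-fusion of
    the changed region; otherwise the projection fusion data are kept."""
    return math.dist(prev_coord, curr_coord) > threshold_m
```

Gating the re-fusion this way means small jitter in the device pose never forces a full rendering pass, which is the efficiency point the paragraph above makes.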
Step S5: fusion fixed object identification is carried out according to the projection fusion data and the real-time projection fusion data, and object information data are generated; and performing differential object update rendering based on the object information data to generate a re-rendered three-dimensional model.
In the embodiment of the invention, the fixed object in the rendering range is identified by comparing the projection fusion data with the real-time projection fusion data. The identified object is determined through algorithms such as image processing, pattern matching and the like, and identified object data is generated. Information about the object, such as position, shape, size, etc., is extracted from the identified object data. And combining the object information data with the rendering range calibration model, identifying which objects are changed, and generating difference object information data. The method and the device are beneficial to determining the area and the object which need to be re-rendered, and unnecessary calculation is reduced. And updating the rendering range calibration model by combining the difference object information data to determine which areas need to be re-rendered. And performing re-rendering operation to generate the latest three-dimensional model so as to ensure that rendering results are consistent with actual changes.
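A minimal sketch of the differential object comparison: object names, the position tolerance, and the dictionary layout are assumptions for illustration.

```python
def differential_objects(model_objects, realtime_objects, tol=0.25):
    """Step S5 sketch: compare object positions in the calibration model
    with the real-time object information data; return only objects that
    moved beyond `tol` or newly appeared, so re-rendering touches
    nothing else."""
    changed = {}
    for name, pos in realtime_objects.items():
        old = model_objects.get(name)
        if old is None or max(abs(a - b) for a, b in zip(old, pos)) > tol:
            changed[name] = pos
    return changed
```

Only entries in the returned dictionary would be passed on to the model update rendering, reducing unnecessary computation as the paragraph above describes.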
Preferably, step S1 comprises the steps of: step S11: acquiring a three-dimensional model of the oil and gas field;
Step S12: carrying out sensor position marking on the three-dimensional model of the oil and gas field to generate an initial three-dimensional model;
Step S13: monitoring the state of the oil and gas field facilities based on the sensor equipment to generate equipment real-time change data;
step S14: and updating the real-time data of the initial three-dimensional model according to the real-time change data of the equipment to generate a real-time three-dimensional model.
The invention establishes a digital representation of the geographical structure of the oil and gas field through the three-dimensional model of the oil and gas field. The model comprises key information such as geological structure and equipment layout, providing a visual basis for real-time monitoring and management of the whole system. Marking the sensor positions on the three-dimensional model ensures each sensor occupies its accurate position in the model, so that subsequent real-time monitoring data can be mapped accurately to the corresponding geographic locations, improving the accuracy and reliability of the monitoring system. Monitoring the state of the oil and gas field facilities through the sensor devices lets the system capture information such as equipment operating condition and health in real time, which helps realize real-time monitoring and fault prediction of the equipment and improves the reliability and stability of the production process. Updating the initial three-dimensional model with the real-time change data of the equipment generates the real-time three-dimensional model, keeping the actual production situation consistent with the theoretical model and providing real-time, accurate geographic information. Updating the real-time three-dimensional model helps predict possible problems accurately, enabling more effective production monitoring and decision support.
In the embodiment of the invention, the data of the topography, the geomorphology, the geological structure and the like of the oil and gas field are obtained by using advanced technologies such as high-resolution remote sensing satellite images, laser radar (LiDAR) and the like. The data can be processed by three-dimensional geological modeling software to generate a high-precision three-dimensional oil and gas field model. The position information of the sensor is accurately marked on the three-dimensional model by using positioning technologies such as a Global Positioning System (GPS), an Inertial Navigation System (INS) and the like. These sensors may include various types of sensors that monitor subsurface hydrocarbon pipelines, equipment conditions, environmental parameters, and the like. The marked three-dimensional model is the initial three-dimensional model. The real-time state monitoring is carried out on the oil-gas field facilities by utilizing various sensor devices such as vibration sensors, temperature sensors, pressure sensors and the like. And generating real-time change data of the equipment through real-time data acquisition. The technology of the Internet of things and the sensor network can realize high-frequency and real-time monitoring of the state of equipment. The affected regions in the initial three-dimensional model are updated in real-time using a data fusion algorithm, such as an Extended Kalman Filter (EKF) or a particle filter. Thus, a real-time three-dimensional model reflecting the state change of the oil and gas field facilities in real time is generated.
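For a static scalar quantity, the extended Kalman filter mentioned above reduces to the ordinary Kalman measurement update; the sketch below uses illustrative noise values and is a simplification, not the patent's fusion algorithm.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: fuse the current estimate x
    (variance p) with a new sensor measurement z (variance r)."""
    k = p / (p + r)           # Kalman gain: how much to trust z over x
    x_new = x + k * (z - x)   # corrected estimate
    p_new = (1 - k) * p       # uncertainty shrinks after each update
    return x_new, p_new

x, p = 10.0, 4.0                             # prior pressure estimate, variance
x, p = kalman_update(x, p, z=12.0, r=4.0)    # x -> 11.0, p -> 2.0
```

With equal prior and measurement variances the gain is 0.5, so the fused estimate lands midway between prior and measurement while the variance halves; repeated updates keep pulling the model toward the live sensor stream.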
Preferably, step S2 comprises the steps of: step S21: acquiring a monitoring association parameter;
Step S22: calibrating a monitoring center according to the monitoring related parameters to generate coordinate data of the monitoring center; establishing a coordinate axis according to the coordinate data of the monitoring center to generate a monitoring coordinate axis;
Step S23: performing monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis to generate monitoring view field data;
Step S24: and presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model.
The invention obtains key parameters related to monitoring, such as the characteristics, position information, field angle and the like of the monitoring equipment through monitoring the related parameters. And calibrating the monitoring center according to the monitoring related parameters, generating coordinate data of the monitoring center, and establishing a monitoring coordinate axis according to the coordinate data. The accurate position of the monitoring center in the actual three-dimensional model is ensured, and a coordinate system related to the position of the monitoring equipment is established, so that a foundation is provided for subsequent visualization and space positioning. And carrying out monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis, and generating monitoring field data. The method and the device are helpful for determining the field of view range of the monitoring device, including the monitorable space range and the monitoring coverage condition. An effective working range of the monitoring device in a real scene is provided. And presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model. It is ensured that only the area of the real-time three-dimensional model that is within the field of view of the monitoring device will be rendered and displayed. The method effectively reduces the burden of calculation and rendering, improves the efficiency of the monitoring system, and ensures that the displayed information meets the monitoring requirement.
As an example of the present invention, referring to fig. 2, the step S2 in this example includes: step S21: acquiring a monitoring association parameter;
In the embodiment of the invention, various relevant parameters required by monitoring the oil and gas field are acquired through various sensors such as a radar, an infrared sensor, a camera and the like. Parameters may include device location, monitoring device field angle, monitoring frequency, etc. An automatic data acquisition system is used to ensure the accuracy and timeliness of the data.
Step S22: calibrating a monitoring center according to the monitoring related parameters to generate coordinate data of the monitoring center; establishing a coordinate axis according to the coordinate data of the monitoring center to generate a monitoring coordinate axis;
In the embodiment of the invention, the accurate coordinates of the monitoring center are obtained through a positioning technology, such as a differential GPS or a laser positioning system. Using these coordinate data, a monitoring coordinate axis is established. And a coordinate system is established through the installation azimuth of the precise instrument measuring equipment, so that the space accuracy of the monitoring data is ensured.
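Establishing the monitoring coordinate axes from GPS fixes can be sketched as a conversion to local east/north metres about the monitoring center. This small-area equirectangular approximation is an assumption for illustration, not the patent's method, and is adequate only near the center.

```python
import math

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres

def to_local_axes(lat, lon, center_lat, center_lon):
    """Express a GPS fix as metres east/north of the monitoring center,
    i.e. as coordinates on the monitoring coordinate axes."""
    east = (math.radians(lon - center_lon)
            * EARTH_R * math.cos(math.radians(center_lat)))
    north = math.radians(lat - center_lat) * EARTH_R
    return east, north
```

Placing the monitoring center at the origin this way gives subsequent field-of-view and pitch-angle calculations a consistent metric frame.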
Step S23: performing monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis to generate monitoring view field data;
In the embodiment of the invention, the monitoring view field data is generated by combining the monitoring related parameters and the monitoring coordinate axis information and using a simulation and analysis technology. And using an occlusion analysis algorithm, considering the surrounding environment and equipment layout, accurately calculating the occlusion objects possibly existing in the monitored view field, and ensuring the authenticity of view field data.
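The occlusion analysis can be sketched as sampling terrain or obstacle heights along the sight line from the device to a target point; the `heights` callable and the sample count are assumptions, and real implementations would use a terrain grid or mesh intersection instead.

```python
def visible(heights, cam, cam_h, tgt, tgt_h, steps=50):
    """Occlusion sketch: walk the line from camera (cam, at height cam_h)
    to target (tgt, at height tgt_h); the target is visible only if no
    sampled ground height rises above the interpolated sight line."""
    for i in range(1, steps):
        t = i / steps
        x = cam[0] + t * (tgt[0] - cam[0])
        y = cam[1] + t * (tgt[1] - cam[1])
        sight = cam_h + t * (tgt_h - cam_h)
        if heights(x, y) > sight:
            return False
    return True
```

Running such a test over candidate view directions yields the unobstructed portion of the field, i.e. data of the kind the monitoring view field analysis produces.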
Step S24: and presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model.
In the embodiment of the invention, the rendering range of the real-time three-dimensional model is preset by monitoring the view field data. And a virtual reality technology and a real-time rendering algorithm are adopted to ensure the rendering effect of the model in the field of view of the monitoring equipment. And combining the rendering range information with the real-time three-dimensional model to generate a rendering range calibration model.
Preferably, step S3 comprises the steps of: step S31: acquiring real-time monitoring data; performing data preprocessing on the real-time monitoring data to generate standard monitoring data;
Step S32: performing monitoring coordinate extraction on the standard monitoring data based on the monitoring coordinate axis to generate monitoring coordinate data;
Step S33: monitoring pitch angle calculation is carried out based on the monitoring coordinate data, and pitch angle data is generated;
step S34: performing monitoring range assessment according to the pitch angle data to generate monitoring range data;
Step S35: and carrying out rendering area calibration on the rendering range calibration model by using the monitoring range data, and generating the area data to be rendered.
The invention obtains real-time information from the actual production process of the oil and gas field, such as equipment status, fluid flow conditions, and environmental parameters, by acquiring real-time monitoring data. Preprocessing the real-time monitoring data into standard monitoring data removes noise and corrects abnormal values, improving the stability and accuracy of the monitoring system. Monitoring coordinates are then extracted from the standard monitoring data to generate monitoring coordinate data, providing key geographic positioning information for subsequent field-of-view analysis and visualization and accurately reflecting the position of the monitoring data in the actual scene. By calculating the pitch angle, the angle of the monitoring device relative to the ground can be determined, allowing the display range of the monitoring picture to be positioned more accurately; this helps the monitoring system meet actual demands and improves the accuracy and visual effect of the monitoring data. Monitoring range assessment based on the pitch angle data determines the field-of-view range of the monitoring device: the area the device can cover is evaluated and the effective monitoring range determined. Generating the monitoring range data helps optimize the configuration of the monitoring system, ensures that changes in key areas are monitored, and improves the system's effectiveness. Finally, the monitoring range data are used to calibrate the rendering area of the rendering range calibration model and generate the region data to be rendered; only the area within the monitoring range is rendered and displayed, reducing the computation and rendering burden and improving the efficiency of the monitoring system.
By generating the region data to be rendered, effective cropping of the monitoring picture is realized, so that the displayed information is more concentrated and targeted and meets the monitoring requirements.
As an example of the present invention, referring to fig. 3, step S3 includes: Step S31: acquiring real-time monitoring data; performing data preprocessing on the real-time monitoring data to generate standard monitoring data;
In the embodiment of the invention, the monitoring data of the oil and gas field is collected in real time through the monitoring equipment. And the real-time monitoring data is processed in real time by utilizing a stream processing technology, so that noise and abnormal values are removed, and the accuracy of the data is ensured. And (3) carrying out standardized processing on the collected different sensor data to ensure that the collected different sensor data accords with a unified data format and unit so as to facilitate subsequent processing and analysis.
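The preprocessing step above (noise removal, outlier correction, unit standardization) can be sketched as follows. This is a minimal illustration only: the median/MAD outlier filter and the `unit_scale` parameter are our assumptions, not a method stated in the patent, and a real deployment would run inside a stream-processing framework.

```python
import statistics

def preprocess(readings, unit_scale=1.0, z_thresh=3.5):
    """Clean one batch of raw sensor readings: drop outliers using a
    robust median/MAD test, then rescale to the unified unit."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        cleaned = list(readings)  # no spread: nothing to reject
    else:
        # 0.6745 converts MAD to an approximate standard-score scale.
        cleaned = [r for r in readings
                   if 0.6745 * abs(r - med) / mad <= z_thresh]
    return [round(r * unit_scale, 6) for r in cleaned]
```

For example, a pressure batch `[10.0, 10.2, 9.9, 10.1, 50.0]` would keep the first four readings and reject the spike at 50.0, and `unit_scale` can convert, say, millibar readings to bar.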
Step S32: performing monitoring coordinate extraction on the standard monitoring data based on the monitoring coordinate axis to generate monitoring coordinate data;
In the embodiment of the invention, the standard monitoring data is mapped into the monitoring coordinate system by performing coordinate axis conversion using the information of the monitoring coordinate axis. The monitoring coordinate data is extracted using, for example, feature point matching or a visual SLAM (Simultaneous Localization and Mapping) algorithm, ensuring that the position of the monitoring device in three-dimensional space is accurately reflected.
Step S33: monitoring pitch angle calculation is carried out based on the monitoring coordinate data, and pitch angle data is generated;
In the embodiment of the invention, the attitude of the monitoring device relative to the ground, including the pitch angle and yaw angle, is calculated from the monitoring coordinate data by applying an attitude calculation algorithm. Based on the monitoring coordinate data, a three-dimensional geometric calculation method is adopted to calculate the pitch angle of the monitoring device, ensuring the accuracy of the angle information. Alternatively, the pitch angle calculation formula provided by the invention, which fully considers variable information such as the camera coordinates, the camera center coordinates and the monitoring device coordinates, is used to calculate the pitch angle of the monitoring device in real time and generate pitch angle data, enabling the monitoring system to respond rapidly to changes in the device state.
Step S34: performing monitoring range assessment according to the pitch angle data to generate monitoring range data;
In the embodiment of the invention, the pitch angle data is converted into the visible range in the vertical direction by utilizing a mathematical model and a geometric principle, and factors such as the earth curvature and the like are considered. By means of environment information acquired in real time, a shielding analysis algorithm is used, and factors such as obstacles, buildings and the like which possibly shield the view of monitoring equipment are considered to generate monitoring range data.
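As a rough illustration of converting pitch angle data into a vertical visible range, the sketch below assumes flat ground, a known mounting height and a known vertical field of view; the earth-curvature and occlusion analysis mentioned in the embodiment is deliberately not modeled, and all names are illustrative.

```python
import math

def ground_coverage(height_m, depression_deg, vfov_deg):
    """Return the (near, far) ground distances covered by a camera at
    height_m metres, tilted depression_deg below the horizontal, with a
    vertical field of view of vfov_deg. Flat-ground sketch only."""
    near_angle = math.radians(depression_deg + vfov_deg / 2)
    far_angle = math.radians(depression_deg - vfov_deg / 2)
    near = height_m / math.tan(near_angle)
    # If the upper view ray reaches the horizon, the far edge is unbounded.
    far = math.inf if far_angle <= 0 else height_m / math.tan(far_angle)
    return near, far
```

A mast camera 10 m up, tilted 45° down with a 30° vertical field of view, would cover roughly 5.8 m to 17.3 m of ground in front of it; tilt it up to only 10° of depression and the far edge runs out past the horizon.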
Step S35: and carrying out rendering area calibration on the rendering range calibration model by using the monitoring range data, and generating the area data to be rendered.
In the embodiment of the invention, the visible area of the monitoring equipment is mapped onto the rendering range calibration model through the monitoring range data, and the corresponding spatial mapping relation is established to generate the area data to be rendered.
Preferably, in step S33, the monitored pitch angle calculation is performed by a pitch angle calculation formula, where the pitch angle calculation formula is as follows:

Δβ = arctan( z′ / √(x′² + y′²) )

In the formula, Δβ is the variable pitch angle; (x_c, y_c, z_c) are the abscissa, ordinate and vertical coordinate values of the camera; (x_o, y_o, z_o) are the abscissa, ordinate and vertical coordinate values of the camera center; α is the horizontal rotation angle; β is the vertical pitch angle; (x_m, y_m, z_m) are the abscissa, ordinate and vertical coordinate values of the monitoring device; and (x′, y′, z′) are the coordinates of the monitoring device in the new coordinate system after rotation.

The invention provides a pitch angle calculation formula for monitoring pitch angle calculation based on monitoring coordinate data, which fully considers the camera coordinates (x_c, y_c, z_c), the camera center coordinates (x_o, y_o, z_o), the horizontal rotation angle α, the vertical pitch angle β, the monitoring device coordinates (x_m, y_m, z_m) and the interactions between these variables, constituting the following functional relationship.

The formula is derived using rotation transformation and coordinate transformation in three-dimensional space. Let the position of the monitoring device be (x_m, y_m, z_m), the position of the camera center be (x_o, y_o, z_o), the vertical pitch angle be β and the horizontal rotation angle be α. Taking the camera center as the origin, a coordinate system with the center point as the origin is established. The new coordinates (x′, y′, z′) of the monitoring device after horizontal rotation and pitching are obtained by the rotation-matrix transformation:

[x′]   [ cos β  0  sin β ] [ cos α  −sin α  0 ] [x_m − x_o]
[y′] = [   0    1    0   ] [ sin α   cos α  0 ] [y_m − y_o]
[z′]   [−sin β  0  cos β ] [   0       0    1 ] [z_m − z_o]

From this matrix, the position (x′, y′, z′) of the monitoring device in the new coordinate system is obtained, so the variable pitch angle Δβ can be calculated from those coordinates as Δβ = arctan( z′ / √(x′² + y′²) ); combining the two formulas gives the result data of the variable pitch angle. The formula calculates the pitch angle of the monitoring device in real time, so that the monitoring system can respond quickly to changes in the state of the device. This is of practical importance for application scenarios in which the monitoring direction must be adjusted at any time, such as a monitoring camera or a robot vision system that automatically tracks a target. By taking into account the horizontal rotation angle and the pitch angle corresponding to the center, the formula calculates the pitch angle more accurately, improving the accuracy of the monitoring system. The formula also helps to avoid the effects of sensor drift by taking into account the initial position and rotation state of the monitoring device: sensor drift may cause deviations in the device's position, and the formula can partially correct these deviations in the calculation, improving the stability of the monitoring system.
Meanwhile, the coordinates of the monitoring equipment and the coordinates of the center of the camera in the formula can be adjusted according to actual monitoring information, and the method is applied to monitoring calculation of pitch angle data corresponding to monitoring coordinate data of different scenes, so that the flexibility and applicability of an algorithm are improved.
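The rotate-then-arctan pitch computation described above can be sketched numerically as follows. The symbol and function names here are our own choices (the original formula's symbols did not survive extraction), so treat this as an illustrative reconstruction rather than the patent's exact formula.

```python
import math

def variable_pitch_angle(device, center, alpha_deg, beta_deg):
    """Variable pitch angle of the monitoring device after a horizontal
    rotation alpha and vertical pitch beta about the camera center:
    rotate the center-relative coordinates, then take the elevation
    angle of the rotated position above the horizontal plane."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    # Translate so the camera center is the origin.
    x, y, z = (d - c for d, c in zip(device, center))
    # Horizontal rotation about the vertical (z) axis by alpha.
    x1 = x * math.cos(a) - y * math.sin(a)
    y1 = x * math.sin(a) + y * math.cos(a)
    z1 = z
    # Vertical pitch about the y axis by beta.
    x2 = x1 * math.cos(b) + z1 * math.sin(b)
    z2 = -x1 * math.sin(b) + z1 * math.cos(b)
    y2 = y1
    # Elevation of the rotated point above the horizontal plane.
    return math.degrees(math.atan2(z2, math.hypot(x2, y2)))
```

A device directly above the camera center with no rotation applied yields +90°, while pitching a point on the horizontal axis down by 90° yields −90°, which matches the geometric intuition behind the formula.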
Preferably, step S4 comprises the steps of: step S41: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data;
step S42: coordinate record transmission is carried out according to the rendering range calibration model and the monitoring coordinate axis, and real-time coordinate data are generated;
step S43: carrying out coordinate difference calculation on the real-time coordinate data and the monitoring coordinate data to generate coordinate difference data;
Step S44: when the coordinate difference data is larger than the preset coordinate difference data, monitoring pitch angle calculation is performed based on the real-time coordinate data, and real-time pitch angle data are generated; when the coordinate difference data is smaller than or equal to the preset coordinate difference data, the projection fusion data is not subjected to changing processing;
step S45: performing real-time monitoring range assessment according to the real-time pitch angle data to generate changed monitoring range data;
Step S46: performing change region evaluation on the monitoring range data based on the change monitoring range data to generate change region data;
step S47: and performing re-fusion processing based on the changed region data to generate real-time projection fusion data.
Through projection fusion, the system can integrate multi-source information, ensuring that the monitoring picture is more comprehensive and consistent and improving the system's overall monitoring of the oil and gas field production process. Coordinate record transmission is performed according to the rendering range calibration model and the monitoring coordinate axis to generate real-time coordinate data; this keeps the real-time monitoring data consistent with the coordinate system of the monitoring system, so that real-time data can be accurately mapped into the system, improving the accuracy of the monitoring data. Coordinate difference calculation between the real-time coordinate data and the monitoring coordinate data generates coordinate difference data, which helps determine whether the position of the monitoring device has changed and triggers the corresponding processing. When the coordinate difference data is larger than the preset coordinate difference data, monitoring pitch angle calculation is performed based on the real-time coordinate data to generate real-time pitch angle data, realizing real-time adjustment of the pitch angle of the monitoring device so as to maintain the stability of the monitoring picture. A large coordinate difference may indicate that the direction of the monitoring device has changed substantially and that the display angle of the monitoring picture needs to be adjusted accordingly to maintain monitoring accuracy. Through real-time monitoring range assessment, the system can dynamically adjust the effective range of the monitoring device to adapt to changes in the device's position while keeping the monitoring system stable and accurate. Generating change monitoring range data helps identify monitoring range changes caused by changes in the position of the monitoring device.
And carrying out change area evaluation on the monitoring range data based on the change monitoring range data to generate change area data. And the monitoring range which is actually changed is determined, so that the area which is changed in the monitoring picture can be further accurately identified, and the sensitivity of the monitoring system is improved. And (3) re-fusion processing is carried out based on the changed region data, and the region with the changed monitoring range is re-fused with the original monitoring data to generate real-time projection fusion data. The method is beneficial to realizing the timely updating of the monitoring picture, ensures that the displayed information is more in line with the actual situation, and improves the response speed and accuracy of the monitoring system to the change.
As an example of the present invention, referring to fig. 4, step S4 includes: Step S41: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data;
In the embodiment of the invention, a projection model is established through a rendering range calibration model, and the region data to be rendered is mapped onto a two-dimensional plane. And carrying out fusion processing on the standard monitoring data and the region data to be rendered mapped to the two-dimensional plane by using a projection fusion algorithm, eliminating the boundary problem and generating unified projection fusion data. And taking the position, the posture and other information of the monitoring equipment into consideration, performing perspective transformation on the projection data, and ensuring that the projection fusion data can accurately reflect the monitoring condition of the three-dimensional space on the two-dimensional plane.
Step S42: coordinate record transmission is carried out according to the rendering range calibration model and the monitoring coordinate axis, and real-time coordinate data are generated;
In the embodiment of the invention, the coordinate record transmission is carried out by the position of the real-time monitoring equipment and the information of the rendering range calibration model, so that the position of the monitoring equipment in the three-dimensional space can be accurately reflected by the real-time coordinate data. And mapping the coordinate information of the monitoring equipment to the monitoring coordinate axis to ensure the consistency of the coordinate system. By adopting a real-time transmission technology, the real-time coordinate data can be ensured to be transmitted into the system in a short time, and the real-time performance of the monitoring system is improved.
Step S43: carrying out coordinate difference calculation on the real-time coordinate data and the monitoring coordinate data to generate coordinate difference data;
In the embodiment of the invention, the real-time coordinate data and the monitoring coordinate data are processed to ensure the same data format and unit. And performing difference calculation on the real-time coordinate data and the monitoring coordinate data by using a difference calculation algorithm to obtain coordinate difference data. The conversion of the coordinate system is performed in consideration of possible coordinate system differences, so that the coordinate difference data can be processed in the same coordinate system.
Step S44: when the coordinate difference data is larger than the preset coordinate difference data, monitoring pitch angle calculation is performed based on the real-time coordinate data, and real-time pitch angle data are generated; when the coordinate difference data is smaller than or equal to the preset coordinate difference data, the projection fusion data is not subjected to changing processing;
In the embodiment of the invention, whether the change condition is met is judged by comparing the coordinate difference data with the preset coordinate difference data. The preset coordinate difference data is determined according to the specific requirements of the monitoring system, device performance and the characteristics of the actual scene; for example, if the sensor readings contain noticeable jitter, the preset coordinate difference can be set larger to reduce the system's sensitivity to spurious changes. When the coordinate difference data is larger than the preset coordinate difference data, the pitch angle of the monitoring device is calculated from the real-time coordinate data. The pitch angle calculation can be based on the principle of triangulation, taking into account information such as the position and direction of the monitoring device, or the corresponding result can be calculated using the pitch angle calculation formula provided by the invention. Real-time pitch angle data is then generated from the calculation result. When the coordinate difference data is smaller than or equal to the preset coordinate difference data, the projection fusion data is not changed and the original projection fusion state is maintained. Real-time calculation and judgment techniques are adopted so that pitch angle data is generated in a short time, ensuring the real-time performance of the monitoring system.
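The threshold decision of steps S43-S44 can be sketched as a small helper: compute the Euclidean coordinate difference and compare it against the preset value. The function and parameter names are illustrative, and the preset value would be tuned per deployment as the text describes.

```python
import math

def update_decision(real_time_xyz, monitored_xyz, preset_diff):
    """Return (needs_update, diff): whether the coordinate difference
    between the real-time and recorded device positions exceeds the
    preset threshold, triggering pitch-angle recalculation (S44)."""
    diff = math.dist(real_time_xyz, monitored_xyz)
    return diff > preset_diff, diff
```

For instance, a device that moved 5 units against a 4-unit threshold triggers recalculation, while a 1-unit move against a 2-unit threshold leaves the projection fusion data unchanged.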
Step S45: performing real-time monitoring range assessment according to the real-time pitch angle data to generate changed monitoring range data;
In the embodiment of the invention, the real-time monitoring range of the monitoring equipment in the vertical direction is assessed by utilizing the real-time pitch angle data according to the previously established monitoring range model. And generating change monitoring range data according to the real-time monitoring range evaluation result, wherein the change of the monitoring range is indicated by the change data.
Step S46: performing change region evaluation on the monitoring range data based on the change monitoring range data to generate change region data;
In the embodiment of the invention, the previous monitoring range data is acquired. And evaluating the monitoring range data by combining the changed monitoring range data, and determining the changed area in the monitoring range. And generating change area data according to the evaluation result, wherein the data describes a specific area which changes in the monitoring range.
Step S47: and performing re-fusion processing based on the changed region data to generate real-time projection fusion data.
In the embodiment of the invention, the modified region data and the original projection fusion data are combined by utilizing the efficient re-fusion algorithm, so that the real-time projection fusion data is ensured to reflect the latest monitoring condition of the monitoring equipment. By adopting a real-time fusion processing technology, the real-time property of the changed area data is ensured, so that the system can respond to the change of the monitoring equipment in time.
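The region-limited re-fusion of step S47 can be illustrated with a grid-based sketch: only the changed region's patch is written back into the original fused frame, leaving the rest untouched. Representing frames as row-major lists of pixel values is our simplification; the embodiment's "efficient re-fusion algorithm" is not specified further in the source.

```python
def refuse_region(fused, patch, top, left):
    """Write a re-rendered patch for the changed region back into the
    projection fusion frame, returning a new frame (step S47 sketch)."""
    out = [row[:] for row in fused]  # copy so the original frame is kept
    for i, patch_row in enumerate(patch):
        row = out[top + i]
        row[left:left + len(patch_row)] = patch_row
    return out
```

Because only the changed rectangle is rewritten, the cost of an update scales with the changed area rather than the whole monitoring picture, which is the point of generating change region data first.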
Preferably, step S42 comprises the steps of: step S421: performing rotation control on the monitoring device and recording the rotation suspension duration to generate suspension duration data; when the suspension duration data is greater than or equal to a preset suspension duration, transmitting the camera coordinates through the monitoring sensor device to generate manual control real-time coordinate data; when the suspension duration data is smaller than the preset suspension duration, taking the monitoring coordinate data as the manual control real-time coordinate data;
step S422: detecting and extracting abnormal states according to the rendering range calibration model to generate abnormal information data;
step S423: performing abnormal azimuth matching on the rendering range calibration model according to the abnormal information data to generate abnormal matching data;
Step S424: performing area monitoring control based on the abnormal matching data, and confirming coordinate points through a monitoring coordinate axis, so as to generate machine control real-time coordinate data;
Step S425: and carrying out time sequence combination on the manual control real-time coordinate data and the machine control real-time coordinate data to generate real-time coordinate data.
According to the invention, rotation control is applied to the monitoring device and the rotation suspension duration is recorded to determine whether the rotation amplitude changes the original monitoring range. When the suspension duration data is greater than or equal to the preset suspension duration, the change exceeds the preset range and the changed coordinates are transmitted into the rendering range calibration model through the monitoring sensor to generate the manual control real-time coordinate data; when the suspension duration data is smaller than the preset suspension duration, the monitoring coordinate data is taken as the manual control real-time coordinate data. This switching mechanism allows the system to respond quickly when manual control is required while preserving the continuity of automatic control. Abnormal state detection and extraction according to the rendering range calibration model helps the system discover possible device faults or abnormal conditions in time. Abnormal azimuth matching is performed on the rendering range calibration model according to the abnormal information data to determine the specific position where the abnormality occurs, providing accurate positioning information for subsequent area monitoring control. Area monitoring control is performed based on the abnormal matching data, and coordinate points are confirmed through the monitoring coordinate axis, generating the machine control real-time coordinate data; this realizes monitoring control of the abnormal area, while coordinate confirmation through the monitoring coordinate axis ensures that the generated machine control real-time coordinate data is accurate.
The manual control real-time coordinate data and the machine control real-time coordinate data are combined in time sequence, the coordinate data of manual control and automatic control are orderly combined together, the continuity of the monitoring system is maintained, and meanwhile flexible control of the monitoring equipment is realized.
In the embodiment of the invention, rotation control is realized through a rotation mechanism built into the device or an external control system; a timer is started or a timestamp is recorded when rotation starts and the monitoring device rotates. When the monitoring device stops rotating, the timer is stopped or an end timestamp is recorded, and the elapsed time is the rotation suspension duration. The recorded rotation suspension duration is converted into a usable data format, which may be milliseconds, seconds or another suitable unit of time. The preset suspension duration depends on the rotation control characteristics of the monitoring device and how long the user is likely to attend to the monitored scene; for example, if the system can respond quickly to the user's rotation control instructions, a shorter suspension duration can be selected. When the suspension duration data is greater than or equal to the preset suspension duration, the camera coordinates are transmitted through the monitoring sensor device; this may include transmitting the current camera coordinates into the system using a wireless communication protocol such as Wi-Fi or Bluetooth, and processing the transmitted coordinate data into manual control real-time coordinate data. When the suspension duration data is smaller than the preset suspension duration, for instance when the rotation of the monitoring device was caused by accidental manual contact or external force, the monitoring coordinate data is used as the manual control real-time coordinate data. Abnormal states, such as abnormal device movement, abnormal temperature or abnormal environmental change, are monitored according to the real-time device change data contained in the rendering range calibration model; when an abnormal state is detected, the abnormal information is extracted and converted into a data format, which may include the anomaly type, location and duration.
The abnormal information data is matched with the rendering range calibration model to determine the position where the abnormality occurs; this may involve techniques such as image matching algorithms and pattern recognition. The matching result is converted into abnormal matching data, including information such as the abnormal azimuth and abnormal area. Area monitoring control is started according to the abnormal matching data; this may include adjusting the focal length, direction and angle of the monitoring device to focus on the abnormal region. The machine control real-time coordinate data is confirmed through the monitoring coordinate axis, ensuring that the monitoring device is accurately aimed at the abnormal area. Finally, the manual control real-time coordinate data and the machine control real-time coordinate data are merged in time order to generate the real-time coordinate data, determining the order in which changes are fused and projected.
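The time-ordered merge of step S425 can be sketched with a standard sorted-stream merge. The record shape `(timestamp, (x, y, z), source)` is our assumption; both input streams are assumed already ordered by timestamp, as each is produced sequentially.

```python
from heapq import merge

def merge_coordinate_streams(manual, machine):
    """Combine manually-controlled and machine-controlled coordinate
    records into one stream ordered by timestamp (step S425 sketch).
    Each record is (timestamp, (x, y, z), source)."""
    return list(merge(manual, machine, key=lambda rec: rec[0]))
```

Merging by timestamp rather than concatenating preserves the causal order of control actions, so the downstream fusion step replays manual and machine adjustments in the sequence they actually happened.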
Preferably, step S46 comprises the steps of: step S461: performing data differentiation processing on the changed monitoring range data and the monitoring range data to generate changed monitoring differential data and monitoring differential data;
step S462: performing level difference analysis on the changed monitoring differential data and the monitoring differential data to generate a level difference value;
step S463: performing vertical difference analysis on the changed monitoring differential data and the monitoring differential data to generate a vertical difference value;
step S464: and carrying out difference position combination on the change monitoring range data according to the horizontal difference value and the vertical difference value to generate change area data.
According to the invention, the change monitoring differential data and the monitoring differential data are generated by carrying out data differentiation processing on the change monitoring range data and the monitoring range data, so that the capture of local change of the monitoring range is facilitated, and the subsequent analysis is more detailed and accurate. And carrying out horizontal and vertical difference analysis on the changed monitoring differential data and the monitoring differential data to generate a horizontal difference value and a vertical difference value. The effect of these two steps is to analyze the differential data differences and determine the extent of change in the horizontal and vertical directions. The horizontal difference value and the vertical difference value provide information of changing positions and directions, and provide a basis for generating follow-up change area data. And carrying out difference position combination on the change monitoring range data according to the horizontal difference value and the vertical difference value to generate change area data. The generated change area data accurately reflects the specific position and shape of the change of the monitoring range.
In the embodiment of the invention, the differential data is obtained by differentiating the monitoring range data by using a differential algorithm. Similarly, the change monitoring range data is differentiated to obtain change monitoring differential data. This can be achieved by using differential operations in mathematics or differential filters in image processing. In the differential data, the horizontal difference generally refers to a change in the horizontal direction of the image. The change in the horizontal direction in the differential data can be detected using image processing techniques such as convolution operations. Analysis of the level difference values may determine the altered level position by setting a threshold or applying more advanced image analysis techniques. Vertical variance generally refers to the change in differential data in the vertical direction of the image. Also, by the image processing technique, a change in the vertical direction in the differential data can be detected. Analysis of the vertical variance values may also determine the modified vertical position by setting a threshold or applying an image analysis algorithm. In combination with the horizontal and vertical discrepancy values, geometric analysis methods or image registration algorithms may be employed to determine the location of the alterations. By comparing the difference values, the specific location of the change region can be determined and combined into change region data.
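The horizontal/vertical difference analysis of steps S461-S464 can be illustrated with a pure-Python sketch: difference the two range grids, project the changed cells onto the horizontal and vertical axes, and combine the two projections into a bounding box. Real systems would use convolution-based differential filters as the text notes; grid representation and names here are assumptions.

```python
def change_region(before, after, thresh=0):
    """Locate the changed area between two monitoring-range grids.
    Returns (top, left, bottom, right) of the changed bounding box,
    or None when nothing exceeds the threshold."""
    # Vertical projection: rows containing any changed cell.
    rows = [i for i, (rb, ra) in enumerate(zip(before, after))
            if any(abs(b - a) > thresh for b, a in zip(rb, ra))]
    # Horizontal projection: columns containing any changed cell.
    cols = [j for j in range(len(before[0]))
            if any(abs(before[i][j] - after[i][j]) > thresh
                   for i in range(len(before)))]
    if not rows:
        return None
    return rows[0], cols[0], rows[-1], cols[-1]
```

Combining the two one-dimensional projections is what the "difference position combination" step amounts to: the horizontal difference fixes the left/right extent and the vertical difference fixes the top/bottom extent of the change area.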
Preferably, step S5 comprises the steps of: step S51: carrying out fixed object identification according to the projection fusion data and the real-time projection fusion data to generate identified object data;
step S52: extracting object information from the identified object data to generate object information data;
step S53: performing differential object recognition on the rendering range calibration model based on the object information data to generate differential object information data;
Step S54: and carrying out model updating rendering on the rendering range calibration model according to the difference object information data to generate a re-rendering three-dimensional model.
By comparing the projection fusion data with changes in the real-time projection fusion data, the system can determine which objects are stationary. After the identified object data is generated, the system knows which objects are stable in the monitoring picture, providing a basis for subsequent analysis. Object information is then extracted from the identified object data to generate object information data; this step extracts detailed information from each identified object, possibly including its type, size and location, which helps the system understand objects in the monitored scene more fully. Differential object identification is performed on the rendering range calibration model based on the object information data to generate differential object information data: by comparing the real-time data with the model data, the system can determine which objects have changed, improving the monitoring system's perception of changes. Model update rendering is then performed on the rendering range calibration model according to the difference object information data to generate a re-rendered three-dimensional model, ensuring that the displayed three-dimensional model more accurately reflects changes in the actual scene and improving the credibility and real-time quality of the monitoring picture.
In the embodiment of the invention, the fixed object in the image is identified by processing the projection fusion data and the real-time projection fusion data by using a computer vision and image processing technology and adopting a target detection algorithm such as a Convolutional Neural Network (CNN) or other deep learning model. This allows the features of the stationary object to be learned by training the model, enabling accurate identification in real-time data. The identified object data obtained by the object detection algorithm typically contains information about the position, size and class of the object. The information is extracted to obtain more detailed object information such as color, shape, motion state, etc. And comparing the extracted object information data with a rendering range calibration model to identify a difference object. This may be done by comparing properties of the objects, such as position, shape, color, etc., to determine if there is a discrepancy and generating corresponding discrepancy object information data. And updating the rendering range calibration model according to the difference object information data. Adjustments in shape, texture, etc. of the model may be involved to ensure that the model remains consistent with the discrepant objects in the actual scene. The updated rendering range calibration model may be used to generate a re-rendered three-dimensional model to reflect changes in the real-time scene.
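The attribute comparison behind difference-object identification (step S53) can be sketched with plain dictionaries. The object schema (`class`, `pos`) and the category labels are illustrative assumptions; a real pipeline would compare detector output against the calibration model's object records.

```python
def diff_objects(model_objects, detected_objects, tol=0.0):
    """Compare detected objects with the rendering-range calibration
    model's records and report differences: newly appeared objects,
    class changes, and position changes beyond tol."""
    diffs = {}
    for oid, det in detected_objects.items():
        ref = model_objects.get(oid)
        if ref is None:
            diffs[oid] = "new"
        elif det["class"] != ref["class"]:
            diffs[oid] = "class_changed"
        elif any(abs(d - r) > tol for d, r in zip(det["pos"], ref["pos"])):
            diffs[oid] = "moved"
    return diffs
```

Only the objects flagged here need model update rendering, which keeps the re-rendering step proportional to the amount of actual scene change.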
Preferably, step S51 comprises the steps of: step S511: acquiring historical marked object data;
Step S512: fusion picture extraction is carried out according to the projection fusion data and the real-time projection fusion data, and head-to-tail frame data and real-time head-to-tail frame data are generated;
step S513: performing the same object identification based on the head-to-tail frame data and the real-time head-to-tail frame data to generate identifier data and real-time identifier data;
Step S514: carrying out the same data merging processing on the identifier data and the real-time identifier data to generate merged identifier data;
step S515: and screening unrecorded data of the combined identifier data by using the historical identified object data to generate identified object data.
By obtaining object data identified during a previous monitoring period as a reference for comparison, the invention helps the system better understand changes in the monitored scene. The head and tail frames extracted from the monitoring picture support the subsequent same-object identification and data merging. By comparing the head-to-tail frame data with the real-time head-to-tail frame data, the system can identify the same objects in the monitoring picture and generate identifier data and real-time identifier data. Merging the identifier data with the real-time identifier data produces merged identifier data; this step integrates the identified objects of the projection fusion data with those of the real-time projection fusion data, ensuring that the system records all identified objects across the whole fused scene comprehensively and accurately. Finally, newly identified object data appearing in real-time monitoring is screened out by comparison with the historical identified object data, making the identified object data more complete and accurate.
In an embodiment of the invention, the identified object data for the historical time period is stored in a database or data storage system; an efficient storage and retrieval mechanism ensures that the historical identified object data can be acquired quickly. Using video processing techniques, the first and last frame pictures are extracted from the projection fusion data and the real-time projection fusion data; this may involve video codecs, image processing algorithms, and similar techniques to ensure that the head-to-tail frame data and the real-time head-to-tail frame data of the fused picture are extracted accurately. Computer vision and image processing algorithms then perform object identification on the head-to-tail frame data and the real-time head-to-tail frame data; a target matching or tracking algorithm ensures that the same object is identified accurately, generating the corresponding identifier data and real-time identifier data. The identifier data and real-time identifier data for the same objects are merged to ensure data consistency, which may involve merging or associating data structures to generate the complete merged identifier data. Finally, the merged identifier data is compared and screened against the historical identified object data to find new objects not recorded in the historical data, ensuring the completeness and accuracy of the identified object data; algorithms and rules such as object detection, feature matching, or optical flow analysis may be needed to locate unrecorded objects.
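A minimal sketch of steps S512 to S515 follows. Frames are represented simply as lists of object identifier strings, and the three helper functions are illustrative assumptions; a real implementation would obtain identifiers from the matching or tracking algorithm described above rather than from string equality.

```python
def extract_head_tail(frames):
    """S512: keep only the first and last frame of a fused clip."""
    return (frames[0], frames[-1]) if frames else ()

def same_object_ids(head, tail):
    """S513: an object counts as the 'same object' if it appears in both
    the head and the tail frame (a stand-in for real matching/tracking)."""
    return set(head) & set(tail)

def merge_and_screen(ids_a, ids_b, historical):
    """S514 + S515: merge the two identifier sets, then keep only the
    objects not already recorded in the historical identified data."""
    merged = ids_a | ids_b            # S514: merged identifier data
    unrecorded = merged - historical  # S515: newly identified objects
    return merged, unrecorded
```

For example, merging the fused-clip identifiers with the real-time identifiers and screening against historical records of a wellhead and a pump would report only a newly appearing flare as unrecorded.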
The beneficial effects of the method are as follows. An initial oil and gas field geospatial model is constructed, and by updating the three-dimensional model in real time with sensor devices, the system keeps the model highly consistent with actual conditions, providing an accurate, real-time geographic information basis. Monitoring association parameters are acquired, a coordinate axis is established from them to generate the monitoring coordinate axis, and the renderable range of the real-time three-dimensional model is preset to generate the rendering range calibration model, thereby establishing a spatial coordinate system tied to the monitoring equipment. This coordinate system not only provides a reference frame that clarifies the relationship between the monitoring equipment and the real-time three-dimensional model, but also presets the visible area through the rendering range calibration model, improving the accuracy and consistency of subsequent rendering. Real-time monitoring data are acquired, the monitoring pitch angle is calculated to generate pitch angle data, and the rendering area of the rendering range calibration model is calibrated according to the pitch angle data to generate the region data to be rendered, achieving dynamic adjustment of the pitch angle of the monitoring equipment. By calculating the pitch angle, the system determines the viewable area of the monitoring picture more accurately and thus optimizes the rendering effect; generating the region data to be rendered lets the display focus only on the region of interest, improving rendering efficiency. Projection fusion processing is then performed on the region data to be rendered, mapping the information of the monitored scene into the rendering range and generating projection fusion data.
By combining the rendering range calibration model with the monitoring coordinate axis, the system can effectively project the monitoring data into the designated display range. Change fusion processing is then applied to the projection fusion data using the real-time coordinate data to generate real-time projection fusion data, realizing dynamic projection adjustment; this ensures that information in the monitoring picture is projected effectively even as conditions change, improving the timeliness and accuracy of the display. Fusion fixed object identification based on the projection fusion data and the real-time projection fusion data identifies the fixed objects in the monitoring picture: by comparing the two data sets, the system determines which objects are unchanged and generates object information data. Difference object update rendering is then performed based on the object information data to generate a re-rendered three-dimensional model, realizing real-time updates of dynamic objects. The monitoring system can thus efficiently distinguish fixed objects from dynamically changing ones, so that re-rendering concerns only the parts that need updating, improving both the timeliness and the efficiency of rendering. In this way, the real-time three-dimensional projection fusion method for oil and gas field video monitoring acquires equipment state data, extracts abnormal state data to control the monitoring equipment automatically, and performs projection fusion processing, thereby obtaining detailed equipment information and improving monitoring efficiency.
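As a rough illustration of the pitch-angle step summarized above, the sketch below computes a line-of-sight pitch angle from camera and target coordinates using standard geometry. The patent's own formula (claim 5) is published as an image and is not reproduced in this text, so the `monitoring_pitch_angle` helper is an assumption for illustration, not the claimed formula.

```python
import math

def monitoring_pitch_angle(cam, target):
    """Pitch angle (degrees) of the line of sight from the camera
    position to a monitored point; positive looks up, negative down.
    Assumed geometry, not the formula claimed in the patent."""
    dx, dy, dz = (t - c for t, c in zip(target, cam))
    horizontal = math.hypot(dx, dy)  # ground-plane distance
    return math.degrees(math.atan2(dz, horizontal))
```

A camera mounted 10 m up looking at a ground point 10 m away would report a pitch of -45 degrees, which could then drive the monitoring range assessment of step S34.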
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. The real-time three-dimensional projection fusion method for the video monitoring of the oil and gas field is characterized by comprising the following steps of:
Step S1: acquiring a three-dimensional model of the oil and gas field; updating the three-dimensional model of the oil and gas field in real time by using sensor equipment to generate a real-time three-dimensional model;
step S2: acquiring a monitoring association parameter; establishing a coordinate axis based on the monitoring related parameters to generate a monitoring coordinate axis; performing renderable range presetting on the real-time three-dimensional model based on the monitoring related parameters and the monitoring coordinate axes to generate a rendering range calibration model;
step S3: acquiring real-time monitoring data; monitoring pitch angle calculation is carried out according to the real-time monitoring data, and pitch angle data are generated; performing rendering area calibration on the rendering range calibration model according to the pitch angle data to generate to-be-rendered area data;
step S4: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data; coordinate record transmission is carried out according to the rendering range calibration model and the monitoring coordinate axis, and real-time coordinate data are generated; carrying out change fusion processing on the projection fusion data based on the real-time coordinate data to generate real-time projection fusion data;
Step S5: fusion fixed object identification is carried out according to the projection fusion data and the real-time projection fusion data, and object information data are generated; and performing differential object update rendering based on the object information data to generate a re-rendered three-dimensional model.
2. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 1, wherein the step S1 comprises the steps of:
step S11: acquiring a three-dimensional model of the oil and gas field;
Step S12: carrying out sensor position marking on the three-dimensional model of the oil and gas field to generate an initial three-dimensional model;
Step S13: monitoring the state of the oil and gas field facilities based on the sensor equipment to generate equipment real-time change data;
step S14: and updating the real-time data of the initial three-dimensional model according to the real-time change data of the equipment to generate a real-time three-dimensional model.
3. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 1, wherein the step S2 comprises the steps of:
Step S21: acquiring a monitoring association parameter;
Step S22: calibrating a monitoring center according to the monitoring related parameters to generate coordinate data of the monitoring center; establishing a coordinate axis according to the coordinate data of the monitoring center to generate a monitoring coordinate axis;
Step S23: performing monitoring range analysis based on the monitoring related parameters and the monitoring coordinate axis to generate monitoring view field data;
Step S24: and presetting a renderable range of the real-time three-dimensional model according to the monitoring view field data, and generating a rendering range calibration model.
4. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 1, wherein the step S3 comprises the steps of:
step S31: acquiring real-time monitoring data; performing data preprocessing on the real-time monitoring data to generate standard monitoring data;
Step S32: performing monitoring coordinate extraction on the standard monitoring data based on the monitoring coordinate axis to generate monitoring coordinate data;
Step S33: monitoring pitch angle calculation is carried out based on the monitoring coordinate data, and pitch angle data is generated;
step S34: performing monitoring range assessment according to the pitch angle data to generate monitoring range data;
Step S35: and carrying out rendering area calibration on the rendering range calibration model by using the monitoring range data, and generating the area data to be rendered.
5. The method for real-time three-dimensional projection fusion of video monitoring of an oil and gas field according to claim 4, wherein in step S33, the monitoring pitch angle is calculated by a pitch angle calculation formula, wherein the pitch angle calculation formula is as follows:
In the formula, the symbols denote, respectively: the changed pitch angle; the abscissa, ordinate, and vertical coordinate values of the camera; the abscissa, ordinate, and vertical coordinate values of the camera center; the horizontal rotation angle; the vertical pitch angle; and the abscissa, ordinate, and vertical coordinate values of the monitoring device.
6. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 1, wherein the step S4 comprises the steps of:
Step S41: carrying out projection fusion processing on the region data to be rendered to generate projection fusion data;
step S42: coordinate record transmission is carried out according to the rendering range calibration model and the monitoring coordinate axis, and real-time coordinate data are generated;
step S43: carrying out coordinate difference calculation on the real-time coordinate data and the monitoring coordinate data to generate coordinate difference data;
Step S44: when the coordinate difference data is larger than the preset coordinate difference data, monitoring pitch angle calculation is performed based on the real-time coordinate data, and real-time pitch angle data are generated; when the coordinate difference data is smaller than or equal to the preset coordinate difference data, the projection fusion data is not subjected to changing processing;
step S45: performing real-time monitoring range assessment according to the real-time pitch angle data to generate changed monitoring range data;
Step S46: performing change region evaluation on the monitoring range data based on the change monitoring range data to generate change region data;
step S47: and performing re-fusion processing based on the changed region data to generate real-time projection fusion data.
7. The method of real-time three-dimensional projection fusion for video surveillance of an oil and gas field according to claim 6, wherein the step S42 comprises the steps of:
Step S421: performing rotation control on the monitoring equipment, recording rotation suspension time length, and generating suspension time length data; when the suspension duration data is greater than or equal to the preset suspension duration, the camera coordinate transmission is carried out through the monitoring sensor equipment, and manual control real-time coordinate data are generated; when the stopping time length data is smaller than the preset stopping time length, taking the monitoring coordinate data as manual control real-time coordinate data;
step S422: detecting and extracting abnormal states according to the rendering range calibration model to generate abnormal information data;
step S423: performing abnormal azimuth matching on the rendering range calibration model according to the abnormal information data to generate abnormal matching data;
Step S424: performing area monitoring control based on the abnormal matching data, and confirming coordinate points through a monitoring coordinate axis, so as to generate machine control real-time coordinate data;
Step S425: and carrying out time sequence combination on the manual control real-time coordinate data and the machine control real-time coordinate data to generate real-time coordinate data.
8. The method of real-time three-dimensional projection fusion for video surveillance of an oil and gas field according to claim 6, wherein the step S46 comprises the steps of:
step S461: performing data differentiation processing on the changed monitoring range data and the monitoring range data to generate changed monitoring differential data and monitoring differential data;
step S462: performing level difference analysis on the changed monitoring differential data and the monitoring differential data to generate a level difference value;
step S463: performing vertical difference analysis on the changed monitoring differential data and the monitoring differential data to generate a vertical difference value;
step S464: and carrying out difference position combination on the change monitoring range data according to the horizontal difference value and the vertical difference value to generate change area data.
9. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 1, wherein the step S5 comprises the steps of:
Step S51: carrying out fixed object identification according to the projection fusion data and the real-time projection fusion data to generate identified object data;
step S52: extracting object information from the identified object data to generate object information data;
step S53: performing differential object recognition on the rendering range calibration model based on the object information data to generate differential object information data;
Step S54: and carrying out model updating rendering on the rendering range calibration model according to the difference object information data to generate a re-rendering three-dimensional model.
10. The method for real-time three-dimensional projection fusion of video surveillance of an oil and gas field according to claim 9, wherein the step S51 comprises the steps of:
Step S511: acquiring historical marked object data;
Step S512: fusion picture extraction is carried out according to the projection fusion data and the real-time projection fusion data, and head-to-tail frame data and real-time head-to-tail frame data are generated;
step S513: performing the same object identification based on the head-to-tail frame data and the real-time head-to-tail frame data to generate identifier data and real-time identifier data;
Step S514: carrying out the same data merging processing on the identifier data and the real-time identifier data to generate merged identifier data;
step S515: and screening unrecorded data of the combined identifier data by using the historical identified object data to generate identified object data.
CN202410323494.0A 2024-03-21 2024-03-21 Real-time three-dimensional projection fusion method for oil-gas field video monitoring Active CN117934729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410323494.0A CN117934729B (en) 2024-03-21 2024-03-21 Real-time three-dimensional projection fusion method for oil-gas field video monitoring


Publications (2)

Publication Number Publication Date
CN117934729A true CN117934729A (en) 2024-04-26
CN117934729B CN117934729B (en) 2024-06-11

Family

ID=90754136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410323494.0A Active CN117934729B (en) 2024-03-21 2024-03-21 Real-time three-dimensional projection fusion method for oil-gas field video monitoring

Country Status (1)

Country Link
CN (1) CN117934729B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101750390B1 (en) * 2016-10-05 2017-06-23 주식회사 알에프코리아 Apparatus for tracing and monitoring target object in real time, method thereof
CN109714567A (en) * 2018-11-08 2019-05-03 中国船舶重工集团公司七五0试验场 A kind of real-time construction method of three-dimensional virtual scene based on infrared viewing device and device
CN110009561A (en) * 2019-04-10 2019-07-12 南京财经大学 A kind of monitor video target is mapped to the method and system of three-dimensional geographical model of place
CN110992484A (en) * 2019-11-20 2020-04-10 中电科新型智慧城市研究院有限公司 Method for displaying traffic dynamic video in real scene three-dimensional platform
CN112261293A (en) * 2020-10-20 2021-01-22 华雁智能科技(集团)股份有限公司 Remote inspection method and device for transformer substation and electronic equipment
CN112348967A (en) * 2020-10-29 2021-02-09 国网浙江省电力有限公司 Seamless fusion method for three-dimensional model and real-time video of power equipment
CN112365397A (en) * 2020-11-20 2021-02-12 天津七所精密机电技术有限公司 Method for fusing two-dimensional video information and three-dimensional geographic information
CN117560578A (en) * 2024-01-12 2024-02-13 北京睿呈时代信息科技有限公司 Multi-channel video fusion method and system based on three-dimensional scene rendering and irrelevant to view points


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xie Yuguang (谢宇光): "Research on a Multi-Camera Multi-Dimensional Stereoscopic Monitoring System for Road Intersections", Master's Theses Electronic Journal, 15 April 2022 (2022-04-15) *

Also Published As

Publication number Publication date
CN117934729B (en) 2024-06-11

Similar Documents

Publication Publication Date Title
Son et al. Real-time vision-based warning system for prevention of collisions between workers and heavy equipment
Soltani et al. Framework for location data fusion and pose estimation of excavators using stereo vision
US20190258868A1 (en) Motion-validating remote monitoring system
Zollmann et al. Augmented reality for construction site monitoring and documentation
JP6464337B2 (en) Traffic camera calibration update using scene analysis
JP6516558B2 (en) Position information processing method
CN105678748A (en) Interactive calibration method and apparatus based on three dimensional reconstruction in three dimensional monitoring system
US11668577B1 (en) Methods and systems for response vehicle deployment
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN116349222B (en) Rendering depth-based three-dimensional models using integrated image frames
CN114841944A (en) Tailing dam surface deformation inspection method based on rail-mounted robot
US11348321B2 (en) Augmented viewing of a scenery and subsurface infrastructure
KR101586026B1 (en) device and method of calculating coverage of camera in video surveillance system
CN117934729B (en) Real-time three-dimensional projection fusion method for oil-gas field video monitoring
KR101686797B1 (en) Method for analyzing a visible area of a closed circuit television considering the three dimensional features
KR102209025B1 (en) Markerless based AR implementation method and system for smart factory construction
KR100586815B1 (en) System and method for measuring position of threedimensions
CN110617800A (en) Emergency remote sensing monitoring method, system and storage medium based on civil aircraft
WO2024019000A1 (en) Information processing method, information processing device, and information processing program
Zhang et al. Computer vision-based real-time monitoring for swivel construction of bridges: from laboratory study to a pilot application
JP2023079699A (en) Data acquisition device
CN114972543A (en) Distributed monitoring camera positioning method and system based on visual SLAM
CN118037956A (en) Method and system for generating three-dimensional virtual reality in fixed space
CN118129730A (en) Multi-sensor fusion elevation semantic map construction method
CN116704138A (en) Method and device for establishing oblique photography three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant