CN113627005B - Intelligent vision monitoring method - Google Patents


Info

Publication number
CN113627005B
Authority
CN
China
Prior art keywords
data
monitoring
monitoring image
dimensional virtual
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110879318.1A
Other languages
Chinese (zh)
Other versions
CN113627005A (en)
Inventor
沈西南
易波
杨军
蒋洋
李杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Shian Innovation Technology Co ltd
Original Assignee
Chengdu Shian Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Shian Innovation Technology Co ltd filed Critical Chengdu Shian Innovation Technology Co ltd
Priority to CN202110879318.1A
Publication of CN113627005A
Application granted
Publication of CN113627005B
Legal status: Active
Anticipated expiration: tracked


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an intelligent visual monitoring method in the technical field of visual monitoring. A virtual camera is marked in a three-dimensional virtual scene simulation model at the position corresponding to the camera in the monitored scene, and the virtual camera's viewing angle is used as the model's output viewing angle to output a two-dimensional virtual monitoring picture. That picture is distortion-processed, fused with the monitoring picture of the real camera, and the fused picture is output to a display. The fused picture provides visual display and visual management of the equipment's ports, connecting cables and panel contents. Through intelligent analysis, multidimensional data about the monitored objects is presented visually, and scene and data are deeply fused in the same monitoring interface, realizing data-visualized monitoring based on real scene video images.

Description

Intelligent vision monitoring method
Technical Field
The invention relates to the technical field of visual monitoring, in particular to an intelligent visual monitoring method.
Background
At present, monitoring methods for production sites on the Chinese market fall into two main types: video monitoring and simulation (analog) monitoring.
Video monitoring has the advantage of being real and intuitive: people can see the actual situation on the production site. Its defects are equally obvious. It can only monitor appearances; video windows are simply listed side by side, and with many camera pictures densely arranged it is hard to tell which picture matters and where it is located, so recognizability is low and data information is absent. Every video window must be watched by a person, which makes the working intensity high, and when a problem occurs it must be found by manual inspection, which is rarely timely. Such monitoring is of limited use before or during an incident; at best it provides video evidence for review afterwards.
Simulation monitoring usually represents the production site with schematic diagrams, industrial-control flow charts, statistical charts, three-dimensional simulation scenes and similar means. Its advantage is that it is supported by data and reflects the production site through its underlying data. Its defects are that there is no real image: the site can only be represented indirectly, and non-data conditions on site cannot be monitored intuitively, timely and comprehensively. For example, when a three-dimensional simulation scene is built from a three-dimensional simulation model and filled with equipment running-state data, the scene only tracks changes in that data and can raise early warnings when the data requires it; the real scene itself is not monitored. When a person enters the real scene and operates the equipment, no corresponding person appears in the simulation scene and the operation cannot be monitored — only the data is monitored.
In other words, video monitoring obtains the production-site picture but cannot monitor the equipment operating data, while simulation monitoring monitors the data but cannot obtain the real picture of the site. Some prior-art techniques combine the two, but the combination merely maps the monitoring video onto the simulation, which is generally suitable only for security monitoring. For example, the invention patent application published under CN109905664A, entitled "Live-action 3D intelligent visual monitoring system and method", uses a simulated space and pastes video monitoring pictures into it to achieve dynamic display of the 3D model. That approach involves only dynamic display and does not fuse data monitoring with video monitoring.
As another example, on February 14, 2020 the China National Intellectual Property Administration published patent application CN110796727A, entitled "Machine-room remote panoramic monitoring method based on virtual reality technology". The method constructs a panoramic view of a machine room using panorama technology, simulates the machine-room environment and the devices in it, and builds a three-dimensional virtual machine-room simulation model. Panoramic cameras installed at several positions in the actual machine room shoot the environment; the images are spliced and fused to produce a machine-room panorama, from which the environment and the working state of each device are monitored remotely in a visual manner. In the three-dimensional virtual model, the ports, connecting cables and panel contents of the equipment are displayed and managed visually for panoramic IT operation and maintenance monitoring. In that invention, when the user is detected clicking an item of data in the asset report, the view automatically switches to the real machine-room scene related to that data and a positioning prompt is issued.
This prior art combines data visualization with monitoring images, but the combination is merely that clicking on a data item switches the view to the related real scene: data monitoring is primary and live-video monitoring auxiliary, and the real scene is seen only when the data is operated on. Such fusion only makes it convenient to call up the live picture, e.g. to view a device promptly when an early warning appears; if no data change of the device is involved, no timely feedback is given. This mode still lacks authenticity and timeliness.
Disclosure of Invention
To overcome the defects and shortcomings of the prior art, the invention provides an intelligent visual monitoring method. It aims to solve the problems that, in the prior art, the combination of simulation monitoring and video monitoring is merely a calling or mapping mode, lacks authenticity and timeliness, and cannot deeply fuse simulation monitoring with video monitoring. The method simulates the monitored scene and each device in it to construct a three-dimensional virtual scene simulation model; marks a corresponding virtual camera in the model according to the position of the camera in the monitored scene; takes the viewing angle of the virtual camera as the output viewing angle of the model and outputs a two-dimensional virtual monitoring picture; distorts that picture; fuses it with the monitoring picture of the real camera; and outputs the fused picture to a display. The fused picture provides visual display and visual management of the ports, connecting cables and panel contents of the equipment. Through intelligent analysis, multidimensional data about the monitored objects is presented visually, and scene and data are deeply fused in the same monitoring interface, realizing data-visualized monitoring based on real scene video images.
In order to solve the problems in the prior art, the invention is realized by the following technical scheme:
an intelligent visual monitoring method comprises the following steps:
A three-dimensional virtual scene simulation model construction step: simulate the monitored scene and each device in it to construct a three-dimensional virtual scene simulation model;
A virtual camera adding step: add a virtual camera at the position in the three-dimensional virtual scene simulation model corresponding to the actual position of the camera in the monitored scene;
A two-dimensional virtual monitoring image output step: set the technical parameters of the virtual camera according to those of the camera in the monitored scene; take the viewing angle of the virtual camera as the output viewing angle of the model and output a two-dimensional virtual monitoring image;
A camera distortion amount acquisition step: acquire a video monitoring image from the camera in the monitored scene and perform correction processing on it, obtaining the radial distortion generated along the radial direction of the optical centre and the tangential distortion caused by the camera lens not being perfectly parallel to the imaging plane;
A two-dimensional virtual monitoring image distortion step: apply the radial and tangential distortion obtained in the distortion amount acquisition step to the two-dimensional virtual monitoring image output in the previous step;
A monitoring image fusion step: fuse the distortion-processed two-dimensional virtual monitoring image into the video monitoring image; the fused monitoring image is primarily the video monitoring image shot by the camera, with the distortion-processed two-dimensional virtual monitoring image embedded into it.
The method further comprises a data fusion step: fuse various data information into the three-dimensional virtual scene simulation model according to a standard data format and communication protocol. When the fused monitoring image is operated on, the data information fused into the model is displayed visually in the fused monitoring image, providing visual display and visual management of the monitored scene and each device in it.
The data information comprises equipment data, business data and intelligent-analysis data. Equipment data comprises basic equipment data, running-state data, production-index data and sensor data; business data comprises planning, capacity, quality and cost data. Intelligent-analysis data combines the video monitoring image with real-time data to intelligently identify and analyse the appearance, position, running state, motion state, behaviour and production state of the monitored objects; according to business alarm rules it actively discovers production anomalies, equipment anomalies and personnel violations, studies and judges the video image of the monitored scene, raises alarms actively, and automatically presents the abnormal situation and its causes visually on the fused monitoring image.
In the camera distortion amount acquisition step, correcting the video monitoring image specifically means spatially restoring it: the two-dimensional video monitoring image is mapped into the three-dimensional monitored scene so that the scene and each monitored object acquire three-dimensional spatial attributes, and the radial distortion along the radial direction of the optical centre and the tangential distortion caused by the lens not being perfectly parallel to the imaging plane are obtained.
According to identification data and positioning data, each monitored object is accurately positioned and tracked in the three-dimensional virtual scene simulation model, so that each object can be selected in the fused monitoring image; even when the camera rotates or zooms, the object can still be tracked and selected on the fused image.
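The patent does not specify how selection on the fused image is implemented. One plausible sketch (all names and parameters below are assumptions, not the patent's method) is to project each tracked object's camera-frame 3D position through a pinhole model into fused-image pixel coordinates, then hit-test mouse clicks against those projections; re-projecting each frame is what keeps selection working while the camera rotates or zooms:

```python
import math

def project_to_image(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates.
    Returns None when the point lies behind the camera."""
    if Z <= 0:
        return None
    return (fx * X / Z + cx, fy * Y / Z + cy)

def hit_test(click_uv, objects, radius_px=20.0):
    """Return the id of the tracked object whose projection is nearest the
    click, within radius_px. objects: {obj_id: (u, v)} projected positions."""
    best, best_d = None, radius_px
    for obj_id, (u, v) in objects.items():
        d = math.hypot(click_uv[0] - u, click_uv[1] - v)
        if d <= best_d:
            best, best_d = obj_id, d
    return best
```

Because the projections are recomputed from the tracked 3D positions whenever the (virtual) camera pose changes, the clickable regions follow the objects through pan, tilt and zoom.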
The three-dimensional virtual scene simulation model analyses the data of the monitored scene and the running states and/or parameters of the devices, and judges from the analysis result whether an alarm notification needs to be issued; if so, it issues the notification with different prompting modes for different alarm data, and the notification is displayed in the fused monitoring image.
Technical parameters of the camera comprise clear viewing distance, blurred viewing distance, illuminated viewing distance, horizontal viewing angle, vertical viewing angle, and the horizontal and vertical rotation ranges of the pan-tilt head.
When a device in the fused monitoring image is operated on, the view of that device in the three-dimensional virtual scene model is displayed in the fused monitoring image.
The camera collects face images of persons entering the monitored scene, performs face recognition, and displays the recognized person's information visually in the fused monitoring image.
Compared with the prior art, the beneficial technical effects brought by the invention are as follows:
1. The intelligent visual monitoring method thoroughly breaks the limitation that camera video monitoring is real but has no data while simulation monitoring has data but is not real. Site and data are integrated and presented in a unified way, creatively realizing data-visualized monitoring based on real site video images: people see multidimensional real-time data while seeing the real site picture. The monitored site is at its most real, the data presentation at its most intuitive, the data focus at its deepest, and the visual information at its most practical.
2. The video monitoring image is primary and the distortion-processed two-dimensional virtual monitoring image is embedded into it; the fused monitoring image therefore displays the video monitoring image but, unlike the original, is operable: when a device in the fused image is operated on, its running state and/or parameters are displayed visually in the fused image. This forms a deep fusion of site and data, so that data can be checked in time and the real situation on site restored in time, with higher authenticity and timeliness than the existing mapping-fusion mode (CN109905664A) and calling-fusion mode (CN110796727A).
3. The finally output fused monitoring picture is primarily the video monitoring picture, in which partial occlusion exists: the farther a device is from the camera, the more it is occluded and the smaller its image in the fused picture. To avoid erroneous operation, the video monitoring picture is first corrected, and through this correction the radial and tangential distortion of the picture shot by the camera are obtained; the output two-dimensional virtual monitoring image is then distortion-processed so that it acquires the same distortion effect as the video image. When it is then fused into the video image, the fusion error between the two is small, so that when a device in the fused image is operated on, the probability of erroneous information is greatly reduced and monitoring precision is improved.
4. The video monitoring image shot by the camera is spatially restored and the two-dimensional image is mapped into the three-dimensional monitored scene, giving the scene and each monitored object three-dimensional spatial attributes. The radial and tangential distortion computed this way are closer to their true values, which further reduces the fusion error between the distorted two-dimensional virtual monitoring image and the video monitoring image and further improves monitoring precision.
5. Each monitored object is accurately positioned and motion-tracked in the three-dimensional scene according to identification and positioning data, so that each object can be selected on the video image, and the tracking and selection effect is kept even when the camera rotates or zooms. The positioning data include data from mobile phones, wristbands and other devices accessed via the standard Bluetooth protocol, to assist in accurately positioning and tracking persons, moving objects and other monitored objects. Multiple cameras can also work cooperatively: when the current camera has a shooting blind angle, the image can be acquired by rotating a camera.
6. Alarm information is displayed directly in the fused monitoring image, so monitoring personnel can intuitively locate the alarm position; the positioning is more accurate and more timely.
7. When a device in the fused monitoring image is operated on, calling up the view of that device in the three-dimensional virtual scene model avoids the situation where the monitored object is partially occluded in the fused image and its full view or all of its data cannot be seen, so the running state of the device can be monitored more intuitively.
8. The invention performs intelligent analysis and active alarming on the video image and real-time data of the production site, automatically presenting abnormal situations and their causes visually on the monitoring interface, replacing the traditional mode of manual watching and judgment and greatly saving labour cost. Further, compared with a traditional simulation monitoring system, the intelligent visual monitoring system developed by the invention greatly reduces the system's demand for site data and the data-communication pressure; the system structure is simpler, the system lighter, the development cycle shorter and the cost lower. Furthermore, the invention conveniently upgrades traditional camera video monitoring systems: the many production sites that already use camera video monitoring can be technically improved on their existing basis, upgrading camera video monitoring to intelligent visual monitoring.
Drawings
FIG. 1 is a flow chart of the intelligent visual monitoring method of the present invention.
Detailed Description
The technical scheme of the invention is further elaborated below in conjunction with the description and drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the embodiment of the invention discloses an intelligent visual monitoring method comprising a three-dimensional virtual scene simulation model construction step, a virtual camera adding step, a two-dimensional virtual monitoring image output step, a camera distortion amount acquisition step, a two-dimensional virtual monitoring image distortion step and a monitoring image fusion step. In this embodiment, steps that have no dependency on one another are not ordered; the order can be adjusted to the actual situation of the production site.
Further, the three-dimensional virtual scene simulation model construction step constructs the model according to the monitored scene and each device in it; any existing construction method can be used, so the specifics are not repeated here. The constructed model displays data for each device in the monitored scene.
The virtual camera adding step adds the virtual camera at the position in the three-dimensional virtual scene simulation model corresponding to the actual position of the camera in the monitored scene.
The two-dimensional virtual monitoring image output step sets the technical parameters of the virtual camera according to those of the camera in the monitored scene, and outputs a two-dimensional virtual monitoring image using the virtual camera's viewing angle as the model's output viewing angle. The three-dimensional virtual scene simulation model is three-dimensional, but on a display it is still presented on a two-dimensional plane. Only when the model is shown from a god's-eye view, and can be rotated and zoomed, can all data in the monitored scene be displayed without blind angles.
The purpose of adding the virtual camera is to output the two-dimensional virtual monitoring image it acquires as the output image of the three-dimensional virtual scene simulation model. Its technical parameters are set to those of the camera in the monitored scene, ensuring that the virtual image corresponds to the video monitoring image output by the real camera.
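The patent does not say how the virtual camera's parameters are derived; a minimal sketch, assuming a simple pinhole model and square pixels, is to compute the virtual camera's focal lengths and principal point from the real camera's resolution and horizontal field of view (the function name and parameters below are illustrative assumptions):

```python
import math

def intrinsics_from_fov(width_px, height_px, hfov_deg):
    """Derive pinhole focal lengths (in pixels) and principal point from
    image resolution and horizontal field of view."""
    fx = (width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    fy = fx  # square pixels assumed
    cx, cy = width_px / 2, height_px / 2
    # vertical FOV implied by the aspect ratio
    vfov_deg = math.degrees(2 * math.atan((height_px / 2) / fy))
    return fx, fy, cx, cy, vfov_deg
```

A virtual camera configured with these values renders a picture that is geometrically comparable (before distortion) to the real camera's, which is the precondition for the fusion steps below.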
Because existing cameras image through lenses, the output video monitoring image is somewhat distorted compared with the real scene: for example, a straight line in the real scene may become a curve in the video monitoring image.
The two-dimensional virtual monitoring image output by the virtual camera, by contrast, has no distortion: a straight line in the real scene is a straight line in the virtual image, because the virtual camera does not image through a lens but outputs through data analysis. If this undistorted virtual image were fused directly into the video monitoring image, a large distortion error would result, and operating on the fused image could then trigger the wrong device. For example, if device No. 1 is to be operated but is partly occluded by the nearby device No. 2, a large distortion error could cause device No. 2 to be operated by mistake, lowering monitoring precision.
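The radial and tangential components described here correspond to the standard Brown–Conrady lens model. The patent gives no formulas, so the following function is an illustrative assumption showing how a distortion-free normalized point (e.g. from the virtual render) maps to its distorted position:

```python
def distort_point(x, y, k1, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady distortion to a normalized image point (x, y).
    k1..k3 are radial coefficients; p1, p2 are tangential coefficients."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients zero the point is unchanged; a negative k1 pulls points inward (barrel distortion), a positive k1 pushes them outward (pincushion), which is exactly why a straight line in the scene appears curved in the video image.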
To overcome this distortion error, this embodiment performs a camera distortion amount acquisition step, which can run in parallel with the model construction, virtual camera adding and two-dimensional virtual monitoring image output steps.
The camera distortion amount acquisition step: acquire the video monitoring image of the camera in the monitored scene and perform correction processing on it, obtaining the radial distortion generated along the radial direction of the optical centre and the tangential distortion caused by the lens not being perfectly parallel to the imaging plane. Since every camera distorts differently, these two quantities must be obtained per camera through the correction processing. The purpose of this step is to obtain the radial and tangential distortion amounts, not to correct the video monitoring image itself.
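The patent does not say how the distortion amounts are computed; in practice they are usually obtained with a calibration routine such as OpenCV's `cv2.calibrateCamera`, which fits the full radial and tangential coefficient set from chessboard images. As a dependency-free illustration of the idea (an assumption, not the patent's procedure), a single radial coefficient k1 can be fitted by least squares from correspondences between ideal undistorted points and observed points:

```python
def estimate_k1(ideal_pts, observed_pts):
    """Least-squares fit of a single radial coefficient k1 in the model
    observed = ideal * (1 + k1 * r^2), for normalized point pairs."""
    num = 0.0
    den = 0.0
    for (x, y), (xo, yo) in zip(ideal_pts, observed_pts):
        r2 = x * x + y * y
        # residual model: xo - x = k1 * x * r2 (and likewise for y)
        num += x * r2 * (xo - x) + y * r2 * (yo - y)
        den += (x * r2) ** 2 + (y * r2) ** 2
    return num / den
```

The same least-squares pattern extends to the full coefficient vector (k1, k2, k3, p1, p2); real systems delegate that to the calibration library.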
The two-dimensional virtual monitoring image distortion step: apply the radial and tangential distortion obtained in the acquisition step to the two-dimensional virtual monitoring image. The fusion error between the distorted virtual image and the video image is then small: a straight line in the virtual image is distorted into the same curve as in the video image, so the lines in the two images fuse well together.
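Applying the measured distortion to the rendered virtual image can be sketched as warping every pixel to its distorted position. The nearest-neighbour forward splat below is a minimal illustration (all parameter names are assumptions); production code would instead build an inverse map, e.g. with OpenCV's `cv2.undistortPoints` plus `cv2.remap`, to avoid resampling holes:

```python
import numpy as np

def distort_image(img, k1, k2=0.0, p1=0.0, p2=0.0, fx=1.0, fy=1.0):
    """Warp an undistorted (virtual) image with Brown-Conrady distortion by
    forward-mapping every source pixel to its distorted position."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    v, u = np.mgrid[0:h, 0:w]                  # pixel grid (rows, cols)
    x = (u - cx) / fx                          # normalized camera coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    ud = np.round(xd * fx + cx).astype(int)    # back to pixel coordinates
    vd = np.round(yd * fy + cy).astype(int)
    out = np.zeros_like(img)
    ok = (ud >= 0) & (ud < w) & (vd >= 0) & (vd < h)
    out[vd[ok], ud[ok]] = img[v[ok], u[ok]]
    return out
```

With the coefficients measured from the real camera, the virtual image acquires the same curvature as the video image, which is what keeps the subsequent fusion error small.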
The monitoring image fusion step: fuse the distortion-processed two-dimensional virtual monitoring image into the video monitoring image. The fused monitoring image is primarily the video image shot by the camera, with the distortion-processed virtual image hidden and embedded into it. When the fused image is operated on, the two-dimensional virtual monitoring image receives the operation instruction and the required data information is displayed visually, shown semi-transparently in the fused monitoring image.
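The "hidden embedding" with semi-transparent display described above can be sketched as an alpha blend restricted to the overlay pixels (a minimal illustration; the patent does not specify the blending method, and the names below are assumptions):

```python
import numpy as np

def fuse(video_img, virtual_img, mask, alpha=0.5):
    """Embed the distorted virtual image into the video image.
    mask: boolean array marking overlay pixels; alpha: overlay opacity.
    Outside the mask the video image is shown unchanged (video is primary)."""
    video = video_img.astype(np.float64)
    virtual = virtual_img.astype(np.float64)
    fused = video.copy()
    fused[mask] = alpha * virtual[mask] + (1 - alpha) * video[mask]
    return fused.astype(video_img.dtype)
```

With alpha near zero the data layer is effectively hidden and the fused image shows pure video; raising alpha for the operated region reveals the data layer semi-transparently, matching the behaviour described in this step.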
As one implementation of this embodiment, a data fusion step is further included, in which environmental data of the monitored scene and the equipment data, business data and the like of each device in it are fused into the fused monitoring image.
The data fusion can happen at any stage: the data can be fused into the three-dimensional virtual scene simulation model while it is being constructed; into the two-dimensional virtual monitoring image when the virtual camera outputs it; into the distortion-processed virtual image during distortion processing; or directly into the fused monitoring image.
Specifically, according to a standard data format and communication protocol, the various data information is fused into the three-dimensional virtual scene simulation model (or, alternatively, into the two-dimensional virtual monitoring image, the distortion-processed two-dimensional virtual monitoring image, or the fused monitoring image). When the fused monitoring image is operated on, the data information fused into the three-dimensional virtual scene simulation model is visually displayed in the fused monitoring image, so that the monitored scene and each device in it can be visually displayed and managed through the fused monitoring image.
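The patent names no concrete data format or protocol. As an illustrative sketch only, one could carry each record as a JSON message that names the model object it belongs to, and bind incoming records to that object (the field names and record kinds here are assumptions mirroring the categories listed below, not part of the patent):

```python
import json

# Hypothetical record kinds mirroring the patent's "device data,
# service data and intelligent analysis data"; names are illustrative.
SCHEMA_KINDS = {"device", "service", "analysis"}

def fuse_data(model_bindings, message):
    """Attach one standard-format data message to its model object.

    model_bindings : dict mapping model object id -> list of fused payloads
    message        : JSON string such as
                     '{"object_id": "pump-01", "kind": "device",
                       "payload": {"state": "running", "rpm": 1450}}'
    """
    record = json.loads(message)
    if record.get("kind") not in SCHEMA_KINDS:
        raise ValueError("unknown data kind: %r" % record.get("kind"))
    model_bindings.setdefault(record["object_id"], []).append(record["payload"])
    return model_bindings
```

When the fused monitoring image is operated on, the payloads bound to the selected object are what would be rendered as the semi-transparent overlay.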
Specifically, the various data information comprises device data, service data and intelligent analysis data. The device data comprise basic device data, device running-state data, device production-index data and device sensor data; the service data comprise planning data, capacity data, quality data and cost data. The intelligent analysis data are obtained by combining the video monitoring image with the various real-time data: the appearance, position, running state, motion state, behaviour and production state of each monitored object are intelligently identified and analysed; production anomalies, device anomalies and personnel violations are actively discovered according to the service alarm rules; the video images of the monitored scene are intelligently assessed and alarms are raised proactively; and the anomaly and its cause are automatically presented visually on the fused monitoring image.
In the camera device distortion amount acquisition step, the video monitoring image captured by the camera device is corrected. Specifically, the video monitoring image is spatially restored, mapping the two-dimensional video monitoring image into the three-dimensional monitored scene so that the monitored scene and each monitored object acquire three-dimensional spatial attributes. The radial distortion produced by the camera device along the radial direction from the optical centre point, and the tangential distortion produced because the lens of the camera device is not perfectly parallel to the imaging plane, are then obtained.
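The patent does not give the distortion equations. The standard way to express a radial plus tangential distortion amount of this kind is the Brown-Conrady model, sketched below for a normalised image point measured from the optical centre (the coefficient names k1, k2, p1, p2 follow that model, not the patent):

```python
def distort_point(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to a
    normalised image point (x, y) measured from the optical centre point
    (Brown-Conrady model, a common formulation of the two distortion
    amounts named in the patent)."""
    r2 = x * x + y * y                       # squared radial distance
    radial = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```

Applying this mapping to every pixel of the rendered two-dimensional virtual monitoring image is one way to realise the distortion processing step, so that the virtual image warps the same way the real lens warps the video image.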
According to identification data and positioning data, each monitored object is accurately positioned and tracked in the three-dimensional virtual scene simulation model, so that it can be selected in the fused monitoring image; even when the camera device pans or zooms, the monitored object can still be tracked and selected on the fused monitoring image.
The three-dimensional virtual scene simulation model analyses the data of the monitored scene and the running states and/or parameters of each device, and judges from the analysis result whether an alarm notification needs to be issued; if so, the notification is issued in a prompting mode chosen according to the alarm data, and the alarm notification is displayed in the fused monitoring image.
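The patent leaves the alarm rules abstract. A minimal sketch of such rule evaluation, assuming simple upper-limit rules per device parameter (the rule shape and prompt modes are illustrative assumptions):

```python
def evaluate_alarms(device_states, rules):
    """Check each device's data against its alarm rules and collect the
    notifications to display in the fused monitoring image.

    device_states : dict of device id -> dict of parameter values
    rules         : list of (device_id, parameter, upper_limit, prompt_mode)
    """
    notifications = []
    for device_id, parameter, upper_limit, prompt_mode in rules:
        value = device_states.get(device_id, {}).get(parameter)
        if value is not None and value > upper_limit:
            notifications.append({
                "device": device_id,
                "parameter": parameter,
                "value": value,
                "prompt": prompt_mode,  # e.g. colour flash, sound, pop-up
            })
    return notifications
```

Each returned notification carries its own prompting mode, matching the requirement that different alarm data be announced in different ways.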
The technical parameters of the camera device comprise the clear viewing distance, blurred viewing distance, illuminated viewing distance, horizontal viewing angle, vertical viewing angle, horizontal (pan) rotation range of the pan-tilt head, and vertical (tilt) rotation range of the pan-tilt head.
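For illustration only, the parameter set above could be carried as a single record used to configure the virtual camera device to match the real one (all field names are assumptions, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class CameraParameters:
    """Technical parameters of a camera device, as listed in the patent,
    for configuring the matching virtual camera (names are illustrative)."""
    clear_view_distance_m: float        # range at which detail stays sharp
    blur_view_distance_m: float         # range at which the image blurs
    illuminated_view_distance_m: float  # range covered by illumination
    horizontal_fov_deg: float
    vertical_fov_deg: float
    pan_range_deg: tuple                # pan-tilt horizontal rotation limits
    tilt_range_deg: tuple               # pan-tilt vertical rotation limits

    def pan_span(self):
        """Total horizontal rotation span of the pan-tilt head."""
        lo, hi = self.pan_range_deg
        return hi - lo
```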
When a device shown in the fused monitoring image is operated on, its view in the three-dimensional virtual scene model is displayed in the fused monitoring image.
The camera device captures face images of persons entering the monitored scene, performs face recognition, and visually displays the recognised person information in the fused monitoring image.

Claims (7)

1. An intelligent visual monitoring method is characterized by comprising the following steps:
the three-dimensional virtual scene simulation model construction step comprises the following steps: simulating the monitored scene and each device in the monitored scene to construct a three-dimensional virtual scene simulation model;
the virtual camera adding step: adding a virtual camera device at a corresponding position in the three-dimensional virtual scene simulation model according to the actual position of the camera device in the monitored scene;
and outputting a two-dimensional virtual monitoring image: setting technical parameters of the virtual camera device according to the technical parameters of the camera device in the monitored scene; taking the viewing angle of the virtual camera device as the output viewing angle of the three-dimensional virtual scene simulation model, and outputting a two-dimensional virtual monitoring image;
the camera device distortion amount acquisition step: acquiring a video monitoring image of the camera device in the monitored scene, and correcting the video monitoring image captured by the camera device; obtaining the radial distortion produced by the camera device along the radial direction from the optical centre point, and the tangential distortion produced because the camera device lens is not perfectly parallel to the imaging plane;
and a two-dimensional virtual monitoring image distortion processing step: according to the radial distortion and tangential distortion obtained in the camera device distortion amount acquisition step, performing distortion processing on the two-dimensional virtual monitoring image output in the two-dimensional virtual monitoring image output step;
and a monitoring image fusion step: fusing the distortion-processed two-dimensional virtual monitoring image into the video monitoring image; in the fused monitoring image obtained after fusion, the video monitoring image captured by the camera device is the main part, and the distortion-processed two-dimensional virtual monitoring image is embedded into it;
the method further comprises a data fusion step of fusing various data information into the three-dimensional virtual scene simulation model according to a standard data format and communication protocol; when the fused monitoring image is operated on, the data information fused into the three-dimensional virtual scene simulation model is visually displayed in the fused monitoring image; the monitored scene and each device in the monitored scene are visually displayed and managed through the fused monitoring image; the various data information comprises device data, service data and intelligent analysis data, wherein the device data comprise basic device data, device running-state data, device production-index data and device sensor data; the service data comprise planning data, capacity data, quality data and cost data; the intelligent analysis data are obtained by combining the video monitoring image with the various real-time data: intelligently identifying and analysing the appearance, position, running state, motion state, behaviour and production state of each monitored object; actively discovering production anomalies, device anomalies and personnel violations according to the service alarm rules; intelligently assessing the video images of the monitored scene and proactively raising alarms; and automatically presenting the anomaly and its cause visually on the fused monitoring image.
2. An intelligent vision monitoring method as set forth in claim 1, wherein: in the camera device distortion amount acquisition step, the video monitoring image captured by the camera device is corrected; specifically, the video monitoring image is spatially restored, mapping the two-dimensional video monitoring image into the three-dimensional monitored scene so that the monitored scene and each monitored object acquire three-dimensional spatial attributes; and the radial distortion produced by the camera device along the radial direction from the optical centre point, and the tangential distortion produced because the lens of the camera device is not perfectly parallel to the imaging plane, are obtained.
3. An intelligent vision monitoring method as set forth in claim 1, wherein: according to the identification data and the positioning data, each monitoring object is accurately positioned and tracked in a three-dimensional virtual scene simulation model, so that each monitoring object can be selected in the fused monitoring image; when the camera device rotates and zooms, the monitoring object can be tracked and selected on the fused monitoring image.
4. An intelligent vision monitoring method as set forth in claim 1, wherein: the three-dimensional virtual scene simulation model analyses the data of the monitored scene and the running states and/or parameters of each device, and judges from the analysis result whether an alarm notification needs to be issued; if so, the notification is issued in a prompting mode chosen according to the alarm data, and the alarm notification is displayed in the fused monitoring image.
5. An intelligent vision monitoring method as set forth in claim 1, wherein: the technical parameters of the camera device comprise the clear viewing distance, blurred viewing distance, illuminated viewing distance, horizontal viewing angle, vertical viewing angle, horizontal (pan) rotation range of the pan-tilt head, and vertical (tilt) rotation range of the pan-tilt head.
6. An intelligent vision monitoring method as claimed in any one of claims 1-5, characterized in that: when the device in the fused monitoring image is operated, the view of the device in the three-dimensional virtual scene model is displayed in the fused monitoring image.
7. An intelligent vision monitoring method as claimed in any one of claims 1-5, characterized in that: the camera device collects face images of people entering the monitored scene, performs face recognition, and visually displays the recognized person information in the fused monitoring image.
CN202110879318.1A 2021-08-02 2021-08-02 Intelligent vision monitoring method Active CN113627005B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110879318.1A CN113627005B (en) 2021-08-02 2021-08-02 Intelligent vision monitoring method

Publications (2)

Publication Number Publication Date
CN113627005A CN113627005A (en) 2021-11-09
CN113627005B true CN113627005B (en) 2024-03-26

Family

ID=78382089


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113891050B (en) * 2021-11-12 2022-09-20 深圳市佰慧智能科技有限公司 Monitoring equipment management system based on video networking sharing
CN116128320B (en) * 2023-01-04 2023-08-08 杭州有泰信息技术有限公司 Visual control method and platform for power transmission and transformation of power grid

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106204656A (en) * 2016-07-21 2016-12-07 中国科学院遥感与数字地球研究所 Target based on video and three-dimensional spatial information location and tracking system and method
CN110796727A (en) * 2019-09-17 2020-02-14 国网天津市电力公司 Machine room remote panoramic monitoring method based on virtual reality technology
CN112669205A (en) * 2019-10-15 2021-04-16 北京航天长峰科技工业集团有限公司 Three-dimensional video fusion splicing method
KR20210086072A (en) * 2019-12-31 2021-07-08 주식회사 버넥트 System and method for real-time monitoring field work

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
GB2575843A (en) * 2018-07-25 2020-01-29 Sony Interactive Entertainment Inc Method and system for generating an image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant