CN117156108A - Enhanced display system and method for machine room equipment monitoring picture - Google Patents

Enhanced display system and method for machine room equipment monitoring picture

Info

Publication number
CN117156108A
Authority
CN
China
Prior art keywords
machine room
video
equipment
display
monitoring
Prior art date
Legal status
Granted
Application number
CN202311425100.4A
Other languages
Chinese (zh)
Other versions
CN117156108B (en)
Inventor
朱毅坚 (Zhu Yijian)
谢宁 (Xie Ning)
杨晨 (Yang Chen)
吕莉丽 (Lü Lili)
Current Assignee
Zhonghai Property Management Co ltd
Original Assignee
Zhonghai Property Management Co ltd
Priority date
Filing date
Publication date
Application filed by Zhonghai Property Management Co ltd
Priority to CN202311425100.4A
Publication of CN117156108A
Application granted
Publication of CN117156108B
Status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/167 Synchronising or controlling image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194 Transmission of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses an enhanced display system and method for machine room equipment monitoring pictures. A camera installed in the machine room photographs the display interfaces of the equipment, obtains live-action video and transmits it to a monitoring host; the monitoring host performs image recognition on the live-action video and adds image-text labels to it when the video monitoring picture is displayed. A three-dimensional virtual scene corresponding to the machine room is constructed and can be displayed synchronously and switchably when the video monitoring picture is shown, with the virtual equipment synchronously displaying the corresponding interface display data. The application can recognize the equipment display interfaces in the machine room, distinguish multiple devices photographed by cameras at different angles, accurately display the interface data of each device, provide a friendlier video monitoring display effect and enhance the user experience.

Description

Enhanced display system and method for machine room equipment monitoring picture
Technical Field
The application relates to the technical field of video monitoring, and in particular to an enhanced display system and method for machine room equipment monitoring pictures.
Background
A property machine room contains many kinds of electrical equipment, and each device carries various instrument panels, digital display windows, liquid crystal displays and the like. When these devices are video-monitored, constraints such as the number of surveillance cameras, their mounting positions and their viewing angles make the actual monitoring pictures vary widely in form, so monitoring personnel find them hard to read; several devices of the same type may even be visually confused, so the parameter displays of different devices cannot be distinguished clearly, quickly and accurately through video monitoring.
Disclosure of Invention
The application mainly addresses the technical problem that, when equipment in a property machine room is video-monitored, the content shown on the human-machine interfaces of the equipment is difficult to display and distinguish accurately, so the user experience of video monitoring is poor.
In order to solve these technical problems, the application provides an enhanced display system for machine room equipment monitoring pictures, which comprises a camera installed in the machine room, a monitoring host and a display screen. The camera photographs the working interfaces of the equipment in the machine room, obtains live-action video and transmits it to the monitoring host. The monitoring host comprises a video decoding unit, a video processing unit and a video labeling unit: the video decoding unit decodes the live-action video to obtain an image sequence; the video processing unit performs image processing on the image sequence, including target recognition and region frame division on the working interfaces of the equipment, to obtain a target recognition result; and the video labeling unit superimposes the target recognition result on the image sequence as image-text labels, then outputs the video with the superimposed labeling information to the display screen for display.
Specifically, the video processing unit comprises a neural network initial model, and the neural network initial model is trained by using sample data of the working interface of the equipment to obtain a neural network application model capable of being used for identifying the working interfaces of various types of equipment.
Specifically, one path of the video labeling unit receives the image sequence from the video decoding unit, while the other path receives the target frame information and identification information from the video processing unit and converts them into labeling characters; the labeling characters are superimposed on the input image sequence, the image sequence with superimposed characters is then encoded into a video signal with superimposed text, and the video signal is shown on the display screen.
Specifically, the monitoring host also comprises an event identification unit and an alarm output unit,
after the video processing unit identifies the working interface of the equipment, inputting a target identification result into the event identification unit; the event recognition unit realizes event recognition based on continuous observation of the target recognition result, and when a dangerous event is recognized, the alarm output unit alarms.
Specifically, the monitoring host further comprises a three-dimensional scene synthesis unit, which is used to construct a three-dimensional virtual scene corresponding to the physical scene in the machine room and to display identification information in real time on the virtual equipment in the three-dimensional virtual scene according to the target recognition result output by the video processing unit, keeping it consistent with the display content of the live-action video.
Specifically, the method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: acquiring the position information of a first camera C1 in the machine room space as a first reference coordinate P1; acquiring the focal length value f1, horizontal field angle α1 and vertical field angle β1 of the first camera C1, which together form the observation parameters S1 = (P1, f1, α1, β1) of the first camera C1; and obtaining the actual scene photographed by the first camera C1 under the observation parameters S1. The first camera C1 is thereby selected and reproduced at the same scale in the three-dimensional virtual scene constructed in equal proportion, the virtual scene under observation by the corresponding virtual camera is presented, and the virtual scene is kept consistent with the actual scene.
Specifically, the method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: installing n cameras C1, ..., Cn in the machine room, with n ≥ 2; setting the corresponding n reference coordinates P1, ..., Pn; and obtaining the n actual scenes under the respective n observation parameters S1, ..., Sn. When the three-dimensional virtual scene is constructed at the same scale, the actual scenes photographed by the n cameras are simulated at the same scale within the three-dimensional virtual scene, the n virtual scenes under observation by the n virtual cameras are correspondingly presented, and the n virtual scenes remain consistent with the n actual scenes.
The application also provides an enhanced display method for the monitoring picture of the equipment in the machine room, which comprises the following steps: a camera is arranged in the machine room, a display interface of equipment in the machine room is shot, a live-action video is obtained, and the live-action video is transmitted to a monitoring host; and the monitoring host performs image recognition processing on the live-action video, and adds image-text annotation display to the live-action video when a video monitoring picture is displayed.
Specifically, the monitoring host performs image recognition processing on the live-action video, including feature recognition on a working interface of equipment in a machine room, and extracting interface display data; and the information displayed by the graphic labeling comprises the interface display data.
Specifically, a three-dimensional virtual scene corresponding to the machine room is constructed, when a video monitoring picture is displayed, the three-dimensional virtual scene can be synchronously and switchably displayed, the interface display data and the three-dimensional virtual scene are fused, and virtual equipment in the three-dimensional virtual scene synchronously displays the corresponding interface display data.
The beneficial effects of the application are as follows. The application discloses an enhanced display system and method for machine room equipment monitoring pictures: a camera installed in the machine room photographs the display interfaces of the equipment, obtains live-action video and transmits it to a monitoring host; the monitoring host performs image recognition on the live-action video and adds image-text labels to it when the video monitoring picture is displayed. A three-dimensional virtual scene corresponding to the machine room is constructed and can be displayed synchronously and switchably when the video monitoring picture is shown, with the virtual equipment synchronously displaying the corresponding interface display data. The method can recognize the equipment display interfaces in the machine room, distinguish multiple devices photographed by cameras at different angles, accurately display the interface data of each device, provide a friendlier video monitoring display effect and enhance the user experience.
Drawings
FIG. 1 is a schematic diagram of an embodiment of an enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 2 is a schematic diagram of the working-interface composition of a device in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 3 is a schematic diagram of image-text labeling of a live-action video picture in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 4 is a schematic diagram of the composition of the monitoring host in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 5 is a schematic diagram of the composition of the video processing unit of the monitoring host in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 6 is a schematic diagram of the target recognition result of the monitoring host in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 7 is a schematic diagram of the single-camera live-action scene and the virtual scene construction principle in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 8 is a schematic view of the three-dimensional virtual scene in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application;
FIG. 9 is a schematic diagram of the virtual scene construction principle based on multiple cameras in an embodiment of the enhanced display system for machine room equipment monitoring pictures according to the present application.
Detailed Description
In order that the application may be readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Preferred embodiments of the present application are shown in the drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows a schematic diagram of an embodiment of the enhanced display method for machine room equipment monitoring pictures. A camera 21 is installed in a machine room 2, so that a plurality of machine rooms distributed at different locations can be video-monitored simultaneously; the display interfaces of the equipment in each machine room are photographed, and the live-action video of each machine room is obtained and transmitted to a monitoring host 1. The communication network 4 for transmitting the video signals may be a cable television network based mainly on coaxial cable, a wired internet connection to a computer network, or a wireless connection over mobile or satellite communication. The equipment in the machine room includes various power distribution equipment, transformer equipment, motor equipment, condensing equipment, boiler equipment, network interaction equipment and the like. As shown in fig. 2, the display interface of a device includes various indicator lamps C1, a dashboard C2 with pointer indication, a digital display tube (nixie tube), a liquid crystal display C3 and the like.
In fig. 1, the monitoring host 1 performs image recognition processing on the live-action video and adds image-text labels to it when the display screen 3 shows the video monitoring picture.
Specifically, the image-text display includes showing the equipment type and number in the live-action video with additional images, characters and numerals, and adding image-text display to the display interfaces on the equipment. This includes a simulated picture of an instrument panel, which is an additional image display: a virtual instrument panel replaces the actual instrument panel in the video, with the positions indicated by the instrument pointers kept exactly consistent. After the value indicated by the instrument-panel pointer is recognized, it can also be labeled as a numeric display; in fig. 3, the dashboard is labeled "pointer value: 50". The digits shown by a nixie tube in the live-action video, and the on/off state and color of each indicator lamp, are likewise recognized and then displayed auxiliarily through image-text labels, which solves the problem that such details are hard for the human eye to make out in the raw picture. For example, in fig. 3 an auxiliary simulated image shows that an indicator lamp is lit and in what color, displayed as "yellow lamp lighting".
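To make the labeling step concrete, the sketch below overlays a recognized region frame, a pointer value and an indicator-lamp caption on a video frame with OpenCV. It is a minimal sketch rather than the patented implementation; the function name, box format and label strings are illustrative assumptions.

```python
import cv2

def annotate_frame(frame, box, pointer_value, lamp_text):
    """Overlay image-text labels for one recognized display target.

    frame: BGR image from the live-action video.
    box: (x1, y1, x2, y2) region frame of the recognized target.
    pointer_value / lamp_text: recognition results to display.
    """
    x1, y1, x2, y2 = box
    # Draw the region frame around the recognized dial / lamp.
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    # Superimpose the recognized reading as text, e.g. "pointer value: 50".
    cv2.putText(frame, f"pointer value: {pointer_value}", (x1, y1 - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    # Auxiliary caption for the indicator lamp, e.g. "yellow lamp on".
    cv2.putText(frame, lamp_text, (x1, y2 + 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
    return frame
```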
Specifically, as shown in fig. 4, the monitoring host 1 comprises a video decoding unit B1, a video processing unit B2 and a video labeling unit B3. The monitoring camera installed in the machine room captures real-time live-action video and feeds it into the monitoring host 1, and the video decoding unit B1 decodes the live-action video to obtain an image sequence; live-action videos of different formats are all decoded into image sequences of one unified format.
The video processing unit B2 performs image processing on the image sequence, including recognition of the target categories of the equipment working interfaces in the images and region frame division, and obtains the corresponding image features and target recognition results. The video labeling unit B3 superimposes the target recognition results on the image sequence as image-text labels, producing the video display effect of the embodiment shown in fig. 3, and then outputs the video with superimposed labeling information to the display screen.
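The cooperation of B1, B2 and B3 can be pictured as the following loop, a hedged sketch assuming a hypothetical detect() callable and the annotate_frame() helper sketched above; the RTSP source and window handling are illustrative, not taken from the patent.

```python
import cv2

def monitoring_loop(rtsp_url, detect, annotate_frame):
    cap = cv2.VideoCapture(rtsp_url)   # B1: decode live-action video into an image sequence
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = detect(frame)        # B2: target recognition + region frame division
        for r in results:              # B3: superimpose image-text labels
            frame = annotate_frame(frame, r["box"], r["value"], r["state"])
        cv2.imshow("machine room monitor", frame)  # output to the display screen
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
```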
The method by which the video processing unit B2 performs image recognition on each frame of the live-action video comprises: constructing an initial deep-learning neural network model and training it with working-interface sample data of the equipment, to obtain a neural network application model that can recognize the working interfaces of multiple types of equipment.
Further, fig. 5 shows the composition of an embodiment of the initial neural network model. It comprises a convolutional neural network A1 that outputs a feature map A2 at a late stage, with a region selection network A3 connected in parallel; the region selection network A3 outputs candidate regions that select areas of the feature map, a fixed-scale feature map A4 is obtained through candidate-region matching, and the result is fed to a fully connected network A5, which outputs the corresponding region frames and target categories, for example the region frames where a dial, a display screen, a nixie tube or an indicator lamp is located, with target recognition performed for each.
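This A1-A5 arrangement matches the general Faster R-CNN family (backbone, feature map, region proposal network, RoI matching, fully connected heads). As one plausible realization, torchvision's off-the-shelf Faster R-CNN can be re-headed for the four interface targets named here plus background; this is a stand-in under that assumption, not the patent's exact network.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Backbone (A1) + feature maps (A2) + RPN (A3) + RoI pooling (A4) + FC heads (A5)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification head: 4 interface targets + background.
num_classes = 5  # background, dial, display screen, nixie tube, indicator lamp
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Training then follows the usual torchvision detection recipe with
# working-interface sample data (images + region boxes + category labels).
```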
The convolutional neural network A1 is preferably a modified ResNet: all 3x3 convolution layers in the conv3, conv4 and conv5 stages of the ResNet-101 feature-extraction portion are replaced with deformable convolution layers, and the usual ROI pooling layer (region-of-interest pooling layer) is replaced with a deformable ROI alignment layer (region-of-interest alignment layer).
The deformable convolution layer and the deformable ROI alignment layer perform a further displacement adjustment on top of the spatial sampling positions within the unit, and the displacements are learned from the target task itself, without any additional supervision signal. The deformable convolution layer adds a 2D offset to the regular sampling grid of a standard convolution; the deformable ROI alignment layer adds an offset to each bin of the preceding ROI alignment, learning the offsets from the preceding feature map and the ROI so that parts of objects with different shapes are located adaptively. The deformable ROI alignment layer first obtains the features corresponding to the ROI through standard ROI alignment, then passes these features through a fully connected layer to obtain the offset of each part of the ROI. Applying these offsets in the deformable ROI alignment yields features that are not restricted to the fixed grid of the ROI.
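A minimal sketch of such a deformable convolution layer, using torchvision.ops.DeformConv2d: an ordinary small convolution predicts a (dy, dx) offset for every kernel sampling position, and the deformable layer samples the input at the shifted positions. The channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, channels=256, k=3):
        super().__init__()
        # Predicts a (dy, dx) offset per kernel position per output pixel.
        self.offset = nn.Conv2d(channels, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(channels, channels, kernel_size=k, padding=k // 2)

    def forward(self, x):
        # Offsets are learned from the target task itself; no extra supervision.
        return self.deform(x, self.offset(x))

y = DeformBlock()(torch.randn(1, 256, 50, 50))  # same spatial size, shifted sampling
```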
The region selection network A3 (RPN) performs region selection on feature maps A2 of different scales; specifically, the candidate-region matching process mainly applies further pooling to each candidate region, pooling feature maps of different scales into the fixed-scale feature map A4.
Further preferably, the candidate region matching process includes:
first, mapping the feature map, performing narrowing mapping on the input feature map, and reserving floating point numbers. For example, the size of one candidate region in the feature map A2 is 800×800, and the size of the mapped feature map is: 800/32=12.5, i.e. 12.5×12.5, where no rounding is performed, but floating point numbers are reserved.
And secondly, pooling processing, namely performing fixed-scale pooling processing on the mapped feature map to obtain a pooled feature map of a further partition. For example, the width and height of the pooling are 7, i.e. pooling_w=7, pooling_h=7, i.e. the pooling is fixed to a 7*7 size feature map, so that the mapped feature map of 12.5×12.5 is divided into 49 equally sized small areas, i.e. the pooling feature map, and the size of each pooling feature map is 12.5/7=1.78, i.e. 1.78×1.78.
Thirdly, carrying out downsampling processing to determine a downsampled value, then further dividing the pooled feature map into equal sampling areas according to the downsampled value, taking a central point position for each sampling area, calculating pixels at the central point position by adopting bilinear interpolation to obtain pixel values, and finally taking the maximum value in the pixel values corresponding to each sampling area as the pixel value of the pooled feature map. For example, assume that the sampling value 4 is equal to four sampling areas for each 1.78×1.78 pooled feature map, each sampling area takes the center point position, the pixels at the center point position are calculated by bilinear interpolation to obtain four pixel values, and finally, the maximum value in the four pixel values is taken as the pixel value of the pooled feature map (1.78×1.78-sized area), and so on, the mapped feature map can obtain the pixel values of 49 pooled feature maps to form a 7*7-sized feature map.
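These three steps (floating-point mapping, fixed 7×7 partitioning, bilinear sampling inside each bin) correspond closely to torchvision's roi_align, which the sketch below uses to reproduce the worked example's numbers; note that roi_align averages its sample points where the text takes their maximum, so it approximates the described variant.

```python
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 256, 25, 25)             # feature map after a total stride of 32
# One 400x400 candidate region in image coordinates: (batch_idx, x1, y1, x2, y2)
rois = torch.tensor([[0.0, 0.0, 0.0, 400.0, 400.0]])

pooled = roi_align(
    feat, rois,
    output_size=(7, 7),     # fixed-scale 7x7 pooled feature map
    spatial_scale=1 / 32,   # maps 400 -> 12.5 on the feature map, floats kept
    sampling_ratio=2,       # 2x2 = 4 bilinear sample points per ~1.79x1.79 bin
)
print(pooled.shape)         # torch.Size([1, 256, 7, 7])
```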
Then the target frame is corrected and the target category recognized through the fully connected network A5. When only a preliminary judgment of the display targets of the equipment (dial, display screen, nixie tube and indicator lamp) is needed, the classification loss function L_cls, the objective function L_rpn of the region selection network, and the detection loss function L_det are selected; when the equipment panel also needs to be segmented, the classification loss function L_cls, the objective function L_rpn of the region selection network, the detection loss function L_det and the segmentation loss function L_seg are selected. Thus the loss function L can be expressed as:
L = L_rpn + L_cls + L_det (detection only), or L = L_rpn + L_cls + L_det + L_seg (with segmentation),
where the objective function L_rpn of the RPN component is the sum of a classification loss using Softmax and a regression loss using SmoothL1; L_cls denotes the classification loss using Softmax; L_det denotes the detection loss using SmoothL1; and L_seg denotes the segmentation loss using average cross-entropy.
The output category is judged to belong to either the "stuff" class or the "thing" class, and different loss functions are selected for the two cases. The deviation between the actual value and the output value of each layer is then calculated, the error of each hidden layer is obtained by the chain rule in the back-propagation algorithm, and the parameters of each layer are adjusted according to that layer's error, completing the back-propagation pass. Forward and backward propagation are iterated until the network converges, completing the training of the whole network. According to the true category, the corresponding objective function is selected: for a detection ("thing") target, classification and detection are trained; for a segmentation ("stuff") target, classification, detection and segmentation are trained.
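A small sketch of the category-dependent loss selection just described, assuming the four component losses have already been computed elsewhere for the batch; the names are illustrative.

```python
def total_loss(l_rpn, l_cls, l_det, l_seg, needs_segmentation):
    """L = L_rpn + L_cls + L_det, plus L_seg when the panel must be segmented."""
    loss = l_rpn + l_cls + l_det
    if needs_segmentation:   # "stuff"-type target: also train the segmentation branch
        loss = loss + l_seg
    return loss
```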
As shown in fig. 6, the dial, the indicator light, and the display screen on the photographed device interface are respectively identified by the target category and divided into the area frames. As shown in fig. 6, the area frame D1 corresponding to the dial is identified as the target category, the area frame D2 corresponding to the display screen is identified as the target category, and the area frame D3 corresponding to the indicator light is identified as the target category.
After the target categories are recognized and the region frames divided, the target state of each region can be further judged and recognized, including the indicated position and value of a dial pointer, the display content of a display screen, the value shown by a nixie tube, and the display color (red, yellow, green, etc.) and display state (on, off or flashing) of an indicator lamp. The recognized results are then indicated in the corresponding target regions by image-text labels, as in the embodiment shown in fig. 3.
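One plausible way to turn a recognized dial region into the numeric reading described here, sketched under the assumptions of a linear scale and a pointer that is the longest line segment in the region; the calibration angles and value range are placeholders, not values from the patent.

```python
import math
import cv2
import numpy as np

def read_dial(dial_roi, angle_min=225.0, angle_max=315.0, val_min=0.0, val_max=100.0):
    """Estimate a dial reading from its image region.

    Assumes a linear scale swept clockwise from angle_min (value val_min)
    to angle_max (value val_max), angles in math convention (y up).
    """
    gray = cv2.cvtColor(dial_roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=gray.shape[0] // 3, maxLineGap=5)
    if lines is None:
        return None
    # Longest segment is taken to be the pointer; (x1, y1) assumed at the pivot.
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    angle = math.degrees(math.atan2(y1 - y2, x2 - x1)) % 360
    sweep = (angle_min - angle) % 360       # clockwise travel from the min graduation
    total = (angle_min - angle_max) % 360   # full clockwise span of the scale
    return val_min + (sweep / total) * (val_max - val_min)
```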
As shown in fig. 4, the video labeling unit B3 receives the target frame information and identification information from the video processing unit B2 and sets the parameters of the characters to be shown on the screen accordingly, such as their color, position and style; the rows and columns of the characters and their size coefficient can also be adjusted. The configuration parameters include the on-screen foreground color, the background color, the row and column where each character display frame is located, the character coefficient, the character content, and the row and column of the date/time display.
The on-screen foreground color sets the color of the labeling characters, with options such as white and red; the background color sets the background behind the characters, with options such as transparent, white and blue. The row and column of a character display frame set where the labeling characters are superimposed in the video output signal; since several display targets need character labels, several character display frames must be configured. The character coefficient sets the size of the output characters, and the character content corresponds to an undistorted display of the recognized result. The date/time display row and column are used to superimpose the corresponding video play time.
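These configuration parameters map naturally onto a small settings record; the sketch below uses assumed field names and defaults, not names taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class OsdConfig:
    foreground_color: str = "white"    # color of the labeling characters
    background_color: str = "transparent"
    frame_row: int = 0                 # row of this character display frame
    frame_col: int = 0                 # column of this character display frame
    char_coefficient: float = 1.0      # output character size
    content: str = ""                  # recognized result, displayed undistorted
    datetime_row: int = 0              # where the video play time is superimposed
    datetime_col: int = 0

# One display target = one character display frame; several frames coexist.
dial_osd = OsdConfig(foreground_color="red", frame_row=2, frame_col=10,
                     content="pointer value: 50")
```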
In the video labeling unit B3, one path receives the decoded image sequence from the video decoding unit B1 while the other receives the target frame information and identification information from the video processing unit B2; the configured labeling characters are superimposed on the input image sequence, the image sequence with superimposed character signals is encoded to obtain a video signal with superimposed text, and the result is shown on the screen.
Furthermore, combining the recognition of the display content, monitoring events can be judged: normal and abnormal working states can be identified, and precursors of an accident can be recognized, for example frequent automatic power-off and power-on switching, which indicates an unstable-voltage problem.
Correspondingly, fig. 4 further includes an event recognition unit B4 and an alarm output unit B5: after the video processing unit B2 recognizes the working interface of the equipment, the target recognition result is input to the event recognition unit B4 for event recognition. Based on the recognition and continuous observation of the display content and state of each dial and indicator lamp on the equipment working interface, the events are closely related to the equipment working state and include fault prediction events, equipment fault events, equipment damage events, human intervention events and the like.
A fault prediction event means that the dial pointer occasionally jumps or an indicator lamp flashes briefly, and the frequency of these abnormal states gradually increases, or their interval shortens while their count grows; such precursor events are recorded and recognized to judge that a predicted fault is likely, so discovery and warning happen early and potential fault hazards can be eliminated early. An equipment fault event is recognized when the dial pointer shows an erroneous indication, or an indicator lamp goes out or blinks continuously without pause, indicating that the equipment is in an abnormal working state and an alarm prompt is needed. An equipment damage event is recognized from appearance changes such as tilting, toppling, breakage, paint loss or a blackened dial, indicating the equipment has been damaged by external force, and an alarm prompt is needed. A human intervention event is recognized when personnel activity occurs around the equipment, such as opening it for inspection, and the equipment then works abnormally, clearly differing from its working state before the activity; an alarm prompt is needed.
The event recognition method comprises: outputting the recognition results of the equipment display targets (dial, display screen, nixie tube and indicator lamp) at fixed times, and judging whether the display has changed by comparing the display contents of adjacent time intervals; when the display changes, increasing the observation frequency and determining the frequency and range of the change; and when the change frequency and change range exceed preset thresholds, outputting an alarm for human intervention.
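This timed-observation rule can be sketched as a sliding-window change counter: sample the recognition result at a fixed period, count state changes inside the window, and raise an alarm once the change frequency crosses a threshold. The period, window length and threshold below are assumed values.

```python
from collections import deque
import time

def watch_target(read_state, period_s=5.0, window=60, max_changes=8):
    """Poll one display target and alarm when its state changes too often."""
    history = deque(maxlen=window)   # recent states, one per sampling period
    changes = deque(maxlen=window)   # 1 where the state differed from the previous one
    while True:
        state = read_state()         # timed output of the recognition result
        changes.append(1 if history and state != history[-1] else 0)
        history.append(state)
        if sum(changes) >= max_changes:
            print("ALARM: change frequency exceeds threshold, human intervention needed")
            changes.clear()
        time.sleep(period_s)         # shorten the period here once changes appear
```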
Specifically, the event recognition unit B4 forms a feature record from the recognition results of the video processing unit B2 according to the defined event recognition rules; a feature record generally includes date, time, place, photographs and video clips, forming a complete evidence chain. The event recognition rules may be stored in files or as database tables. Separating video processing from event recognition as far as possible gives maximum flexibility and extensibility.
Correspondingly, fig. 4 further includes a three-dimensional scene synthesis unit B6, through which a three-dimensional virtual scene corresponding to the physical scene in the machine room can be constructed; according to the identification information output by the video processing unit B2, that information is displayed in real time on the virtual equipment in the three-dimensional virtual scene, keeping it consistent with the real-time live-action video display content.
The method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: measuring the spatial dimensions and positions of the machine room and its internal equipment, and building corresponding three-dimensional scene models of the machine-room interior and the equipment in a computer; taking the actual spatial position of each camera as a reference point, determining the camera's actual field-of-view picture under different shooting conditions; and virtually constructing the three-dimensional scene model at the same scale based on the actual field-of-view pictures, building a three-dimensional virtual scene under multiple observation conditions.
Specifically, referring to fig. 7, the method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: acquiring the position information of the first camera C1 in the machine-room space as the first reference coordinate P1; acquiring the focal length value f1, horizontal field angle α1 and vertical field angle β1 of the first camera C1, which form its observation parameters S1 = (P1, f1, α1, β1); and correspondingly obtaining the actual scene observed by the first camera C1 under the observation parameters S1. After the three-dimensional virtual scene is constructed at the same scale, the first camera C1 is reproduced at the same scale in the virtual scene for simulated display, the virtual scene under observation by the corresponding virtual camera is presented, and the virtual scene must remain consistent with the actual scene: the actual scene at the lens viewing angle must match the corresponding virtual scene.
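Since the observation parameters S1 = (P1, f1, α1, β1) fully determine a pinhole view, the matching virtual camera can be derived from them directly. The sketch below recovers the field angles from the focal length and an assumed sensor size and packs them into a virtual-camera description; the sensor dimensions and helper names are assumptions, not values from the patent.

```python
import math

def field_angles(f_mm, sensor_w_mm=6.4, sensor_h_mm=4.8):
    """Horizontal/vertical field angles of a pinhole camera, in degrees."""
    alpha = 2 * math.degrees(math.atan(sensor_w_mm / (2 * f_mm)))
    beta = 2 * math.degrees(math.atan(sensor_h_mm / (2 * f_mm)))
    return alpha, beta

def virtual_camera(p1_xyz, f1_mm):
    """Build the virtual camera matching real camera C1 at reference coordinate P1."""
    alpha1, beta1 = field_angles(f1_mm)
    return {"position": p1_xyz,   # same place in the 1:1 virtual machine room
            "fov_h_deg": alpha1,  # same horizontal field angle alpha1
            "fov_v_deg": beta1}   # same vertical field angle beta1

cam = virtual_camera((3.2, 1.5, 2.4), f1_mm=4.0)
```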
Because the three-dimensional virtual scene can be observed from many observation positions and angles, once the virtual scene corresponding to the lens viewing angle is switched or converted to a virtual scene at another viewing angle, the three-dimensional virtual scene can be fully used for simulated presentation, while the displayed values of the equipment are still rendered from the identification information output by the video processing unit B2.
Fig. 8 is a schematic diagram showing that the three-dimensional virtual scene can be presented from multiple observation positions and angles: all facilities are virtual simulations, but the values displayed by the equipment are rendered from the identification information output by the video processing unit B2, combining real-time reality with virtual observation.
As shown in fig. 9, if n cameras C1, ..., Cn (n ≥ 2) are installed in the machine room, their position information in the machine-room space is set as the corresponding n reference coordinates P1, ..., Pn, which correspond respectively to the n actual scenes under the n observation parameters S1, ..., Sn. When the three-dimensional virtual scene is built at the same scale, the actual scenes photographed by the n cameras are simulated at the same scale within it, the n virtual scenes under observation by the n virtual cameras are correspondingly presented, and the n virtual scenes must remain consistent with the n actual scenes.
Then, within the same three-dimensional virtual scene, the n virtual scenes can be spliced and synthesized; when the virtual scene corresponding to one lens viewing angle is switched or converted to a virtual scene at another viewing angle, the three-dimensional virtual scene can be fully used for simulated presentation, such as close-range simulated presentation. The displayed values of the equipment, however, still come from the video processing unit B2 recognizing the images captured by each camera and outputting the identification information. In this way, virtual display and live-action display are unified under monitoring by multiple cameras.
Based on the same inventive concept, the application also provides an enhanced display method for the monitoring picture of the equipment in the machine room, which comprises the following steps: a camera is arranged in the machine room, a display interface of equipment in the machine room is shot, a live-action video is obtained, and the live-action video is transmitted to a monitoring host; and the monitoring host performs image recognition processing on the live-action video, and adds image-text annotation display to the live-action video when a video monitoring picture is displayed.
Specifically, the monitoring host performs image recognition processing on the live-action video, including feature recognition on a working interface of equipment in a machine room, and extracting interface display data; and the information displayed by the graphic labeling comprises the interface display data.
Specifically, a three-dimensional virtual scene corresponding to the machine room is constructed, when a video monitoring picture is displayed, the three-dimensional virtual scene can be synchronously and switchably displayed, the interface display data and the three-dimensional virtual scene are fused, and virtual equipment in the three-dimensional virtual scene synchronously displays the corresponding interface display data.
The details of the method may be understood in conjunction with the foregoing description of the enhanced display system, and will not be described in detail herein.
The application discloses an enhanced display system and method for machine room equipment monitoring pictures. The system comprises a camera installed in the machine room, which photographs the display interfaces of the equipment, obtains live-action video and transmits it to a monitoring host; the monitoring host performs image recognition on the live-action video and adds image-text labels to it when the video monitoring picture is displayed. A three-dimensional virtual scene corresponding to the machine room is constructed and can be displayed synchronously and switchably when the video monitoring picture is shown, with the virtual equipment synchronously displaying the corresponding interface display data. The application can recognize the equipment display interfaces in the machine room, distinguish multiple devices photographed by cameras at different angles, accurately display the interface data of each device, provide a friendlier video monitoring display effect and enhance the user experience.
The foregoing description is only illustrative of the present application and is not intended to limit its scope; all equivalent structural changes made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the scope of protection of the application.

Claims (10)

1. The enhanced display system for the monitoring picture of the equipment in the machine room is characterized by comprising a camera, a monitoring host and a display screen, wherein the camera is arranged in the machine room, the camera shoots a working interface of the equipment in the machine room, and live-action video is obtained and transmitted to the monitoring host;
the monitoring host comprises a video decoding unit, a video processing unit and a video labeling unit, wherein the video decoding unit decodes the live-action video to obtain an image sequence; the video processing unit performs image processing on the image sequence, including area division and target recognition on a working interface of the equipment to obtain a target recognition result; and the video labeling unit superimposes the target identification result on the image sequence in a picture and text labeling mode, and then outputs the video superimposed with the picture and text labeling information to the display screen for display.
2. The enhanced display system for machine room equipment monitoring pictures according to claim 1, wherein the video processing unit comprises a neural network initial model, and the neural network initial model is trained by using sample data of a working interface of the equipment to obtain a neural network application model capable of being used for identifying the working interfaces of multiple types of equipment.
3. The enhanced display system for machine room equipment monitoring pictures according to claim 2, wherein one path of the video labeling unit receives the image sequence from the video decoding unit, the other path of the video labeling unit receives target information and identification information from the video processing unit and converts them into labeling characters, superimposes the labeling characters on the input image sequence, and then performs coding processing on the image sequence with superimposed labeling characters to obtain a video signal with superimposed characters, which is displayed through the display screen.
4. The enhanced display system for machine room equipment monitoring pictures of claim 1, wherein the monitoring host further comprises an event recognition unit and an alarm output unit,
after the video processing unit identifies the working interface of the equipment, inputting a target identification result into the event identification unit; the event recognition unit realizes event recognition based on continuous observation of the target recognition result, and when a dangerous event is recognized, the alarm output unit alarms.
5. The enhanced display system for a machine room equipment monitoring picture according to claim 1, wherein the monitoring host further comprises a three-dimensional scene synthesis unit for constructing a three-dimensional virtual scene corresponding to a physical scene in the machine room, and displaying identification information on virtual equipment in the three-dimensional virtual scene in real time according to the target identification result output by the video processing unit, so as to keep consistent with the real-scene video display content.
6. The enhanced display system for machine room equipment monitoring pictures according to claim 5, wherein the method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: acquiring the position information of a first camera C1 in the machine room space as a first reference coordinate P1; acquiring the focal length value f1, horizontal field angle α1 and vertical field angle β1 of the first camera C1, corresponding to the observation parameters S1 = (P1, f1, α1, β1) of the first camera C1; obtaining the actual scene photographed by the first camera C1 under the observation parameters S1; thereby selecting the first camera C1 and constructing it at the same scale in the three-dimensional virtual scene, correspondingly presenting the virtual scene under observation by the virtual camera, and keeping the virtual scene consistent with the actual scene.
7. The enhanced display system for machine room equipment monitoring pictures according to claim 5, wherein the method for constructing the three-dimensional virtual scene corresponding to the physical scene in the machine room comprises: installing n cameras C1, ..., Cn in the machine room, with n ≥ 2; setting the corresponding n reference coordinates P1, ..., Pn, corresponding respectively to the n actual scenes under the n observation parameters S1, ..., Sn; when the three-dimensional virtual scene is built at the same scale, simulating the actual scenes photographed by the n cameras at the same scale within the three-dimensional virtual scene, and correspondingly presenting n virtual scenes under observation by the n virtual cameras, wherein the n virtual scenes are consistent with the n actual scenes.
8. An enhanced display method for a machine room equipment monitoring picture is characterized in that,
a camera is arranged in the machine room, a display interface of equipment in the machine room is shot, a live-action video is obtained, and the live-action video is transmitted to a monitoring host;
and the monitoring host performs image recognition processing on the live-action video, and adds image-text annotation display to the live-action video when a video monitoring picture is displayed.
9. The method for enhancing display of a monitoring picture of equipment in a machine room according to claim 8, wherein the monitoring host performs image recognition processing on the live-action video, and includes performing feature recognition on a working interface of the equipment in the machine room, and extracting interface display data; and the information displayed by the graphic labeling comprises the interface display data.
10. The enhanced display method for a machine room equipment monitoring screen according to claim 9, wherein a three-dimensional virtual scene corresponding to the machine room is constructed, the three-dimensional virtual scene is synchronously switchable to be displayed when a video monitoring screen is displayed, the interface display data is fused with the three-dimensional virtual scene, and virtual equipment in the three-dimensional virtual scene synchronously displays the corresponding interface display data.
CN202311425100.4A 2023-10-31 2023-10-31 Enhanced display system and method for machine room equipment monitoring picture Active CN117156108B (en)

Priority Applications (1)

Application Number: CN202311425100.4A (granted as CN117156108B)
Priority Date: 2023-10-31
Filing Date: 2023-10-31
Title: Enhanced display system and method for machine room equipment monitoring picture

Applications Claiming Priority (1)

Application Number: CN202311425100.4A (granted as CN117156108B)
Priority Date: 2023-10-31
Filing Date: 2023-10-31
Title: Enhanced display system and method for machine room equipment monitoring picture

Publications (2)

Publication Number: CN117156108A, Publication Date: 2023-12-01
Publication Number: CN117156108B, Publication Date: 2024-03-15

Family

ID=88906589

Family Applications (1)

Application Number: CN202311425100.4A (Active, granted as CN117156108B)
Priority Date: 2023-10-31
Filing Date: 2023-10-31
Title: Enhanced display system and method for machine room equipment monitoring picture

Country Status (1)

Country Link
CN (1) CN117156108B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012018605A (en) * 2010-07-09 2012-01-26 Hitachi Systems Ltd Maintenance/monitoring system using augmented reality technique
CN104238980A (en) * 2013-06-13 2014-12-24 横河电机株式会社 Information display apparatus and information display method
CN111008567A (en) * 2019-11-07 2020-04-14 郑州大学 Driver behavior identification method
CN111091611A (en) * 2019-12-25 2020-05-01 青岛理工大学 Workshop digital twin oriented augmented reality system and method
CN112073679A (en) * 2020-05-26 2020-12-11 许继集团有限公司 Transformer substation equipment monitoring system based on three-dimensional technology and control method thereof
CN112115927A (en) * 2020-11-19 2020-12-22 北京蒙帕信创科技有限公司 Intelligent machine room equipment identification method and system based on deep learning
CN113657307A (en) * 2021-08-20 2021-11-16 北京市商汤科技开发有限公司 Data labeling method and device, computer equipment and storage medium
CN114003190A (en) * 2021-12-30 2022-02-01 江苏移动信息系统集成有限公司 Augmented reality method and device suitable for multiple scenes and multiple devices
US20220335697A1 (en) * 2021-04-18 2022-10-20 Apple Inc. Systems, Methods, and Graphical User Interfaces for Adding Effects in Augmented Reality Environments
CN116012570A (en) * 2021-10-22 2023-04-25 华为技术有限公司 Method, equipment and system for identifying text information in image

Also Published As

Publication number Publication date
CN117156108B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN107256542B (en) Gas visualization layout, apparatus and method
US7339516B2 (en) Method to provide graphical representation of Sense Through The Wall (STTW) targets
EP2509326A2 (en) Analysis of 3D video
US8982245B2 (en) Method and system for sequential viewing of two video streams
CN108141547B (en) Digitally superimposing an image with another image
KR101553273B1 (en) Method and Apparatus for Providing Augmented Reality Service
US10665034B2 (en) Imaging system, display apparatus and method of producing mixed-reality images
CN108307183A (en) Virtual scene method for visualizing and system
CN108989794B (en) Virtual image information measuring method and system based on head-up display system
JP7092615B2 (en) Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
JP2004535610A (en) System and method for robust separation of foreground and background image data for determination of the position of an object in front of a controllable display in a camera view
JP2018169831A (en) Apparatus for image comparison
CN108171116A (en) Aircraft obstacle avoidance aiding method, device and obstacle avoidance aiding system
CN114241168A (en) Display method, display device, and computer-readable storage medium
CN104703016A (en) Screen protection display method based on ambient environment
CN117156108B (en) Enhanced display system and method for machine room equipment monitoring picture
KR101692764B1 (en) Method for Providing Augmented Reality by using Virtual Point
CN114998771B (en) Display method and system for enhancing visual field of aircraft, aircraft and storage medium
KR20210032188A (en) System for measuring prevailing visibility and method thereof
JP4836878B2 (en) Image identification display device and image identification display method
CN114923581A (en) Infrared selecting device and infrared selecting method
CN114002704A (en) Laser radar monitoring method and device for bridge tunnel and medium
CN113436134A (en) Visibility measuring method of panoramic camera and panoramic camera applying same
JP6687659B2 (en) Area classifier
KR101788471B1 (en) Apparatus and method for displaying augmented reality based on information of lighting

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant