CN111325824B - Image data display method and device, electronic equipment and storage medium - Google Patents

Info

Publication number: CN111325824B
Application number: CN201910596428.XA
Authority: CN (China)
Prior art keywords: target, image data, dimensional, display area, display
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN111325824A (Chinese, zh)
Inventors: 郭慧程, 赵露唏, 池学舜, 巩浩, 池震杰, 袁坤
Current Assignee: Hangzhou Hikvision System Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Hangzhou Hikvision System Technology Co Ltd
Application filed by Hangzhou Hikvision System Technology Co Ltd; priority to CN201910596428.XA
Publication of CN111325824A; application granted; publication of CN111325824B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/10: Geometric effects
    • G06T15/20: Perspective computation
    • G06T15/205: Image-based rendering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides an image data display method and device, an electronic device, and a storage medium. The method comprises: acquiring position information and view field information of a target camera in a target monitoring scene; determining, according to the position information and view field information of the target camera, a display area for the camera's monitoring area in a pre-established three-dimensional scene display image, where the three-dimensional scene display image is a three-dimensional image of the target monitoring scene; and acquiring the image data captured by the target camera, then transcoding and three-dimensionally rendering the image data to complete its three-dimensional display in the display area. By displaying the image data captured by the camera in the corresponding display area of the three-dimensional scene display image, the method increases the user's perception of the spatial position of the monitored scene.

Description

Image data display method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image data display method, an image data display device, an electronic device, and a storage medium.
Background
Surveillance systems are among the most widely deployed systems in the security field and play an important role in safeguarding public safety. In related monitoring technologies, image data collected by a camera is displayed directly on a screen; this approach, however, does not help the user perceive the spatial position of the monitored scene.
Disclosure of Invention
Embodiments of the present invention aim to provide an image data display method and device, an electronic device, and a storage medium, so as to enhance the user's perception of the spatial position of a monitored scene. The specific technical solution is as follows:
in a first aspect, an embodiment of the present invention provides an image data display method, including:
acquiring position information and view field information of a target camera in a target monitoring scene;
according to the position information and the view field information of the target camera, determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
and acquiring image data acquired by the target camera, and performing transcoding processing and three-dimensional rendering on the image data to complete three-dimensional display of the image data in the display area.
Optionally, the method further comprises:
acquiring a target display area selection instruction;
according to the target display area selection instruction, determining a target display area in the display area;
determining target image data corresponding to the target display area in the image data;
the transcoding and three-dimensional rendering of the image data to complete three-dimensional display of the image data in the display area includes:
and performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
Optionally, the transcoding and three-dimensional rendering the target image data to complete three-dimensional display of the target image data includes:
setting the Alpha value of the target image data within the image data to 1, and setting the Alpha value of all other data within the image data to 0, to obtain cropped image data;
converting the cropped image data into Texture data;
rendering the Texture data through a preset three-dimensional engine to complete three-dimensional display of the target image data in the target display area.
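The Alpha-based cropping step above can be sketched as follows (a hypothetical helper for illustration, not the patent's implementation): pixels inside the target region keep Alpha = 1 (opaque), all other pixels get Alpha = 0 (fully transparent), so only the target region is visible once the frame is rendered with a semi-transparent shader.

```python
def crop_by_alpha(rgb_frame, target_box):
    """rgb_frame: list of rows of (r, g, b) tuples.
    target_box: (x0, y0, x1, y1), inclusive pixel bounds of the target region.
    Returns RGBA rows with Alpha = 1.0 inside the box and 0.0 outside."""
    x0, y0, x1, y1 = target_box
    rgba = []
    for y, row in enumerate(rgb_frame):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            alpha = 1.0 if (x0 <= x <= x1 and y0 <= y <= y1) else 0.0
            out_row.append((r, g, b, alpha))
        rgba.append(out_row)
    return rgba
```

The resulting RGBA frame is what would then be handed to the Texture-conversion step.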
Optionally, the method further comprises:
acquiring the angle parameters at the time the target camera captured the image data, to obtain target angle parameters, and acquiring the region coordinates of the target image data within the image data, to obtain target region coordinates;
and storing, in association, the target angle parameters of the target camera, the position information of the target camera, the coordinates of the target display area, and the target region coordinates as a target video projection preset position.
Optionally, the method further comprises:
when a preview instruction for the target video projection preset position is received, acquiring video data captured by the target camera at the target angle parameters and position information recorded in the target video projection preset position;
extracting target video data from the video data according to the region coordinates in the target video projection preset position;
and transcoding and three-dimensionally rendering the target video data to complete its three-dimensional display in the target display area identified by the target video projection preset position.
In a second aspect, an embodiment of the present invention provides an image data display apparatus, including:
the information acquisition module is used for acquiring the position information and the view field information of the target camera in the target monitoring scene;
the display area determining module is used for determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
and the image rendering module is used for acquiring the image data acquired by the target camera, performing transcoding processing and three-dimensional rendering on the image data, and completing three-dimensional display of the image data in the display area.
Optionally, the apparatus further includes:
the selection instruction acquisition module, configured to acquire a target display area selection instruction;
the target display area determining module is used for determining a target display area in the display area according to the target display area selection instruction;
the target image data determining module is used for determining target image data corresponding to the target display area in the image data;
the image rendering module is specifically configured to:
and performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
Optionally, the image rendering module includes:
a numerical value setting submodule, configured to set an Alpha value of the target image data in the image data to 1, and set Alpha values of other data except the target image data in the image data to 0, so as to obtain clipped image data;
a Texture setting sub-module for converting the cropped image data into Texture data;
and the three-dimensional rendering sub-module is used for rendering the Texture data through a preset three-dimensional engine so as to complete three-dimensional display of the target image data in the target display area.
Optionally, the apparatus further includes:
the associated parameter acquisition module, configured to acquire the angle parameters at the time the target camera captured the image data to obtain target angle parameters, and to acquire the region coordinates of the target image data within the image data to obtain target region coordinates;
and the preset position storage module, configured to store, in association, the target angle parameters of the target camera, the position information of the target camera, the coordinates of the target display area, and the target region coordinates as a target video projection preset position.
Optionally, the apparatus further includes: a video preview module for:
when a preview instruction for the target video projection preset position is received, acquire video data captured by the target camera at the target angle parameters and position information recorded in the target video projection preset position;
extract target video data from the video data according to the region coordinates in the target video projection preset position;
and transcode and three-dimensionally render the target video data to complete its three-dimensional display in the target display area identified by the target video projection preset position.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement any one of the image data display methods described in the first aspect when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image data display method of any one of the first aspect.
The image data display method and device, electronic device, and storage medium provided by the embodiments of the invention acquire position information and view field information of a target camera in a target monitoring scene; determine, according to that position information and view field information, a display area for the camera's monitoring area in a pre-established three-dimensional scene display image, the three-dimensional scene display image being a three-dimensional image of the target monitoring scene; and acquire the image data captured by the target camera, transcoding and three-dimensionally rendering it to complete its three-dimensional display in the display area. Displaying the image data captured by the camera in the corresponding display area of the three-dimensional scene display image increases the user's perception of the spatial position of the monitored scene. Of course, no single product or method practicing the invention necessarily achieves all of the advantages described above at once.
Drawings
To more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are clearly only some embodiments of the invention; a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an image data display method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a top view of a target camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a target camera head-up according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a bottom view of a target camera according to an embodiment of the present invention;
FIG. 5 is a flowchart of an image data display method according to an embodiment of the present invention;
fig. 6 is a first real-scene diagram of an image data display method according to an embodiment of the present invention;
fig. 7 is a second real-scene diagram of an image data display method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a target to-be-displayed area in an image data display method according to an embodiment of the present invention;
fig. 9 is a third real-scene diagram of an image data display method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an image data display device according to an embodiment of the invention;
fig. 11 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the prior art, two-dimensional images captured by a camera are displayed directly on a screen for the user to watch. This makes it difficult for the user to perceive where the camera's monitoring area lies within the monitored scene, weakening the user's sense of the scene's spatial layout.
In view of this, an embodiment of the present invention provides an image data displaying method, referring to fig. 1, the method includes:
s101, acquiring position information and view field information of a target camera in a target monitoring scene.
The image data display method in the embodiment of the invention can be realized through a display system, and the display system is any system capable of realizing the image data display method in the embodiment of the invention. For example:
the display system may be an apparatus comprising: a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface are connected through a bus and complete communication; the memory stores executable program code; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the image data presentation method of the embodiment of the present invention.
The presentation system may also be an application program for executing the image data presentation method according to the embodiment of the present invention at runtime.
The presentation system may also be a storage medium for storing executable code for performing the image data presentation method of the embodiment of the present invention.
The display system acquires the position information and view field information of the target camera in the target monitoring scene. The position information may be the camera's three-dimensional coordinates in the target monitoring scene; the view field information includes the camera's field angle (horizontal and vertical field angles) and orientation angle (horizontal and vertical orientation angles).
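The parameters gathered in this step can be sketched as a simple record (field names are illustrative, not from the patent):

```python
from dataclasses import dataclass


@dataclass
class CameraInfo:
    """Position and view field information of one target camera."""
    position: tuple   # (x, y, z) coordinates in the target monitoring scene
    fov_h: float      # horizontal field angle, degrees
    fov_v: float      # vertical field angle, degrees
    orient_h: float   # horizontal orientation angle, degrees
    orient_v: float   # vertical orientation angle, degrees


# Example: a camera mounted 3.5 units up, looking roughly north and slightly down.
cam = CameraInfo((12.0, 3.5, -7.0), 60.0, 40.0, 90.0, -10.0)
```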
S102, determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene.
The three-dimensional scene display image is a three-dimensional image established in advance according to the target monitoring scene. For example, the layout of the target monitoring scene is acquired through unmanned aerial vehicle oblique photography technology or radar technology, three-dimensional modeling software such as 3DMax or Maya is utilized according to the layout of the target monitoring scene, and a three-dimensional image of the target monitoring scene is drawn, for example, the three-dimensional image may be a CAD (Computer Aided Design ) three-dimensional image. The monitoring area of the target camera refers to a three-dimensional area of a real scene that can be photographed by the target camera. The display area is an area corresponding to the monitoring area of the target camera in the three-dimensional scene display image. The display system determines the position of the target camera in the three-dimensional scene display image according to the position information of the target camera, and determines the corresponding area of the monitoring area of the target camera in the three-dimensional scene display image according to the field-of-view information of the target camera, and the corresponding area is used as a display area.
And S103, acquiring the image data acquired by the target camera, and performing transcoding processing and three-dimensional rendering on the image data to finish three-dimensional display of the image data in the display area.
In three-dimensional rendering, two elements are required to render an object on a computer: a mesh and a material. The mesh carries the object's shape outline, its UV (texture map) coordinates, and so on; the material determines how the mesh is displayed, and consists of a texture and a shader. The texture is the object's surface appearance; the shader is an editable program, based on a graphics library such as OpenGL (Open Graphics Library) or DirectX, that changes the effect the material displays, for example: semi-transparency, bloom, specular highlights, and so on.
Three-dimensional display means rendering the image in three dimensions before displaying it, so that the viewer perceives a three-dimensional visual effect; one possible three-dimensional display effect is shown in fig. 9. The display system decodes the image data captured by the target camera, then transcodes and three-dimensionally renders the decoded data, so that the rendered image data appears in the display area of the three-dimensional scene display image. For example: image data captured by the target camera is obtained through a streaming interface; the data is decoded by the VAG playback library into YUV data; the YUV stream is converted into an RGB (Red Green Blue) stream, yielding the RGB data of each video frame; the Texture data converted from that RGB data is set as the texture of the visible mesh; and the texture material is rendered on the GPU (Graphics Processing Unit) through a semi-transparent shader based on OpenGL or DirectX, so that the images captured by the target camera are displayed within the three-dimensional scene display image.
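The color-space conversion in this pipeline can be illustrated per pixel as below. This is only a sketch of the YUV-to-RGB math: the actual decoding is done by the playback library, and the coefficients here assume full-range BT.601, which the patent does not specify.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel to RGB.
    y in [0, 255]; u and v in [0, 255], with 128 meaning zero chroma."""
    c, d, e = y, u - 128, v - 128
    r = c + 1.402 * e
    g = c - 0.344136 * d - 0.714136 * e
    b = c + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Applying this to every pixel of a decoded YUV frame yields the RGB stream that is then given an Alpha channel and uploaded as a Texture.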
In this embodiment, displaying the image data captured by the camera in the corresponding display area of the three-dimensional scene display image increases the user's perception of the spatial position of the monitored scene.
Optionally, the determining, according to the position information and the field information of the target camera, a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image includes:
and calculating three-dimensional coordinates of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, and taking an area represented by the three-dimensional coordinates as a display area.
In this embodiment, the monitoring area of the target camera can be mapped to a quadrilateral cross-section in the three-dimensional scene display image. The camera coordinates of the target camera in the three-dimensional scene display image are determined from its position information and taken as the monitoring point; a quadrilateral planar mesh perpendicular to the viewing direction is then generated from the field angle and orientation angle in the view field information.
Quadrilateral vertex coordinate algorithm: trigonometric function calculation in three-dimensional coordinate system
In the three-dimensional scene display image, a left-handed three-dimensional coordinate system is constructed with true north as the x axis, true west as the z axis, and up as the y axis. The three-dimensional coordinates of the monitoring point are (camposx, camposy, camposz); the distance from the quadrilateral to the camera is th. The real-time field angles of the target camera are the horizontal field angle fangleH and the vertical field angle fangleV; its initial rotation angles are the horizontal initial angle yangleH and the vertical initial angle yangleV; its real-time rotation angles are the horizontal rotation angle cangleH and the vertical rotation angle cangleV. The horizontal orientation angle of the target camera can therefore be expressed as yangleH+cangleH, and the vertical orientation angle as yangleV+cangleV.
Based on the relation between the camera's vertical field angle and vertical orientation angle, three cases arise:
the target camera looks down (top view), see fig. 2: yangleV+cangleV > fangleV/2;
the target camera looks straight ahead (head-up), see fig. 3: -fangleV/2 <= yangleV+cangleV <= fangleV/2;
the target camera looks up (bottom view), see fig. 4: yangleV+cangleV < -fangleV/2.
Taking the top-view case as an example, the four vertex coordinates of the quadrilateral mesh, denoted V1, V2, V3 and V4, are:

V1: x = camposx + radiusLow*cos(cangleH+yangleH-90°-fangleH/2); y = camposy - yLow; z = camposz - radiusLow*sin(cangleH+yangleH-90°-fangleH/2)

V2: x = camposx + radiusLow*cos(cangleH+yangleH-90°+fangleH/2); y = camposy - yLow; z = camposz - radiusLow*sin(cangleH+yangleH-90°+fangleH/2)

V3: x = camposx + radiusUp*cos(cangleH+yangleH-90°+fangleH/2); y = camposy - yUp; z = camposz - radiusUp*sin(cangleH+yangleH-90°+fangleH/2)

V4: x = camposx + radiusUp*cos(cangleH+yangleH-90°-fangleH/2); y = camposy - yUp; z = camposz - radiusUp*sin(cangleH+yangleH-90°-fangleH/2)

where radius = th/cos(fangleV/2), radiusUp = radius*cos(cangleV+yangleV-fangleV/2), radiusLow = radius*cos(cangleV+yangleV+fangleV/2), yUp = radius*sin(cangleV+yangleV-fangleV/2), and yLow = radius*sin(cangleV+yangleV+fangleV/2).
The vertex coordinates for the head-up and bottom-view cases are computed similarly to the top-view case and can be derived from the same trigonometric relations. The region bounded by the four vertices of the quadrilateral is taken as the display area.
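The top-view vertex formulas above can be sketched in code as follows (angles in degrees; `angleH` and `angleV` stand for the orientation sums cangleH+yangleH and cangleV+yangleV, and the function name is illustrative):

```python
import math


def quad_vertices(campos, th, fangleH, fangleV, angleH, angleV):
    """Vertices V1..V4 of the quadrilateral mesh for the top-view case.
    campos: (camposx, camposy, camposz); th: distance from the quad to the camera;
    fangleH/fangleV: horizontal/vertical field angles, degrees;
    angleH/angleV: horizontal/vertical orientation angles, degrees."""
    camposx, camposy, camposz = campos
    radius = th / math.cos(math.radians(fangleV / 2))
    radiusUp = radius * math.cos(math.radians(angleV - fangleV / 2))
    radiusLow = radius * math.cos(math.radians(angleV + fangleV / 2))
    yUp = radius * math.sin(math.radians(angleV - fangleV / 2))
    yLow = radius * math.sin(math.radians(angleV + fangleV / 2))

    def vert(r, ydrop, half):
        # x = camposx + r*cos(angleH - 90° + half); z = camposz - r*sin(...)
        a = math.radians(angleH - 90.0 + half)
        return (camposx + r * math.cos(a), camposy - ydrop, camposz - r * math.sin(a))

    h = fangleH / 2
    return (vert(radiusLow, yLow, -h),   # V1
            vert(radiusLow, yLow, +h),   # V2
            vert(radiusUp, yUp, +h),     # V3
            vert(radiusUp, yUp, -h))     # V4
```

For a camera at (0, 10, 0) facing angleH = 90° and tilted down by angleV = 30°, V1/V2 share the lower y value (camposy - yLow) and V3/V4 the upper one, matching the top-view geometry of fig. 2.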
The quadrilateral with vertices V1, V2, V3 and V4 is split into two triangular faces with vertices (V1, V2, V3) and (V2, V3, V4), generating a quadrilateral mesh in the three-dimensional coordinate system. A streaming-media callback is set; the image data captured by the target camera is obtained through the streaming interface and decoded by the VAG playback library into YUV data; the YUV stream is converted into an RGB stream, an Alpha channel is added, and the data is encoded as RGBA32 to produce a Texture supported by the three-dimensional engine. This data is then rendered onto the quadrilateral mesh as a texture using the OpenGL or DirectX graphics library, and the Alpha value is adjusted to set the transparency of the image data. Because the quadrilateral plane is the camera's visual cross-section, rendering the image data on it fuses the three-dimensional scene display image with the image data captured by the target camera. YUV, RGB and RGBA are all image encoding formats.
In this embodiment, the image data captured in real time by the target camera is rendered onto the quadrilateral mesh at the cross-section of the camera's three-dimensional field of view within the three-dimensional scene display image of the target monitoring scene; together with the visualization of other monitoring data, this forms a three-dimensional visual security monitoring platform that increases the user's perception of the spatial position of the monitored scene. The method is also simple: the three-dimensional field of view and the video projection area can be determined merely by acquiring the camera's position information and view field information in real time.
Optionally, the image data display method of the embodiment of the present invention further includes:
step one, acquiring angle parameters when the target camera acquires the image data, and acquiring coordinates of the display area in the three-dimensional scene display image.
The angle parameter is the shooting angle of the camera; for example, when the camera is a dome camera (a PTZ camera), the angle parameter is its PTZ (Pan/Tilt/Zoom) coordinates.
Step two, storing, in association, the angle parameters of the target camera, the position information of the target camera, and the coordinates of the display area as a first video projection preset position.
Step three, when a preview instruction for the first video projection preset position is received, acquiring the first video data captured by the target camera at the angle parameters and position information recorded in the first video projection preset position.
Step four, transcoding and three-dimensionally rendering the first video data to complete its three-dimensional display in the display area identified by the first video projection preset position.
Since a camera's position generally does not change, the position information can be replaced by a camera identifier, such as the camera number, when recording a video projection preset position. A bullet (fixed) camera cannot rotate its orientation angle, so a bullet camera usually corresponds to only one display area. A dome camera with a PTZ (Pan/Tilt/Zoom) head, by contrast, can rotate its orientation angle, so when saving a video projection preset position the correspondence between the dome camera's angle parameters and the display area must be recorded.
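A possible shape for such a preset-position record is sketched below; the field names and the in-memory store are assumptions for illustration, not the patent's schema. The camera identifier stands in for the fixed position information, and the PTZ angles are stored alongside the display-area coordinates so a later preview can restore the same view.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProjectionPreset:
    """One video projection preset position (illustrative field names)."""
    camera_id: str        # stands in for the camera's fixed position information
    pan: float            # PTZ angle parameters at capture time
    tilt: float
    zoom: float
    display_area: tuple   # vertex coordinates of the display area in the 3-D scene


_presets = {}  # name -> ProjectionPreset


def save_preset(name, preset):
    _presets[name] = preset


def load_preset(name):
    return _presets[name]


# Example: record the view of camera cam-01 over a quadrilateral display area.
save_preset("gate", ProjectionPreset("cam-01", 30.0, -10.0, 2.0,
                                     ((0, 0, 0), (4, 0, 0), (4, 3, 0), (0, 3, 0))))
```

On preview, `load_preset` returns everything needed to re-aim the dome camera and re-project its video into the recorded display area.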
In this embodiment, the correspondence between the target camera's position information and angle parameters and the display area in the three-dimensional scene display image is recorded; in later use, the display area can be determined directly from this correspondence given the position information and view field information.
Optionally, referring to fig. 5, the image data displaying method according to the embodiment of the present invention further includes:
s501, acquiring a target display area selection instruction.
The target display area selection instruction may be input by the user and contains the location information of the target display area. For example, as shown in figs. 6 and 7, in video projection mode the projected video picture, i.e. the image data, is rendered semi-transparent, and the user manually marks a target frame for the target display area in the display area through the operation interface, thereby determining the target display area's location information.
S502, determining a target display area in the display areas according to the target display area selection instruction.
The display system determines the target display area within the display area according to the location information carried in the target display area selection instruction. For example, as shown in fig. 7, the area inside the target frame of the target display area is taken as the target display area.
S503, determining target image data corresponding to the target display area in the image data.
Although the video projection picture and the display area selection are both performed in a three-dimensional coordinate system, the image data to be displayed and the display area lie on the same plane in three-dimensional space, and the picture of the image data to be displayed is a regular quadrangle. The distance from a point inside the quadrangle to the line containing each side can therefore be calculated under three-dimensional coordinates, for example by the law of cosines, from which the relative coordinates of the point can be obtained.
For example, as shown in fig. 8: v (V) 1 、V 2 、V 3 V (V) 4 Is a video projection picture, i.e. a picture of image data, A, B, C, D is the area represented by the target image data, points A (x, y, z) to a straight line V 1 V 3 Divided by V 1 V 3 And straight line V 2 V 4 The distance between the two coordinates is used for obtaining transverse relative coordinates, and the similar longitudinal relative coordinates of the point A can be determined, so that the two-dimensional coordinates of the point A in the image data can be determined.
The transcoding and three-dimensional rendering of the image data to complete three-dimensional display of the image data in the display area includes:
S504, performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
The display system displays the target image data acquired by the target camera in the target display area by means of image superposition; the display effect is shown in fig. 9.
In the embodiment of the invention, the video projection picture is a plane in three-dimensional space, and target image data can be selected for display according to the target display area selection instruction; that is, the video projection can be clipped. Through simple interaction, the user can clip the three-dimensional video picture and select the key area of interest within a complex monitoring environment, meeting a variety of user demands.
Optionally, the transcoding and three-dimensional rendering of the target image data to complete three-dimensional display of the target image data in the target display area includes:
Step one, the Alpha value of the target image data in the image data is set to 1, and the Alpha value of data other than the target image data in the image data is set to 0, obtaining clipped image data.
And step two, converting the clipped image data into Texture data.
And thirdly, rendering the Texture data through a preset three-dimensional engine to complete three-dimensional display of the target image data in the target display area.
RGB data of a video frame in the image data acquired by the target camera is obtained and scanned row by row and column by column to determine the area of the target image data. An Alpha channel is then added to the RGB data so that RGBA coding is adopted: the Alpha value of the target image data is set to 1 and the Alpha value of the other areas is set to 0. The frame data is returned in array form according to the RGBA values of the picture pixels, the Texture converted from the returned data is set as the Texture of the visible-region mesh, and the Texture material is rendered on the GPU with a semitransparent shader based on OpenGL/DirectX, yielding the video projection of the target image data.
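The Alpha-masking step can be sketched with NumPy. This is a hedged illustration: the row/column scan that locates the target area is replaced here by a precomputed boolean mask, and the Texture upload and shader rendering are only indicated in a comment.

```python
import numpy as np

def clip_with_alpha(rgb_frame, region_mask):
    """Convert an RGB frame to RGBA, keeping only the target region visible.

    rgb_frame: (H, W, 3) uint8 array; region_mask: (H, W) bool array that is
    True inside the target image data (an illustrative stand-in for the
    row/column scan described above).
    """
    # Alpha 255 (fully opaque) inside the target region, 0 (transparent) elsewhere
    alpha = np.where(region_mask, 255, 0).astype(np.uint8)
    rgba = np.dstack([rgb_frame, alpha])
    # The RGBA array would next be uploaded as a Texture and drawn with a
    # semitransparent shader (OpenGL/DirectX); that step is outside this sketch.
    return rgba


frame = np.full((4, 6, 3), 128, dtype=np.uint8)  # dummy 4x6 gray frame
mask = np.zeros((4, 6), dtype=bool)
mask[1:3, 2:5] = True  # hypothetical target region
rgba = clip_with_alpha(frame, mask)
```

Because only the Alpha channel changes, the clipping costs one mask and one channel-stack per frame, which keeps the approach simple and convenient, as the text notes.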
In the embodiment of the invention, the data other than the target image data in the image data is made transparent by setting its Alpha value, so that only the target image data is displayed, which is simple and convenient.
Optionally, the method further comprises:
And step A, acquiring the angle parameters used when the target camera acquires the image data to obtain target angle parameters, and acquiring the region coordinates of the target image data in the image data to obtain target region coordinates.
And step B, the target angle parameter of the target camera, the position information of the target camera, the coordinates of the target display area and the target region coordinates are stored in an associated manner as the target video projection preset position.
The coordinates of the target display area are its coordinates in the three-dimensional scene display image. Since the position of the camera generally does not change, the position information can be replaced by an identifier of the camera, for example its number, when the video projection preset position is recorded. Because a fixed (bullet) camera cannot rotate its orientation angle, a target camera that is a fixed camera generally corresponds to only one display area. A dome camera with a PTZ gimbal can rotate its orientation angle, so when the video projection preset position is saved, the correspondence between the dome camera's angle information and the target display area must also be recorded.
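A preset position of this kind can be modeled as a small record. The field names and the (x, y, w, h) encoding of the region coordinates are assumptions for illustration; the patent only specifies which four pieces of information are stored in association.

```python
from dataclasses import dataclass, asdict
from typing import List, Optional, Tuple

@dataclass
class VideoProjectionPreset:
    """Hypothetical record for one target video projection preset position."""
    camera_id: str  # stands in for the camera's position information
    angle_params: Optional[Tuple[float, float, float]]  # (pan, tilt, zoom); None for a fixed camera
    display_area: List[Tuple[int, int, int]]  # target display area corners in the 3D scene image
    region_coords: Tuple[int, int, int, int]  # target image data region in the frame: (x, y, w, h)


preset = VideoProjectionPreset(
    camera_id="dome-02",
    angle_params=(30.0, -10.0, 2.0),
    display_area=[(5, 0, 0), (9, 0, 0), (9, 3, 0), (5, 3, 0)],
    region_coords=(100, 80, 640, 360),
)
print(asdict(preset)["camera_id"])  # prints "dome-02"
```

Storing the four fields together is what lets a later preview instruction restore both the camera pose and the exact crop without re-running the selection interaction.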
Optionally, the method further comprises:
Step one, when a preview instruction for the target video projection preset position is acquired, acquiring the video data collected by the target camera under the target angle parameter and position information marked by the preset position.
And step two, acquiring target video data from the video data according to the region coordinates in the target video projection preset position.
And step three, performing transcoding processing and three-dimensional rendering on the target video data to finish three-dimensional display of the target video data in the target display area marked by the target video projection preset position.
In the embodiment of the invention, the correspondence between the target camera's position information, its angle parameters, the region coordinates of the target image data within the image data, and the target display region in the three-dimensional scene display image is recorded; in subsequent use the target display region can be determined directly from the preview instruction, which is convenient and quick.
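The preview flow of steps one to three can be sketched as follows. `fetch_frame` and `render` are hypothetical stand-ins for the camera acquisition and the transcoding/3D-rendering steps, and the dictionary layout of the preset mirrors the stored association described above.

```python
import numpy as np

def preview_preset(preset, fetch_frame, render):
    """Sketch of the preview flow; fetch_frame and render are assumed callables.

    fetch_frame(camera_id, angle_params) returns the full frame the target
    camera captures at the stored angle; render(sub_frame, display_area)
    stands in for the transcoding + three-dimensional rendering step.
    """
    frame = fetch_frame(preset["camera_id"], preset["angle_params"])
    x, y, w, h = preset["region_coords"]
    target = frame[y:y + h, x:x + w]        # crop the target video data
    render(target, preset["display_area"])  # display in the stored target area
    return target


# Stub camera and renderer for illustration:
preset = {
    "camera_id": "dome-02",
    "angle_params": (30.0, -10.0, 2.0),
    "region_coords": (2, 1, 3, 2),
    "display_area": [(5, 0, 0), (9, 0, 0), (9, 3, 0), (5, 3, 0)],
}
rendered = []
target = preview_preset(
    preset,
    fetch_frame=lambda cam, ang: np.arange(6 * 8 * 3, dtype=np.uint8).reshape(6, 8, 3),
    render=lambda sub, area: rendered.append((sub.shape, area)),
)
```

The caller never recomputes the display area or the crop: both come straight out of the stored preset, which is the convenience the paragraph above describes.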
The embodiment of the invention also provides an image data display device, referring to fig. 10, the device comprises:
an information acquisition module 1001, configured to acquire position information and field information of a target camera in a target monitoring scene;
the display area determining module 1002 is configured to determine, according to the position information and the field of view information of the target camera, a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image, where the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
and an image rendering module 1003, configured to acquire image data acquired by the target camera, and perform transcoding processing and three-dimensional rendering on the image data to complete three-dimensional display of the image data in the display area.
Optionally, the apparatus further includes:
the selection instruction acquisition module is used for acquiring a target display area selection instruction;
the target display area determining module is used for determining a target display area in the display area according to the target display area selection instruction;
the target image data determining module is used for determining target image data corresponding to the target display area in the image data;
the image rendering module 1003 is specifically configured to:
and performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
Optionally, the image rendering module 1003 includes:
a numerical value setting submodule, configured to set an Alpha value of the target image data in the image data to 1, and set Alpha values of other data except the target image data in the image data to 0, so as to obtain clipped image data;
a Texture setting sub-module for converting the clipped image data into Texture data;
and the three-dimensional rendering sub-module is used for rendering the Texture data through a preset three-dimensional engine so as to complete three-dimensional display of the target image data in the target display area.
Optionally, the apparatus further includes:
the related parameter acquisition module is used for acquiring angle parameters when the target camera acquires the image data to obtain target angle parameters, and acquiring region coordinates of the target image data in the image data to obtain target region coordinates;
and the preset position storage module is used for storing the target angle parameter of the target camera, the position information of the target camera, the coordinates of the target display area and the coordinates of the target area in an associated manner as the target video projection preset position.
Optionally, the apparatus further includes: a video preview module for:
when a preview instruction for a target video projection preset position is acquired, acquiring the video data collected by the target camera under the target angle parameters and position information marked by the preset position;
acquiring target video data from the video data according to the region coordinates in the target video projection preset position;
and performing transcoding processing and three-dimensional rendering on the target video data to finish three-dimensional display of the target video data in the target display area marked by the target video projection preset position.
An embodiment of the present invention provides an electronic device, referring to fig. 11, including a processor 1101 and a memory 1102;
the memory 1102 is used for storing a computer program;
the processor 1101 is configured to execute the program stored in the memory 1102, and implement the following steps:
acquiring position information and view field information of a target camera in a target monitoring scene;
determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
and acquiring image data acquired by the target camera, and performing transcoding processing and three-dimensional rendering on the image data to complete three-dimensional display of the image data in the display area.
In the embodiment of the invention, the image data acquired by the camera is displayed in the corresponding display area of the three-dimensional scene display image, which enhances the user's perception of the spatial position of the monitored scene.
Optionally, the processor 1101 is configured to execute the program stored in the memory 1102, and further implement any one of the image data display methods.
Optionally, the electronic device further includes: a communication interface and a communication bus, wherein the processor 1101, the communication interface, and the memory 1102 perform communication with each other through the communication bus.
The communication bus mentioned above for the electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The Memory may include random access Memory (Random Access Memory, RAM) or may include Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the following steps when being executed by a processor:
acquiring position information and view field information of a target camera in a target monitoring scene;
determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
and acquiring image data acquired by the target camera, and performing transcoding processing and three-dimensional rendering on the image data to complete three-dimensional display of the image data in the display area.
In the embodiment of the invention, the image data acquired by the camera is displayed in the corresponding display area of the three-dimensional scene display image, which enhances the user's perception of the spatial position of the monitored scene.
Optionally, when the computer program is executed by the processor, any of the image data display methods described above may also be implemented.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made from one to another, and each embodiment focuses on its differences from the others. In particular, the apparatus, electronic device and storage medium embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, see the description of the method embodiments.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention are included in the protection scope of the present invention.

Claims (10)

1. A method of displaying image data, the method comprising:
acquiring position information and view field information of a target camera in a target monitoring scene;
according to the position information and the view field information of the target camera, determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
acquiring image data acquired by the target camera, and performing transcoding processing and three-dimensional rendering on the image data to complete three-dimensional display of the image data in the display area;
acquiring a target display area selection instruction;
according to the target display area selection instruction, determining a target display area in the display area;
determining target image data corresponding to the target display area in the image data;
acquiring angle parameters when the target camera acquires the image data, obtaining target angle parameters, and acquiring region coordinates of the target image data in the image data, thereby obtaining target region coordinates;
and storing the target angle parameter of the target camera, the position information of the target camera, the coordinates of the target display area and the coordinates of the target area in an associated manner as a target video projection preset position.
2. The method of claim 1, wherein
the transcoding and three-dimensional rendering of the image data to complete three-dimensional display of the image data in the display area includes:
and performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
3. The method of claim 2, wherein transcoding and three-dimensional rendering the target image data to complete three-dimensional display of the target image data in the target presentation area comprises:
setting Alpha value of the target image data in the image data to be 1, and setting Alpha value of other data except the target image data in the image data to be 0 to obtain clipped image data;
converting the cropped image data into Texture data;
rendering the Texture data through a preset three-dimensional engine to complete three-dimensional display of the target image data in the target display area.
4. The method according to claim 1, wherein the method further comprises:
when a preview instruction for the target video projection preset position is acquired, acquiring the video data collected by the target camera under the target angle parameter and position information marked by the preset position;
acquiring target video data from the video data according to the region coordinates in the target video projection preset position;
and performing transcoding processing and three-dimensional rendering on the target video data to finish three-dimensional display of the target video data in a target display area marked by the target video projection preset position.
5. An image data presentation device, the device comprising:
the information acquisition module is used for acquiring the position information and the view field information of the target camera in the target monitoring scene;
the display area determining module is used for determining a display area of a monitoring area of the target camera in a pre-established three-dimensional scene display image according to the position information and the view field information of the target camera, wherein the three-dimensional scene display image is a three-dimensional image of the target monitoring scene;
the image rendering module is used for acquiring image data acquired by the target camera, performing transcoding processing and three-dimensional rendering on the image data so as to complete three-dimensional display of the image data in the display area;
the selection instruction acquisition module is used for acquiring a target display area selection instruction;
the target display area determining module is used for determining a target display area in the display area according to the target display area selection instruction;
the target image data determining module is used for determining target image data corresponding to the target display area in the image data;
the associated parameter acquisition module is used for acquiring angle parameters when the target camera acquires the image data to obtain target angle parameters, and acquiring region coordinates of the target image data in the image data to obtain target region coordinates;
and the preset position storage module is used for storing the target angle parameter of the target camera, the position information of the target camera, the coordinates of the target display area and the coordinates of the target area in an associated manner as a target video projection preset position.
6. The apparatus of claim 5, wherein
the image rendering module is specifically configured to:
and performing transcoding processing and three-dimensional rendering on the target image data to finish three-dimensional display of the target image data in the target display area.
7. The apparatus of claim 6, wherein the image rendering module comprises:
a numerical value setting submodule, configured to set an Alpha value of the target image data in the image data to 1, and set Alpha values of other data except the target image data in the image data to 0, so as to obtain clipped image data;
a Texture setting sub-module for converting the clipped image data into Texture data;
and the three-dimensional rendering sub-module is used for rendering the Texture data through a preset three-dimensional engine so as to complete three-dimensional display of the target image data in the target display area.
8. The apparatus of claim 5, wherein the apparatus further comprises: a video preview module for:
when a preview instruction for the target video projection preset position is acquired, acquiring the video data collected by the target camera under the target angle parameter and position information marked by the preset position;
acquiring target video data from the video data according to the region coordinates in the target video projection preset position;
and performing transcoding processing and three-dimensional rendering on the target video data to finish three-dimensional display of the target video data in a target display area marked by the target video projection preset position.
9. An electronic device, comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of any of claims 1-4 when executing a program stored on the memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any of claims 1-4.
CN201910596428.XA 2019-07-03 2019-07-03 Image data display method and device, electronic equipment and storage medium Active CN111325824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910596428.XA CN111325824B (en) 2019-07-03 2019-07-03 Image data display method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910596428.XA CN111325824B (en) 2019-07-03 2019-07-03 Image data display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111325824A CN111325824A (en) 2020-06-23
CN111325824B true CN111325824B (en) 2023-10-10

Family

ID=71166846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910596428.XA Active CN111325824B (en) 2019-07-03 2019-07-03 Image data display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111325824B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261293B (en) * 2020-10-20 2022-05-10 华雁智能科技(集团)股份有限公司 Remote inspection method and device for transformer substation and electronic equipment
CN113763090B (en) * 2020-11-06 2024-05-21 北京沃东天骏信息技术有限公司 Information processing method and device
CN112417208B (en) * 2020-11-20 2024-08-13 百度在线网络技术(北京)有限公司 Target searching method, device, electronic equipment, storage medium and program product
CN112767533A (en) * 2020-12-31 2021-05-07 中国铁塔股份有限公司 Equipment display method and device, electronic equipment and readable storage medium
CN114937108A (en) * 2021-02-05 2022-08-23 中国科学院过程工程研究所 Image rendering method and device, electronic equipment and medium
CN112950455A (en) * 2021-02-09 2021-06-11 维沃移动通信有限公司 Image display method and device
CN113159022B (en) * 2021-03-12 2023-05-30 杭州海康威视系统技术有限公司 Method and device for determining association relationship and storage medium
CN112667346A (en) * 2021-03-16 2021-04-16 深圳市火乐科技发展有限公司 Weather data display method and device, electronic equipment and storage medium
CN113542679B (en) * 2021-06-24 2023-05-02 海信视像科技股份有限公司 Image playing method and device
CN115694720A (en) * 2021-07-29 2023-02-03 华为技术有限公司 Data transmission method and related device
CN114442805A (en) * 2022-01-06 2022-05-06 上海安维尔信息科技股份有限公司 Monitoring scene display method and system, electronic equipment and storage medium
CN115713465B (en) * 2022-10-28 2023-11-14 北京阅友科技有限公司 Stereoscopic display method and device for plane image, storage medium and terminal
CN116188680B (en) * 2022-12-21 2023-07-18 金税信息技术服务股份有限公司 Dynamic display method and device for gun in-place state

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938827A (en) * 2012-11-29 2013-02-20 深圳英飞拓科技股份有限公司 Stratified monitoring command system and cross-camera virtual tracking method
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN107067447A (en) * 2017-01-26 2017-08-18 安徽天盛智能科技有限公司 A kind of integration video frequency monitoring method in large space region
WO2017181699A1 (en) * 2016-04-21 2017-10-26 杭州海康威视数字技术股份有限公司 Method and device for three-dimensional presentation of surveillance video
CN108154553A (en) * 2018-01-04 2018-06-12 中测新图(北京)遥感技术有限责任公司 The seamless integration method and device of a kind of threedimensional model and monitor video
CN108616719A (en) * 2016-12-29 2018-10-02 杭州海康威视数字技术股份有限公司 The method, apparatus and system of monitor video displaying
CN108668108A (en) * 2017-03-31 2018-10-16 杭州海康威视数字技术股份有限公司 A kind of method, apparatus and electronic equipment of video monitoring

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993152B (en) * 2016-01-21 2019-11-08 杭州海康威视数字技术股份有限公司 Three-dimension monitoring system and its quick deployment method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938827A (en) * 2012-11-29 2013-02-20 深圳英飞拓科技股份有限公司 Stratified monitoring command system and cross-camera virtual tracking method
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
WO2017181699A1 (en) * 2016-04-21 2017-10-26 杭州海康威视数字技术股份有限公司 Method and device for three-dimensional presentation of surveillance video
CN108616719A (en) * 2016-12-29 2018-10-02 杭州海康威视数字技术股份有限公司 The method, apparatus and system of monitor video displaying
CN107067447A (en) * 2017-01-26 2017-08-18 安徽天盛智能科技有限公司 A kind of integration video frequency monitoring method in large space region
CN108668108A (en) * 2017-03-31 2018-10-16 杭州海康威视数字技术股份有限公司 A kind of method, apparatus and electronic equipment of video monitoring
CN108154553A (en) * 2018-01-04 2018-06-12 中测新图(北京)遥感技术有限责任公司 The seamless integration method and device of a kind of threedimensional model and monitor video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Peng Bin; Ma Liqun; Pan Jianyue; Zhang Yuanxin; Chen Xi. Early-warning method for power facility safety regions based on three-dimensional scenes. Electronic Design Engineering. 2015, (10), 65-71. *
Zhao Kai; Quan Chunlai; Ai Fei; Zhou Xiang; Wang Ge. Rendering of video images in three-dimensional models. Computer Engineering and Design. 2009, (22), 5221-5224. *

Also Published As

Publication number Publication date
CN111325824A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
CN111325824B (en) Image data display method and device, electronic equipment and storage medium
US10096157B2 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US11551418B2 (en) Image rendering of laser scan data
EP3534336B1 (en) Panoramic image generating method and apparatus
US9183666B2 (en) System and method for overlaying two-dimensional map data on a three-dimensional scene
US7570280B2 (en) Image providing method and device
US20170038942A1 (en) Playback initialization tool for panoramic videos
CN111402374B (en) Multi-path video and three-dimensional model fusion method, device, equipment and storage medium thereof
US10733786B2 (en) Rendering 360 depth content
US20140267273A1 (en) System and method for overlaying two-dimensional map elements over terrain geometry
JP2018139102A (en) Method and apparatus for determining interested spot in immersive content
US9286712B2 (en) System and method for approximating cartographic projections by linear transformation
TWI786157B (en) Apparatus and method for generating a tiled three-dimensional image representation of a scene
KR20170091710A (en) Digital video rendering
CN113486941A (en) Live image training sample generation method, model training method and electronic equipment
CN111418213A (en) Method and apparatus for signaling syntax for immersive video coding
US10652514B2 (en) Rendering 360 depth content
KR20120118462A (en) Concave surface modeling in image-based visual hull
CN113810626B (en) Video fusion method, device, equipment and storage medium based on three-dimensional map
CN112825198B (en) Mobile tag display method, device, terminal equipment and readable storage medium
Hanusch A new texture mapping algorithm for photorealistic reconstruction of 3D objects
EP3929878B1 (en) Uncertainty display for a multi-dimensional mesh
EP3598395A1 (en) Rendering 360 depth content
EP3598384A1 (en) Rendering 360 depth content
JPH05114033A (en) Lay tracing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant