CN110910338A - Three-dimensional live-action video acquisition method, device, equipment and storage medium - Google Patents
- Publication number
- CN110910338A (application CN201911223221.4A)
- Authority
- CN
- China
- Prior art keywords: three-dimensional, virtual model, video image, area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G — Physics
- G06 — Computing; calculating or counting
- G06T — Image data processing or generation, in general
- G06T5/00 — Image enhancement or restoration
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/00 — Indexing scheme for image analysis or image enhancement
- G06T2207/20 — Special algorithmic details
- G06T2207/20212 — Image combination
- G06T2207/20221 — Image fusion; Image merging
Abstract
The embodiments of the invention disclose a method, device, equipment and storage medium for acquiring a three-dimensional live-action video. The method comprises the following steps: constructing a three-dimensional virtual model of a mine area from information data about the mine area; acquiring two-dimensional video images from a plurality of monitoring cameras, matching each two-dimensional video image against the three-dimensional virtual model, and determining the position of each two-dimensional video image within the model; and fusing the two-dimensional video image of each monitoring camera with the three-dimensional virtual model at its determined position to obtain a three-dimensional live-action video image of the mine area. The technical scheme realizes multi-directional three-dimensional monitoring of the mine area: workers can quickly and conveniently survey the whole mine, emergencies can be discovered and resolved in time, and the problem of being unable to quickly find and locate an emergency among massive monitoring images is avoided, thereby improving the mine's monitoring and tracing efficiency and its safety-guarantee capability.
Description
Technical Field
The embodiments of the invention relate to the technical field of mine monitoring, in particular to a method, device, equipment and storage medium for acquiring a three-dimensional live-action video.
Background
Production safety has always been one of the foremost concerns of the mining industry: the casualties and economic losses caused by mining accidents are enormous. To reduce the accident rate, most mines are equipped with various monitoring systems, which provide reliable technical support for safe production.
However, the existing security-video systems of mine enterprises generally adopt a traditional switching matrix that presents the images of the cameras above and below ground in turn, leaving blind spots in both the temporal and spatial coverage of monitoring. Lacking whole-scene information about the monitored area and facing massive amounts of real-time video, monitoring personnel find it difficult to quickly locate a target event when an emergency occurs.
Disclosure of Invention
The invention provides a method, device, equipment and storage medium for acquiring a three-dimensional live-action video of a mine area, realizing all-round three-dimensional monitoring of the area.
In a first aspect, an embodiment of the present invention provides a method for acquiring a three-dimensional live-action video, where the method includes:
constructing a three-dimensional virtual model of the mine area according to the information data of the mine area;
acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with a three-dimensional virtual model, and determining the position of each two-dimensional video image in the three-dimensional virtual model;
and according to the position of each two-dimensional video image in the three-dimensional virtual model, fusing the two-dimensional video image of each monitoring camera with the three-dimensional virtual model to obtain a three-dimensional live-action video image of the mine area.
In a second aspect, an embodiment of the present invention further provides an apparatus for acquiring a three-dimensional live-action video, where the apparatus includes:
the virtual model building module is used for building a three-dimensional virtual model of the mine area according to the information data of the mine area;
the video image acquisition module is used for acquiring two-dimensional video images of the plurality of monitoring cameras, matching each two-dimensional video image with the three-dimensional virtual model and determining the position of each two-dimensional video image in the three-dimensional virtual model;
and the fusion module is used for fusing the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model according to the positions of the two-dimensional video images in the three-dimensional virtual model so as to obtain the three-dimensional live-action video images of the mine area.
In a third aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for acquiring a three-dimensional live-action video according to any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for acquiring a three-dimensional live-action video according to any embodiment of the present invention.
The embodiments of the invention provide a scheme for acquiring a three-dimensional live-action video: a three-dimensional virtual model of the mine area is constructed, and the two-dimensional video images collected by the monitoring cameras are fused with the model at their matched positions, yielding a three-dimensional live-action video image of the mine area. This realizes multi-directional three-dimensional monitoring of the mine area, lets workers quickly survey the mine's overall situation, and allows emergencies to be discovered and resolved in time, avoiding the problem of being unable to quickly find and locate an emergency among massive monitoring images and thereby improving the mine's monitoring and tracing efficiency and safety-guarantee capability.
Drawings
Fig. 1 is a flowchart of a method for acquiring a three-dimensional live-action video according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for acquiring a three-dimensional live-action video according to a second embodiment of the present invention;
fig. 3 is a block diagram of a three-dimensional live-action video acquiring apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a method for acquiring a three-dimensional live-action video according to an embodiment of the present invention. This embodiment is applicable where a plurality of monitoring cameras are used to acquire a three-dimensional live-action video of a mine area. The method may be executed by the three-dimensional live-action video acquisition apparatus provided in the embodiments of the invention, which may be implemented in hardware and/or software and is generally integrated into a computer device such as a Personal Computer (PC). As shown in fig. 1, the method specifically comprises the following steps:
and S11, constructing a three-dimensional virtual model of the mine area according to the information data of the mine area.
Optionally, the mine area includes roadway regions and non-roadway regions. A roadway is any of the passages driven between the surface and the ore body, used for ore haulage, ventilation, drainage, pedestrian access, and the various preparatory works newly excavated for mining the ore body. Non-roadway regions may include surface regions and underground equipment regions; the latter may comprise underground substations, pump rooms, refuge chambers, shaft-bottom yards and the like. In actual mine operation, providing monitoring for each of these regions offers fairly comprehensive security for mine work, so that emergencies in each region can be discovered in time.
A three-dimensional virtual model is a polygonal representation of an object, stored as a data set of points and other information; it can be generated by a three-dimensional modeling tool or by a modeling algorithm. The model itself is not directly visible: it can be rendered at different levels of detail, from a simple wire frame to variously shaded surfaces. Many three-dimensional virtual models are covered with textures, and the process of laying textures onto the model is called texture mapping. A texture is an image, but it makes the model more detailed and realistic: a model of a person with skin and clothing textures, for example, looks more lifelike than a monochrome or wire-frame model. Beyond texture, other effects may be applied to increase realism: surface normals may be adjusted to tune the illumination, some surfaces may use bump (convex-concave) texture mapping, and other stereoscopic rendering techniques may be applied. In the embodiments of the invention, the processing effects adopted by the three-dimensional virtual model are not specifically limited.
Optionally, for a roadway region of the mine area, the three-dimensional virtual model of the region is constructed from the region's roadway lead point data and roadway section data. Specifically, modeling of the underground roadway region is based mainly on the measured data of the roadway lead points and the design data of the roadway section. The roadway lead points are selected points used to record the length and direction of a roadway: during modeling they are connected into a polyline, and the length and turning angle of each side of the polyline are measured in turn, that is, the horizontal distance between two adjacent points and the horizontal angle between two adjacent legs, from which the plane position of each lead point is determined.
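The traverse computation described above can be sketched in a few lines; this is a minimal illustration of the geometry (the starting coordinates, bearing convention and leg data are invented for the example, not taken from any survey):

```python
import math

def lead_point_positions(start, bearing_deg, legs):
    """Compute plane positions of roadway lead points from traverse data.

    start: (x, y) of the first lead point
    bearing_deg: bearing of the first leg, in degrees from the +x axis
    legs: list of (horizontal_distance, turn_angle_deg) pairs; the turn
          angle is applied before walking each leg (0 for the first leg).
    """
    x, y = start
    bearing = math.radians(bearing_deg)
    points = [(x, y)]
    for dist, turn in legs:
        bearing += math.radians(turn)
        x += dist * math.cos(bearing)
        y += dist * math.sin(bearing)
        points.append((x, y))
    return points

# Two legs: 10 m along the initial bearing, then a 90-degree turn and 5 m
pts = lead_point_positions((0.0, 0.0), 0.0, [(10.0, 0.0), (5.0, 90.0)])
```

Each polyline vertex becomes the plane position of one lead point; the roadway section is then swept along this polyline to form the solid model.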
A roadway section is the cross-section perpendicular to the roadway's center line; its shape is obtained from the section's design data. Sections may be rectangular, trapezoidal or arched in various forms (determined by lithology, ground pressure and service life), and may also be classified by construction process. Optionally, the mine adopts trapezoidal and rectangular sections: a rectangular section mainly bears the roof pressure, while a trapezoidal section can additionally bear a certain lateral pressure. Using these two section types keeps the engineering organization simpler and the construction more convenient.
Constructing the three-dimensional virtual model of the roadway region from its lead point data and section data allows the model to be built intuitively and rapidly.
Optionally, constructing the three-dimensional virtual model of the roadway region from its lead point data and section data specifically includes: collating the lead point data and section data of the underground roadway region and saving them into a roadway file, which contains the coordinates of the starting and terminating nodes of each roadway segment together with the segment's length and width; and then building the three-dimensional virtual model of the region from the saved roadway file using commercial modeling software such as a three-dimensional digital mine platform (DM3D).
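A minimal sketch of what one record of such a roadway file might hold; the field names and the JSON serialization are assumptions for illustration, not DM3D's actual import schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RoadwaySegment:
    """One roadway segment as stored in the roadway file (hypothetical schema)."""
    start: tuple   # (x, y, z) of the starting node
    end: tuple     # (x, y, z) of the terminating node
    length: float  # roadway length, metres
    width: float   # section width, metres

def save_roadway_file(segments, path):
    """Collate segments and save them as a roadway file for the modeler."""
    with open(path, "w") as f:
        json.dump([asdict(s) for s in segments], f, indent=2)

seg = RoadwaySegment(start=(0.0, 0.0, -120.0), end=(35.0, 0.0, -120.0),
                     length=35.0, width=3.2)
```

The modeling platform then reads such records and extrudes the chosen section shape between each pair of nodes.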
Optionally, for a non-roadway region of the mine area, the three-dimensional virtual model is constructed from the region's drawing information and texture photographs. Specifically, the model can be built with commercial modeling software such as 3DMAX (PC-based three-dimensional animation, rendering and production software) or with three-dimensional laser scanning. Once construction is complete, corresponding texture patterns can be added by combining the existing design drawings with texture photographs taken on site, enriching the model and improving its realism.
S12, acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with the three-dimensional virtual model, and determining the position of each two-dimensional video image in the three-dimensional virtual model.
After the monitoring cameras are installed throughout the mine area and put into use, the two-dimensional video images they capture are obtained in real time for splicing with the three-dimensional virtual model. Because these images are scattered, the specific position of each one within the model must be determined before it can be spliced onto the model as a whole.
Optionally, using image-recognition technology, the two-dimensional video image currently shot by each monitoring camera is matched against the texture or other display effects of the three-dimensional virtual model, and the position where matching succeeds is marked, so that fusion splicing can subsequently proceed in the marked order.
Optionally, the monitoring cameras whose positions have been determined may be marked according to their splicing positions, and the corresponding mark is attached to each two-dimensional video image acquired in real time. The mark on each image is then matched against the preset marks in the three-dimensional virtual model to determine the image's position in the model, and the preset mark at the determined position serves as the basis for splicing, so that subsequent fusion splicing can proceed in the order of the marks. The three-dimensional virtual model is pre-marked by the regions delimited by its mesh; a mesh consists of elements such as three-dimensional coordinates, laser-reflection intensity and color information, its faces may be triangles, quadrilaterals or other simple convex polygons, and using a mesh simplifies rendering. The embodiments of the invention place no particular limitation on the matching process between the two-dimensional video images and the three-dimensional virtual model.
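The mark-based variant reduces to a lookup from each camera's transmitted mark to a preset mark on the model; a rough illustration (the mark strings and grid positions below are invented for the example):

```python
def match_marks(camera_marks, preset_marks):
    """Match each camera's transmitted mark against the marks preset on the
    three-dimensional virtual model's mesh regions. Returns a mapping
    {camera_id: model_position}; cameras whose mark has no preset counterpart
    are left out so they can be flagged for manual placement."""
    return {cam: preset_marks[mark]
            for cam, mark in camera_marks.items()
            if mark in preset_marks}

# Preset marks on the model (hypothetical values): mark -> (row, col) region
preset = {"M-01": (0, 0), "M-02": (0, 1), "M-03": (1, 0)}
positions = match_marks({"cam_a": "M-01", "cam_b": "M-03", "cam_c": "M-99"},
                        preset)
```

Here `cam_c` carries an unknown mark and is excluded, which is exactly the case a real system would surface to an operator.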
And S13, fusing the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model according to the positions of the two-dimensional video images in the three-dimensional virtual model to obtain the three-dimensional live-action video images of the mine area.
Optionally, 3DMAX is used to splice and fuse the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model, the current two-dimensional video image being captured in real time as the base map for fusion. First a patch (Plane) is constructed in 3DMAX whose aspect ratio is proportional to that of the base map, and a suitable number of segments is selected: too many segments produce an excessive number of triangular patches in the model and increase the memory load, slowing the system down, while too few segments increase the probability that the two-dimensional video image is deformed when fused with the model. The segment count can be preset to a different value for each scene as needed.
Because the positions and angles of the monitoring cameras differ, the two-dimensional video images they collect cannot be guaranteed to lie on the same horizontal line, and each captured image is a perspective view of its area; the captured images are therefore converted into an axonometric view by a perspective transformation. Optionally, the transformation is carried out with the Free-Form Deformation (FFD) 4×4×4 edit modifier in 3DMAX, which corrects the foreshortened parts of the perspective toward the proper view and reduces distortion of the base map. When a control point is moved, the surrounding points associated with it move too: the closer a point is, the more it moves; the farther, the less. Specifically, the model is modified through the FFD's Control Point function; as the number of editable FFD nodes increases, the nodes on the three-dimensional virtual model increase correspondingly, preserving the quality of the image conversion.
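The FFD modifier is a 3DMAX feature, but the underlying perspective transformation of image points can be written generically as a 3×3 homography; a sketch (the keystone matrix below is an arbitrary illustrative one, not derived from any real camera calibration):

```python
def apply_homography(H, pts):
    """Apply a 3x3 homography H (row-major nested lists) to 2D points."""
    out = []
    for x, y in pts:
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append(((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                    (H[1][0] * x + H[1][1] * y + H[1][2]) / w))
    return out

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A mild keystone correction: points lower in the image are magnified,
# which widens the "far smaller" part of a perspective view
keystone = [[1, 0, 0], [0, 1, 0], [0, -0.001, 1]]
corrected = apply_homography(keystone, [(100.0, 200.0)])
```

With the identity matrix the points pass through unchanged; with the keystone matrix the point (100, 200) maps to (125, 250), illustrating how the transformation stretches the foreshortened region.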
Then, according to the position of each two-dimensional video image in the three-dimensional virtual model, the image of each monitoring camera is fused with the model: each image, serving as a base map, is attached to the model at its determined position, producing the three-dimensional live-action video image of each sub-area of the mine. Optionally, adjacent base maps may be non-overlapping or partially overlapping; the embodiments of the invention do not limit this.
On the basis of the above technical solution, optionally, after the three-dimensional live-action video image of the mine area is obtained, the method further includes: enhanced display of sensor monitoring data in the three-dimensional live-action video image, the sensor data including underground personnel positions, methane content and carbon monoxide content. Specifically, within the fused three-dimensional live-action video image, technologies such as multi-source heterogeneous data acquisition, interactive linking of model and information, and dynamic plotting in the three-dimensional live-action video can be combined to display the readings of underground personnel-positioning sensors, methane sensors, carbon monoxide sensors and the like, realizing a dynamic real-time monitoring effect based on the three-dimensional live-action video monitoring information.
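A minimal sketch of the enhanced-display step: attaching sensor readings to model positions and flagging out-of-limit values. The threshold values here are illustrative assumptions only, not regulatory limits:

```python
# Illustrative alarm thresholds (volume fractions); real limits are set by
# mining-safety regulation, not by this example
ALARM_LIMITS = {"methane": 0.01, "carbon_monoxide": 0.000024}

def overlay_labels(readings):
    """Turn raw sensor readings into labels to plot in the live-action video.
    Each reading: {"type": ..., "value": ..., "pos": (x, y, z) in the model}."""
    labels = []
    for r in readings:
        limit = ALARM_LIMITS.get(r["type"])
        labels.append({
            "pos": r["pos"],                      # where to draw in the model
            "text": "%s: %g" % (r["type"], r["value"]),
            "alarm": limit is not None and r["value"] > limit,
        })
    return labels

labels = overlay_labels([
    {"type": "methane", "value": 0.015, "pos": (12.0, 3.0, -80.0)},
    {"type": "carbon_monoxide", "value": 0.00001, "pos": (14.0, 3.0, -80.0)},
])
```

A renderer would then draw each label at its model position, highlighting entries whose `alarm` flag is set.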
On the basis of the above technical solution, optionally, after the three-dimensional live-action video image of the mine area is obtained, the method further includes: fusing the historical two-dimensional video data of the monitoring cameras within a preset time range into the three-dimensional virtual model, realizing cross-scene panoramic playback and search of historical three-dimensional live-action video and improving the efficiency of event queries. Optionally, the two-dimensional video images captured by the monitoring cameras are stored on the device used for fusion, so the historical data can be conveniently viewed at any time.
On the basis of the above technical solution, optionally, after the three-dimensional live-action video image of the mine area is obtained, the method further includes: synchronously displaying the three-dimensional live-action video image, a two-dimensional panoramic map, and the individual camera shots of each sub-area of the mine. The panoramic map can show each monitoring camera's position, its coverage area, and the shot the user is currently viewing, forming an organic combination of global and local, two-dimensional and three-dimensional views; this maps the camera pictures onto the real scene and makes it easier to determine a camera's location.
On the basis of the above technical solution, optionally, after the three-dimensional live-action video image of the mine area is obtained, the method further includes: determining an automatic inspection path and patrolling it at a preset time interval. The three-dimensional live-action video images of the sub-areas of the mine are arranged into an automatic inspection path in a certain order; every preset interval (for example 2 seconds) the display switches to the next sub-area and shows its real-time three-dimensional live-action video image. Because each sub-area corresponds to a complete three-dimensional live-action video image, this removes the traditional monitoring system's need to continually switch between individual camera videos of an area during inspection, improving inspection efficiency so that emergencies can be found in time and corresponding measures taken.
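The automatic inspection path reduces to cycling through the sub-areas on a timer. A sketch of the schedule (the sub-area names are invented; the 2-second interval comes from the text's example):

```python
def patrol_schedule(subareas, interval=2.0, cycles=1):
    """Yield (time_offset_seconds, subarea) pairs for automatic inspection:
    the display switches to the next sub-area every `interval` seconds."""
    t = 0.0
    for _ in range(cycles):
        for area in subareas:
            yield t, area
            t += interval

schedule = list(patrol_schedule(["substation", "pump room", "shaft bottom"],
                                interval=2.0, cycles=1))
```

A display loop would sleep until each offset and then render that sub-area's real-time three-dimensional live-action video image.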
In the technical scheme of this embodiment, a three-dimensional virtual model of the mine area is constructed, and the two-dimensional video images collected by the monitoring cameras are fused with the model at their matched positions, yielding the three-dimensional live-action video image of the mine area. This realizes multi-directional three-dimensional monitoring of the mine area, lets workers quickly survey the mine's overall situation, allows emergencies to be found and resolved in time, and avoids the problem of being unable to quickly find and locate an emergency among massive monitoring images, thereby improving the mine's monitoring and tracing efficiency and safety-guarantee capability.
Example two
Fig. 2 is a flowchart of a three-dimensional live-action video acquiring method according to a second embodiment of the present invention. This embodiment further refines the technical solution above, specifically the process of fusing the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model. As shown in fig. 2, the method comprises the following steps:
and S21, constructing a three-dimensional virtual model of the mine area according to the information data of the mine area.
S22, acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with the three-dimensional virtual model, and determining the position of each two-dimensional video image in the three-dimensional virtual model.
And S23, carrying out position numbering on each two-dimensional video image, wherein the position numbering comprises a row sequence number and/or a column sequence number.
Optionally, the two-dimensional video images are numbered according to the position plan of the monitoring cameras so that adjacent images have consecutive position numbers, which lets each image's position in the three-dimensional virtual model be determined more quickly during splicing and fusion. Optionally, splicing proceeds along the row and column directions, so a position number may comprise a row sequence number and/or a column sequence number.
And S24, taking the first two-dimensional video image, the one matching the image information at the upper-left corner of the three-dimensional virtual model, as the splicing and fusion starting point, and splicing and fusing the two-dimensional video images into the three-dimensional virtual model in order of row and/or column sequence number to obtain the three-dimensional live-action video image of the mine area.
During splicing and fusion, a fusion starting point is first determined; optionally, the first two-dimensional video image, which matches the image information at the upper-left corner of the three-dimensional virtual model, serves as that starting point, after which the images are spliced and fused in order of their row and/or column sequence numbers. Specifically: if only row sequence numbers are used, images sharing a row number can be fused into a row by image recognition, and the rows then fused in row-number order; if only column sequence numbers are used, images sharing a column number can be fused into a column, and the columns fused in column-number order. If both row and column numbers are available, vertically or horizontally adjacent images can be fused directly: the images of the first row are fused in turn from the starting point, the first image of the second row is fused vertically with the first image of the first row, the remaining images of the second row are fused in turn using that row's first image as the reference, and so on until all images are fused, yielding the three-dimensional live-action video image of the mine area.
In the same way, the first column can be fused first, the first two-dimensional video image of the second column can then be fused horizontally with the first image, the second column can then be fused, and so on until all two-dimensional video images are fused to obtain the three-dimensional live-action video image of the mine area. The embodiment of the present invention places no particular limitation on the fusion order of the two-dimensional video images.
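The row-by-row fusion order described above can be sketched as a small planning function. This is an illustrative sketch, not the patent's implementation: it only enumerates which image pair is fused at each step and in which direction, leaving the actual image blending to a separate routine.

```python
def fusion_steps(rows, cols):
    """Plan the stitching order described in the text: the top-left image
    (row 0, col 0) is the starting point; each first image of a row is fused
    vertically with the image above it, and every other image is fused
    horizontally with its left neighbour.

    Returns a list of (new_position, reference_position, direction) tuples.
    """
    steps = []
    for r in range(rows):
        for c in range(cols):
            if r == 0 and c == 0:
                continue  # starting point: nothing to fuse yet
            if c == 0:
                # first image of a new row: fuse with the image above
                steps.append(((r, c), (r - 1, c), "vertical"))
            else:
                # fuse with the left neighbour in the same row
                steps.append(((r, c), (r, c - 1), "horizontal"))
    return steps
```

For a 2x2 grid this yields three fusion steps: the second image of row one horizontally, the first image of row two vertically, then the second image of row two horizontally, matching the order walked through above.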
On the basis of the above technical solution, optionally, performing the splicing and fusion operation on the two-dimensional video images in the three-dimensional virtual model in sequence includes: determining a common reference object appearing in adjacent two-dimensional video images, and splicing and fusing the adjacent two-dimensional video images according to the common reference object. Specifically, when two adjacent two-dimensional video images are spliced and fused, they can be aligned according to a common reference object that appears in both images.
Optionally, the common reference object may be an object already present in the original two-dimensional video images, such as a building, a device, a tree, or a road sign. Alternatively, after the positions of the monitoring cameras are confirmed, a reference object, such as a splicing sign or a drawn cross-shaped marker, may be placed in advance at a position that both adjacent monitoring cameras can capture and where deformation is as small as possible. Since deformation is smaller closer to the image center, the reference object may optionally be placed within the inner eighty percent of the two-dimensional video images shot by the two monitoring cameras. In addition, a cross-shaped reference object satisfies the requirements of both transverse and longitudinal continuity, reducing deviation during splicing. Optionally, a reference object in a highly saturated, pure color, such as pure red, is selected so that its position can be determined quickly during splicing and fusion.
Determining a common reference object appearing in adjacent two-dimensional video images and splicing and fusing the adjacent images according to that reference object improves the efficiency and accuracy of the splicing and fusion.
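One simple way to use such a marker, sketched here under the assumption that the reference object is a saturated pure-red marker visible in both frames (the function name and color encoding are illustrative, not from the patent), is to locate the marker's centroid in each frame and take the difference as the alignment offset:

```python
import numpy as np

def marker_offset(img_a, img_b, marker_color=(255, 0, 0)):
    """Locate a saturated-color reference marker (e.g. a pure-red cross) in
    two overlapping frames and return the (dy, dx) translation that maps the
    marker position in img_b onto its position in img_a.

    img_a, img_b: H x W x 3 integer arrays (RGB frames).
    """
    def centroid(img):
        # boolean mask of pixels exactly matching the marker color
        mask = np.all(img == marker_color, axis=-1)
        ys, xs = np.nonzero(mask)
        return ys.mean(), xs.mean()

    ya, xa = centroid(img_a)
    yb, xb = centroid(img_b)
    return ya - yb, xa - xb
```

A real system would tolerate lighting variation (e.g. a hue/saturation threshold rather than exact equality) and match the cross shape, not just the color, but the centroid difference is the core of aligning on a common reference object.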
On the basis of the above technical solution, optionally, before the splicing and fusion operation is performed on the two-dimensional video images in the three-dimensional virtual model in sequence, the method further includes: judging, by an image recognition technology, whether the overlapping area between adjacent two-dimensional video images is larger than a preset threshold. Correspondingly, performing the splicing and fusion operation on the two-dimensional video images in the three-dimensional virtual model in sequence includes: if the overlapping area between adjacent two-dimensional video images is larger than or equal to the preset threshold, splicing and fusing the adjacent two-dimensional video images in the three-dimensional virtual model.
The three-dimensional live-action video image of the mine area is formed by splicing and fusing the two-dimensional video images shot by numerous monitoring cameras, so the fusion quality is closely related to the layout of the cameras: the overlapping portion of two adjacent two-dimensional video images must reach a preset threshold for the transition between fused images to be smooth. Optionally, the overlapping portion is the outer twenty percent of each two-dimensional video image, that is, the preset threshold is twenty percent. After the two-dimensional video images are acquired for the first time, whether the overlapping area between adjacent images is larger than the preset threshold is judged by an image recognition technology; if the overlapping area is larger than or equal to the preset threshold, splicing and fusion are performed, and otherwise the monitoring cameras can be adjusted.
If the overlapping portion of two adjacent two-dimensional video images is too large, image content is wasted. Optionally, a maximum preset threshold may also be set: the splicing and fusion operation is performed only if the overlapping area between adjacent two-dimensional video images is larger than or equal to the preset threshold and smaller than the maximum preset threshold; otherwise, the monitoring cameras can be adjusted.
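The two-sided threshold check above amounts to a small decision function. A minimal sketch, assuming the 20% minimum named in the text and an illustrative 50% maximum (the patent does not fix a value for the maximum threshold):

```python
def overlap_decision(overlap_ratio, min_ratio=0.2, max_ratio=0.5):
    """Decide whether two adjacent frames can be stitched.

    overlap_ratio: fraction of each frame covered by the shared region.
    min_ratio: the preset threshold (20% per the text above).
    max_ratio: optional maximum preset threshold; pass None to disable it.
    """
    if overlap_ratio < min_ratio:
        return "adjust: overlap too small"   # prompt a camera angle adjustment
    if max_ratio is not None and overlap_ratio >= max_ratio:
        return "adjust: overlap too large"   # too much wasted image content
    return "stitch"
```

Running the check once, after the cameras' first frames are acquired, is enough: as the text notes, the camera positions stay fixed afterwards, so the decision does not need to be repeated per frame.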
On the basis of the above technical solution, optionally, after judging whether the overlapping area between adjacent two-dimensional video images is larger than the preset threshold by the image recognition technology, the method further includes: if the overlapping area between adjacent two-dimensional video images is smaller than the preset threshold, sending an angle adjustment instruction to prompt adjustment of the shooting angles of the monitoring cameras matched with the adjacent two-dimensional video images.
Optionally, if the overlapping area between adjacent two-dimensional video images is smaller than the preset threshold, warning information indicating that a monitoring camera needs to be adjusted is displayed in the monitoring picture, and preset warning sound information can be played to prompt a worker to make the adjustment. The angle adjustment instruction may include: a vertical adjustment instruction, a horizontal adjustment instruction, and/or a distance adjustment instruction.
Meanwhile, after the two-dimensional video images of the monitoring cameras are acquired for the first time and all cameras whose shooting angles need adjusting have been adjusted, the positions of the monitoring cameras are kept unchanged, which reduces subsequent camera adjustment and improves the efficiency of later splicing and fusion.
According to the technical solution provided by the embodiment of the present invention, each two-dimensional video image is assigned a position number, and the splicing and fusion of the two-dimensional video images is completed according to the position numbers to obtain the three-dimensional live-action video image of the mine area. This improves the efficiency of splicing and fusion and thus the real-time performance of the three-dimensional live-action video image, making it easier for workers to survey the whole mine quickly and to discover and handle emergencies in time.
Example three
Fig. 3 is a block diagram of a three-dimensional live-action video acquiring apparatus according to a third embodiment of the present invention, the apparatus including:
the virtual model building module 31 is used for building a three-dimensional virtual model of the mine area according to the information data of the mine area;
the video image acquisition module 32 is configured to acquire two-dimensional video images of a plurality of monitoring cameras, match each two-dimensional video image with the three-dimensional virtual model, and determine the position of each two-dimensional video image in the three-dimensional virtual model;
and the fusion module 33 is configured to fuse the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model according to the positions of the two-dimensional video images in the three-dimensional virtual model, so as to obtain a three-dimensional live-action video image of the mine area.
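The three modules above form a simple pipeline: build the model, match each camera's frame to a position in it, then fuse. A minimal sketch of that flow, with the module behaviours injected as plain callables (the class and parameter names are illustrative, not from the patent):

```python
class LiveActionPipeline:
    """Sketch of the apparatus's data flow: virtual model construction,
    video-image position matching, and fusion into a live-action image."""

    def __init__(self, build_model, match_position, fuse):
        self.build_model = build_model        # module 31: area data -> 3D model
        self.match_position = match_position  # module 32: (model, frame) -> position
        self.fuse = fuse                      # module 33: (model, frames, positions) -> result

    def run(self, area_data, camera_frames):
        """camera_frames: dict mapping camera id to its 2D video frame."""
        model = self.build_model(area_data)
        positions = {cam: self.match_position(model, frame)
                     for cam, frame in camera_frames.items()}
        return self.fuse(model, camera_frames, positions)
```

For example, with stub callables, `LiveActionPipeline(lambda d: {"model": d}, lambda m, f: len(f), lambda m, fr, pos: (m, pos))` wires the three stages together end to end; real implementations would plug in the model construction and fusion logic described in the embodiments.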
According to the technical solution provided by the embodiment of the present invention, a three-dimensional virtual model of the mine area is constructed, and the two-dimensional video images acquired by the monitoring cameras are fused with the three-dimensional virtual model according to their matched positions in the model, so that the three-dimensional live-action video image of the mine area is obtained. This realizes multi-directional three-dimensional monitoring of the mine area, makes it convenient for workers to survey the whole mine quickly and to discover and handle emergencies in time, and avoids the problem that an emergency cannot be quickly found and located among massive monitoring images, thereby improving the monitoring and tracing efficiency and the safety assurance capability of the mine.
On the basis of the technical scheme, optionally, the mine area comprises a roadway area and a non-roadway area;
accordingly, the virtual model building module 31 includes:
the roadway virtual model building submodule is used for building a three-dimensional virtual model of the roadway area according to the roadway wire point data and the roadway section data of the roadway area;
and the non-roadway virtual model building submodule is used for building a three-dimensional virtual model of the non-roadway area according to the drawing information and the texture photo of the non-roadway area.
On the basis of the above technical solution, optionally, the video image obtaining module 32 includes:
the position numbering submodule is used for numbering the positions of the two-dimensional video images; the position numbers comprise row sequence numbers and/or column sequence numbers;
correspondingly, the fusion module 33 is specifically configured to:
and taking the first two-dimensional video image matched with the image information at the upper left corner of the three-dimensional virtual model as the splicing and fusion starting point, and splicing and fusing the two-dimensional video images in the three-dimensional virtual model in sequence according to the row sequence numbers and/or column sequence numbers to obtain the three-dimensional live-action video image of the mine area.
On the basis of the above technical solution, optionally, the fusion module 33 includes:
and the common reference object determining submodule is used for determining a common reference object appearing in the adjacent two-dimensional video images and splicing and fusing the adjacent two-dimensional video images according to the common reference object.
On the basis of the above technical solution, optionally, the apparatus further includes:
the overlapping area judging module is used for judging whether the overlapping area between the adjacent two-dimensional video images is larger than a preset threshold value or not through an image recognition technology;
correspondingly, the fusion module 33 is specifically configured to:
and if the overlapping area between the adjacent two-dimensional video images is larger than or equal to a preset threshold value, splicing and fusing the adjacent two-dimensional video images in the three-dimensional virtual model.
On the basis of the above technical solution, optionally, the apparatus further includes:
and the adjusting instruction sending module is used for sending an angle adjusting instruction if the overlapping area between the adjacent two-dimensional video images is smaller than a preset threshold value so as to prompt the adjustment of the shooting angle of the monitoring camera matched with the adjacent two-dimensional video images.
On the basis of the above technical solution, optionally, the apparatus further includes:
the enhanced display module is used for enhancing the display of sensor monitoring data in the three-dimensional live-action video image, the sensor monitoring data including: underground personnel positions in the mine, the methane content, and the carbon monoxide content.
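The enhanced display step can be sketched as turning raw sensor readings into overlay annotations, flagging any gas level above a safety limit for highlighted display. The function name and the limit values below are placeholders for illustration; the patent does not specify thresholds:

```python
def sensor_overlays(readings, limits=None):
    """Convert sensor readings into overlay annotations for the 3D live-action
    video image. Gas readings above their limit are flagged for enhanced
    (alarm) display; readings with no configured limit are shown as normal.

    readings: dict mapping sensor name to a numeric reading (volume fraction).
    limits: optional dict of alarm thresholds; the defaults are illustrative.
    """
    if limits is None:
        limits = {"methane": 0.01, "carbon_monoxide": 0.000024}
    overlays = []
    for name, value in readings.items():
        status = "alarm" if value > limits.get(name, float("inf")) else "normal"
        overlays.append({"sensor": name, "value": value, "status": status})
    return overlays
```

Personnel-position data would pass through as "normal" annotations under this scheme, since only gas sensors carry alarm thresholds; a production system would also attach each annotation to its position in the three-dimensional virtual model.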
The device for acquiring the three-dimensional live-action video provided by the embodiment of the invention can execute the method for acquiring the three-dimensional live-action video provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the apparatus for acquiring a three-dimensional live-action video, the units and modules included in the embodiment are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Example four
Fig. 4 is a schematic structural diagram of a device according to a fourth embodiment of the present invention, illustrating a block diagram of an exemplary device suitable for implementing embodiments of the present invention. The device shown in Fig. 4 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
As shown in fig. 4, the apparatus includes a processor 41, a memory 42, an input device 43, and an output device 44; the number of the processors 41 in the device may be one or more, one processor 41 is taken as an example in fig. 4, the processor 41, the memory 42, the input device 43 and the output device 44 in the device may be connected by a bus or other means, and the connection by the bus is taken as an example in fig. 4.
The memory 42 serves as a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the three-dimensional live-action video acquisition method in the embodiment of the present invention (for example, the virtual model construction module 31, the video image acquisition module 32, and the fusion module 33 in the three-dimensional live-action video acquisition device). The processor 41 executes various functional applications and data processing of the device by running software programs, instructions and modules stored in the memory 42, that is, implements the above-described three-dimensional live-action video acquisition method.
The memory 42 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input device 43 may be used to receive two-dimensional video images of the mine area acquired by the surveillance camera and to generate key signal inputs relating to user settings and function control of the apparatus. The output device 44 may include a display, an alarm, and other devices, and may be configured to display the generated three-dimensional live-action video image of the mine area and send warning information to the designated location when needed.
Example five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for acquiring a three-dimensional live-action video according to any embodiment of the present invention, where the method includes:
constructing a three-dimensional virtual model of the mine area according to the information data of the mine area;
acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with a three-dimensional virtual model, and determining the position of each two-dimensional video image in the three-dimensional virtual model;
and according to the position of each two-dimensional video image in the three-dimensional virtual model, fusing the two-dimensional video image of each monitoring camera with the three-dimensional virtual model to obtain a three-dimensional live-action video image of the mine area.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., a hard disk), or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different second computer system connected to the computer system through a network (such as the internet). The second computer system may provide the program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems that are connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the method for acquiring a three-dimensional live-action video provided by any embodiment of the present invention.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method for acquiring a three-dimensional live-action video is characterized by comprising the following steps:
constructing a three-dimensional virtual model of the mine area according to information data of the mine area;
acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with the three-dimensional virtual model, and determining the position of each two-dimensional video image in the three-dimensional virtual model;
and according to the position of each two-dimensional video image in the three-dimensional virtual model, fusing the two-dimensional video image of each monitoring camera with the three-dimensional virtual model to obtain the three-dimensional live-action video image of the mine area.
2. The method of claim 1, wherein the mine area comprises a roadway area and a non-roadway area;
correspondingly, the three-dimensional virtual model of the mine area is constructed according to the information data of the mine area, and the method comprises the following steps:
constructing a three-dimensional virtual model of the roadway area according to the roadway wire point data and the roadway section data of the roadway area;
and constructing a three-dimensional virtual model of the non-roadway area according to the drawing information and the texture photo of the non-roadway area.
3. The method of claim 1, wherein determining the location of each of the two-dimensional video images in the three-dimensional virtual model comprises:
position numbering is carried out on each two-dimensional video image; the position numbers comprise row sequence numbers and/or column sequence numbers;
correspondingly, according to the position of each two-dimensional video image in the three-dimensional virtual model, fusing the two-dimensional video image of each monitoring camera with the three-dimensional virtual model to obtain a three-dimensional live-action video image of the mine area, including:
and taking the first two-dimensional video image matched with the image information at the upper left corner of the three-dimensional virtual model as the splicing and fusion starting point, and splicing and fusing the two-dimensional video images in the three-dimensional virtual model in sequence according to the row sequence numbers and/or column sequence numbers to obtain the three-dimensional live-action video image of the mine area.
4. The method of claim 3, wherein performing a stitching fusion operation on each of the two-dimensional video images in the three-dimensional virtual model in sequence comprises:
and determining a common reference object appearing in the adjacent two-dimensional video images, and splicing and fusing the adjacent two-dimensional video images according to the common reference object.
5. The method according to claim 3, further comprising, before the stitching and fusing each of the two-dimensional video images in the three-dimensional virtual model in sequence:
judging, by an image recognition technology, whether the overlapping area between adjacent two-dimensional video images is larger than a preset threshold;
correspondingly, the splicing and fusing operation of the two-dimensional video images in the three-dimensional virtual model sequentially comprises the following steps:
and if the overlapping area between the adjacent two-dimensional video images is larger than or equal to the preset threshold value, splicing and fusing the adjacent two-dimensional video images in the three-dimensional virtual model.
6. The method according to claim 5, wherein after determining whether the overlapping area between the two-dimensional video images is larger than a preset threshold value by using an image recognition technique, the method further comprises:
and if the overlapping area between the adjacent two-dimensional video images is smaller than a preset threshold value, sending an angle adjusting instruction to prompt the adjustment of the shooting angle of the monitoring camera matched with the adjacent two-dimensional video images.
7. The method of claim 1, after acquiring the three-dimensional live-action video image of the mine area, further comprising:
enhancing the display of sensor monitoring data in the three-dimensional live-action video image, the sensor monitoring data comprising: underground personnel positions in the mine, the methane content, and the carbon monoxide content.
8. An apparatus for acquiring a three-dimensional live-action video, comprising:
the virtual model building module is used for building a three-dimensional virtual model of the mine area according to information data of the mine area;
the video image acquisition module is used for acquiring two-dimensional video images of a plurality of monitoring cameras, matching each two-dimensional video image with the three-dimensional virtual model and determining the position of each two-dimensional video image in the three-dimensional virtual model;
and the fusion module is used for fusing the two-dimensional video images of the monitoring cameras with the three-dimensional virtual model according to the positions of the two-dimensional video images in the three-dimensional virtual model so as to obtain the three-dimensional live-action video images of the mine area.
9. An apparatus, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for acquiring a three-dimensional live-action video according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for acquiring a three-dimensional live-action video according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911223221.4A CN110910338A (en) | 2019-12-03 | 2019-12-03 | Three-dimensional live-action video acquisition method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110910338A true CN110910338A (en) | 2020-03-24 |
Family
ID=69822066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911223221.4A Pending CN110910338A (en) | 2019-12-03 | 2019-12-03 | Three-dimensional live-action video acquisition method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110910338A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111447504A (en) * | 2020-03-27 | 2020-07-24 | 北京字节跳动网络技术有限公司 | Three-dimensional video processing method and device, readable storage medium and electronic equipment |
CN111586351A (en) * | 2020-04-20 | 2020-08-25 | 上海市保安服务(集团)有限公司 | Visual monitoring system and method for fusion of three-dimensional videos of venue |
CN111931830A (en) * | 2020-07-27 | 2020-11-13 | 泰瑞数创科技(北京)有限公司 | Video fusion processing method and device, electronic equipment and storage medium |
CN112053391A (en) * | 2020-09-11 | 2020-12-08 | 中德(珠海)人工智能研究院有限公司 | Monitoring and early warning method and system based on dynamic three-dimensional model and storage medium |
CN112069571A (en) * | 2020-08-12 | 2020-12-11 | 重庆交通大学 | Green mine stereoscopic planning method based on three-dimensional live-action |
CN112233228A (en) * | 2020-10-28 | 2021-01-15 | 五邑大学 | Unmanned aerial vehicle-based urban three-dimensional reconstruction method and device and storage medium |
CN113359987A (en) * | 2021-06-03 | 2021-09-07 | 煤炭科学技术研究院有限公司 | VR virtual reality-based semi-physical fully-mechanized mining actual operation platform |
CN114463617A (en) * | 2022-01-30 | 2022-05-10 | 三一重型装备有限公司 | Device, method, equipment, medium and product for identifying mounting hole of anchor steel belt |
CN114845053A (en) * | 2022-04-25 | 2022-08-02 | 国能寿光发电有限责任公司 | Panoramic video generation method and device |
CN115396720A (en) * | 2022-07-21 | 2022-11-25 | 贝壳找房(北京)科技有限公司 | Video fusion method based on video control, electronic equipment and storage medium |
CN116612012A (en) * | 2023-07-17 | 2023-08-18 | 南方电网数字电网研究院有限公司 | Power transmission line image splicing method, system, computer equipment and storage medium |
CN117218131A (en) * | 2023-11-09 | 2023-12-12 | 天宇正清科技有限公司 | Method, system, equipment and storage medium for marking room examination problem |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043738A1 (en) * | 2000-03-07 | 2001-11-22 | Sawhney Harpreet Singh | Method of pose estimation and model refinement for video representation of a three dimensional scene |
CN103606151A (en) * | 2013-11-15 | 2014-02-26 | 南京师范大学 | A wide-range virtual geographical scene automatic construction method based on image point clouds |
CN104835202A (en) * | 2015-05-20 | 2015-08-12 | 中国人民解放军装甲兵工程学院 | Quick three-dimensional virtual scene constructing method |
CN110097527A (en) * | 2019-03-19 | 2019-08-06 | 深圳市华橙数字科技有限公司 | Video-splicing fusion method, device, terminal and storage medium |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010043738A1 (en) * | 2000-03-07 | 2001-11-22 | Sawhney Harpreet Singh | Method of pose estimation and model refinement for video representation of a three dimensional scene |
CN103606151A (en) * | 2013-11-15 | 2014-02-26 | 南京师范大学 | A wide-range virtual geographical scene automatic construction method based on image point clouds |
CN104835202A (en) * | 2015-05-20 | 2015-08-12 | 中国人民解放军装甲兵工程学院 | Quick three-dimensional virtual scene constructing method |
CN110097527A (en) * | 2019-03-19 | 2019-08-06 | 深圳市华橙数字科技有限公司 | Video-splicing fusion method, device, terminal and storage medium |
Non-Patent Citations (2)
Title |
---|
Zhang Yuansheng et al.: "Design and Implementation of a Three-Dimensional Simulation Platform for Mine Safety" (矿山安全三维仿真平台的设计与实现) *
Xu Hualong et al.: "Design and Implementation of a Three-Dimensional Visualization Monitoring System for Mines" (矿山三维可视化监测系统设计与实现) *
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111447504B (en) * | 2020-03-27 | 2022-05-03 | 北京字节跳动网络技术有限公司 | Three-dimensional video processing method and device, readable storage medium and electronic equipment |
CN111447504A (en) * | 2020-03-27 | 2020-07-24 | 北京字节跳动网络技术有限公司 | Three-dimensional video processing method and device, readable storage medium and electronic equipment |
US11785195B2 (en) | 2020-03-27 | 2023-10-10 | Beijing Bytedance Network Technology Co., Ltd. | Method and apparatus for processing three-dimensional video, readable storage medium and electronic device |
CN111586351A (en) * | 2020-04-20 | 2020-08-25 | 上海市保安服务(集团)有限公司 | Visual monitoring system and method for fusion of three-dimensional videos of venue |
CN111931830A (en) * | 2020-07-27 | 2020-11-13 | 泰瑞数创科技(北京)有限公司 | Video fusion processing method and device, electronic equipment and storage medium |
CN111931830B (en) * | 2020-07-27 | 2023-12-29 | 泰瑞数创科技(北京)股份有限公司 | Video fusion processing method and device, electronic equipment and storage medium |
CN112069571A (en) * | 2020-08-12 | 2020-12-11 | 重庆交通大学 | Green mine stereoscopic planning method based on three-dimensional live-action |
CN112053391A (en) * | 2020-09-11 | 2020-12-08 | 中德(珠海)人工智能研究院有限公司 | Monitoring and early warning method and system based on dynamic three-dimensional model and storage medium |
CN112233228A (en) * | 2020-10-28 | 2021-01-15 | 五邑大学 | Unmanned aerial vehicle-based urban three-dimensional reconstruction method and device and storage medium |
CN112233228B (en) * | 2020-10-28 | 2024-02-20 | 五邑大学 | Unmanned aerial vehicle-based urban three-dimensional reconstruction method, device and storage medium |
CN113359987A (en) * | 2021-06-03 | 2021-09-07 | 煤炭科学技术研究院有限公司 | Semi-physical fully-mechanized mining operation platform based on VR virtual reality |
CN113359987B (en) * | 2021-06-03 | 2023-12-26 | 煤炭科学技术研究院有限公司 | Semi-physical fully-mechanized mining operation platform based on VR virtual reality |
CN114463617A (en) * | 2022-01-30 | 2022-05-10 | 三一重型装备有限公司 | Device, method, equipment, medium and product for identifying mounting hole of anchor steel belt |
CN114845053A (en) * | 2022-04-25 | 2022-08-02 | 国能寿光发电有限责任公司 | Panoramic video generation method and device |
CN115396720A (en) * | 2022-07-21 | 2022-11-25 | 贝壳找房(北京)科技有限公司 | Video fusion method based on video control, electronic equipment and storage medium |
CN115396720B (en) * | 2022-07-21 | 2023-11-14 | 贝壳找房(北京)科技有限公司 | Video fusion method based on video control, electronic equipment and storage medium |
CN116612012A (en) * | 2023-07-17 | 2023-08-18 | 南方电网数字电网研究院有限公司 | Power transmission line image splicing method, system, computer equipment and storage medium |
CN117218131A (en) * | 2023-11-09 | 2023-12-12 | 天宇正清科技有限公司 | Method, system, device and storage medium for annotating house inspection issues |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110910338A (en) | Three-dimensional live-action video acquisition method, device, equipment and storage medium | |
US11482008B2 (en) | Directing board repositioning during sensor calibration for autonomous vehicles | |
US11094113B2 (en) | Systems and methods for modeling structures using point clouds derived from stereoscopic image pairs | |
JP6779698B2 (en) | Pavement crack analysis device, pavement crack analysis method and pavement crack analysis program | |
TW550521B (en) | Method for re-building 3D model of house in a semi-automatic manner using edge segments of buildings | |
JP5682060B2 (en) | Image composition apparatus, image composition program, and image composition system | |
JP5582691B2 (en) | Measuring device, measuring method and measuring program | |
JP2018165726A (en) | Point group data utilization system | |
CN110659385B (en) | Fusion method of multi-channel video and three-dimensional GIS scene | |
JP6671852B2 (en) | Information setting system and simulation system | |
EP3748583A1 (en) | Subsurface utility visualization | |
EP3413266B1 (en) | Image processing device, image processing method, and image processing program | |
CN110992510A (en) | Security scene VR-based automatic night patrol inspection method and system | |
CN108388995B (en) | Method and system for establishing road asset management system | |
CN111222190A (en) | Ancient building management system | |
JP6110780B2 (en) | Additional information display system | |
JP5725908B2 (en) | Map data generation system | |
WO2020199057A1 (en) | Self-piloting simulation system, method and device, and storage medium | |
CN106504336A (en) | A kind of digital mine integrated management approach and system | |
CN115393192A (en) | Multi-point multi-view video fusion method and system based on general plane diagram | |
JP2004265396A (en) | Image forming system and image forming method | |
CN113963095B (en) | Urban three-dimensional map video stream encryption method and system based on artificial intelligence | |
CN117011413B (en) | Road image reconstruction method, device, computer equipment and storage medium | |
CN113223146A (en) | Data labeling method and device based on three-dimensional simulation scene and storage medium | |
US20220398804A1 (en) | System for generation of three dimensional scans and models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200324 |