CN116109684B - Online video monitoring two-dimensional and three-dimensional data mapping method and device for substations - Google Patents
- Publication number
- CN116109684B CN116109684B CN202310362822.3A CN202310362822A CN116109684B CN 116109684 B CN116109684 B CN 116109684B CN 202310362822 A CN202310362822 A CN 202310362822A CN 116109684 B CN116109684 B CN 116109684B
- Authority
- CN
- China
- Prior art keywords
- dimensional
- monitoring
- video image
- point cloud
- substation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/06—Electricity, gas or water supply
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a substation-oriented two-dimensional and three-dimensional data mapping method and device for online video monitoring, comprising the following steps: acquiring a monitoring video image of a substation and a point cloud of the monitored scene; calculating the camera external parameters of a pinhole camera model by registering the monitoring video image with the point cloud; based on a planar motion assumption and the pinhole camera model, converting the two-dimensional pixel coordinates of persons extracted from the monitoring video image into their three-dimensional coordinates in the digital twin; and, based on a mapping plane created in the point cloud, setting the monitoring video image of the substation acquired in real time as the texture of that mapping plane, dynamically updating the texture of the digital twin. By implementing the invention, the existing surveillance cameras of a substation are used to acquire monitoring pictures and personnel positioning information and to map them into the digital-twin three-dimensional space, which is of great significance for making full use of the surveillance camera equipment in the substation for dynamic updating of the digital base and for personnel safety control.
Description
Technical Field
The invention relates to the technical field of online monitoring of substations, and in particular to a two-dimensional and three-dimensional data mapping method and device for online video monitoring of substations.
Background
To realize production safety control across an entire substation, the positions, states and changes of substation personnel and equipment must be accurately perceived and identified in the spatial dimension. Currently, three-dimensional spatial-state monitoring of substations with a live-scanned digital twin as the digital base is gradually becoming popular, and substation safety monitoring systems based on video monitoring technology are widely applied. However, video monitoring currently relies mainly on safety supervision staff manually checking the monitoring pictures; that is, safety control of substation production is still predominantly manual, and automation and artificial intelligence technologies have not yet played a significant role in it.
For substation state monitoring that uses a live-scanned digital twin as the digital base, perceiving three-dimensional spatial information in real time from the substation's widely deployed surveillance cameras to support on-site production safety management currently still depends mainly on manually aligning and mapping images, videos and other information onto the digital twin, which limits efficiency and flexibility; a method for accurately mapping real-time video data reflecting equipment states into three-dimensional space is lacking.
Disclosure of Invention
In view of the above, the embodiments of the invention provide a two-dimensional and three-dimensional data mapping method and device for online video monitoring of substations, to solve the technical problem that the prior art lacks a method for accurately mapping real-time video data reflecting equipment states into three-dimensional space.
The technical scheme provided by the invention is as follows:
The first aspect of the embodiments of the invention provides a substation-oriented two-dimensional and three-dimensional data mapping method for online video monitoring, which comprises the following steps: acquiring a monitoring video image of a substation and a point cloud of the substation's monitored scene, wherein the point cloud forms the digital base of the digital twin; calculating the camera external parameters of a pinhole camera model by registering the monitoring video image with the point cloud; based on a planar motion assumption and the pinhole camera model, converting the two-dimensional pixel coordinates of a person extracted from the monitoring video image into the person's three-dimensional coordinates in the digital twin; and, based on a mapping plane created in the point cloud, setting the monitoring video image of the substation acquired in real time as the mapping-plane texture and dynamically updating the texture of the digital twin.
Optionally, calculating the camera external parameters of a pinhole camera model by registering the monitoring video image with the point cloud includes: picking, with a 3D engine, two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud, the two-dimensional matching points and the three-dimensional matching points being in one-to-one correspondence; and registering the two-dimensional matching points with the three-dimensional matching points by the EPnP algorithm to obtain the camera external parameters of the pinhole camera model.
Optionally, registering the two-dimensional matching points with the three-dimensional matching points by the EPnP algorithm to obtain the camera external parameters of the pinhole camera model includes: performing principal component analysis on the three-dimensional matching points to obtain virtual control points; substituting the virtual control points into the pinhole camera model and solving, in combination with the Gauss-Newton method, for the coordinates of the virtual control points in the camera coordinate system; and solving the pinhole camera model based on those coordinates to obtain the camera external parameters of the pinhole camera model.
Optionally, based on a planar motion assumption and the pinhole camera model, converting the two-dimensional pixel coordinates of the person extracted from the surveillance video image to obtain three-dimensional coordinates of the person in the digital twin body, including: extracting two-dimensional pixel coordinates of a person in the monitoring video image by adopting a deep learning algorithm; aligning the horizon of the point cloud to a horizontal plane; determining the distance between the ground plane of the monitoring video image and the horizontal plane according to the plane motion assumption; substituting the two-dimensional pixel coordinates into a pinhole camera model of a known camera internal parameter and a known camera external parameter, and combining the distances to calculate and obtain the three-dimensional coordinates of the personnel in the digital twin body.
Optionally, extracting the two-dimensional pixel coordinates of the person in the monitoring video image with a deep learning algorithm includes: extracting a rectangular box around the person in the monitoring video image with the deep learning algorithm; and approximating the person's position by the person's foot (sole) point, calculating the person's two-dimensional pixel coordinates in the monitoring video image from the rectangular box.
Optionally, setting the monitoring video image of the substation acquired in real time as the mapping-plane texture based on a mapping plane created in the point cloud, and dynamically updating the texture of the digital twin, includes: creating a mapping plane in the point cloud according to the vertex coordinates and pose of a picked rectangle; projecting the mapping plane into the two-dimensional video image and obtaining, by perspective projection transformation, the rectangle corresponding to the mapping plane; acquiring the online monitoring video image of the substation and performing perspective projection transformation to obtain a front-view rectangular image; and setting, with the 3D engine, the texture of the rectangle to the front-view rectangular image, dynamically updating the texture of the digital twin.
Optionally, creating a mapping plane in the point cloud according to the vertex coordinates and pose of the picked rectangle includes: determining and calculating the vertex coordinates of the picked rectangle from three points picked on the rectangular region of interest; determining the pose of the picked rectangle from the positional relation between an initial matrix created in the 3D engine and the picked rectangle; and creating the mapping plane in the point cloud according to the vertex coordinates and the pose of the picked rectangle.
A second aspect of the embodiments of the present invention provides a substation-oriented two-dimensional and three-dimensional data mapping device for online video monitoring, including: a data acquisition module for acquiring a monitoring video image of the substation and a point cloud of the substation's monitored scene, wherein the point cloud forms the digital base of the digital twin; an external parameter calculation module for calculating the camera external parameters of a pinhole camera model by registering the monitoring video image with the point cloud; a personnel extraction module for converting, based on a planar motion assumption and the pinhole camera model, the two-dimensional pixel coordinates of a person extracted from the monitoring video image into the person's three-dimensional coordinates in the digital twin; and a texture mapping module for setting the monitoring video image of the substation acquired in real time as the mapping-plane texture based on a mapping plane created in the point cloud, dynamically updating the texture of the digital twin.
Optionally, the external parameter calculating module includes: the matching point selection module is used for selecting two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud by adopting a 3D engine, and the two-dimensional matching points and the three-dimensional matching points are in one-to-one correspondence; and the registration module is used for registering the two-dimensional matching points and the three-dimensional matching points by adopting an EPnP algorithm to obtain camera external parameters of the pinhole camera model.
Optionally, the registration module is specifically configured to: perform principal component analysis on the three-dimensional matching points to obtain virtual control points; substitute the virtual control points into the pinhole camera model and solve, in combination with the Gauss-Newton method, for the coordinates of the virtual control points in the camera coordinate system; and solve the pinhole camera model based on those coordinates to obtain the camera external parameters of the pinhole camera model.
Optionally, the personnel extraction module includes: a coordinate extraction module for extracting the two-dimensional pixel coordinates of the person in the monitoring video image with a deep learning algorithm; an alignment module for aligning the ground plane of the point cloud to the horizontal plane; a distance determining module for determining the distance between the ground plane of the monitoring video image and the horizontal plane according to the planar motion assumption; and a conversion module for substituting the two-dimensional pixel coordinates into the pinhole camera model with known camera internal and external parameters and, in combination with that distance, calculating the three-dimensional coordinates of the person in the digital twin.
Optionally, the coordinate extraction module is specifically configured to: extract a rectangular box around the person in the monitoring video image with a deep learning algorithm; and, approximating the person's position by the person's foot (sole) point, calculate the person's two-dimensional pixel coordinates in the monitoring video image from the rectangular box.
Optionally, the texture mapping module comprises: a plane creating module for creating a mapping plane in the point cloud according to the vertex coordinates and pose of the picked rectangle; a first transformation module for projecting the mapping plane into the two-dimensional video image and obtaining, by perspective projection transformation, the rectangle corresponding to the mapping plane; a second transformation module for obtaining the online monitoring video image of the substation and performing perspective projection transformation to obtain a front-view rectangular image; and an updating module for setting, with the 3D engine, the texture of the rectangle to the front-view rectangular image, dynamically updating the texture of the digital twin.
Optionally, the plane creation module is specifically configured to: determine and calculate the vertex coordinates of the picked rectangle from three points picked on the rectangular region of interest; determine the pose of the picked rectangle from the positional relation between an initial matrix created in the 3D engine and the picked rectangle; and create the mapping plane in the point cloud according to the vertex coordinates and the pose of the picked rectangle.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer instructions configured to cause a computer to execute the two-dimensional and three-dimensional data mapping method for online video monitoring of substations according to the first aspect of the embodiments of the present invention or any implementation of the first aspect.
A fourth aspect of the embodiments of the present invention provides an electronic device, including a memory and a processor in communicative connection, wherein the memory stores computer instructions and the processor executes those computer instructions so as to perform the two-dimensional and three-dimensional data mapping method for online video monitoring of substations according to the first aspect of the embodiments of the invention or any implementation of the first aspect.
The technical scheme provided by the invention has the following effects:
According to the substation-oriented two-dimensional and three-dimensional data mapping method and device for online video monitoring provided by the embodiments of the invention, personnel position information is mapped to digital twin coordinates, which facilitates judging, in combination with the work area and equipment semantics, whether hidden personnel-safety dangers exist, and meets the requirement of updating personnel coordinate information into the three-dimensional digital substation from the digital twin perspective. Meanwhile, by continuously mapping two-dimensional image data onto the three-dimensional digital twin in real time and aligning the real-time video sensing source with the digital twin, the video information reflecting equipment states that is missing from the static digital twin is compensated, overcoming the defect that a traditional substation digital twin cannot be dynamically updated. The method thus uses the existing surveillance cameras of a substation to acquire monitoring pictures and personnel positioning information and maps them into the digital-twin three-dimensional space, which is of great significance for making full use of the surveillance camera equipment in the substation for dynamic updating of the digital base and for personnel safety control.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a substation-oriented two-dimensional and three-dimensional data mapping method for online video monitoring according to an embodiment of the invention;
fig. 2 is a block diagram of a substation-oriented two-dimensional and three-dimensional data mapping device for online video monitoring according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer-readable storage medium provided according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The terms first, second, third, fourth and the like in the description and in the claims and in the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present invention, a substation-oriented two-dimensional and three-dimensional data mapping method for online video monitoring is provided. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that illustrated herein.
In this embodiment, a two-dimensional and three-dimensional data mapping method for online video monitoring of substations is provided, which can be used in electronic devices such as computers, mobile phones and tablet computers. Fig. 1 is a flowchart of the method according to an embodiment of the present invention; as shown in fig. 1, the method includes the following steps:
step S101: and acquiring a monitoring video image of the substation and a point cloud of a monitoring scene of the substation, wherein the point cloud forms a digital base of the digital twin body. Specifically, the digital base is a three-dimensional model of the digital twin, i.e., a basic model of the digital twin prior to updating. The digital base may include a three-dimensional model corresponding to a fixed structure in a substation monitoring scene, such as a fixed table or chair or a fixed device. Wherein the monitoring video image can be obtained from a monitoring camera or a monitoring sensor and the like installed in the substation; the point cloud can be obtained by adopting a mode of scanning a substation monitoring scene by a laser radar.
Step S102: calculate the camera external parameters of the pinhole camera model by registering the monitoring video image with the point cloud. Specifically, mutually corresponding matching points in the monitoring video image and the point cloud are acquired and registered with a registration algorithm, thereby determining the camera external parameters of the pinhole camera model. The registration algorithm may be an existing one; the embodiments of the invention do not limit the choice of a specific registration algorithm.
Step S103: convert the two-dimensional pixel coordinates of the person extracted from the monitoring video image, based on the planar motion assumption and the pinhole camera model, to obtain the person's three-dimensional coordinates in the digital twin. Specifically, the person's two-dimensional pixel coordinates can be extracted with a deep learning algorithm and then substituted into the pinhole camera model to determine the three-dimensional coordinates in the digital twin.
Step S104: based on the mapping plane created in the point cloud, set the monitoring video image of the substation acquired in real time as the mapping-plane texture and dynamically update the texture of the digital twin. Specifically, the acquired point cloud of the monitored scene forms the digital base of the digital twin; on this basis, the monitoring video image obtained in real time is set as the texture of the mapping plane in the point cloud, that is, the real-time monitoring video image is dynamically projected into the three-dimensional space, realizing dynamic updating of the digital twin.
According to the substation-oriented two-dimensional and three-dimensional data mapping method for online video monitoring provided by the embodiments of the invention, personnel position information is mapped to digital twin coordinates, so that whether hidden personnel-safety dangers exist can be judged in combination with the work area and equipment semantics, and the requirement of updating personnel coordinate information into the three-dimensional digital substation is met from the digital twin perspective. Meanwhile, by continuously mapping two-dimensional image data onto the three-dimensional digital twin in real time and aligning the real-time video sensing source with the digital twin, the video information reflecting equipment states that is missing from the static digital twin is compensated, overcoming the defect that a traditional substation digital twin cannot be dynamically updated. The method thus uses the existing surveillance cameras of a substation to acquire monitoring pictures and personnel positioning information and maps them into the digital-twin three-dimensional space, which is of great significance for making full use of the surveillance camera equipment in the substation for dynamic updating of the digital base and for personnel safety control.
In an embodiment, calculating the camera external parameters of the pinhole camera model by registering the monitoring video image with the point cloud comprises the following steps:
step S201: a 3D engine is adopted to point and select two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud, and the two-dimensional matching points and the three-dimensional matching points are in one-to-one correspondence; specifically, when the two-dimensional matching points and the three-dimensional matching points are selected, a ray collision detection interface provided by a 3D engine is utilized, a plurality of pairs of matching points corresponding to one another are selected from the monitoring video image and the point cloud through mouse clicking, and the logarithmic selection of the matching points is greater than or equal to 4 pairs for facilitating subsequent calculation. The three-dimensional matching points thus selected are expressed asThe selected two-dimensional matching point is expressed as +.>。
Step S202: register the two-dimensional matching points with the three-dimensional matching points by the EPnP algorithm to obtain the camera external parameters of the pinhole camera model. The specific registration flow is as follows: perform principal component analysis on the three-dimensional matching points to obtain virtual control points; substitute the virtual control points into the pinhole camera model and solve, in combination with the Gauss-Newton method, for the coordinates of the virtual control points in the camera coordinate system; and solve the pinhole camera model based on those coordinates to obtain the camera external parameters of the pinhole camera model.
Specifically, four virtual control points c_j^w, j = 1, ..., 4, are selected: c_1^w is taken as the barycenter of the three-dimensional matching points, and the three principal components obtained by principal component analysis (PCA, Principal Component Analysis) of the three-dimensional matching points serve as the remaining three virtual control points. The world coordinates of each three-dimensional matching point are then expressed as a weighted sum of the virtual control points: P_i^w = Σ_j α_ij · c_j^w, with Σ_j α_ij = 1.
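As an illustrative sketch (not the patent's own code), the control-point construction and the barycentric weights α can be written with NumPy; `epnp_control_points` and `barycentric_weights` are hypothetical helper names, and scaling the principal components by the points' spread is one conventional choice:

```python
import numpy as np

def epnp_control_points(pts_w):
    """Virtual control points: the barycenter plus the three principal
    directions of the matched 3D points, scaled by their spread."""
    c0 = pts_w.mean(axis=0)                    # barycenter of the 3D points
    centered = pts_w - c0
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    n = pts_w.shape[0]
    ctrl = [c0] + [c0 + (s[k] / np.sqrt(n)) * vt[k] for k in range(3)]
    return np.stack(ctrl)                      # shape (4, 3)

def barycentric_weights(pts_w, ctrl):
    """Weights alpha such that each point = sum_j alpha_j * ctrl_j
    and the weights of every point sum to 1."""
    C = np.vstack([ctrl.T, np.ones(4)])                  # 4x4 system matrix
    P = np.vstack([pts_w.T, np.ones(pts_w.shape[0])])    # 4xN right-hand sides
    return np.linalg.solve(C, P).T                       # Nx4 weights
```

Reconstructing each point as `alphas @ ctrl` recovers the original coordinates exactly, which is the property EPnP relies on when it transports the weights into the camera frame.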
Substituting the four virtual control points into the pinhole camera model formula s_i · (u_i, v_i, 1)^T = K · Σ_j α_ij · c_j^c, where c_j^c denotes the virtual control points in the camera coordinate system and s_i is the projective depth, and eliminating s_i, yields for each matching point two linear equations in the twelve unknown control-point coordinates.
Let the camera-coordinate positions of the 4 virtual control points be c_j^c, j = 1, ..., 4. Stacking the two equations of all n matching points forms a homogeneous matrix equation Mx = 0, in which the unknown is the stacked vector x = (c_1^c; c_2^c; c_3^c; c_4^c). Since the kernel of M equals the kernel of M^T M, the solution can be written as x = Σ_k β_k · v_k, where the v_k are the null-space eigenvectors of M^T M. By the distance-preserving property (isometry) of the Euclidean transformation, the distance between any two control points is the same in the camera coordinate system and in the world coordinate system, i.e. ||c_a^c − c_b^c|| = ||c_a^w − c_b^w||; the coefficients β_k are therefore obtained with the Gauss-Newton method by minimizing the differences between the control-point distances in the two coordinate systems.
Substituting the solved β_k back into x = Σ_k β_k · v_k gives the coordinates c_j^c of the virtual control points in the camera coordinate system, thereby converting the 3D-to-2D PnP problem into the classical 3D-to-3D rigid motion problem, which is solved by SVD decomposition or a nonlinear optimization method to obtain the camera external parameters [R t].
In one embodiment, based on a planar motion assumption and the pinhole camera model, the two-dimensional pixel coordinates of the person extracted from the surveillance video image are converted to obtain three-dimensional coordinates of the person in the digital twin body, including the following steps:
step S301: extracting two-dimensional pixel coordinates of a person in the monitoring video image by adopting a deep learning algorithm; wherein, the two-dimensional pixel coordinates are determined by the following method: extracting a rectangular frame of a person in the monitoring video image by adopting a deep learning algorithm; and according to the personnel plantar approximation, the personnel position is represented, and the two-dimensional pixel coordinates of the personnel in the monitoring video image are calculated by combining the rectangular frame.
Specifically, the deep learning algorithm may be YOLOv4 ("you only look once", which unifies the tasks of category classification and box regression) or another deep learning algorithm. Person detection is performed by the trained deep learning model, and each detected person is enclosed in a rectangular box; that is, the YOLOv4 output is a rectangular box represented by the quaternary vector (x, y, w, h). To convert the rectangular box into a single coordinate representing the person's position, the sole of the person's foot is taken as an approximation of that position, and the quaternary vector is reduced to a single pixel point p = (u, v) representing the person's coordinates in the pixel coordinate system; its relation to (x, y, w, h) is u = x + w/2, v = y + h (the bottom-center of the box, with (x, y) taken as the top-left corner).
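The box-to-foot-point reduction can be sketched as a small helper; `foot_point` is a hypothetical name, and the top-left-corner box convention is an assumption (some detectors instead return box centers):

```python
def foot_point(box):
    """Approximate a detected person's ground position by the bottom-center
    of their bounding box. box = (x, y, w, h) with (x, y) the top-left
    corner, an assumed convention."""
    x, y, w, h = box
    return (x + w / 2.0, y + h)
```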
Step S302: aligning the ground plane of the point cloud to the horizontal plane. Specifically, model processing software such as Blender may be used to align the ground plane of the point cloud to the horizontal plane.
Step S303: determining the distance between the ground plane of the monitoring video image and the horizontal plane according to the planar motion assumption. Specifically, under the planar motion assumption, the ground plane of the monitoring video image is set as the plane z = h, where h denotes its distance from the horizontal plane.
Step S304: substituting the two-dimensional pixel coordinates into the pinhole camera model with known camera intrinsic and extrinsic parameters, and combining the distance h to compute the three-dimensional coordinates of the person in the digital twin body.
Specifically, with the camera intrinsic parameter K and the extrinsic parameters [R t] of the pinhole camera model known, the two-dimensional pixel coordinates (u, v) are substituted into the pinhole camera model s·(u, v, 1)ᵀ = K(R·P + t), where P = (X, Y, h) is the person's world coordinate on the ground plane. Writing M = K[R t] for the 3×4 projection matrix with rows m1ᵀ, m2ᵀ, m3ᵀ and eliminating the scale factor s gives a system of two linear equations in the two unknowns X and Y:

(m1 − u·m3)ᵀ (X, Y, h, 1)ᵀ = 0
(m2 − v·m3)ᵀ (X, Y, h, 1)ᵀ = 0
Solving this 2×2 linear system yields the world coordinates X and Y.
After the alignment in step S302, h = 0, so the person's pixel coordinates (u, v) correspond to the world coordinates (X, Y, 0).
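The back-projection of steps S303–S304 can be sketched as follows (an illustrative sketch under the planar motion assumption; function and variable names are ours, and the test uses synthetic intrinsics and extrinsics, not values from the patent):

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, h=0.0):
    """Back-project pixel (u, v) onto the world plane z = h, given
    intrinsics K (3x3) and extrinsics R (3x3), t (3,).

    Builds M = K [R | t], eliminates the scale factor from
    s*(u, v, 1)^T = M (X, Y, h, 1)^T, and solves the resulting
    2x2 linear system for X and Y."""
    M = K @ np.hstack([R, t.reshape(3, 1)])   # 3x4 projection matrix
    r1 = M[0] - u * M[2]                      # (m1 - u*m3) . (X,Y,h,1) = 0
    r2 = M[1] - v * M[2]                      # (m2 - v*m3) . (X,Y,h,1) = 0
    A = np.array([[r1[0], r1[1]], [r2[0], r2[1]]])
    b = -np.array([r1[2] * h + r1[3], r2[2] * h + r2[3]])
    X, Y = np.linalg.solve(A, b)
    return np.array([X, Y, h])
```

As a sanity check, projecting a known ground point through the pinhole model and back-projecting its pixel should recover the same point.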
In an embodiment, based on a mapping plane created in the point cloud, the monitoring video image of the substation acquired in real time is set as the mapping-plane texture and the texture of the digital twin body is dynamically updated, including the following steps:
Step S401: creating a mapping plane in the point cloud according to the vertex coordinates and pose of the click-selected rectangle. Specifically, the mapping plane is created as follows: the vertex coordinates of the click-selected rectangle are computed from three clicked points on the rectangular region of interest; the pose of the click-selected rectangle is determined from the positional relation between an initial rectangle created in the 3D engine and the click-selected rectangle; and the mapping plane is created in the point cloud from the vertex coordinates and pose of the click-selected rectangle.
The rectangular region of interest may be a device instrument panel or a device surface in the substation, or another location. When selecting the rectangular region of interest, three points p1, p2, p3 are clicked in clockwise order, where p1 and p2 are two vertices of the rectangle that define one edge p1p2, and p3 lies on the edge parallel to p1p2. The rectangular region of interest — also referred to as the click-selected rectangle — is thus determined by the three clicked points.
The vertex coordinates of the click-selected rectangle are determined by geometric operations. First, the two side lengths are computed from the three clicked points: the width w = |p1p2| is the length of segment p1p2, and the height h is the distance from p3 to the line p1p2. By the right-hand rule, the normal vector of the click-selected rectangle is n = norm((p2 − p1) × (p3 − p1)), where norm denotes normalization. The width direction is u = norm(p2 − p1), and the in-plane height direction is v = norm(n × u). The two remaining vertices of the click-selected rectangle are then p4 = p1 + h·v and p3′ = p2 + h·v, and its center is center = p1 + (w/2)·u + (h/2)·v.
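The geometric operations above can be sketched directly (an illustrative sketch; function name and the planar test points are ours):

```python
import numpy as np

def rect_from_three_points(p1, p2, p3):
    """Recover the click-selected rectangle from three clicked 3D points:
    p1 and p2 are two vertices defining one edge; p3 lies on the
    opposite, parallel edge. Returns (4 vertices, center, normal, w, h)."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    w = np.linalg.norm(p2 - p1)            # width = |p1 p2|
    u = (p2 - p1) / w                      # width direction
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)              # normal by the right-hand rule
    v = np.cross(n, u)                     # in-plane height direction
    if np.dot(p3 - p1, v) < 0:             # make v point towards p3
        v = -v
    h = np.dot(p3 - p1, v)                 # height = dist(p3, line p1p2)
    verts = np.array([p1, p2, p2 + h * v, p1 + h * v])
    center = p1 + (w / 2) * u + (h / 2) * v
    return verts, center, n, w, h
```

Note the height is obtained as the projection of p3 − p1 onto the in-plane direction perpendicular to the clicked edge, which equals the point-to-line distance used in the text.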
When creating the mapping plane, the pose of the click-selected rectangle must be determined in addition to its vertex coordinates. First, the interface for creating planar geometry provided by a 3D engine such as three.js is used to initialize a rectangle of width w and height h, referred to as the initial rectangle, centered at the origin in the xOy plane. Its vertex coordinates q1, q2, q3, q4 are (±w/2, ±h/2, 0), its center is the origin, and its vertices correspond one-to-one with the vertices p1, p2, p3′, p4 of the click-selected rectangle.
Then, the pose of the click-selected rectangle is obtained by aligning the initial rectangle to it. The alignment proceeds as follows. First, the initial rectangle is rotated into the plane of the click-selected rectangle: the rotation axis is a = norm(z × n), where z = (0, 0, 1) is the normal of the initial rectangle and n the normal of the click-selected rectangle, and the rotation angle is φ = arccos(z · n); the corresponding rotation matrix is denoted R1. After this rotation the initial rectangle and the click-selected rectangle share the same normal n, but their in-plane orientations are not yet aligned, so a further rotation about n by an angle θ is required. Let q1′ = R1·q1 be the vertex corresponding to p1 after the first rotation, d1 the unit vector from the click-selected rectangle's center to p1, and d2 the unit vector from the rotated initial rectangle's center to q1′. This gives cos θ = d1 · d2.
The rotation matrix of this second rotation, about n by θ, is obtained accordingly. After the two rotations and a translation taking the origin to center, the initial rectangle is aligned to the click-selected rectangle, and the resulting pose of the initial rectangle is taken as the pose of the click-selected rectangle. Finally, the mapping plane is created from the computed pose parameters and vertex coordinates.
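The first rotation — taking the initial rectangle's normal z onto the click-selected normal n about axis z × n — can be sketched with Rodrigues' formula (an illustrative sketch; the function name and test vectors are ours):

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b:
    axis a x b, angle arccos(a . b), assembled via Rodrigues' formula
    R = I + sin(phi) * K + (1 - cos(phi)) * K^2."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    axis = np.cross(a, b)
    s, c = np.linalg.norm(axis), np.dot(a, b)   # sin and cos of the angle
    if s < 1e-12:                               # parallel or antiparallel
        if c > 0:
            return np.eye(3)
        p = np.cross(a, [1.0, 0.0, 0.0])        # any axis perpendicular to a
        if np.linalg.norm(p) < 1e-6:
            p = np.cross(a, [0.0, 1.0, 0.0])
        p = p / np.linalg.norm(p)
        return 2.0 * np.outer(p, p) - np.eye(3) # rotation by pi about p
    axis = axis / s
    Kx = np.array([[0.0, -axis[2], axis[1]],
                   [axis[2], 0.0, -axis[0]],
                   [-axis[1], axis[0], 0.0]])
    return np.eye(3) + s * Kx + (1.0 - c) * (Kx @ Kx)
```

The in-plane rotation R2 about n by θ can be built with the same formula once θ is known from cos θ = d1 · d2.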
Step S402: projecting the mapping plane into the two-dimensional video image and obtaining the rectangle corresponding to the mapping plane by perspective projection transformation. Specifically, to realize the 2D–3D mapping, the created mapping plane is projected into the two-dimensional video image, yielding four corner points that are connected to form a quadrilateral image. However, since the camera does not view the click-selected rectangle from a front-view angle, the quadrilateral formed by the four corner points is not rectangular; the quadrilateral image must therefore be converted into a front-view rectangular image, which gives the 2D rectangle corresponding to the mapping plane.
The perspective transformation (Perspective Transformation) projects the image onto a new viewing plane and is also called projective mapping (Projective Mapping). The general transformation formula is (x′, y′, z′)ᵀ = A·(u, v, 1)ᵀ,
where A is the 3×3 perspective transformation matrix, (u, v) are the original point coordinates, and (x, y) are the transformed point coordinates. Dividing by z′ to return to the two-dimensional image plane gives x = x′/z′ = (a11·u + a12·v + a13)/(a31·u + a32·v + a33) and y = y′/z′ = (a21·u + a22·v + a23)/(a31·u + a32·v + a33).
Each point correspondence yields two equations, so the four corner points yield eight equations, from which the eight unknowns of A (with a33 fixed to 1) can be solved.
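The eight-equation system can be sketched directly in numpy (an illustrative sketch; OpenCV's `cv2.getPerspectiveTransform` solves the same system, and the function names and test quadrilaterals here are ours):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 3x3 perspective matrix A (with a33 = 1) from four point
    correspondences src[i] -> dst[i]. Each pair contributes two linear
    equations, so four pairs give the eight equations for the eight
    unknowns a11..a32."""
    M, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        M.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        M.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    a = np.linalg.solve(np.array(M, float), np.array(b, float))
    return np.append(a, 1.0).reshape(3, 3)

def warp_point(A, u, v):
    """Apply the transform and divide by the projective coordinate z'."""
    x, y, z = A @ np.array([u, v, 1.0])
    return x / z, y / z
```

Given the four projected corner points as `src` and the corners of the target front-view rectangle as `dst`, `warp_point` maps any pixel of the quadrilateral into the rectified image.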
Step S403: obtaining the online monitoring video image of the substation and performing perspective projection transformation to obtain front-view rectangular images. Specifically, the quadrilateral corresponding to each frame of the online monitoring video is not rectangular, so the quadrilateral region in each frame is perspective-projected to a front-view rectangular image, and the front-view rectangular images of all frames are combined to reconstruct the mapping video.
Step S404: using the 3D engine, setting the texture of the rectangle to the front-view rectangular images and dynamically updating the texture of the digital twin body. Specifically, since both the mapping plane and the online monitoring video image have been converted into rectangles, the 3D engine's video-texture API is used to bind the texture of the mapping plane to the mapping video composed of the front-view rectangular images; as the online video plays, the texture of the mapping plane changes dynamically with it.
The embodiment of the invention also provides a substation-oriented online video monitoring two-dimensional and three-dimensional data mapping device, as shown in fig. 2, which comprises:
the data acquisition module is used for acquiring a monitoring video image of the substation and a point cloud of a monitoring scene of the substation, wherein the point cloud forms a digital base of the digital twin body; the specific content refers to the corresponding parts of the above method embodiments, and will not be described herein.
The external parameter calculation module is used for calculating camera external parameters of a pinhole camera model according to the registration of the monitoring video image and the point cloud; the specific content refers to the corresponding parts of the above method embodiments, and will not be described herein.
The personnel extraction module is used for converting the two-dimensional pixel coordinates of the personnel extracted from the monitoring video image to obtain the three-dimensional coordinates of the personnel in the digital twin body based on the plane motion assumption and the pinhole camera model; the specific content refers to the corresponding parts of the above method embodiments, and will not be described herein.
And the texture mapping module is used for setting a monitoring video image of the substation, which is acquired in real time, as a mapping plane texture based on a mapping plane created in the point cloud, and carrying out texture dynamic update of the digital twin body. The specific content refers to the corresponding parts of the above method embodiments, and will not be described herein.
The substation-oriented online video monitoring two-dimensional and three-dimensional data mapping device provided by the embodiment of the invention maps personnel position information into digital twin coordinates, which helps to judge, in combination with work-area and equipment semantics, whether hidden personnel safety hazards exist, and meets the need, from the digital twin perspective, of updating personnel coordinate information into the three-dimensional digital substation. Meanwhile, by continuously mapping two-dimensional image data onto the three-dimensional digital twin body in real time and aligning the real-time video sensing source with the digital twin body, video information reflecting the equipment state compensates the static digital twin body, overcoming the drawback that a traditional substation digital twin cannot be dynamically updated. The device thus uses the existing monitoring cameras of the substation to acquire monitoring pictures and personnel positioning information and maps them into the digital twin three-dimensional space, which is of great significance for fully exploiting the substation's monitoring camera equipment in dynamic updating of the digital base and in personnel safety control.
For a detailed functional description of the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping device provided by the embodiment of the invention, refer to the description of the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method in the above embodiments.
Optionally, the external parameter calculation module includes: a matching point selection module for click-selecting, by a 3D engine, two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud, the two-dimensional matching points and the three-dimensional matching points being in one-to-one correspondence; and a registration module for registering the two-dimensional matching points and the three-dimensional matching points by an EPnP algorithm to obtain the camera extrinsic parameters of the pinhole camera model.
Optionally, the registration module is specifically configured to: performing principal component analysis on the three-dimensional matching points to obtain virtual control points; substituting the virtual control points into a pinhole camera model, and solving by combining a Gauss Newton method to obtain coordinates of the virtual control points in a camera coordinate system; and solving the pinhole camera model based on the coordinates to obtain the camera external parameters of the pinhole camera model.
Optionally, the personnel extraction module includes: a coordinate extraction module for extracting the two-dimensional pixel coordinates of a person in the monitoring video image by a deep learning algorithm; an alignment module for aligning the ground plane of the point cloud to the horizontal plane; a distance determination module for determining the distance between the ground plane of the monitoring video image and the horizontal plane according to the planar motion assumption; and a conversion module for substituting the two-dimensional pixel coordinates into the pinhole camera model with known camera intrinsic and extrinsic parameters and, combining the distance, calculating the three-dimensional coordinates of the person in the digital twin body.
Optionally, the coordinate extraction module is specifically configured to: extract the bounding rectangle of a person in the monitoring video image by a deep learning algorithm; and, approximating the person's position by the sole-of-foot point, calculate the two-dimensional pixel coordinates of the person in the monitoring video image in combination with the rectangle.
Optionally, the texture mapping module comprises: a plane creation module for creating a mapping plane in the point cloud according to the vertex coordinates and pose of the selected rectangle; a first transformation module for projecting the mapping plane into the two-dimensional video image and obtaining the rectangle corresponding to the mapping plane by perspective projection transformation; a second transformation module for obtaining the online monitoring video image of the substation and performing perspective projection transformation to obtain front-view rectangular images; and an updating module for setting, by the 3D engine, the texture of the rectangle to the front-view rectangular images and dynamically updating the texture of the digital twin body.
Optionally, the plane creation module is specifically configured to: calculate the vertex coordinates of the selected rectangle according to three points clicked on the rectangular region of interest; determine the pose of the selected rectangle according to the positional relation between an initial rectangle created in the 3D engine and the selected rectangle; and create a mapping plane in the point cloud according to the vertex coordinates and pose of the selected rectangle.
The embodiment of the present invention further provides a storage medium, as shown in fig. 3, on which a computer program 601 is stored; when executed by a processor, the program implements the steps of the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method in the above embodiment. The storage medium also stores audio and video stream data, characteristic frame data, interactive request signaling, encrypted data, preset data sizes and the like. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
It will be appreciated by those skilled in the art that all or part of the method in the above embodiments may be implemented by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
The embodiment of the present invention further provides an electronic device, as shown in fig. 4, where the electronic device may include a processor 51 and a memory 52, where the processor 51 and the memory 52 may be connected by a bus or other means, and in fig. 4, the connection is exemplified by a bus.
The processor 51 may be a central processing unit (Central Processing Unit, CPU). The processor 51 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or combinations thereof.
The memory 52, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the methods in embodiments of the present invention. By running the non-transitory software programs, instructions, and modules stored in the memory 52, the processor 51 executes various functional applications and data processing, that is, implements the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method of the above method embodiment.
The memory 52 may include a program storage area and a data storage area: the program storage area may store an operating system and the application program required by at least one function, and the data storage area may store data created by the processor 51, etc. In addition, the memory 52 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 52 may optionally include memory located remotely from the processor 51, connected to the processor 51 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 52 and, when executed by the processor 51, perform the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method in the embodiment shown in fig. 1.
The specific details of the electronic device may be understood correspondingly with respect to the corresponding related descriptions and effects in the embodiment shown in fig. 1, which are not repeated herein.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.
Claims (8)
1. A substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method, characterized by comprising the following steps:
acquiring a monitoring video image of a substation and a point cloud of a monitoring scene of the substation, wherein the point cloud forms a digital base of a digital twin body;
calculating camera external parameters of a pinhole camera model according to registration of the monitoring video image and the point cloud;
based on a plane motion assumption and the pinhole camera model, converting the two-dimensional pixel coordinates of the person extracted from the monitoring video image to obtain three-dimensional coordinates of the person in the digital twin body;
setting a monitoring video image of the substation, which is acquired in real time, as a mapping plane texture based on a mapping plane created in the point cloud, and dynamically updating the texture of the digital twin body;
wherein calculating the camera external parameters of the pinhole camera model according to the registration of the monitoring video image and the point cloud comprises:
a 3D engine is adopted to point and select two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud, and the two-dimensional matching points and the three-dimensional matching points are in one-to-one correspondence;
registering the two-dimensional matching points and the three-dimensional matching points by adopting an EPnP algorithm to obtain camera external parameters of a pinhole camera model;
registering the two-dimensional matching points and the three-dimensional matching points by adopting an EPnP algorithm to obtain camera external parameters of a pinhole camera model, wherein the method comprises the following steps:
performing principal component analysis on the three-dimensional matching points to obtain virtual control points;
substituting the virtual control points into a pinhole camera model, and solving by combining a Gauss Newton method to obtain coordinates of the virtual control points in a camera coordinate system;
and solving the pinhole camera model based on the coordinates to obtain the camera external parameters of the pinhole camera model.
2. The substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to claim 1, characterized in that converting the two-dimensional pixel coordinates of the person extracted from the monitoring video image to obtain the three-dimensional coordinates of the person in the digital twin body, based on the planar motion assumption and the pinhole camera model, comprises:
extracting two-dimensional pixel coordinates of a person in the monitoring video image by adopting a deep learning algorithm;
aligning the ground plane of the point cloud to the horizontal plane;
determining the distance between the ground plane of the monitoring video image and the horizontal plane according to the plane motion assumption;
substituting the two-dimensional pixel coordinates into a pinhole camera model of a known camera internal parameter and a known camera external parameter, and combining the distances to calculate and obtain the three-dimensional coordinates of the personnel in the digital twin body.
3. The substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to claim 2, characterized in that extracting the two-dimensional pixel coordinates of a person in the monitoring video image by a deep learning algorithm comprises:
extracting a rectangular frame of a person in the monitoring video image by adopting a deep learning algorithm;
and representing the person's position by the sole-of-foot approximation, and calculating the two-dimensional pixel coordinates of the person in the monitoring video image in combination with the rectangular frame.
4. The substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to claim 1, characterized in that setting the monitoring video image of the substation acquired in real time as the mapping plane texture, based on the mapping plane created in the point cloud, and dynamically updating the texture of the digital twin body comprises:
creating a mapping plane in the point cloud according to the vertex coordinates and the pose of the point selected rectangle;
projecting the mapping plane into a two-dimensional video image, and obtaining a rectangle corresponding to the mapping plane by adopting perspective projection transformation;
acquiring an online monitoring video image of the substation, and performing perspective projection transformation to obtain a front-view rectangular image;
and setting, by a 3D engine, the texture of the rectangle to the front-view rectangular image, and dynamically updating the texture of the digital twin body.
5. The substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to claim 4, characterized in that creating a mapping plane in the point cloud according to the vertex coordinates and pose of the selected rectangle comprises:
determining and calculating vertex coordinates of the selected rectangle according to three points on the selected rectangle region of interest;
determining the pose of the selected rectangle according to the positional relation between an initial rectangle created in the 3D engine and the selected rectangle;
and creating a mapping plane in the point cloud according to the vertex coordinates of the selected rectangle and the pose of the selected rectangle.
6. A substation-oriented online video monitoring two-dimensional and three-dimensional data mapping device, characterized by comprising:
the data acquisition module is used for acquiring a monitoring video image of the substation and a point cloud of a monitoring scene of the substation, wherein the point cloud forms a digital base of the digital twin body;
the external parameter calculation module is used for calculating camera external parameters of a pinhole camera model according to the registration of the monitoring video image and the point cloud;
the personnel extraction module is used for converting the two-dimensional pixel coordinates of the personnel extracted from the monitoring video image to obtain the three-dimensional coordinates of the personnel in the digital twin body based on the plane motion assumption and the pinhole camera model;
the texture mapping module is used for setting a monitoring video image of the substation, which is acquired in real time, as a mapping plane texture based on a mapping plane created in the point cloud, and dynamically updating the texture of the digital twin body;
wherein calculating the camera external parameters of the pinhole camera model according to the registration of the monitoring video image and the point cloud comprises:
a 3D engine is adopted to point and select two-dimensional matching points in the monitoring video image and three-dimensional matching points in the point cloud, and the two-dimensional matching points and the three-dimensional matching points are in one-to-one correspondence;
registering the two-dimensional matching points and the three-dimensional matching points by adopting an EPnP algorithm to obtain camera external parameters of a pinhole camera model;
registering the two-dimensional matching points and the three-dimensional matching points by adopting an EPnP algorithm to obtain camera external parameters of a pinhole camera model, wherein the method comprises the following steps:
performing principal component analysis on the three-dimensional matching points to obtain virtual control points;
substituting the virtual control points into a pinhole camera model, and solving by combining a Gauss Newton method to obtain coordinates of the virtual control points in a camera coordinate system;
and solving the pinhole camera model based on the coordinates to obtain the camera external parameters of the pinhole camera model.
7. A computer-readable storage medium storing computer instructions for causing a computer to perform the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to any one of claims 1 to 5.
8. An electronic device, characterized by comprising: a memory and a processor in communication connection with each other, the memory storing computer instructions, and the processor executing the computer instructions to perform the substation-oriented online video monitoring two-dimensional and three-dimensional data mapping method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310362822.3A CN116109684B (en) | 2023-04-07 | 2023-04-07 | Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116109684A CN116109684A (en) | 2023-05-12 |
CN116109684B true CN116109684B (en) | 2023-06-30 |
Family
ID=86264025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310362822.3A Active CN116109684B (en) | 2023-04-07 | 2023-04-07 | Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116109684B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117292079B (en) * | 2023-11-27 | 2024-03-05 | 浙江城市数字技术有限公司 | Multi-dimensional scene coordinate point position conversion and mapping method applied to digital twin |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108830894B (en) * | 2018-06-19 | 2020-01-17 | 亮风台(上海)信息科技有限公司 | Remote guidance method, device, terminal and storage medium based on augmented reality |
US11675329B2 (en) * | 2021-06-21 | 2023-06-13 | Rockwell Automation Technologies, Inc. | Functional safety system using three dimensional sensing and dynamic digital twin |
CN114155299B (en) * | 2022-02-10 | 2022-04-26 | 盈嘉互联(北京)科技有限公司 | Building digital twinning construction method and system |
CN115082254A (en) * | 2022-03-15 | 2022-09-20 | 济南大学 | Lean control digital twin system of transformer substation |
CN114741768A (en) * | 2022-04-27 | 2022-07-12 | 四川赛康智能科技股份有限公司 | Three-dimensional modeling method for intelligent substation |
CN115034986A (en) * | 2022-06-02 | 2022-09-09 | 中企恒达(北京)科技有限公司 | Three-dimensional video fusion method for performing camera modeling based on single monitoring image |
- 2023-04-07: CN202310362822.3A filed; granted as CN116109684B (status: Active)
Non-Patent Citations (2)
Title |
---|
Research on 3D Detection and Interaction Algorithms for Production Lines Based on Digital Twin; Chen Moran; Deng Changyi; Zhang Jian; Guo Ruifeng; Journal of Chinese Computer Systems (05); pp. 979-984 *
An Interventional 3D Real-Time Monitoring System for Digital Workshops; Zhang Tao; Tang Dunbing; Zhang Zequn; Wei Xin; China Mechanical Engineering (08); pp. 990-999 *
Also Published As
Publication number | Publication date |
---|---|
CN116109684A (en) | 2023-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111127655B (en) | House layout drawing construction method and device, and storage medium | |
US11270460B2 (en) | Method and apparatus for determining pose of image capturing device, and storage medium | |
US11521311B1 (en) | Collaborative disparity decomposition | |
CN109887003B (en) | Method and equipment for carrying out three-dimensional tracking initialization | |
WO2019242262A1 (en) | Augmented reality-based remote guidance method and device, terminal, and storage medium | |
Zhang et al. | A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection | |
WO2018077071A1 (en) | Panoramic image generating method and apparatus | |
Meilland et al. | Dense visual mapping of large scale environments for real-time localisation | |
US20170213396A1 (en) | Virtual changes to a real object | |
CN103226838A (en) | Real-time spatial positioning method for mobile monitoring target in geographical scene | |
US20220139030A1 (en) | Method, apparatus and system for generating a three-dimensional model of a scene | |
US20230298280A1 (en) | Map for augmented reality | |
CN116109684B (en) | Online video monitoring two-dimensional and three-dimensional data mapping method and device for variable electric field station | |
Suenaga et al. | A practical implementation of free viewpoint video system for soccer games | |
Guan et al. | DeepMix: mobility-aware, lightweight, and hybrid 3D object detection for headsets | |
Nagy et al. | Development of an omnidirectional stereo vision system | |
WO2023088127A1 (en) | Indoor navigation method, server, apparatus and terminal | |
da Silveira et al. | Omnidirectional visual computing: Foundations, challenges, and applications | |
US20220068024A1 (en) | Determining a three-dimensional representation of a scene | |
WO2021149509A1 (en) | Imaging device, imaging method, and program | |
Li et al. | An occlusion detection algorithm for 3d texture reconstruction of multi-view images | |
CN114089836A (en) | Labeling method, terminal, server and storage medium | |
Huang et al. | Design and application of intelligent patrol system based on virtual reality | |
CN112258435A (en) | Image processing method and related product | |
Wang et al. | Real‐time fusion of multiple videos and 3D real scenes based on optimal viewpoint selection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||