CN118247420A - Green plant and live-action fusion reconstruction method and device, electronic equipment and storage medium


Info

Publication number
CN118247420A
Authority
CN
China
Prior art keywords: dimensional, green plant, green, action, live
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211612629.2A
Other languages
Chinese (zh)
Inventor
伍广明
王珊珊
胡晓燕
金姣
陈晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fengtu Technology Shenzhen Co Ltd
Original Assignee
Fengtu Technology Shenzhen Co Ltd
Application filed by Fengtu Technology Shenzhen Co Ltd filed Critical Fengtu Technology Shenzhen Co Ltd
Priority to CN202211612629.2A
Publication of CN118247420A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application provides a green plant and live-action fusion reconstruction method, a device, electronic equipment and a storage medium, which relate to the technical field of three-dimensional reconstruction and solve the problem that severe texture loss of the green plants on both sides of a road prevents fine three-dimensional reconstruction of those green plants.

Description

Green plant and live-action fusion reconstruction method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of three-dimensional reconstruction, in particular to a green plant and live-action fusion reconstruction method, a device, electronic equipment and a storage medium.
Background
Three-dimensional reconstruction of live-action scenes serves many civil fields, such as intelligent transportation, intelligent housing management, pipeline planning and environmental monitoring, and greatly helps to improve people's living standards. Promoting large-scale live-action three-dimensional reconstruction requires not only the reconstruction of large overhead-view scenes such as districts, streets and urban areas, but also the fine three-dimensional reconstruction of roads and the green plants on both sides of those roads.
Existing technology generally acquires images of a large-scale scene by unmanned aerial vehicle photography and completes the reconstruction of the large-scale scene directly from those images to obtain a large-scale scene model. Because the imaging distance is too far and the viewing angles are incomplete in unmanned aerial vehicle photography, the textures of the green plants on both sides of a road are severely missing in the captured images, and fine three-dimensional reconstruction of the green plants cannot be performed.
Disclosure of Invention
The application provides a green plant and live-action fusion reconstruction method, a device, electronic equipment and a storage medium, which can complete the reconstruction of a large-scale three-dimensional live-action model, finely reconstruct the green plants on both sides of the roads in the large-scale scene, and improve the reconstruction effect of the three-dimensional live-action model.
In one aspect, the application provides a green plant and live-action fusion reconstruction method, which comprises the following steps:
acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and the landmarks in the three-dimensional live-action include a target green plant;
acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment;
Constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
According to the green plant video, a green plant monomer model with green plant texture and green plant outline is constructed;
And fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain the three-dimensional live-action model fused with the green plant monomer model.
In one possible implementation manner of the present application, the constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video includes:
According to the three-dimensional live-action video, performing aerial triangulation calculation through a pre-trained three-dimensional live-action model generator, and generating three-dimensional live-action point cloud information of the three-dimensional live-action;
And constructing the three-dimensional live-action model according to the three-dimensional live-action point cloud information.
In one possible implementation manner of the present application, the constructing a green plant monomer model of the target green plant with green plant texture according to the green plant video includes:
performing three-dimensional point cloud reconstruction on the target green plant according to the green plant video to obtain three-dimensional green plant point cloud information of the target green plant;
identifying green plant textures of the target green plants in the green plant video to obtain texture identification results;
and constructing the green plant monomer model with the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture recognition result.
In one possible implementation manner of the present application, the green plant video includes an image sequence composed of multiple frames of two-dimensional green plant images, and the identifying green plant textures of the target green plant in the green plant video to obtain a texture identification result includes:
Detecting the target green plants from the two-dimensional green plant image to generate a green plant detection frame;
cropping a target green plant image from the area corresponding to the green plant detection frame;
and identifying green plant textures in the target green plant image to obtain the texture identification result.
In one possible implementation manner of the present application, after the identifying the green plant texture in the target green plant image to obtain the texture recognition result, the method further includes:
retrieving, according to the texture recognition result, a three-dimensional green plant template suitable for the target green plant from a preset template database.
In one possible implementation manner of the present application, the constructing the green plant monomer model with the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture recognition result includes:
determining a three-dimensional green plant stereoscopic frame of the target green plant according to the frame formed by the three-dimensional green plant point cloud information;
determining the green plant height of the three-dimensional green plant stereoscopic frame in an xyz three-dimensional space coordinate system, wherein the xyz three-dimensional space coordinate system is a three-dimensional coordinate system formed by an x axis, a y axis and a z axis;
and scaling the three-dimensional green plant template according to the green plant height of the three-dimensional green plant stereoscopic frame, and fusing the scaled three-dimensional green plant template and the texture recognition result into the three-dimensional green plant stereoscopic frame to obtain the green plant monomer model.
In one possible implementation manner of the present application, the determining the green plant height of the three-dimensional green plant stereoscopic frame in the xyz three-dimensional space coordinate system includes:
determining the green plant height of the three-dimensional green plant stereoscopic frame according to the difference between the maximum z-axis value and the minimum z-axis value of the three-dimensional green plant stereoscopic frame along the z-axis direction;
wherein the z-axis direction is the direction in which the z axis extends in the xyz three-dimensional space coordinate system.
In another aspect, the present application provides a green plant and live-action fusion reconstruction device, the device comprising:
the first video acquisition module is used for acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and the landmarks in the three-dimensional live-action include a target green plant;
the second video acquisition module is used for acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through the vehicle-mounted video acquisition equipment;
the first model construction module is used for constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
the second model construction module is used for constructing a green plant monomer model with green plant texture and green plant outline according to the green plant video;
And the model fusion module is used for fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model to obtain the three-dimensional live-action model fused with the green plant monomer model.
In another aspect, the present application also provides an electronic device, including:
one or more processors;
A memory; and
One or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the green plant and live-action fusion reconstruction method.
In another aspect, the present application also provides a computer readable storage medium having stored thereon a computer program, the computer program being loaded by a processor to perform the steps of the green plant and live-action fusion reconstruction method.
According to the application, the three-dimensional live-action is photographed by the unmanned aerial vehicle shooting equipment to obtain a three-dimensional live-action video, and the construction of the three-dimensional live-action model is completed according to that video. In addition, the target green plant is photographed by the vehicle-mounted video acquisition equipment to obtain a green plant video, and a green plant monomer model with green plant texture and green plant outline is constructed according to that video. Then, in the model fusion process, the green plant monomer model is fused into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, realizing fine reconstruction of the green plants in the three-dimensional live-action model, in which each green plant monomer model has clear green plant texture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a scene of a green plant and live-action fusion reconstruction system provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of an embodiment of a method for reconstructing a green plant and live-action fusion provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a green plant and live-action fusion reconstruction device according to the present application;
Fig. 4 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the present application, the term "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment described as "exemplary" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
In order to facilitate understanding, some technical terms related to the embodiments of the present application are briefly described below.
1. Three-dimensional model: a three-dimensional model is a polygonal representation of an object, typically displayed with a computer or other video device. The displayed object may be a real world entity or an imaginary object. Anything that exists in physical nature can be represented by a three-dimensional model. In the embodiment of the application, the three-dimensional model of the object is used for indicating the three-dimensional structure and the size information of the object. There are various data storage forms of the three-dimensional model, for example, the three-dimensional model is represented in the form of a three-dimensional point cloud, a grid or a voxel, and the data storage forms are not limited herein.
2. Camera external parameters: i.e. the external parameters of the camera, are the conversion relations between the world coordinate system and the camera coordinate system, including rotation parameters and translation parameters.
2.1 World Coordinate System (World Coordinates)
The world coordinate system $(x_w, y_w, z_w)$, also called the measurement coordinate system, is a three-dimensional rectangular coordinate system; based on it, the spatial positions of the camera and the object to be measured can be described, and the position of the world coordinate system can be freely determined according to actual conditions.
2.2 Camera coordinate System (Camera Coordinate)
The camera coordinate system $(x_c, y_c, z_c)$ is also a three-dimensional rectangular coordinate system; its origin is located at the optical center of the lens, the x and y axes are respectively parallel to the two sides of the image plane, and the z axis is the optical axis of the lens, perpendicular to the image plane.
2.3 Conversion of the world coordinate system into the camera coordinate system

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein $\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$ is the external parameter matrix of the camera; $R$ is a $3 \times 3$ rotation matrix, the product of the rotation matrices $R_x(\theta_x)$, $R_y(\theta_y)$, $R_z(\theta_z)$ about the individual coordinate axes, where $\theta_x$, $\theta_y$, $\theta_z$ are the rotation parameters of each coordinate axis; and $t$ is the $3 \times 1$ translation parameter $(t_x, t_y, t_z)^T$.
According to camera external parameters, the camera pose, namely the position of the camera in space and the pose of the camera, can be determined, and can be respectively regarded as translation transformation and rotation transformation of the camera from an original reference position to a current position. Similarly, the pose of the target object in the present application is the position of the target object in space and the pose of the target object.
3. Camera internal parameters: that is, the internal parameters of the camera, which describe the conversion relation between the camera coordinate system and the pixel coordinate system, i.e. the conversion from length units into pixel-based coordinates; after a camera leaves the factory, its internal parameters are fixed. Illustratively, the internal parameters of the camera include the internal parameter matrix of the camera, specifically:

$$K = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f$ is the focal length in millimeters; $f_x$ is the focal length in the x-axis direction measured in pixels; $f_y$ is the focal length in the y-axis direction measured in pixels; $u_0$ and $v_0$ are the principal point coordinates (relative to the imaging plane) in pixels; and $\gamma$ is the coordinate axis tilt parameter, ideally 0.
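As a hedged illustration of how these internal and external parameters act together, the following sketch projects a world point into pixel coordinates; all numeric values are illustrative assumptions, not calibrated data.

```python
import numpy as np

def project_point(P_w, R, t, K):
    """Map a 3D world point to pixel coordinates: p ~ K (R P_w + t)."""
    P_c = R @ P_w + t      # world -> camera coordinates (external parameters)
    p = K @ P_c            # camera -> homogeneous pixel coords (internal parameters)
    return p[:2] / p[2]    # perspective division

R = np.eye(3)                          # assumed rotation (identity)
t = np.array([0.0, 0.0, 5.0])          # assumed translation, meters
K = np.array([[1000.0,    0.0, 640.0], # fx, gamma (0), u0
              [   0.0, 1000.0, 360.0], # fy, v0
              [   0.0,    0.0,   1.0]])
print(project_point(np.array([1.0, 0.5, 10.0]), R, t, K))
```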
4. Camera calibration: in image measurement processes and machine vision applications, determining the correlation between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image requires establishing a geometric model of camera imaging. The parameters of this geometric model are the camera parameters, which include the camera internal parameters, camera external parameters and distortion parameters of the camera; under most conditions these parameters must be obtained through experiments and calculation, and this process of solving the parameters is called camera calibration. Current camera calibration methods include the linear calibration method, the nonlinear optimization calibration method and the two-step calibration method.
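One possible realisation of this calibration step, sketched with OpenCV's chessboard routine (a closed-form initialization refined by nonlinear optimization, in the spirit of the two-step method above); the image folder and board geometry are assumptions.

```python
import glob
import cv2
import numpy as np

board = (9, 6)   # assumed inner-corner count of the calibration chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):            # assumed image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the internal parameter matrix K, distortion coefficients, and
# per-view external parameters (rotation and translation vectors).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```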
5. Three-dimensional template: the template model library is a database storing three-dimensional templates; corresponding three-dimensional templates are prefabricated according to the different types and parameters of green plants. A three-dimensional template includes the three-dimensional geometric information of a target object, specifically its geometric structure and size information, and optionally its texture features. Optionally, the three-dimensional templates in the template database carry labels of target object types; for example, the target object three-dimensional templates may include a three-dimensional indication board template, a three-dimensional green plant template, a three-dimensional traffic signal box template and the like, and each template is preset with an initial angle to facilitate calculation of a subsequent rotation angle.
The embodiment of the application provides a green plant and live-action fusion reconstruction method, a device, electronic equipment and a storage medium, and the method, the device, the electronic equipment and the storage medium are respectively described in detail below.
The execution body of the green plant and live-action fusion reconstruction method of the embodiments of the present application may be the green plant and live-action fusion reconstruction device provided by the embodiments of the present application, or different types of electronic equipment integrating that device, such as server equipment, a physical host, or user equipment (UE). The green plant and live-action fusion reconstruction device may be implemented in hardware or software, and the UE may be terminal equipment such as a smart phone, tablet computer, notebook computer, palm computer, desktop computer or personal digital assistant (PDA).
The electronic device may be operated in a single operation mode, or may also be operated in a device cluster mode.
As shown in fig. 1, fig. 1 is a schematic view of a scene of a green plant and live-action fusion reconstruction system according to an embodiment of the present application. The green plant and live-action fusion reconstruction system may include vehicle-mounted video acquisition equipment for shooting the target green plant and the three-dimensional live-action of the road, and an electronic device 100 for completing the green plant and live-action fusion reconstruction method, with a green plant and live-action fusion reconstruction device integrated in the electronic device 100. For example, the electronic device may acquire a three-dimensional live-action video of a three-dimensional live-action, where the three-dimensional live-action video is obtained by shooting with unmanned aerial vehicle shooting equipment, and the landmarks in the three-dimensional live-action include a target green plant; acquire a green plant video of the target green plant, where the green plant video is obtained by shooting with vehicle-mounted video acquisition equipment; construct a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video; construct a green plant monomer model with green plant texture and green plant outline according to the green plant video; and fuse the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain a three-dimensional live-action model fused with the green plant monomer model.
In addition, as shown in fig. 1, the green plant and live-action fusion reconstruction system may further include a memory 200 for storing data, such as video data, image data, and device data of an on-vehicle video capture device for capturing video, and the like.
It should be noted that the schematic view of the scene of the green plant and live-action fusion reconstruction system shown in fig. 1 is only an example. The green plant and live-action fusion reconstruction system and scene described in the embodiments of the present application are intended to describe the technical solution of the embodiments more clearly and do not constitute a limitation on it; as a person of ordinary skill in the art will appreciate, the technical solution provided by the embodiments of the present application is equally applicable to similar technical problems as the green plant and live-action fusion reconstruction system evolves and new service scenes appear.
In the embodiment of the present application, an electronic device is used as an execution body, and for simplicity and convenience of description, the execution body is omitted in the subsequent method embodiment, and the green plant and live-action fusion reconstruction method includes:
acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and the landmarks in the three-dimensional live-action include a target green plant;
acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment;
constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
constructing a green plant monomer model with green plant texture and green plant outline according to the green plant video;
and fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain a three-dimensional live-action model fused with the green plant monomer model.
The application realizes the fine reconstruction of green plants in the three-dimensional live-action model, and each green plant monomer model of the three-dimensional live-action model has clear green plant textures, so that the application not only can complete the reconstruction of a large-scale three-dimensional live-action model, but also can realize the effective reconstruction of the target green plants in the large-scale three-dimensional live-action model.
Fig. 2 is a schematic flow chart of an embodiment of the green plant and live-action fusion reconstruction method provided in an embodiment of the application. It should be noted that although a logical order is depicted in the flowchart, in some cases the steps depicted or described may be performed in a different order than presented herein. The green plant and live-action fusion reconstruction method specifically includes the following steps 201-205:
201. And acquiring a three-dimensional live-action video of the three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and the landmarks in the three-dimensional live-action include a target green plant.
The three-dimensional live-action can be any street scene requiring three-dimensional reconstruction. The three-dimensional live-action comprises a plurality of landmarks, which may include buildings, signs, green plants and the like; specifically, the landmarks in the three-dimensional live-action include the target green plant requiring three-dimensional reconstruction.
During the flight of the unmanned aerial vehicle, the unmanned aerial vehicle shooting equipment mounted on it simultaneously acquires two-dimensional images of the street view from a plurality of different viewing angles, such as vertical and oblique, to obtain a three-dimensional live-action video, which is a video formed by a sequence of multiple overlapping two-dimensional live-action image frames.
In the acquisition process, first geographic position information of the street view along the route is recorded simultaneously, where the first geographic position information includes the first position information, first attitude angle information and first timestamp information of the unmanned aerial vehicle shooting equipment at imaging time. The first position information includes the (X1, Y1, Z1) coordinates of the unmanned aerial vehicle shooting equipment in the world coordinate system at imaging time, where the X1 and Y1 values are respectively the longitude and latitude of the equipment at imaging time, and the Z1 value is its flight altitude at imaging time;
the first attitude angle information includes the pitch angle, roll angle and yaw angle of the unmanned aerial vehicle shooting equipment at imaging time. In this embodiment, an angular velocity sensor mounted in the unmanned aerial vehicle shooting equipment detects the angular velocity of the equipment about the X1, Y1 and Z1 axes of the world coordinate system at imaging time, and a processor built into the equipment integrates the sensor output over time to calculate the first attitude angle information in real time;
the first timestamp information is the time information of the unmanned aerial vehicle shooting equipment at imaging time.
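A minimal sketch of how such a record might be represented is given below; the class and field names are assumptions for illustration, not structures defined by this application.

```python
from dataclasses import dataclass

@dataclass
class GeoPositionRecord:
    """First geographic position information recorded at imaging time."""
    longitude: float   # X1 value: longitude of the shooting equipment
    latitude: float    # Y1 value: latitude of the shooting equipment
    altitude: float    # Z1 value: flight altitude at imaging time
    pitch: float       # first attitude angle: pitch, degrees
    roll: float        # first attitude angle: roll, degrees
    yaw: float         # first attitude angle: yaw, degrees
    timestamp: float   # first timestamp, e.g. UNIX epoch seconds
```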
In this embodiment, after the unmanned aerial vehicle shoots the three-dimensional live-action video of the street view, a connection channel is established between the unmanned aerial vehicle shooting equipment and the electronic device executing the green plant and live-action fusion reconstruction method through a network transmission module, and the three-dimensional live-action video or images acquired by the unmanned aerial vehicle shooting equipment are sent to that electronic device in message form. This realizes the acquisition of the three-dimensional live-action video, reduces the data transmission cost of the three-dimensional live-action video, and improves transmission efficiency.
In this embodiment, in order to obtain the geographic information and shooting time information of the photographed three-dimensional live-action, when the unmanned aerial vehicle shooting equipment transmits the three-dimensional live-action video to the electronic device executing the green plant and live-action fusion reconstruction method, the first geographic position information recorded by the unmanned aerial vehicle shooting equipment is transmitted at the same time; this embodiment does not specifically limit the data content transmitted by the unmanned aerial vehicle shooting equipment.
202. And acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment.
The target green plants can be any green plants needing to be subjected to three-dimensional reconstruction in the three-dimensional real scene;
The target green plant is photographed from a plurality of different viewing angles by the vehicle-mounted video acquisition equipment to obtain a green plant video, where the green plant video is a video formed by a sequence of multiple overlapping two-dimensional green plant images and contains all visible parts of the target green plant.
In the present embodiment, the vehicle-mounted video acquisition equipment may be an integrated radar-camera unit, a camera or the like arranged on a moving vehicle, which is not particularly limited in this embodiment.
Therefore, the application can acquire green plant videos of the target green plant from a plurality of different viewing angles in real time, so the texture in the acquired target green plant images is clearer and closer to the real texture of the target green plant; meanwhile, acquiring the green plant video with vehicle-mounted video acquisition equipment is faster and cheaper.
In this embodiment, after the vehicle-mounted video acquisition equipment captures the target green plant video, a connection channel is established between the vehicle-mounted video acquisition equipment and the electronic device through a network transmission module, and the target green plant video or images acquired by the vehicle-mounted video acquisition equipment are sent to the electronic device in message form. This realizes the acquisition of the target green plant video, reduces its data transmission cost, and improves transmission efficiency.
In this embodiment, in order to obtain the geographic information and shooting time information of the photographed target green plant, when the vehicle-mounted video acquisition equipment transmits the target green plant video to the electronic device executing the green plant and live-action fusion reconstruction method, the geographic position information of the green plant recorded by the vehicle-mounted video acquisition equipment is transmitted at the same time; this embodiment does not specifically limit the data content transmitted by the vehicle-mounted video acquisition equipment.
203. And constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video.
The internal and external parameters of the unmanned aerial vehicle shooting equipment comprise internal parameters and external parameters of the unmanned aerial vehicle shooting equipment, wherein the internal parameters of the unmanned aerial vehicle shooting equipment comprise internal parameter matrixes of the unmanned aerial vehicle shooting equipment, and the external parameters of the unmanned aerial vehicle shooting equipment comprise position information and attitude information of the unmanned aerial vehicle shooting equipment. Before the embodiment, the camera calibration of the unmanned aerial vehicle shooting equipment is completed by any one camera calibration method of a linear calibration method, a nonlinear optimization calibration method or a two-step calibration method, so that the initial internal parameters and the initial external parameters of the unmanned aerial vehicle shooting equipment are obtained.
In this embodiment, a three-dimensional live-action model of a three-dimensional live-action is constructed according to a three-dimensional live-action video, which specifically includes:
according to the three-dimensional live-action video, performing aerial triangulation calculation through a pre-trained three-dimensional live-action model generator, and generating three-dimensional live-action point cloud information of the three-dimensional live-action; and constructing a three-dimensional real scene model according to the three-dimensional real scene point cloud information.
Specifically, a trained three-dimensional live-action model generator is built in advance into the electronic device executing the green plant and live-action fusion reconstruction method. The three-dimensional live-action video, carrying the first geographic position information and with a certain image overlap, is input into the pre-trained three-dimensional live-action model generator, which automatically performs aerial triangulation calculation on a graphics processing unit (GPU) in combination with the first geographic position information and generates sparse three-dimensional live-action point cloud information. The sparse point cloud is then densified to obtain dense three-dimensional live-action point cloud information, a three-dimensional live-action triangular mesh model is formed from the dense point cloud, and finally a texture-rich three-dimensional live-action model is generated in combination with the pixel information in the three-dimensional live-action video.
The aerial triangulation calculation specifically comprises the following steps: detecting first image feature points in the multi-frame three-dimensional live-action images, where a first image feature point is a point of intense color or texture change, generally described by its pixel value and the relation to its surrounding pixels; associating the same first image feature points across three-dimensional live-action images of different frames to complete first image feature point matching; and, according to the matching result, adjusting the initial internal and external parameters of the unmanned aerial vehicle shooting equipment with the aim of minimizing the intersection error of the first image feature points in three-dimensional space, finally obtaining the adjusted internal and external parameters of the unmanned aerial vehicle shooting equipment and generating the sparse three-dimensional live-action point cloud information.
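As a hedged illustration of the feature detection and matching step described above (the commercial generators listed below implement their own pipelines internally), the following sketch detects SIFT feature points in two frames and matches them with Lowe's ratio test; the frame file names are assumptions.

```python
import cv2

# Detect feature points (points of intense color/texture change) in two
# frames of the live-action video and match them between the frames.
img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # assumed frame paths
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Associate the same feature points across frames (Lowe ratio test).
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
```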
In this embodiment, the pre-trained three-dimensional live-action model generator may use any relatively common oblique photography modeling software such as ContextCapture, PhotoScan or Pix4Dmapper.
204. And constructing a green plant monomer model with green plant textures and green plant outlines according to the green plant video.
According to the green plant video, a green plant monomer model of the target green plant with green plant texture is constructed, which includes the following steps 2041-2043:
2041. and carrying out three-dimensional point cloud reconstruction on the target green plant according to the green plant video to obtain three-dimensional green plant point cloud information of the target green plant.
In this embodiment, according to the green plant video, performing three-dimensional point cloud reconstruction on the target green plant to obtain three-dimensional green plant point cloud information of the target green plant specifically includes:
and carrying out sparse point cloud reconstruction on the target green plant according to an image sequence formed by a plurality of two-dimensional green plant images in the green plant video to obtain pose parameters when the vehicle-mounted video acquisition equipment images.
In this embodiment, sparse point cloud reconstruction is performed on the target green plant through a structure-from-motion (SfM) algorithm; specifically, the OpenSfM open-source code may be adopted for the three-dimensional reconstruction. A multi-frame two-dimensional green plant image sequence is used as input, and second image feature points invariant to scale transformation and rotation angle are detected and extracted from the two-dimensional green plant images through the Shi-Tomasi algorithm, the SIFT algorithm or the SURF algorithm. In image processing, a second image feature point is a point where the image gray value changes sharply, or a point of large curvature on an image edge (i.e. the intersection of two edges). The second image feature points in a two-dimensional green plant image reflect its essential characteristics and can identify the target green plant in the image, and matching of the target green plant across multiple two-dimensional green plant images can be completed by matching the second image feature points;
after the second image feature points in the two-dimensional green plant images are detected and extracted, the second image feature points are matched between every pair of two-dimensional green plant images in the multi-frame sequence and the corresponding matching points are calculated. The fundamental matrix and the essential matrix are computed from the calculated matching points, and singular value decomposition of the essential matrix yields the depth values of the second image feature points, i.e. their positions in three-dimensional space. Finally, a sparse three-dimensional point cloud of the target green plant is generated, and at the same time the pose parameters of the vehicle-mounted video acquisition equipment at imaging time and the sparse three-dimensional green plant point cloud information are calculated, where the pose parameters are the position information and attitude information of the vehicle-mounted video acquisition equipment when shooting the two-dimensional green plant images.
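A minimal sketch of this pose recovery and triangulation step, assuming OpenCV, matched pixel arrays pts1/pts2 of shape (N, 2) in float, and a known internal parameter matrix K; a full SfM pipeline such as OpenSfM adds incremental registration and bundle adjustment on top of this.

```python
import cv2
import numpy as np

def triangulate_pair(pts1, pts2, K):
    """Recover relative pose from matched points and triangulate them."""
    # Essential matrix from matched feature points (RANSAC for robustness).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # Decompose E (internally via SVD) into rotation R and translation t.
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Projection matrices: first camera at the origin, second at (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T   # sparse 3D points, shape (N, 3)
```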
According to the green plant video, performing three-dimensional point cloud reconstruction on the target green plant to obtain the three-dimensional green plant point cloud information of the target green plant further specifically includes:
performing dense point cloud reconstruction on the target green plant according to the image sequence formed by the multi-frame two-dimensional green plant images and the pose parameters, to obtain a three-dimensional dense point cloud of the target green plant; and taking the three-dimensional dense point cloud of the target green plant as the three-dimensional green plant point cloud information of the target green plant.
In this embodiment, after the pose parameters of the vehicle-mounted video acquisition equipment at imaging time are obtained, dense point cloud reconstruction is performed on the target green plant through a multi-view stereo (MVS) algorithm to generate a dense three-dimensional point cloud. Specifically, the OpenMVS open-source code can be adopted for the data processing: the pose parameters of the vehicle-mounted video acquisition equipment and the multi-frame two-dimensional green plant image sequence are used as inputs, pixel-by-pixel depth estimation is performed according to the pose parameters at imaging time and the multi-frame two-dimensional green plant image sequences from a plurality of viewing angles, a dense three-dimensional point cloud is generated, and the three-dimensional green plant point cloud information of the target green plant is finally obtained.
Pixel-by-pixel depth estimation is performed according to the pose parameters of the vehicle-mounted video acquisition equipment at imaging time and the multi-frame two-dimensional green plant image sequences from a plurality of viewing angles, specifically:
for the pixel $p$ of a given second image feature point in a two-dimensional green plant image, the three-dimensional point cloud coordinate in real space is calculated from the camera internal parameters, the pose parameters and the depth value of the feature point:

$$P = D(p)\,T^{-1}K^{-1}p$$

wherein $P$ is the three-dimensional point cloud coordinate in the point cloud coordinate system, $D(p)$ is the depth value at the pixel $p$ of the second image feature point, $T$ is the camera pose of the vehicle-mounted video acquisition equipment (comprising the rotation matrix $R$ and the translation vector $t$), and $K$ is the camera internal parameter matrix of the vehicle-mounted video acquisition equipment.
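A minimal numpy sketch of this back-projection formula, under the assumption that T is represented as a 4x4 homogeneous world-to-camera matrix built from R and t:

```python
import numpy as np

def backproject(px, depth, K, T):
    """Lift pixel px = (u, v) with depth D(p) to a world-space point:
    P = D(p) * T^-1 * K^-1 * p."""
    p = np.array([px[0], px[1], 1.0])            # homogeneous pixel coordinates
    ray = np.linalg.inv(K) @ p                   # K^-1 p: normalized camera ray
    P_cam = depth * ray                          # D(p) K^-1 p: camera-space point
    P_hom = np.linalg.inv(T) @ np.append(P_cam, 1.0)  # T^-1: camera -> world
    return P_hom[:3]
```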
2042. And identifying green plant textures of the target green plants in the green plant video, and obtaining a texture identification result.
In this embodiment, the green plant video includes an image sequence composed of a plurality of two-dimensional green plant images, and a green plant texture of a target green plant in the green plant video is identified to obtain a texture identification result, and specifically includes A1-A3:
A1, detecting a target green plant from a two-dimensional green plant image to generate a green plant detection frame;
Because the vehicle-mounted video acquisition equipment also captures pictures of elements other than the target green plant when acquiring the green plant video, and in order to prevent those pictures from interfering with the acquisition of the green plant texture image of the target green plant, the area where the target green plant is located must first be determined from the green plant video before acquiring the green plant texture image; that is, the target green plant must first be detected from the green plant video and a green plant detection frame generated.
In this embodiment, the image sequence composed of multiple frames of two-dimensional green plant images of the green plant video is taken as input, a trained target green plant detection model performs target green plant detection on the green plant video, and an image sequence in which each two-dimensional green plant image carries the target green plant and its green plant detection frame is output.
A2. Cropping the target green plant image from the area corresponding to the green plant detection frame.
In order to prevent other irrelevant areas in a two-dimensional green plant image carrying the target green plant and its detection frame from interfering with the green plant texture recognition procedure, in this embodiment the image of the area where the target green plant is located is cropped based on the green plant detection frame to obtain the target green plant image, and green plant texture recognition of the target green plant is then performed based on the target green plant image. In this embodiment, the size of the target green plant image may be the same as the size of the green plant detection frame, i.e. the vertices of the cropped target green plant image correspond one-to-one to the vertices of the green plant detection frame, which is not specifically limited in this embodiment.
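A hedged sketch of this cropping step; the (x1, y1, x2, y2) corner format of the detection frame and the file name are assumptions about the detector's output convention.

```python
import cv2

def crop_detection(frame, box):
    """Crop the region inside the green plant detection frame."""
    x1, y1, x2, y2 = [int(v) for v in box]
    return frame[y1:y2, x1:x2]

frame = cv2.imread("green_plant_frame.jpg")             # assumed frame path
plant_img = crop_detection(frame, (120, 80, 480, 620))  # assumed box corners
```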
A3, identifying green plant textures in the target green plant image, and obtaining a texture identification result.
The identification of the green plant texture in the target green plant image is performed based on one of two-dimensional green plant images with target green plant and green plant detection frames in the image sequence. In the application process, analysis can be performed based on any frame of target green plant image in the image sequence to acquire a green plant texture image with a target green plant texture, and the green plant texture image with the target green plant texture is used as a final texture recognition result. In order to obtain a more comprehensive and complete green plant texture image, analysis can be performed based on multiple frames of target green plant images or each frame of target green plant image in an image sequence so as to correspondingly acquire multiple green plant texture images, and a final texture recognition result is obtained according to the analysis of the multiple green plant texture images.
In this embodiment, after identifying the green plant texture in the target green plant image and obtaining the texture identification result, the method further includes:
according to the texture recognition result, a three-dimensional green plant template suitable for the target green plant is retrieved from a preset template database.
Because green plants are irregular in shape and subject to environmental factors and individual growth factors, the spatial structure of each green plant is highly distinctive. Constructing accurate three-dimensional green plant templates in advance in a preset template database would require considering growth data under various environments, so too many parameters would need to be described when constructing the templates, and the data volume would be large and difficult to acquire. To reduce the up-front work of pre-constructing three-dimensional green plant templates, rougher three-dimensional green plant templates are constructed, different texture elements are assigned to each three-dimensional green plant template, and the corresponding three-dimensional green plant template can then be retrieved quickly through its texture elements.
Therefore, in the present application, after the green plant texture image containing the texture of the target green plant is obtained, the three-dimensional green plant template is called as follows: the green plant texture image is used as a query image, and a three-dimensional green plant template containing the same texture image is retrieved from the preset template database.
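One possible realisation of this query-image retrieval, sketched under the assumption that each template in the database is indexed by a representative texture image, and that an HSV color histogram is an adequate stand-in for whatever texture descriptor the template database actually uses:

```python
import cv2

def texture_hist(img):
    """HSV hue/saturation histogram as a simple texture descriptor."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def retrieve_template(query_img, template_db):
    """template_db: mapping of template id -> representative texture image."""
    q = texture_hist(query_img)
    scores = {tid: cv2.compareHist(q, texture_hist(img), cv2.HISTCMP_CORREL)
              for tid, img in template_db.items()}
    return max(scores, key=scores.get)   # id of the best-matching template
```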
2043. And constructing the green plant monomer model with the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture recognition result.
The pre-constructed three-dimensional green plant template is embedded into the frame formed by the three-dimensional green plant point cloud information, the texture recognition result is fused into that frame, and the green plant monomer model is obtained by construction.
Before the green plant monomer model is constructed according to the three-dimensional green plant point cloud information and the pre-constructed three-dimensional green plant template, the height of the frame formed by the three-dimensional green plant point cloud information must first be determined.
Constructing the green plant monomer model with the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture recognition result specifically includes the following steps S1-S3:
S1. Determining the three-dimensional green plant stereoscopic frame of the target green plant according to the frame formed by the three-dimensional green plant point cloud information.
The three-dimensional green plant point cloud is connected to generate a virtual minimal circumscribed three-dimensional frame, and this virtual minimal circumscribed three-dimensional frame is the three-dimensional green plant stereoscopic frame of the target green plant.
S2. Determining the green plant height of the three-dimensional green plant stereoscopic frame in an xyz three-dimensional space coordinate system, where the xyz three-dimensional space coordinate system is a three-dimensional coordinate system formed by an x axis, a y axis and a z axis.
The xyz three-dimensional space coordinate system is a three-dimensional space coordinate system in the virtual space of the electronic device executing the green plant and live-action fusion reconstruction method. When the three-dimensional green plant point cloud information is generated, images of the virtual space and of the three-dimensional green plant point cloud generated in it are displayed on the display interface of the electronic device, together with the xyz three-dimensional space coordinate system constructed in the virtual space; the x axis, y axis and z axis of the xyz coordinate system correspond to longitude, latitude and altitude in the world coordinate system.
In this embodiment, the green plant height of the three-dimensional green plant stereoscopic frame is determined from the difference between the maximum z-axis value and the minimum z-axis value of the three-dimensional green plant stereoscopic frame along the z-axis direction, where the z-axis direction is the direction in which the z axis extends in the xyz three-dimensional space coordinate system.
Because the z axis of the xyz three-dimensional space coordinate system corresponds to actual height, the green plant height of the three-dimensional green plant stereoscopic frame can be determined by calculating the difference between its maximum and minimum z-axis values along the z-axis direction, where a z-axis value is the coordinate value of the three-dimensional green plant stereoscopic frame in the z-axis direction of the xyz three-dimensional space coordinate system.
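A minimal numpy sketch of this height computation, assuming the stereoscopic frame is represented by the (N, 3) array of point cloud coordinates it encloses:

```python
import numpy as np

def green_plant_height(points):
    """Height = max z minus min z along the z axis of the xyz system.
    points: (N, 3) array of (x, y, z) point cloud coordinates."""
    z = points[:, 2]
    return z.max() - z.min()
```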
S3. Scaling the three-dimensional green plant template according to the green plant height of the three-dimensional green plant stereoscopic frame, and fusing the scaled three-dimensional green plant template and the texture recognition result into the three-dimensional green plant stereoscopic frame to obtain the green plant monomer model.
After the green plant height of the three-dimensional green plant stereoscopic frame is determined and the three-dimensional green plant template is obtained, the three-dimensional green plant template is scaled according to the green plant height of the stereoscopic frame, the adjusted three-dimensional green plant template is embedded into the stereoscopic frame, and the texture recognition result of the target green plant is fused into the stereoscopic frame, finally obtaining a green plant monomer model with clear green plant texture.
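A hedged sketch of the scaling step, under the assumption that the template is available as an (N, 3) vertex array; the uniform scale factor matches the template's z-extent to the measured green plant height:

```python
import numpy as np

def scale_template(vertices, target_height):
    """Uniformly scale template mesh vertices so their z-extent equals
    the green plant height of the stereoscopic frame."""
    template_height = vertices[:, 2].max() - vertices[:, 2].min()
    s = target_height / template_height       # uniform scale factor
    centre = vertices.mean(axis=0)
    return (vertices - centre) * s + centre   # scale about the centroid
```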
205. And fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain a three-dimensional live-action model fused with the green plant monomer model.
In the construction stage of the green plant monomer model, the height adjustment of the green plant monomer model is completed according to the green plant height of the target green plant. Therefore, after the green plant monomer model with clear green plant texture is obtained, it is directly embedded at the green plant position of the same height in the three-dimensional live-action model, yielding a three-dimensional live-action model in which the target green plant has clear texture.
Although the green plants in a real three-dimensional live-action come in various shapes, most of them share identical textures and differ only in height. Therefore, in this embodiment, after the green plant monomer model is obtained, its height can be adjusted to realize large-batch three-dimensional reconstruction of the other identically textured green plants in the three-dimensional live-action.
Therefore, in the green plant and live-action fusion reconstruction method provided by the application, the unmanned aerial vehicle shooting equipment acquires a large-scale, comprehensive three-dimensional live-action video, the vehicle-mounted video acquisition equipment acquires a green plant video with more comprehensive viewing angles and clearer textures, and the combination of the two keeps equipment and shooting costs low. Three-dimensional reconstruction based on the three-dimensional live-action video and the green plant video can not only complete the three-dimensional reconstruction of a large-scale live-action to obtain a three-dimensional live-action model, but also independently carry out the three-dimensional reconstruction of a green plant monomer model with clear texture, restoring the real texture of the green plant; the reconstructed green plant monomer model is fused into the three-dimensional live-action model, finally obtaining a three-dimensional live-action model with clear texture for the target green plant.
Therefore, this three-dimensional reconstruction mode can reconstruct a large number of green plants in the three-dimensional live-action model without extra cost, further improves the reconstruction effect of green plant elements in large-scale live-action three-dimensional reconstruction, and solves the problem of missing green plant textures in three-dimensional live-action reconstruction that currently arises when an oblique model is generated solely by unmanned aerial vehicle oblique photography, in which the imaging distance is too far, the viewing angle is incomplete and the picture is unclear.
In another embodiment of the present application, the target green plant detection model may be trained by:
Adopting EfficientNet as the backbone network and YOLOX or another anchor-free object detection model as the model to be trained, taking the sample image frame set of the green plant sample videos stored in a complete sample library as the input of the model to be trained, and taking image frames containing the target green plant and the corresponding green plant detection frames as the output, so as to train the model and obtain the target green plant detection model;
in the training process, the modeling capability of the model to be trained can be improved by data enhancement, where the data enhancement specifically includes, but is not limited to, the following modes (a combined sketch of modes (1) to (3) is given after the list):
(1) Random cropping (Random Crop) is performed on the sample image frames in the sample image frame set; specifically, a region whose ratio to the original frame is randomly drawn from 0.6 to 1.0 is cropped from each sample image frame, and the cropped sample image frame is used as the input of the model to be trained;
(2) Dropblock layers are embedded in the network; that is, the backbone network comprises a plurality of convolutional layers, pooling layers and Dropblock layers. A Dropblock layer discards neighborhood pixel blocks of size K×R in the feature map with a discarding probability p; for example, a Dropblock layer may be set to discard 3×3 neighborhood pixel blocks in the feature map with a discarding probability of 0.1;
(3) Mosaic augmentation is adopted; specifically, four sample image frames in the sample image frame set are randomly stitched into one Mosaic image, and the stitched sample image is used as training data, i.e., as input for model training.
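For illustration only, the following Python sketch gives simplified versions of modes (1) to (3). In the embodiment the Dropblock layer is embedded in the network rather than applied as preprocessing; the numpy version below only illustrates its masking behaviour on a 2D feature map. All function names are assumptions of this sketch, and real training code would normally rely on a detection framework's built-in transforms.

```python
import random
import numpy as np

def random_crop(frame: np.ndarray, lo: float = 0.6, hi: float = 1.0) -> np.ndarray:
    """Mode (1): crop a region whose side ratio is drawn uniformly from [lo, hi]."""
    h, w = frame.shape[:2]
    ratio = random.uniform(lo, hi)
    ch, cw = max(1, int(h * ratio)), max(1, int(w * ratio))
    top = random.randint(0, h - ch)
    left = random.randint(0, w - cw)
    return frame[top:top + ch, left:left + cw]

def dropblock(feature: np.ndarray, block: int = 3, p: float = 0.1) -> np.ndarray:
    """Mode (2), simplified: zero out block x block neighbourhoods of an
    (H, W) feature map, each neighbourhood seeded with probability p."""
    h, w = feature.shape
    out = feature.copy()
    r = block // 2
    for y, x in zip(*np.nonzero(np.random.rand(h, w) < p)):
        out[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1] = 0.0
    return out

def mosaic(frames: list, out_size: int = 640) -> np.ndarray:
    """Mode (3), simplified: stitch four frames into one Mosaic image on a
    2 x 2 grid, using nearest-neighbour resizing to avoid extra dependencies."""
    assert len(frames) == 4
    half = out_size // 2
    canvas = np.zeros((out_size, out_size, 3), dtype=frames[0].dtype)
    for i, f in enumerate(frames):
        ys = np.linspace(0, f.shape[0] - 1, half).astype(int)
        xs = np.linspace(0, f.shape[1] - 1, half).astype(int)
        row, col = divmod(i, 2)
        canvas[row * half:(row + 1) * half, col * half:(col + 1) * half] = f[ys][:, xs]
    return canvas
```

A full Mosaic implementation would place the four frames around a random center point and remap their annotation boxes accordingly; both details are omitted here for brevity.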
In this embodiment, multi-scale training (Multi-Scale Training, MST) is also adopted, which reduces the risk of model overfitting and enhances the robustness of the target green plant detection model.
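A minimal sketch of such a multi-scale training schedule, assuming a candidate set of square input resolutions (the embodiment does not specify them) and a fresh draw every few iterations; the caller resizes each batch to the yielded size before the forward pass.

```python
import random

# Assumed candidate input resolutions; not specified by the embodiment.
SCALES = [448, 480, 512, 544, 576, 608, 640]

def multi_scale_batches(batches, every: int = 10):
    """Yield (batch, input_size) pairs; a new resolution is drawn every
    `every` batches, so the detector sees green plants at varied scales."""
    size = random.choice(SCALES)
    for step, batch in enumerate(batches):
        if step % every == 0:
            size = random.choice(SCALES)
        yield batch, size
```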
In order to better implement the green plant and live-action fusion reconstruction method in the embodiment of the present application, on the basis of the green plant and live-action fusion reconstruction method, the embodiment of the present application further provides a green plant and live-action fusion reconstruction device, as shown in fig. 3, where the green plant and live-action fusion reconstruction device 300 includes:
The first video acquisition module 301 is configured to acquire a three-dimensional live-action video of a three-dimensional live-action, where the three-dimensional live-action video is obtained by shooting through an unmanned aerial vehicle shooting device, and a landmark in the three-dimensional live-action includes a target green plant;
The second video acquisition module 302 is configured to acquire a green plant video of the target green plant, where the green plant video is obtained by shooting through a vehicle-mounted video acquisition device;
The first model building module 303 is configured to build a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
A second model building module 304, configured to build a green plant monomer model with a green plant texture and a green plant contour according to the green plant video;
And the model fusion module 305 is configured to fuse the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain the three-dimensional live-action model fused with the green plant monomer model.
The first model building module 303 is specifically used for:
carrying out aerial triangulation calculation through a pre-trained three-dimensional live-action model generator according to the three-dimensional live-action video, so as to generate three-dimensional live-action point cloud information of the three-dimensional live-action;
and constructing the three-dimensional live-action model according to the three-dimensional live-action point cloud information.
The second model building module 304 is specifically used for:
carrying out three-dimensional point cloud reconstruction on the target green plant according to the green plant video to obtain the three-dimensional green plant point cloud information of the target green plant;
identifying the green plant texture of the target green plant in the green plant video to obtain a texture identification result;
determining the green plant edge of the target green plant in the green plant video;
and constructing a green plant monomer model with the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture identification result.
The green plant video includes an image sequence composed of a plurality of two-dimensional green plant images, and the second model building module 304 is further specifically used for:
detecting the target green plant from the two-dimensional green plant images and generating a green plant detection frame;
intercepting a target green plant image of the region corresponding to the green plant detection frame;
and identifying the green plant texture in the target green plant image to obtain the texture identification result.
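The three sub-steps of this module compose into a simple detect, crop and classify pipeline. In the sketch below, `detector` and `classifier` are placeholders for the trained target green plant detection model and a texture recognition model; both names are assumptions of this sketch.

```python
import numpy as np

def recognize_green_plant_texture(frame: np.ndarray, detector, classifier) -> list:
    """Detect target green plants in a two-dimensional green plant image,
    intercept each detection-frame region, and identify its texture."""
    results = []
    for x1, y1, x2, y2 in detector(frame):              # green plant detection frames
        crop = frame[int(y1):int(y2), int(x1):int(x2)]  # target green plant image
        results.append(classifier(crop))                # texture identification result
    return results
```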
The green plant and live-action fusion reconstruction device 300 further comprises a template retrieval module, where the template retrieval module is specifically used for:
retrieving, according to the texture recognition result, the three-dimensional green plant template applicable to the target green plant from a preset template database.
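Such a retrieval step might reduce to a keyed lookup, as in the sketch below; the texture labels and template file paths are invented purely for illustration.

```python
# Hypothetical preset template database keyed by texture identification result.
TEMPLATE_DATABASE = {
    "camphor_tree": "templates/camphor_tree.obj",
    "holly_hedge": "templates/holly_hedge.obj",
}

def retrieve_template(texture_label: str) -> str:
    """Retrieve the three-dimensional green plant template applicable to
    the target green plant from the preset template database."""
    return TEMPLATE_DATABASE[texture_label]
```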
The second model building module 304 is further specifically used for:
determining the three-dimensional green plant stereoscopic frame of the target green plant according to the frame formed by the three-dimensional green plant point cloud information;
determining the green plant height of the three-dimensional green plant stereoscopic frame in an xyz three-dimensional space coordinate system, where the xyz three-dimensional space coordinate system is a three-dimensional coordinate system formed by an x axis, a y axis and a z axis;
and scaling the three-dimensional green plant template according to the green plant height of the three-dimensional green plant stereoscopic frame, and fusing the scaled three-dimensional green plant template and the texture recognition result into the three-dimensional green plant stereoscopic frame to obtain the green plant monomer model.
The second model building module 304 is further specifically used for:
determining the green plant height of the three-dimensional green plant stereoscopic frame according to the difference between the maximum z-axis value and the minimum z-axis value of the three-dimensional green plant stereoscopic frame along the z-axis direction;
the z-axis direction is the direction in which the z axis extends in the xyz three-dimensional space coordinate system.
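Assuming the three-dimensional green plant stereoscopic frame is derived from an (N, 3) point cloud array, the green plant height reduces to the z-extent of the points, as in this minimal sketch.

```python
import numpy as np

def green_plant_height(points: np.ndarray) -> float:
    """Green plant height of the stereoscopic frame: difference between
    the maximum and minimum z-axis values of the (N, 3) point array."""
    return float(points[:, 2].max() - points[:, 2].min())
```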
In another embodiment of the present application, the present application further provides an electronic device 400; fig. 4 shows a schematic structural diagram of the electronic device according to the embodiment of the present application. Specifically:
The electronic device may include a processor 401 having one or more processing cores, a memory 402 having one or more computer-readable storage media, a power supply 403, an input unit 404, and other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 does not limit the electronic device; it may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
The processor 401 is the control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole. Optionally, the processor 401 may include one or more processing cores. The processor 401 may be a central processing unit (CPU), or another general-purpose processor such as a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor. Preferably, the processor 401 may integrate an application processor, which primarily handles the operating system, user interfaces and application programs, with a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components, preferably the power supply 403 may be logically connected to the processor 401 by a power management system, so that functions of managing charging, discharging, and power consumption are achieved by the power management system. The power supply 403 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may further comprise an input unit 404, which input unit 404 may be used for receiving input digital or character information and generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 401 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 executes the application programs stored in the memory 402, so as to implement various functions as follows:
acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and a landmark in the three-dimensional live-action comprises a target green plant;
acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment;
constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
constructing, according to the green plant video, a green plant monomer model with a green plant texture and a green plant contour;
and fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain the three-dimensional live-action model fused with the green plant monomer model.
In some embodiments of the application, the application further provides a computer-readable storage medium, which may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like. A computer program is stored thereon, and the computer program is loaded by a processor to execute the steps in the green plant and live-action fusion reconstruction method provided by the embodiments of the application. For example, the computer program loaded by the processor may perform the following steps:
acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and a landmark in the three-dimensional live-action comprises a target green plant;
acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment;
constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
constructing, according to the green plant video, a green plant monomer model with a green plant texture and a green plant contour;
and fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain the three-dimensional live-action model fused with the green plant monomer model.
The descriptions of the foregoing embodiments each have their own emphasis; for portions not described in detail in one embodiment, reference may be made to the detailed descriptions of the other embodiments, which are not repeated here.
The green plant and live-action fusion reconstruction method, device, electronic equipment and storage medium provided by the embodiments of the present application are described above in detail, and specific examples are applied herein to illustrate the principles and embodiments of the present application; the above description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the idea of the present application. In view of the foregoing, the content of this description should not be construed as limiting the present application.

Claims (10)

1. The green plant and live-action fusion reconstruction method is characterized by comprising the following steps of:
Acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is obtained by shooting through unmanned aerial vehicle shooting equipment, and the landmark in the three-dimensional live-action comprises a target green plant;
acquiring a green plant video of the target green plant, wherein the green plant video is obtained by shooting through vehicle-mounted video acquisition equipment;
Constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
constructing, according to the green plant video, a green plant monomer model with a green plant texture and a green plant contour;
And fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model, so as to obtain the three-dimensional live-action model fused with the green plant monomer model.
2. The green plant and live-action fusion reconstruction method of claim 1, wherein the constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video comprises:
According to the three-dimensional live-action video, performing aerial triangulation calculation through a pre-trained three-dimensional live-action model generator, and generating three-dimensional live-action point cloud information of the three-dimensional live-action;
And constructing the three-dimensional real scene model according to the three-dimensional real scene point cloud information.
3. The green plant and live-action fusion reconstruction method of claim 1, wherein the constructing, according to the green plant video, a green plant monomer model with a green plant texture and a green plant contour comprises:
performing three-dimensional point cloud reconstruction on the target green plant according to the green plant video to obtain three-dimensional green plant point cloud information of the target green plant;
identifying green plant textures of the target green plants in the green plant video to obtain texture identification results;
and constructing the green plant monomer model of the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture identification result.
4. The method of claim 3, wherein the green plant video includes an image sequence composed of a plurality of two-dimensional green plant images, and the identifying the green plant texture of the target green plant in the green plant video includes:
Detecting the target green plants from the two-dimensional green plant image to generate a green plant detection frame;
intercepting a target green plant image of a corresponding area of the green plant detection frame;
and identifying green plant textures in the target green plant image to obtain the texture identification result.
5. The method of claim 4, wherein after the identifying green plant texture in the target green plant image to obtain the texture identification result, the method further comprises:
And according to the texture recognition result, retrieving the three-dimensional green planting template applicable to the target green planting from a preset template database.
6. The method of claim 5, wherein the constructing the green plant monomer model of the green plant texture according to the three-dimensional green plant point cloud information, the pre-constructed three-dimensional green plant template and the texture recognition result comprises:
determining a three-dimensional green plant stereoscopic frame of the target green plant according to the frame formed by the three-dimensional green plant point cloud information;
determining the green plant height of the three-dimensional green plant stereoscopic frame in an xyz three-dimensional space coordinate system, wherein the xyz three-dimensional space coordinate system is a three-dimensional coordinate system formed by an x axis, a y axis and a z axis;
and scaling the three-dimensional green plant template according to the green plant height of the three-dimensional green plant stereoscopic frame, and fusing the scaled three-dimensional green plant template and the texture recognition result into the three-dimensional green plant stereoscopic frame to obtain the green plant monomer model.
7. The method of green plant and live-action fusion reconstruction of claim 6, wherein the determining the green plant height of the three-dimensional green plant stereoscopic frame in an xyz three-dimensional space coordinate system comprises:
determining the green plant height of the three-dimensional green plant stereoscopic frame according to the difference between the maximum z-axis value and the minimum z-axis value of the three-dimensional green plant stereoscopic frame along the z-axis direction;
the z-axis direction is the direction in which the z axis extends in the xyz three-dimensional space coordinate system.
8. A green plant and live-action fusion reconstruction device, the device comprising:
The first video acquisition module is used for acquiring a three-dimensional live-action video of a three-dimensional live-action, wherein the three-dimensional live-action video is shot by unmanned aerial vehicle shooting equipment, and the landmark in the three-dimensional live-action comprises a target green plant;
the second video acquisition module is used for acquiring a green plant video of the target green plant, where the green plant video is obtained through shooting by the vehicle-mounted video acquisition equipment;
the first model construction module is used for constructing a three-dimensional live-action model of the three-dimensional live-action according to the three-dimensional live-action video;
the second model building module is used for building a green plant monomer model with a green plant texture and a green plant contour according to the green plant video;
And the model fusion module is used for fusing the green plant monomer model into the three-dimensional live-action model according to the height of the target green plant in the three-dimensional live-action model to obtain the three-dimensional live-action model fused with the green plant monomer model.
9. An electronic device, the electronic device comprising:
one or more processors;
A memory; and
One or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the green plant and live-action fusion reconstruction method of any one of claims 1 to 7.
10. A computer readable storage medium, having stored thereon a computer program, the computer program being loaded by a processor to perform the steps of the green plant and live-action fusion reconstruction method of any of claims 1 to 7.