CN117197401A - Test method and device for point cloud construction, electronic equipment and storage medium

Info

Publication number: CN117197401A
Application number: CN202210618839.6A
Authority: CN (China)
Prior art keywords: point cloud, data, dense, target equipment, determining
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 姜翰青
Current Assignee: Zhejiang Shangtang Technology Development Co Ltd
Original Assignee: Zhejiang Shangtang Technology Development Co Ltd
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202210618839.6A
Publication of CN117197401A

Abstract

The present disclosure provides a test method and apparatus, an electronic device, and a storage medium for point cloud construction. The test method includes: in response to a target device constructing, in real time while moving, a dense point cloud corresponding to a current scene, acquiring point cloud data corresponding to the dense point cloud and processing rate information recorded while the target device constructs the dense point cloud; determining a point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and reference point cloud data of a reference point cloud corresponding to the current scene; and determining a test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and a preset performance test index.

Description

Test method and device for point cloud construction, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of computer vision, and in particular to a test method and apparatus for point cloud construction, an electronic device, and a storage medium.
Background
With the rapid development of mobile devices such as cell phones and tablets, augmented reality (Augmented Reality, AR) applications on mobile devices are becoming increasingly common. For example, an AR system may be integrated on a mobile device to present AR effects.
Generally, an AR system integrated on a mobile device needs to perform three-dimensional modeling of the real scene, for example, constructing a dense point cloud of any object in the real scene, so that the constructed dense point cloud can be used for subsequent AR effect presentation. The efficiency and accuracy with which the AR system constructs the dense point cloud therefore strongly affect the quality of the AR presentation, which makes a test method for point cloud construction particularly important.
Disclosure of Invention
In view of this, the present disclosure provides at least a test method and apparatus for point cloud construction, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides a test method for point cloud construction, including:
in response to a target device constructing, in real time while moving, a dense point cloud corresponding to a current scene, acquiring point cloud data corresponding to the dense point cloud and processing rate information recorded while the target device constructs the dense point cloud;
determining a point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and reference point cloud data of a reference point cloud corresponding to the current scene; and
determining a test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and a preset performance test index.
With the above method, the point cloud data corresponding to the dense point cloud and the processing rate information recorded while the target device constructs the dense point cloud are acquired, and the point cloud position error of the dense point cloud constructed by the target device is determined from the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene. The point cloud position error and a preset performance test index can then be used to test whether the accuracy of the dense point cloud construction function of the target device meets the requirement, and the processing rate information and the performance test index can be used to test whether its efficiency meets the requirement. In this way, the test result for the point cloud construction function of the target device can be determined accurately, the test of the point cloud construction function is realized, and the test accuracy is improved.
In one possible embodiment, the processing rate information includes the number of key frames processed per second. Determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index includes:
determining that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than a position error threshold indicated by the performance test index and the number of key frames processed per second indicated by the processing rate information is larger than a key-frames-per-second threshold indicated by the performance test index.
Here, the point cloud position error is compared with the position error threshold, and the number of key frames processed per second is compared with the key-frames-per-second threshold. When the point cloud position error is smaller than the position error threshold, the accuracy of the point cloud construction function of the target device is determined to meet the requirement; when the number of key frames processed per second is larger than the key-frames-per-second threshold, the real-time performance (or efficiency) of the point cloud construction function is determined to meet the requirement; and when both accuracy and real-time performance meet the requirements, the test result for the point cloud construction function of the target device is determined to be a pass.
In one possible embodiment, the processing rate information includes the average time consumed per key frame.
Determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index includes:
determining that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than a position error threshold indicated by the performance test index and the average time consumed per key frame indicated by the processing rate information is smaller than a time-consumption threshold indicated by the performance test index.
Here, the point cloud position error is compared with the position error threshold, and the average time consumed per key frame is compared with the time-consumption threshold. When the point cloud position error is smaller than the position error threshold, the accuracy of the point cloud construction function of the target device is determined to meet the requirement; when the average time consumed per key frame is smaller than the time-consumption threshold, the real-time performance (or efficiency) of the point cloud construction function is determined to meet the requirement; and when both accuracy and real-time performance meet the requirements, the test result is determined to be a pass.
In one possible embodiment, the average time consumed per key frame is determined according to the following steps:
acquiring the execution duration for which the target device constructs the dense point cloud based on a plurality of key frames in video stream data collected in real time, where the video duration corresponding to the video stream data is greater than or equal to a preset duration; and
determining the average time consumed per key frame based on the execution duration corresponding to the plurality of key frames and the number of the key frames.
In order to detect the real-time performance of the point cloud construction function of the target device accurately, a preset duration can be set, and it is ensured that the video duration corresponding to the video stream data is greater than or equal to this preset duration. Meanwhile, determining the average time consumed per key frame from the execution duration corresponding to the plurality of key frames and the number of key frames provides data support for the subsequent determination of the test result of the point cloud construction function of the target device.
In a possible implementation, determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene includes:
determining a similarity transformation matrix between the dense point cloud and the reference point cloud based on the point cloud data and the reference point cloud data;
transforming the coordinate information of each vertex in the point cloud data based on the similarity transformation matrix to obtain transformed point cloud data, where the transformed point cloud data includes transformed coordinate information corresponding to each vertex; and
determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the transformed point cloud data and the reference point cloud data.
In the embodiments of the present disclosure, a similarity transformation matrix between the dense point cloud and the reference point cloud is determined, and the coordinate information of each vertex in the point cloud data is transformed using this matrix to obtain the transformed point cloud data. The point cloud position error corresponding to the dense point cloud constructed by the target device is then determined accurately from the transformed point cloud data and the reference point cloud data, so that the determined point cloud position error can be used to test the accuracy of the point cloud construction function of the target device.
In a possible implementation, the reference point cloud data of the reference point cloud corresponding to the current scene is constructed according to the following steps:
acquiring video stream data of the current scene collected in advance by an acquisition device, and inertial measurement unit (IMU) data recorded while the acquisition device collects the video stream data;
processing the video stream data and the IMU data by using a model reconstruction platform to generate virtual point cloud data of the reference point cloud corresponding to the current scene; and
generating the reference point cloud data of the reference point cloud based on scale information between the model reconstruction platform and the real scene and the virtual point cloud data.
With the above method, the model reconstruction platform is used to process the video stream data and the IMU data and generate the virtual point cloud data of the reference point cloud corresponding to the current scene; the virtual point cloud data can accurately represent the scene structure of the current scene. The reference point cloud data is then generated accurately from the scale information between the model reconstruction platform and the real scene together with the virtual point cloud data. While preserving the scene structure of the current scene, this ensures that the size indicated by the reference point cloud data is consistent with the real size of the current scene, so the reference point cloud data has high accuracy.
In a possible implementation, the target device constructing a dense point cloud corresponding to the current scene in real time while moving includes:
acquiring, while the target device moves along a set moving route, video stream data of the current scene collected by the target device in real time and inertial measurement unit (IMU) data of the target device; and
constructing the dense point cloud corresponding to the current scene in real time based on the video stream data and the IMU data, using a three-dimensional reconstruction module integrated on the target device.
Here, constructing the dense point cloud corresponding to the current scene in real time with the three-dimensional reconstruction module integrated on the target device, based on the video stream data and the IMU data, provides data support for the subsequent determination of the test result of the point cloud construction function of the target device.
In a possible implementation, after the test result of the point cloud construction function of the target device is determined, the method further includes:
in response to the test result being a pass, performing dense point cloud reconstruction on any scene by using the target device to obtain a dense point cloud corresponding to that scene.
When the test result of the target device is a pass, it is determined that the target device has the point cloud construction function and that the function performs well, so the target device can be used to perform dense point cloud reconstruction on any scene and obtain an accurate dense point cloud corresponding to that scene.
In a possible implementation, after the dense point cloud corresponding to the scene is obtained, the method further includes:
superimposing a virtual model on the dense point cloud of the scene by using the target device to obtain augmented reality data corresponding to the scene, and controlling the target device to display the augmented reality data.
With the above method, after the dense point cloud corresponding to the scene is obtained, the target device can be used to superimpose a virtual model on the dense point cloud to obtain the augmented reality data corresponding to the scene, and the target device is controlled to display the augmented reality data, realizing AR effect presentation.
For the effects of the apparatus, the electronic device, and the like described below, reference is made to the description of the method above; details are not repeated here.
In a second aspect, the present disclosure provides a test apparatus for point cloud construction, including:
an acquisition module, configured to, in response to a target device constructing, in real time while moving, a dense point cloud corresponding to a current scene, acquire point cloud data corresponding to the dense point cloud and processing rate information recorded while the target device constructs the dense point cloud;
a first determining module, configured to determine a point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and reference point cloud data of a reference point cloud corresponding to the current scene; and
a second determining module, configured to determine a test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and a preset performance test index.
In one possible embodiment, the processing rate information includes the number of key frames processed per second, and the second determining module, when determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index, is configured to:
determine that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than the position error threshold indicated by the performance test index and the number of key frames processed per second indicated by the processing rate information is larger than the key-frames-per-second threshold indicated by the performance test index.
In one possible embodiment, the processing rate information includes the average time consumed per key frame, and the second determining module, when determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index, is configured to:
determine that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than the position error threshold indicated by the performance test index and the average time consumed per key frame indicated by the processing rate information is smaller than the time-consumption threshold indicated by the performance test index.
In a possible implementation, the acquisition module is configured to determine the average time consumed per key frame according to the following steps:
acquiring the execution duration for which the target device constructs the dense point cloud based on a plurality of key frames in video stream data collected in real time, where the video duration corresponding to the video stream data is greater than or equal to a preset duration; and
determining the average time consumed per key frame based on the execution duration corresponding to the plurality of key frames and the number of the key frames.
In a possible implementation, the first determining module, when determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene, is configured to:
determine a similarity transformation matrix between the dense point cloud and the reference point cloud based on the point cloud data and the reference point cloud data;
transform the coordinate information of each vertex in the point cloud data based on the similarity transformation matrix to obtain transformed point cloud data, where the transformed point cloud data includes transformed coordinate information corresponding to each vertex; and
determine the point cloud position error corresponding to the dense point cloud constructed by the target device based on the transformed point cloud data and the reference point cloud data.
In a possible implementation, the first determining module is configured to construct the reference point cloud data of the reference point cloud corresponding to the current scene according to the following steps:
acquiring video stream data of the current scene collected in advance by an acquisition device, and inertial measurement unit (IMU) data recorded while the acquisition device collects the video stream data;
processing the video stream data and the IMU data by using a model reconstruction platform to generate virtual point cloud data of the reference point cloud corresponding to the current scene; and
generating the reference point cloud data of the reference point cloud based on scale information between the model reconstruction platform and the real scene and the virtual point cloud data.
In a possible implementation, the acquisition module is configured to obtain the dense point cloud that the target device constructs in real time while moving according to the following steps:
acquiring, while the target device moves along a set moving route, video stream data of the current scene collected by the target device in real time and inertial measurement unit (IMU) data of the target device; and
constructing the dense point cloud corresponding to the current scene in real time based on the video stream data and the IMU data, using a three-dimensional reconstruction module integrated on the target device.
In a possible implementation, the apparatus further includes an application module configured to, after the test result of the point cloud construction function of the target device is determined:
in response to the test result being a pass, perform dense point cloud reconstruction on any scene by using the target device to obtain a dense point cloud corresponding to that scene.
In a possible implementation, the application module is further configured to, after the dense point cloud corresponding to the scene is obtained:
superimpose a virtual model on the dense point cloud of the scene by using the target device to obtain augmented reality data corresponding to the scene, and control the target device to display the augmented reality data.
In a third aspect, the present disclosure provides an electronic device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the test method for point cloud construction described in the first aspect or any of the embodiments above.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the test method for point cloud construction as described in the first aspect or any of the embodiments above.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. The drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 shows a flow chart of a test method for point cloud construction according to an embodiment of the present disclosure;
Fig. 2 shows a schematic architecture diagram of a test apparatus for point cloud construction according to an embodiment of the present disclosure;
Fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
An augmented reality (Augmented Reality, AR) system integrated on a mobile device needs to perform three-dimensional modeling of the real scene, for example, building a dense point cloud of any object in the real scene, so that the constructed dense point cloud can be used for subsequent AR effect presentation. The efficiency and accuracy with which the AR system constructs the dense point cloud therefore strongly affect the quality of the AR presentation. On this basis, the embodiments of the present disclosure provide a test method and apparatus, an electronic device, and a storage medium for point cloud construction, by which the point cloud construction function and construction performance of a mobile device can be tested, for example, whether the mobile device has the point cloud construction function and whether the efficiency and accuracy of its point cloud construction meet the requirements.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
For the sake of understanding the embodiments of the present disclosure, a test method for point cloud construction disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the test method for point cloud construction provided by the embodiments of the present disclosure is generally a computer device having a certain computing capability, where the computer device includes, for example: a terminal device or server or other processing device, which may be a User Equipment (UE), a mobile device, a User terminal, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a wearable device, etc.; the server may include, for example, a local server, a cloud server, and the like. In some possible implementations, the test method for point cloud construction may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, which shows a flow chart of a test method for point cloud construction according to an embodiment of the present disclosure, the method includes S101 to S103:
S101: in response to a target device constructing, in real time while moving, a dense point cloud corresponding to a current scene, acquire point cloud data corresponding to the dense point cloud and processing rate information recorded while the target device constructs the dense point cloud;
S102: determine a point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and reference point cloud data of a reference point cloud corresponding to the current scene;
S103: determine a test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and a preset performance test index.
With this method, the point cloud data corresponding to the dense point cloud and the processing rate information recorded while the target device constructs the dense point cloud are acquired, and the point cloud position error of the dense point cloud constructed by the target device is determined from the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene. The point cloud position error and a preset performance test index can then be used to test whether the accuracy of the dense point cloud construction function of the target device meets the requirement, and the processing rate information and the performance test index can be used to test whether its efficiency meets the requirement. In this way, the test result for the point cloud construction function of the target device can be determined accurately, the test of the point cloud construction function is realized, and the test accuracy is improved.
S101 to S103 are specifically described below.
For S101:
The target device can be any device with a dense point cloud reconstruction function; for example, it may be a mobile phone, a tablet, AR glasses, or a computer. After the dense point cloud reconstruction function of the target device is started, the target device can construct the dense point cloud corresponding to the current scene in real time while moving, for example, a dense point cloud of any object in the current scene. The dense point cloud may include a plurality of point cloud points.
In an optional implementation, the target device constructing a dense point cloud corresponding to the current scene in real time while moving includes:
acquiring, while the target device moves along a set moving route, video stream data of the current scene collected by the target device in real time and inertial measurement unit (Inertial Measurement Unit, IMU) data of the target device; and constructing the dense point cloud corresponding to the current scene in real time based on the video stream data and the IMU data, using a three-dimensional reconstruction module integrated on the target device.
Here, constructing the dense point cloud corresponding to the current scene in real time with the three-dimensional reconstruction module integrated on the target device, based on the video stream data and the IMU data, provides data support for the subsequent determination of the test result of the point cloud construction function of the target device.
In specific implementation, the moving route of the target device can be planned in the real scene. For example, the moving route can cover the walkable area of the real scene: starting from any position in the real scene, the route follows the walkable area and returns to the starting position, forming a closed-loop route, which is used as the set moving route. Alternatively, a start position and an end position may be set, and a one-way route from the start position to the end position along the walkable area may be used as the set moving route. The moving route can be set according to the actual situation; the above is merely an example.
Generally, in order to test the dense point cloud construction performance of the target device more accurately, the video duration of the obtained video stream data of the current scene should be greater than or equal to the preset duration. Therefore, once the moving speed is determined, a length threshold can be derived from the moving speed and the preset duration, and the length of the moving route should be greater than this threshold.
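The relationship between route length, moving speed, and preset duration is simple arithmetic; the short sketch below illustrates it (the function name and the concrete numbers are chosen here for illustration and are not taken from the disclosure).

```python
def min_route_length(moving_speed_m_per_s: float, preset_duration_s: float) -> float:
    """Length threshold the planned moving route must exceed so that the
    captured video stream is at least `preset_duration_s` long."""
    return moving_speed_m_per_s * preset_duration_s

# Example: walking at 0.5 m/s with a 5-minute preset duration requires a
# moving route longer than 0.5 * 300 = 150 m.
print(min_route_length(0.5, 5 * 60))  # 150.0
```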
After the moving route is determined, the target device can be controlled to move along the set moving route. While the target device moves along the set moving route, the video stream data of the current scene collected by the target device in real time and the IMU data of the target device are acquired.
A three-dimensional reconstruction module is integrated on the target device. The module corresponds to a three-dimensional reconstruction algorithm, which can be any algorithm capable of constructing a three-dimensional model; for example, the three-dimensional reconstruction algorithm may be simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM). The video stream data and the IMU data can then be input to the three-dimensional reconstruction module of the target device, and the dense point cloud corresponding to the current scene is constructed in real time by the three-dimensional reconstruction algorithm, yielding the point cloud data corresponding to the dense point cloud.
When the three-dimensional reconstruction algorithm is SLAM, SLAM can reconstruct the point cloud from each video frame in the video stream data to obtain the dense point cloud corresponding to the current scene. Meanwhile, SLAM can determine the key video frames (key frames for short) in the video stream data, so the total time the three-dimensional reconstruction module spends constructing the dense point cloud from the key video frames, the total number of key video frames in the video stream data, the number of key video frames processed per unit time (for example, per second), and the like can be counted.
In response to the target device constructing the dense point cloud corresponding to the current scene, the execution subject can acquire the point cloud data corresponding to the dense point cloud from the memory of the target device. The point cloud data may include point cloud information for each point cloud point (i.e., vertex) of the dense point cloud, and the point cloud information includes vertex coordinate information, which may be coordinate values in the world coordinate system. The point cloud information may further include vertex color information, vertex normal information, and the like.
The processing rate information recorded while the target device constructs the dense point cloud can also be obtained. For example, the processing rate information may include the number of key frames processed per second, the average time consumed per key frame, and the like. For instance, the total time the three-dimensional reconstruction module spends reconstructing the key video frames in the video stream data, the total number of key video frames, and the number of key video frames processed per second may be recorded in a log of the target device, from which the number of key frames processed per second can be read directly. For example, if 6 key video frames are processed per second, the processing rate information includes: 6 key frames processed per second.
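As a minimal sketch of how the keyframes-per-second figure could be derived, assuming the log records a completion timestamp for each key frame (this log format is an assumption for illustration, not the actual log of any particular device):

```python
from collections import Counter

def keyframes_per_second(keyframe_timestamps_s: list[float]) -> float:
    """Average number of key frames processed per second, computed from
    per-keyframe completion timestamps (seconds since the run started)."""
    if not keyframe_timestamps_s:
        return 0.0
    # Bucket key frames by the second in which they completed, then average.
    per_second = Counter(int(t) for t in keyframe_timestamps_s)
    return sum(per_second.values()) / len(per_second)

# 12 key frames completed within two distinct seconds -> 6 key frames per second.
timestamps = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9, 1.1, 1.2, 1.4, 1.6, 1.8, 1.9]
print(keyframes_per_second(timestamps))  # 6.0
```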
The average time consumed per key frame is determined according to the following steps: acquiring the execution duration for which the target device constructs the dense point cloud based on a plurality of key frames in the video stream data collected in real time, where the video duration corresponding to the video stream data is greater than or equal to the preset duration; and determining the average time consumed per key frame based on the execution duration corresponding to the plurality of key frames and the number of the key frames.
In practice, the log of the target device may store the execution duration of building the dense point cloud based on the plurality of key frames in the video stream data. After this execution duration is obtained, the average time consumed per key frame is the quotient of the execution duration and the number of key frames. For example, if the execution duration for 10 key frames is 1000 milliseconds, the average time per key frame is 1000 milliseconds divided by 10, i.e., 100 milliseconds.
In order to detect the real-time performance of the point cloud construction function of the target device accurately, a preset duration can be set, and it is ensured that the video duration corresponding to the video stream data is greater than or equal to this preset duration. Meanwhile, determining the average time consumed per key frame from the execution duration and the number of key frames provides data support for the subsequent determination of the test result of the point cloud construction function of the target device.
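Continuing the worked example above (10 key frames, 1000 ms total), a minimal sketch of this computation with the preset-duration guard might look as follows; the function and parameter names are illustrative:

```python
def average_time_per_keyframe_ms(total_execution_ms: float,
                                 num_keyframes: int,
                                 video_duration_s: float,
                                 preset_duration_s: float) -> float:
    """Quotient of the total execution duration over the keyframe count.
    The video must be at least as long as the preset duration for the
    statistic to be considered representative."""
    if video_duration_s < preset_duration_s:
        raise ValueError("video is shorter than the preset duration")
    if num_keyframes == 0:
        raise ValueError("no key frames were recorded")
    return total_execution_ms / num_keyframes

# 1000 ms spent on 10 key frames over a 5-minute video -> 100 ms per key frame.
print(average_time_per_keyframe_ms(1000.0, 10, 300.0, 300.0))  # 100.0
```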
For S102:
here, a reference point cloud corresponding to the current scene may be previously constructed, and reference point cloud data of the reference point cloud may be acquired, where the reference point cloud data includes coordinate information of a plurality of vertices (point cloud points) included in the reference point cloud. For example, historical video data corresponding to the current scene can be prerecorded, and a reference point cloud is constructed by using a three-dimensional reconstruction algorithm with higher accuracy based on the historical video data.
In an alternative embodiment, the reference point cloud data of the reference point cloud corresponding to the current scene is constructed according to the following steps: acquiring video stream data of the current scene, which is acquired in advance by an acquisition device, and Inertial Measurement Unit (IMU) data when the acquisition device acquires the video stream data; processing the video stream data and the IMU data by using a model reconstruction platform to generate virtual point cloud data of a datum point cloud corresponding to the current scene; and generating datum point cloud data of the datum point cloud based on the scale information between the model reconstruction platform and the real scene and the virtual point cloud data.
The acquisition device can be any device with a camera function, such as a mobile phone or a camera. The acquisition device is moved along the moving route to collect the video stream data of the current scene together with the IMU data recorded during collection. The video stream data and the IMU data of the current scene are then transmitted to a model reconstruction platform, which processes them to generate the reference point cloud corresponding to the current scene and the virtual point cloud data of that reference point cloud. For example, the model reconstruction platform may be COLMAP.
The model reconstruction platform can accurately recover the structural information of the current scene, but it cannot determine the real-world size of the scene. Therefore, distances in the current scene can be measured in advance, for example, the real distance between any two positions in the scene; the scale information between the model reconstruction platform and the real scene is then determined from the virtual distance between those two positions in the point cloud structure and the measured real distance. The reference point cloud data of the reference point cloud is generated from the scale information and the virtual point cloud data; for example, the size of the reference point cloud can be adjusted according to the scale information, and the adjusted point cloud data is used as the reference point cloud data.
With the above method, the model reconstruction platform is used to process the video stream data and the IMU data and generate the virtual point cloud data of the reference point cloud corresponding to the current scene; the virtual point cloud data can accurately represent the scene structure of the current scene. The reference point cloud data is then generated accurately from the scale information between the model reconstruction platform and the real scene together with the virtual point cloud data. While preserving the scene structure of the current scene, this ensures that the size indicated by the reference point cloud data is consistent with the real size of the current scene, so the reference point cloud data has high accuracy.
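A minimal numpy sketch of the scale-recovery step, assuming the virtual and real distances are measured between the same pair of positions (all names here are illustrative):

```python
import numpy as np

def rescale_reference_cloud(virtual_vertices: np.ndarray,
                            virtual_distance: float,
                            real_distance_m: float) -> np.ndarray:
    """Scale the virtual point cloud produced by the model reconstruction
    platform so that its size matches the real scene.

    virtual_vertices: (N, 3) vertex coordinates in the platform's coordinates.
    virtual_distance: distance between two chosen positions in those coordinates.
    real_distance_m:  measured real distance between the same two positions.
    """
    scale = real_distance_m / virtual_distance
    return virtual_vertices * scale

# Two markers 2.0 units apart in the virtual cloud but 4.0 m apart in the
# real scene give a scale factor of 2.0 applied to every vertex.
cloud = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(rescale_reference_cloud(cloud, virtual_distance=2.0, real_distance_m=4.0))
```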
Further, the point cloud position error corresponding to the dense point cloud constructed by the target device can be determined based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene.
In an optional implementation, determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene includes: determining a similarity transformation matrix between the dense point cloud and the reference point cloud based on the point cloud data and the reference point cloud data; transforming the coordinate information of each vertex in the point cloud data based on the similarity transformation matrix to obtain transformed point cloud data, where the transformed point cloud data includes transformed coordinate information corresponding to each vertex; and determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the transformed point cloud data and the reference point cloud data.
In practice, the similarity transformation matrix between the dense point cloud and the reference point cloud can be determined from the point cloud data and the reference point cloud data. For example, the vertex coordinates of the reconstructed dense point cloud can be aligned with the vertex coordinates of the reference point cloud by a similarity transformation, chosen so that the sum of the squared nearest distances from the transformed dense point cloud vertices to the surface of the reference point cloud is minimized.
For example, the similarity transformation matrix $S$ may be determined according to the following formula:

$$S = \arg\min_{S} \sum_{i} \left( n_i^{\top} \left( S\,\bar{p}_i - q_i \right) \right)^2$$

where $\bar{p}_i$ is the homogeneous form of the coordinates of the $i$-th vertex (point cloud point) of the dense point cloud, $q_i$ is the vertex of the reference point cloud surface closest to $S\,\bar{p}_i$, and $n_i$ is the normal of $q_i$.
In practice, the "registration alignment" function in CloudCompare may be used to determine the similarity transformation matrix between the dense point cloud and the reference point cloud, where CloudCompare is a piece of three-dimensional point cloud editing and processing software.
Then, the coordinate information of each vertex in the point cloud data is transformed based on the similarity transformation matrix to obtain the transformed coordinate information corresponding to each vertex, i.e., the transformed point cloud data. The point cloud position error corresponding to the dense point cloud constructed by the target device is determined based on the transformed point cloud data and the reference point cloud data.
For example, the point cloud position error $e$ is determined according to the following formula:

$$e = \frac{1}{N} \sum_{i=1}^{N} \left| n_i^{\top} \left( \bar{p}'_i - q_i \right) \right|$$

where $\bar{p}'_i = S\,\bar{p}_i$ is the homogeneous form of the coordinates of the $i$-th vertex (point cloud point) of the transformed dense point cloud, and $N$ is the number of vertices.
In practice, the "distance measurement" function in CloudCompare may be used to determine the point cloud position error (i.e., the average error of the reconstructed dense point cloud).
In the embodiments of the present disclosure, the similarity transformation matrix between the dense point cloud and the reference point cloud is determined, and the coordinate information of each vertex in the point cloud data is transformed with it to obtain the transformed point cloud data. The point cloud position error corresponding to the dense point cloud constructed by the target device is then determined accurately from the transformed point cloud data and the reference point cloud data, so the determined point cloud position error can be used to test the accuracy of the point cloud construction function of the target device.
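The error evaluation can be sketched with numpy and scipy under two simplifying assumptions: the similarity transformation matrix S is already available (for example, exported from the registration step), and the distance to the reference surface is approximated by the point-to-plane distance at the nearest reference vertex rather than by a true mesh-surface distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_position_error(dense_vertices: np.ndarray,  # (N, 3)
                               S: np.ndarray,                # (4, 4) similarity transform
                               ref_vertices: np.ndarray,     # (M, 3)
                               ref_normals: np.ndarray       # (M, 3) unit normals
                               ) -> float:
    """Average point-to-plane distance from the transformed dense point cloud
    to the reference point cloud (nearest-vertex approximation of the surface)."""
    # Apply the similarity matrix to the vertices in homogeneous form.
    hom = np.hstack([dense_vertices, np.ones((len(dense_vertices), 1))])
    transformed = (hom @ S.T)[:, :3]

    # For each transformed vertex p'_i, find the closest reference vertex q_i
    # and evaluate |n_i . (p'_i - q_i)|, then average over all vertices.
    _, idx = cKDTree(ref_vertices).query(transformed)
    diffs = transformed - ref_vertices[idx]
    return float(np.mean(np.abs(np.einsum("ij,ij->i", diffs, ref_normals[idx]))))
```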
For S103:
In implementation, the point cloud position error and the processing rate information are compared with the performance test index, and the test result for the point cloud construction function of the target device is determined. The test result is either a pass or a fail; for example, if the point cloud position error is greater than or equal to the position error threshold indicated by the performance test index, the test result is determined to be a fail.
The performance test index includes a threshold corresponding to each piece of test data. For example, the performance test index may include a position error threshold (corresponding to the point cloud position error), a key-frames-per-second threshold (corresponding to the number of key frames processed per second), a time-consumption threshold (corresponding to the average time consumed per key frame), and the like.
In an alternative embodiment, the processing rate information includes the number of key frames processed per second. Determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index includes: determining that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than the position error threshold indicated by the performance test index and the number of key frames processed per second indicated by the processing rate information is larger than the key-frames-per-second threshold indicated by the performance test index.
When the processing rate information includes the number of key frames processed per second, the performance test index includes a key-frames-per-second threshold. The point cloud position error is compared with the position error threshold, and the number of key frames processed per second is compared with the key-frames-per-second threshold; when the point cloud position error is smaller than the position error threshold and the number of key frames processed per second is larger than the key-frames-per-second threshold, the test result for the point cloud construction function of the target device is determined to be a pass.
In practice, the position error threshold may be 3 cm/m and the key-frames-per-second threshold may be 6 key frames. That is, the dense point cloud reconstruction function of the target device should satisfy the following two conditions: the expansion processing rate of the dense point cloud is not lower than 6 key frames per second, and the point cloud position error of the dense point cloud is not more than 3 cm/m.
Conversely, when the point cloud position error is greater than or equal to the position error threshold indicated by the performance test index and/or the number of key frames processed per second indicated by the processing rate information is less than or equal to the key-frames-per-second threshold, the test result for the point cloud construction function of the target device is determined to be a fail.
During testing, the dense point cloud reconstruction environment of the target device is exercised by running a dense point cloud reconstruction process on the device to confirm whether the function is available: if the target device can obtain the dense point cloud of the current scene in real time, the function is determined to be available; otherwise, it is determined to be unavailable. When the function is available, a pass result means that the dense point cloud reconstruction function of the target device satisfies the requirement of real-time incremental expansion, while a fail result means that it does not.
Here, the point cloud position error is compared with the position error threshold and the number of key frames processed per second with the key-frames-per-second threshold. When the point cloud position error is smaller than the position error threshold, the accuracy of the point cloud construction function of the target device meets the requirement; when the number of key frames processed per second is larger than the key-frames-per-second threshold, the real-time performance (or efficiency) meets the requirement; and when both meet the requirements, the test result is determined to be a pass.
In another alternative embodiment, the processing rate information includes the average time consumed per key frame. Determining the test result for the point cloud construction function of the target device based on the point cloud position error, the processing rate information, and the preset performance test index includes: determining that the test result for the point cloud construction function of the target device is a pass when the point cloud position error is smaller than the position error threshold indicated by the performance test index and the average time consumed per key frame indicated by the processing rate information is smaller than the time-consumption threshold indicated by the performance test index.
When the processing rate information includes the average time consumed per key frame, the performance test index includes a time-consumption threshold. The point cloud position error is compared with the position error threshold, and the average time consumed per key frame is compared with the time-consumption threshold; when the point cloud position error is smaller than the position error threshold and the average time consumed per key frame is smaller than the time-consumption threshold, the test result for the point cloud construction function of the target device is determined to be a pass.
Conversely, when the point cloud position error is greater than or equal to the position error threshold indicated by the performance test index and/or the average time consumed per key frame is greater than or equal to the time-consumption threshold, the test result is determined to be a fail.
Here, the point cloud position error is compared with the position error threshold and the average time consumed per key frame with the time-consumption threshold. When the point cloud position error is smaller than the position error threshold, the accuracy of the point cloud construction function of the target device meets the requirement; when the average time consumed per key frame is smaller than the time-consumption threshold, the real-time performance (or efficiency) meets the requirement; and when both meet the requirements, the test result is determined to be a pass.
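Pulling the two criteria together, a minimal sketch of the pass/fail decision is given below; the threshold values are the ones cited in this description, while the function and field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceIndex:
    position_error_threshold: float = 3.0     # cm/m
    keyframes_per_s_threshold: float = 6.0    # key frames per second
    ms_per_keyframe_threshold: float = 166.0  # milliseconds per key frame

def test_passes(position_error: float,
                index: PerformanceIndex,
                keyframes_per_s: Optional[float] = None,
                avg_ms_per_keyframe: Optional[float] = None) -> bool:
    """Pass only when accuracy and whichever rate statistic is available
    both meet the preset performance test index."""
    if position_error >= index.position_error_threshold:
        return False  # accuracy requirement not met
    if keyframes_per_s is not None:
        return keyframes_per_s > index.keyframes_per_s_threshold
    if avg_ms_per_keyframe is not None:
        return avg_ms_per_keyframe < index.ms_per_keyframe_threshold
    return False  # no rate information: real-time performance unverified

# A device with a 2.5 cm/m error processing 7 key frames per second passes.
print(test_passes(2.5, PerformanceIndex(), keyframes_per_s=7.0))  # True
```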
In specific implementation, the point cloud construction function of a mobile device can be tested as follows. The functional environment is tested by performing dense point cloud reconstruction on the mobile device and confirming whether the function is available; the reconstructed dense point cloud should be expanded incrementally in real time.
The performance of dense point cloud reconstruction can be tested according to the following method:
a) take an accurate three-dimensional model of the scene obtained by a three-dimensional scanner as the ground truth;
b) perform dense point cloud reconstruction of the scene on the mobile device, and evaluate the geometric error between the reconstructed dense point cloud and the ground truth;
c) record the single-frame execution time of the dense point cloud reconstruction algorithm in the log of the mobile device, and compute the average reconstruction time per frame over an execution period of not less than 5 min, where the average reconstruction time per frame may refer to the average reconstruction time per key frame.
Further, dense point cloud reconstruction by the augmented reality system of the mobile device should satisfy the following requirements:
a. the dense point cloud expansion processing time is not greater than 166 ms per key frame;
b. the dense point cloud position error is not more than 3 cm/m.
For scenes containing non-planar complex structures, the augmented reality system of the mobile device should satisfy the following requirements:
a. a 3D dense point cloud of the scene is reconstructed, with each point cloud point carrying position information, a normal, and a color;
b. as the mobile device moves and the visible scene area expands, the dense 3D point cloud is expanded in real time and the positions of the 3D point cloud points are updated.
In an alternative embodiment, after the test result of the point cloud construction function of the target device is determined, the method further includes: in response to the test result being a pass, performing dense point cloud reconstruction on any scene by using the target device to obtain a dense point cloud corresponding to that scene.
Here, any scene is a real scene for which a dense point cloud needs to be constructed. After the dense point cloud corresponding to the scene is obtained, the constructed dense point cloud can be displayed on the target device.
When the test result of the target device is a pass, it is determined that the target device has the point cloud construction function and that the function performs well, so the target device can be used to perform dense point cloud reconstruction on any scene and obtain an accurate dense point cloud for that scene.
In an optional implementation, after the dense point cloud corresponding to the scene is obtained, the method further includes: superimposing a virtual model on the dense point cloud of the scene by using the target device to obtain augmented reality data corresponding to the scene, and controlling the target device to display the augmented reality data.
In implementation, after the dense point cloud corresponding to the scene is obtained, the target device can superimpose a virtual model on the dense point cloud to obtain the augmented reality data corresponding to the scene. The augmented reality data may be presentation data with an AR effect, and the virtual model can be any pre-constructed AR model, for example a virtual animal, cartoon character, mythological character, building, or plant.
With the above method, after the dense point cloud corresponding to the scene is obtained, the target device superimposes the virtual model on the dense point cloud to obtain the augmented reality data, and the target device is controlled to display the augmented reality data, realizing AR effect presentation.
In the present disclosure, the process of building and testing a dense point cloud has been described in detail above. In other scenarios, the AR system integrated on the mobile device may also perform mesh reconstruction of a target object in the real scene. The specific implementation and testing of mesh reconstruction are described in detail below.
When the target device builds a dense mesh corresponding to the current scene in real time during the moving process, dense mesh data corresponding to the current scene can be generated. The dense mesh data may include mesh data corresponding to each triangular mesh, and each piece of mesh data may include the position information, normal information, color information, and the like of each vertex of the triangular mesh. One possible layout for such data is sketched below.
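For illustration only, one possible container for the dense mesh data described above is sketched here; the field layout is an assumption and is not specified by the disclosure:

```python
# Hypothetical layout of the dense mesh data described above.
from dataclasses import dataclass
import numpy as np

@dataclass
class DenseMeshData:
    vertices: np.ndarray  # (V, 3) position information of each vertex
    normals: np.ndarray   # (V, 3) normal information of each vertex
    colors: np.ndarray    # (V, 3) color information of each vertex
    faces: np.ndarray     # (F, 3) vertex indices of each triangular face
```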
In specific implementation, the dense mesh reconstruction function of the mobile device may be tested. For scenes containing complex non-planar structures, the augmented reality system on the mobile device should meet the following requirements: 1. reconstructing a dense three-dimensional triangle mesh of the scene, where each vertex of the triangle mesh comprises position, normal, and color information; 2. as the visible scene area expands with the movement of the mobile device, the dense 3D mesh expands in real time and the vertex positions and mesh topology are updated.
In the test, the performance of dense mesh reconstruction is tested as follows:
a) Taking an accurate 3D model of the scene, scanned by a 3D scanner, as the ground-truth three-dimensional model;
b) Performing dense mesh reconstruction of the scene on the mobile device, and evaluating the geometric error between the reconstructed dense mesh and the ground-truth three-dimensional model;
c) Recording the single-frame execution time of the dense mesh reconstruction algorithm in a log, and computing the average per-frame reconstruction time of the dense mesh reconstruction over an execution period of not less than 5 min.
Further, dense mesh reconstruction by the augmented reality system on the mobile device should meet the following requirements:
a. the processing time of dense mesh extension is not greater than 166 ms/keyframe;
b. the geometric error of the dense mesh reconstruction is not greater than 3 cm/m.
By testing the dense mesh reconstruction function of the mobile device, for example by testing the extension rate and the geometric error of dense mesh reconstruction, whether the dense mesh reconstruction function of the mobile device meets the requirements can be determined from the test result, realizing an accurate test of the dense mesh reconstruction task of the mobile device.
It will be appreciated by those skilled in the art that, in the methods of the above specific embodiments, the written order of the steps does not imply a strict execution order; the actual execution order should be determined by the functions of the steps and their possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides a testing device for point cloud construction. Referring to fig. 2, a schematic architecture diagram of the testing device for point cloud construction provided by the embodiment of the present disclosure, the device includes an obtaining module 201, a first determining module 202, and a second determining module 203. Specifically:
an obtaining module 201, configured to obtain, in response to a target device constructing a dense point cloud corresponding to a current scene in real time during movement, point cloud data corresponding to the dense point cloud and processing rate information when the target device constructs the dense point cloud;
a first determining module 202, configured to determine a point cloud position error corresponding to the dense point cloud constructed by the target device, based on the point cloud data and reference point cloud data of a reference point cloud corresponding to the current scene;
a second determining module 203, configured to determine a test result of the point cloud construction function for the target device based on the point cloud position error, the processing rate information and a preset performance test index.
In one possible embodiment, the processing rate information includes the number of key frames processed per second; the second determining module 203 is configured to, when determining a test result of the point cloud construction function for the target device based on the point cloud position error, the processing rate information, and a preset performance test index:
And under the condition that the point cloud position error is smaller than a position error threshold indicated by the performance test index and the processing rate information indicates that the key frame number processed per second is larger than the key frame number per second threshold indicated by the performance test index, determining that a test result of the point cloud construction function aiming at the target equipment is test passing.
In one possible embodiment, the processing rate information includes an average time consumption per key frame; the second determining module 203 is configured to, when determining a test result of the point cloud construction function for the target device based on the point cloud position error, the processing rate information, and a preset performance test index:
and determining that the test result of the point cloud construction function corresponding to the target equipment is test passing under the condition that the point cloud position error is smaller than a position error threshold indicated by the performance test index and the average time consumption of each key frame indicated by the processing rate information is smaller than a time consumption threshold indicated by the performance test index.
In a possible implementation manner, the obtaining module 201 is configured to determine the average time consumption of each key frame according to the following steps (a sketch is given after this list):
acquiring the execution duration for which the target device constructs the dense point cloud based on a plurality of key frames in video stream data acquired in real time, where the video duration corresponding to the video stream data is greater than or equal to a preset duration;
and determining the average time consumption of each key frame based on the execution duration corresponding to the plurality of key frames and the number of the key frames.
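A minimal sketch of this computation, under the assumption that the per-key-frame execution times have already been parsed from the device log as durations in milliseconds:

```python
# Assumes `execution_times_ms` holds one entry per processed key frame,
# parsed from the log over a video of at least the preset duration.
def average_keyframe_time(execution_times_ms: list[float]) -> float:
    """Average time consumption per key frame: total execution duration
    divided by the number of key frames."""
    if not execution_times_ms:
        raise ValueError("no key frames recorded")
    return sum(execution_times_ms) / len(execution_times_ms)
```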
In a possible implementation manner, the first determining module 202 is configured to, when determining the point cloud position error corresponding to the dense point cloud constructed by the target device based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene (a sketch is given after this list):
determine a similarity transformation matrix between the dense point cloud and the reference point cloud based on the point cloud data and the reference point cloud data;
transform the coordinate information of each vertex in the point cloud data based on the similarity transformation matrix to obtain transformed point cloud data, where the transformed point cloud data includes the transformed coordinate information corresponding to each vertex;
and determine the point cloud position error corresponding to the dense point cloud constructed by the target device based on the transformed point cloud data and the reference point cloud data.
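The disclosure does not specify how the similarity transformation matrix is estimated. One standard option, assumed here purely as a sketch, is the closed-form Umeyama method, which requires matched point pairs (for instance from an ICP association step):

```python
# Sketch only: assumes `src` (dense point cloud) and `dst` (reference
# point cloud) are (N, 3) arrays of already-matched point pairs.
import numpy as np

def similarity_transform(src: np.ndarray, dst: np.ndarray):
    """Umeyama closed-form estimate of the scale s, rotation R and
    translation t minimizing ||dst - (s * R @ src + t)||."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # avoid reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / src_c.var(axis=0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

def point_cloud_position_error(src: np.ndarray, dst: np.ndarray) -> float:
    """Transform each vertex of the dense point cloud with the estimated
    similarity transformation and report the mean residual distance."""
    s, R, t = similarity_transform(src, dst)
    transformed = src @ (s * R).T + t  # transformed coordinate information
    return float(np.linalg.norm(transformed - dst, axis=1).mean())
```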
In a possible implementation manner, the first determining module 202 is configured to construct the reference point cloud data of the reference point cloud corresponding to the current scene according to the following steps (a sketch of the final scaling step is given after this list):
acquiring video stream data of the current scene collected in advance by an acquisition device, and inertial measurement unit (IMU) data recorded while the acquisition device collected the video stream data;
processing the video stream data and the IMU data by using a model reconstruction platform to generate virtual point cloud data of the reference point cloud corresponding to the current scene;
and generating the reference point cloud data of the reference point cloud based on the scale information between the model reconstruction platform and the real scene and on the virtual point cloud data.
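A sketch of the final step, under the assumption that the scale information reduces to a single scalar factor between the platform's coordinate system and the real scene; the disclosure does not fix the form of the scale information:

```python
# Assumes `scale` maps platform units to real-world units.
import numpy as np

def reference_point_cloud(virtual_points: np.ndarray, scale: float) -> np.ndarray:
    """Map the virtual point cloud produced by the model reconstruction
    platform to real-world scale to obtain the reference point cloud data."""
    return virtual_points * scale
```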
In a possible implementation manner, the obtaining module 201 is configured to have the target device construct the dense point cloud corresponding to the current scene in real time during movement according to the following steps (a hypothetical driver loop is sketched after this list):
acquiring, while the target device moves along a set moving route, video stream data corresponding to the current scene collected by the target device in real time and inertial measurement unit (IMU) data corresponding to the target device;
and constructing the dense point cloud corresponding to the current scene in real time based on the video stream data and the IMU data by using a three-dimensional reconstruction module integrated on the target device.
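The integrated three-dimensional reconstruction module is device specific; the following driver loop is therefore a purely hypothetical sketch, and `camera.frames()`, `imu.samples_since()` and the `reconstructor` interface are assumed names rather than a real SDK:

```python
# Hypothetical real-time construction loop; all interfaces are assumed.
def build_dense_point_cloud(camera, imu, reconstructor):
    """Feed synchronized video frames and IMU samples to the on-device
    three-dimensional reconstruction module while the device moves; the
    dense point cloud is extended incrementally."""
    for frame in camera.frames():                       # real-time video stream
        imu_batch = imu.samples_since(frame.timestamp)  # matching IMU data
        reconstructor.feed(frame, imu_batch)
        if reconstructor.has_update():
            yield reconstructor.current_point_cloud()   # incrementally grown cloud
```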
In a possible implementation manner, after determining a test result of a point cloud construction function for the target device, the apparatus further includes: an application module 204 for:
and responding to the test result as the test passing, and carrying out dense point cloud reconstruction on any scene by utilizing the target equipment to obtain dense point cloud corresponding to any scene.
In a possible implementation manner, the application module 204 is further configured to, after the dense point cloud corresponding to any scene is obtained:
and superposing a virtual model on the dense point cloud of any scene by using the target equipment to obtain the augmented reality data corresponding to any scene, and controlling the target equipment to display the augmented reality data.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the above method embodiments; for specific implementation, reference may be made to the descriptions of the above method embodiments, which are not repeated here for brevity.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to fig. 3, a schematic structural diagram of an electronic device 300 provided by an embodiment of the present disclosure includes a processor 301, a memory 302, and a bus 303. The memory 302 is configured to store execution instructions and includes an internal memory 3021 and an external memory 3022. The internal memory 3021 is used to temporarily store operation data in the processor 301 and data exchanged with the external memory 3022, such as a hard disk; the processor 301 exchanges data with the external memory 3022 through the internal memory 3021. When the electronic device 300 runs, the processor 301 and the memory 302 communicate through the bus 303, so that the processor 301 executes the following instructions:
Responding to a target device to construct a dense point cloud corresponding to a current scene in real time in a moving process, and acquiring point cloud data corresponding to the dense point cloud and processing rate information when the target device constructs the dense point cloud;
determining a point cloud position error corresponding to the dense point cloud constructed by the target equipment based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene;
and determining a test result of the point cloud construction function aiming at the target equipment based on the point cloud position error, the processing rate information and a preset performance test index.
The specific process flow of the processor 301 may refer to the descriptions of the above method embodiments, and will not be described herein.
Furthermore, the embodiment of the present disclosure further provides a computer readable storage medium, where a computer program is stored, and the computer program is executed by a processor to perform the steps of the test method for point cloud construction described in the above method embodiment. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and instructions included in the program code may be used to execute the steps of the test method for point cloud construction described in the foregoing method embodiments, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, detection or identification of relevant features, states, and attributes of the target object is realized by means of various vision-related algorithms, so as to obtain an AR effect combining virtuality and reality that matches a specific application. By way of example, the target object may involve a face, limbs, gestures, or actions associated with a human body, or a marker associated with an object, or a sand table, display area, or display item associated with a venue or location. Vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and so on. The specific application may involve not only interactive scenarios such as navigation, explanation, reconstruction, and virtual-effect superposition display related to real scenes or objects, but also interactive scenarios such as makeup beautification, body beautification, special-effect display, and virtual model display related to people. The detection or identification of the relevant features, states, and attributes of the target object may be realized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
If the technical solution of the present application involves personal information, a product applying the technical solution clearly informs the user of the personal information processing rules and obtains the individual's voluntary consent before processing the personal information. If the technical solution involves sensitive personal information, a product applying the technical solution obtains the individual's explicit consent before processing the sensitive personal information. For example, a clear and prominent sign may be set at a personal information collection device such as a camera to inform people that they have entered the personal information collection range and that personal information will be collected; if an individual voluntarily enters the collection range, it is deemed that the individual consents to the collection of his or her personal information. Alternatively, on a device that processes personal information, with obvious signs/information informing of the personal information processing rules, personal authorization may be obtained by means of a pop-up message or by asking the individual to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of personal information processing, the processing methods, and the types of personal information processed.
The foregoing is merely a specific embodiment of the disclosure, but the protection scope of the disclosure is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed herein, and such changes or substitutions should be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A testing method for point cloud construction, characterized by comprising the following steps:
responding to a target device to construct a dense point cloud corresponding to a current scene in real time in a moving process, and acquiring point cloud data corresponding to the dense point cloud and processing rate information when the target device constructs the dense point cloud;
determining a point cloud position error corresponding to the dense point cloud constructed by the target equipment based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene;
and determining a test result of the point cloud construction function aiming at the target equipment based on the point cloud position error, the processing rate information and a preset performance test index.
2. The method of claim 1, wherein the processing rate information includes a key frame number processed per second;
The determining a test result of the point cloud construction function for the target device based on the point cloud position error, the processing rate information and a preset performance test index includes:
and under the condition that the point cloud position error is smaller than a position error threshold indicated by the performance test index and the processing rate information indicates that the key frame number processed per second is larger than the key frame number per second threshold indicated by the performance test index, determining that a test result of the point cloud construction function aiming at the target equipment is test passing.
3. The method of claim 1, wherein the processing rate information comprises an average time consumption per key frame;
the determining a test result of the point cloud construction function for the target device based on the point cloud position error, the processing rate information and a preset performance test index includes:
and determining that the test result of the point cloud construction function corresponding to the target equipment is test passing under the condition that the point cloud position error is smaller than a position error threshold indicated by the performance test index and the average time consumption of each key frame indicated by the processing rate information is smaller than a time consumption threshold indicated by the performance test index.
4. A method according to claim 3, wherein the average time consumption of each key frame is determined according to the steps of:
acquiring execution time length of a dense point cloud constructed by target equipment based on a plurality of key frames in video stream data acquired in real time; the video time length corresponding to the video stream data is longer than or equal to a preset time length;
and determining the average time consumption of each key frame based on the execution time length corresponding to the key frames and the number of the key frames.
5. The method according to any one of claims 1-4, wherein the determining, based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene, a point cloud position error corresponding to the dense point cloud constructed by the target device includes:
determining a similarity transformation matrix between the dense point cloud and the reference point cloud based on the point cloud data and the reference point cloud data;
transforming the coordinate information of each vertex in the point cloud data based on the similarity transformation matrix to obtain transformed point cloud data; the converted point cloud data comprise transformed coordinate information corresponding to each vertex;
And determining a point cloud position error corresponding to the dense point cloud constructed by the target equipment based on the converted point cloud data and the reference point cloud data.
6. The method of any one of claims 1-5, wherein constructing the reference point cloud data of the reference point cloud corresponding to the current scene is performed according to the steps of:
acquiring video stream data of the current scene, which is acquired in advance by an acquisition device, and Inertial Measurement Unit (IMU) data when the acquisition device acquires the video stream data;
processing the video stream data and the IMU data by using a model reconstruction platform to generate virtual point cloud data of the reference point cloud corresponding to the current scene;
and generating the reference point cloud data of the reference point cloud based on the scale information between the model reconstruction platform and the real scene and the virtual point cloud data.
7. The method according to any one of claims 1-6, wherein the target device builds a dense point cloud corresponding to the current scene in real time during the moving process, including:
acquiring video stream data corresponding to a current scene acquired by the target equipment in real time and Inertial Measurement Unit (IMU) data corresponding to the target equipment in the process that the target equipment moves according to a set moving route;
And based on the video stream data and the IMU data, utilizing a three-dimensional reconstruction module integrated on the target equipment to construct the dense point cloud corresponding to the current scene in real time.
8. The method according to any one of claims 1-7, wherein after said determining a test result of a point cloud construction function for the target device, the method further comprises:
and responding to the test result as the test passing, and carrying out dense point cloud reconstruction on any scene by utilizing the target equipment to obtain dense point cloud corresponding to any scene.
9. The method of claim 8, wherein, after the dense point cloud corresponding to the any scene is obtained, the method further comprises:
and superposing a virtual model on the dense point cloud of any scene by using the target equipment to obtain the augmented reality data corresponding to any scene, and controlling the target equipment to display the augmented reality data.
10. A test device constructed for a point cloud, comprising:
the acquisition module is used for responding to the dense point cloud corresponding to the current scene constructed by the target equipment in real time in the moving process, acquiring the point cloud data corresponding to the dense point cloud and the processing rate information when the dense point cloud is constructed by the target equipment;
The first determining module is used for determining a point cloud position error corresponding to the dense point cloud constructed by the target equipment based on the point cloud data and the reference point cloud data of the reference point cloud corresponding to the current scene;
and the second determining module is used for determining a test result of the point cloud construction function aiming at the target equipment based on the point cloud position error, the processing rate information and a preset performance test index.
11. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the test method for point cloud construction according to any of claims 1 to 9.
12. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the test method for point cloud construction according to any of claims 1 to 9.