CN110853135A - Indoor scene real-time reconstruction tracking service method based on endowment robot - Google Patents
Indoor scene real-time reconstruction tracking service method based on endowment robot Download PDFInfo
- Publication number
- CN110853135A CN110853135A CN201911050511.3A CN201911050511A CN110853135A CN 110853135 A CN110853135 A CN 110853135A CN 201911050511 A CN201911050511 A CN 201911050511A CN 110853135 A CN110853135 A CN 110853135A
- Authority
- CN
- China
- Prior art keywords
- information
- data
- robot
- user
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 34
- 230000009466 transformation Effects 0.000 claims abstract description 23
- 230000003993 interaction Effects 0.000 claims abstract description 17
- 238000013500 data storage Methods 0.000 claims abstract description 13
- 238000012545 processing Methods 0.000 claims abstract description 13
- 238000001914 filtration Methods 0.000 claims abstract description 10
- 239000011159 matrix material Substances 0.000 claims abstract description 10
- 230000005540 biological transmission Effects 0.000 claims abstract description 9
- 238000004458 analytical method Methods 0.000 claims abstract description 8
- 238000012544 monitoring process Methods 0.000 claims abstract description 4
- 230000006870 function Effects 0.000 claims description 29
- 239000013598 vector Substances 0.000 claims description 18
- 230000008569 process Effects 0.000 claims description 9
- 230000036541 health Effects 0.000 claims description 7
- 230000008921 facial expression Effects 0.000 claims description 6
- 238000009877 rendering Methods 0.000 claims description 4
- 230000002146 bilateral effect Effects 0.000 claims description 3
- 238000000354 decomposition reaction Methods 0.000 claims description 3
- 238000007499 fusion processing Methods 0.000 claims description 3
- 238000004806 packaging method and process Methods 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 claims description 3
- 230000001131 transforming effect Effects 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 claims description 3
- 238000010586 diagram Methods 0.000 description 4
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 230000000474 nursing effect Effects 0.000 description 3
- 230000032683 aging Effects 0.000 description 2
- 230000007547 defect Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000010195 expression analysis Methods 0.000 description 2
- 206010003805 Autism Diseases 0.000 description 1
- 208000020706 Autistic disease Diseases 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Manipulator (AREA)
Abstract
The invention discloses an indoor scene real-time reconstruction and tracking service method based on an elderly-care robot, which comprises the following steps: importing user information from the handheld device, denoising and filtering it, and performing a preliminary analysis; transmitting the data through the wireless transmission module to the data storage area of the data interaction module; acquiring indoor images, including depth information and color information, with the image acquisition module's Kinect V2; performing the reconstruction operation to obtain multi-frame depth information and denoising it; performing a coordinate-axis transformation to obtain point cloud information; computing the rigid-body transformation matrix with an ICP (Iterative Closest Point) algorithm; fusing repeated areas between image frames, extracting key point cloud information, and adding color information; planning the motion track of the depth camera; triangulating the point cloud information to generate a three-dimensional model; placing the data into the information storage area of the data interaction module; and determining the current user, tracking and serving the user with the Kinect V2, and monitoring instructions sent by the user so as to complete the corresponding service functions.
Description
Technical Field
The invention relates generally to image reconstruction technology, in particular to real-time three-dimensional scene reconstruction, and more particularly to an indoor scene real-time reconstruction and tracking service method based on an elderly-care robot.
Background
With the continued aging of the population, China is about to become a deeply aged country, and elderly care faces a huge gap: because their children are busy with work, the elderly cannot be taken care of at home in time and are prone to loneliness, while the care provided by social nursing institutions is incomplete and cannot meet the needs of today's elderly. In recent years, thanks to the rapid development of artificial intelligence, elderly-care robots have been favored by many elderly people; a service robot with complete functions can solve a series of elderly-care problems, allowing the elderly to enjoy a high quality of life and greatly reducing the pressure on the young.
However, most elderly-care robots currently on the market have limited service functions. In particular, when a service requires three-dimensional information about the scene, many of them cannot accurately analyze three-dimensional objects and do not perform well at indoor three-dimensional reconstruction. Because of these shortcomings, existing elderly-care robots offer only a single preset service function, lack three-dimensional information, and cannot provide diversified services.
The Kinect depth camera released by Microsoft a few years ago is, compared with a traditional camera, not only inexpensive and convenient to operate, but also able to synchronously acquire the depth information and color information of objects in real time. Because the Kinect also has an infrared emitter, the information it acquires is less affected by changes in illumination and texture, and the acquired image quality is relatively high. Kinect has already been applied in VR/AR and other related fields.
More recently, the Kinect V2 depth camera was introduced. Its tracking accuracy is improved by 20% over the earlier V1 version, it can track the human body at distances of up to 4 meters, and at short range its precision is twice that of V1, making it very suitable for indoor information acquisition and human-body tracking.
Given the current state of development of elderly-care robots and the advantages of the Kinect V2 depth camera, more and more people are applying depth cameras to three-dimensional scene reconstruction and combining them with modern equipment such as robots to improve product service quality.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a novel indoor scene real-time reconstruction and tracking service method based on an elderly-care robot, so as to improve the service quality of existing elderly-care robots, improve the accuracy of instruction execution, and further develop the robot's companionship work. Under this system, the user first enters their information into an equipped handheld device (such as a mobile phone); after a preliminary analysis in the handheld device, the data are transmitted wirelessly to the robot body; the robot stores the data and begins an initial real-time indoor reconstruction, after which it executes the related service functions. The system can reconstruct the scene in three dimensions in real time at low cost, and generates a high-precision three-dimensional model, thereby supporting related functions of the elderly-care robot such as intelligent object recognition, expression analysis, and map navigation.
The purpose of the invention is realized by the following technical scheme.
The indoor scene real-time reconstruction and tracking service method based on the elderly-care robot according to the invention comprises the following processes:
s1, importing user information of the handheld device, and carrying out denoising and filtering on the imported data information to carry out preliminary analysis processing;
s2, transmitting the data after the preliminary analysis processing to a data storage area in a data interaction module in the robot body through a wireless transmission module;
s3, an image acquisition module acquires images indoors by using a Kinect V2 depth camera built in the robot body, wherein the image acquisition module acquires depth information and color information, a function library compatible with the depth camera needs to be compiled before use, and environment variables are set;
s4, carrying out reconstruction operation, measuring the image information collected in the step S3 to obtain the depth information of multiple frames, and carrying out noise processing by using an image denoising algorithm;
s5, carrying out coordinate axis transformation on the processed image information, and transforming the processed image information into a unified world coordinate so as to obtain point cloud information;
S6, computing the six-degree-of-freedom rigid transformation matrix: aligning the current point cloud with the existing reference point cloud using an ICP algorithm, outputting the relative transformation parameters between two consecutive frames, and then using the result to initialize the camera pose when aligning the next frame;
s7, after the registration of the ICP algorithm is completed, performing fusion processing on repeated areas among the image frames, extracting key point cloud information, simplifying the point cloud, and adding color information;
s8, rendering a scene in real time and guiding a user to plan a motion track of the depth camera;
s9, triangulating the point cloud information by using a surface reconstruction algorithm to generate a visual three-dimensional model;
s10, putting the generated three-dimensional model into an information storage area of the data interaction module for calling each function of the service module;
and S11, determining the current user, performing tracking service on the user by using a Kinect V2 camera, and monitoring an instruction sent by the user so as to complete a corresponding service function.
The specific operation steps of S1 include:
S11, importing the user's current basic information, such as name, age, height, weight, and other health indicators, using the handheld device APP;
s12, collecting facial expression information and posture information of the user;
s13, preprocessing information, and filtering facial expression information and posture information;
and S14, comprehensively analyzing the collected information, judging the health condition of the old at present through network reference data, recording expression information and posture information, and packaging the data into a fixed file format.
The specific operation steps of S2 include:
s21, transmitting data to a data interaction module of the robot body through WIFI wireless transmission;
and S22, storing the packaged file into a data storage area in the data interaction module.
The specific operation steps of S3 include:
s31, firstly, installing a camera driver of Kinect V2, and building a depth camera and OpenNI in the image acquisition module;
s32, finding Opencv and VTK source codes, compiling by using cmake, and configuring environment variables required by Kinect V2;
s33, installing the CUDA and a driver thereof, installing an open source code Eigen library, and configuring the required environment variables.
The specific operation steps of S4 include:
s41, installing the Kinect V2 on the robot body, and connecting the USB interface into an internal computer of the robot body;
S42, once everything is ready, the robot starts its initialization program; the robot is placed in the center of the room, and image acquisition begins;
s43, ensuring that the robot stably collects and scans scene information at a constant speed and obtaining image data;
and S44, carrying out smooth denoising processing on the acquired image data by using a bilateral filtering algorithm.
The specific operation steps of S5 include:
s51, projecting pixel points mu on each processed frame image to a camera space to obtain a vertex v (mu);
S52, calculating the world coordinate point V_{l,k}(x, y) of each frame of image, where l is the depth information and k is the camera index;
s53, after the world coordinates of each frame of depth map are obtained, calculating the normal vector n of each vertex;
the normal vector n is calculated as follows:
S531, from the Kinect V2 point cloud data, take the 4 neighboring points of V_k(x, y): V_k(x-1, y), V_k(x+1, y), V_k(x, y+1), V_k(x, y-1);
S532, calculating the centroid V_0 of the neighborhood of V_k(x, y);
S533, calculating the third-order covariance matrix C of the neighborhood, C = Σ_i (V_i − V_0)(V_i − V_0)^T;
S534, performing eigendecomposition on C, C v_i = λ_i v_i;
S535, the eigenvector v_1 corresponding to the smallest eigenvalue λ_1 is taken as the normal vector n of the point V_k(x, y).
The specific operation steps of S6 include:
s61, after the coordinate transformation in the step S5, unifying the collected frame image information into world coordinates, and determining the points meeting the depth difference threshold and the vector included angle threshold as matching points;
S62, minimizing the objective function using the ICP (Iterative Closest Point) algorithm to obtain the optimal transformation parameters;
and S63, determining the position and orientation of the current corresponding camera according to the transformation parameters and the reference point cloud information.
The specific operation steps of S7 include:
S71, using the GPU to open up two 1024³ volume spaces in memory, for storing spatial geometric information and color information respectively;
S72, solving for the pixel point μ corresponding to each voxel center point through the projection equation;
S73, calculating the three-dimensional position point p of the pixel point μ in world coordinates through the coordinate transformation, and calculating the distance d_vpk between the voxel v of the point cloud space and the point p;
S74, finding the intersection points where d_vpk = 0, and recording the color information into the point cloud color volume space.
The specific operation steps of S10 include:
s101, storing the three-dimensional model data generated finally into a data storage area of a data interaction module in a ply format;
S102, the robot begins to locate the target user and starts the tracking service, maintaining a safety distance of 1.5 m; the instruction-monitoring process is started and waits for command input.
The specific operation steps of S11 include:
s111, when a user inputs an instruction, the monitoring program receives user voice information and converts the voice information into text information;
s112, extracting keywords from the text information, such as searching for a cup, and starting an intelligent object searching mode;
s113, the service module calls a ply file of the data storage area and finds characteristic data of the cup in the three-dimensional information data;
s114, the position of the cup is returned to the service module, and the robot determines the position information of the cup.
By further improving the elderly-care robot, the invention provides an indoor scene real-time reconstruction and tracking service method that uses a handheld device and a Kinect camera so that the robot can exploit three-dimensional information in its environment while providing companionship services. This raises the robot's service quality, and the three-dimensional information enables more service functions to be developed. At the same time, the ICP algorithm adopted during real-time reconstruction has high registration accuracy and low cost and can meet the requirements for the needed three-dimensional information. The expected beneficial effects are:
1) The invention can perform a preliminary analysis in the handheld device of the information entered by the user, accurately analyzing the user's health condition, physical state, expression information, and so on; the resulting data are applied in the robot, improving its service quality.
2) The method can reconstruct the indoor scene in real time by using the Kinect depth camera, and can preprocess the acquired three-dimensional information before reconstruction so as to ensure the quality of the three-dimensional information. The nearest point cloud between the image frames can be accurately registered by adopting an ICP algorithm, and the quality of the reconstructed image after reconstruction can meet the precision level required by the related functions of the robot.
3) On top of the service functions of a traditional elderly-care robot, the availability of three-dimensional information improves the quality of the related service functions, enhancing object recognition, indoor navigation, expression analysis, obstacle avoidance, and the like.
4) The invention has low reconstruction cost and improved efficiency; the quality of the data collected by the Kinect V2 depth camera is optimized; the computation of vertices and normal vectors is optimized; and ICP registration is used with a confidence interval set for each pair of registration points, improving the robustness of the registration.
Drawings
FIG. 1 is an overall system framework diagram of the present invention;
FIG. 2 is a hand-held device module of the present invention;
FIG. 3 is a block diagram of a wireless transmission module according to the present invention;
FIG. 4 is a KinectV2 camera configuration of the present invention;
FIG. 5 is a flow chart of the real-time reconstruction of the present invention;
FIG. 6 illustrates an embodiment of the present invention;
FIG. 7 shows the function of part of the Kinect V2 according to the present invention;
Fig. 8 is a software module diagram of a part of the robot system of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, embodiments of the invention are described in detail below with reference to the accompanying drawings. Figure 1 shows the overall process framework of the invention. A practical implementation of the indoor scene real-time reconstruction and tracking service method based on the elderly-care robot is provided for reference.
The invention provides an indoor scene real-time reconstruction and tracking service system based on an elderly-care robot: a handheld device is provided for the elderly user to control the robot and transmit their information to the robot body; the robot completes the real-time reconstruction of the scene using the Kinect depth camera; and the reconstructed data are applied in the service function module.
The specific implementation of the indoor scene real-time reconstruction and tracking service method based on the elderly-care robot according to the invention is as follows:
and S1, importing user information of the handheld device, and carrying out denoising and filtering on the imported data information to carry out preliminary analysis processing. As shown in fig. 2, the specific operation steps include:
S11, importing the user's current basic information, such as name, age, height, weight, and other health indicators, using the handheld device APP (for example, a mobile phone app);
s12, collecting facial expression information, posture information and the like of the user;
s13, preprocessing information, and filtering facial expression information and posture information;
and S14, comprehensively analyzing the collected information, judging the health condition of the old at present through network reference data, recording expression information, posture information and the like, and packaging the data into a fixed file format.
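As a sketch of the S14 packaging step, the analyzed information might be bundled into a fixed file format such as JSON before wireless transmission. The field names and the BMI-style health index below are illustrative assumptions, since the patent specifies only "a fixed file format":

```python
import json

def package_user_record(name, age, height_cm, weight_kg, expression, posture):
    """Bundle the analyzed user information into a fixed file format (S14).

    The JSON layout and field names are assumptions for illustration; the
    health_index is a simple BMI-style stand-in for the health analysis.
    """
    record = {
        "basic_info": {"name": name, "age": age,
                       "height_cm": height_cm, "weight_kg": weight_kg},
        # BMI = weight (kg) / height (m)^2, rounded to one decimal place
        "health_index": round(weight_kg / (height_cm / 100.0) ** 2, 1),
        "expression": expression,
        "posture": posture,
    }
    return json.dumps(record)

packed = package_user_record("Zhang", 72, 168, 62, "neutral", "sitting")
```

The resulting string can then be stored as-is in the data storage area of the data interaction module (S22).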
And S2, transmitting the data after the preliminary analysis processing to a data storage area in a data interaction module in the robot body through a wireless transmission module. As shown in fig. 3, the specific operation steps include:
s21, transmitting data to a data interaction module of the robot body through WIFI wireless transmission;
and S22, storing the packaged file into a data storage area in the data interaction module.
S3, the image acquisition module acquires images indoors by using a Kinect V2 depth camera built in the robot body, the image acquisition module acquires depth information and color information, a function library compatible with the depth camera needs to be compiled before use, and environment variables are set. FIG. 4 is a diagram of the KinectV2 structure, including the following steps:
s31, firstly, installing a camera driver of Kinect V2, and building a depth camera and OpenNI in the image acquisition module;
s32, finding Opencv and VTK source codes, compiling by using cmake, and configuring environment variables required by Kinect V2;
s33, installing the CUDA and a driver thereof, installing an open source code Eigen library, and configuring the required environment variables.
And S4, carrying out reconstruction operation, measuring the image information collected in the step S3 to obtain the depth information of multiple frames (namely the depth information of the video sequence), and carrying out noise processing by using an image denoising algorithm. As shown in fig. 5 and 6, the specific operation steps include:
s41, installing the Kinect V2 on the robot body, and connecting the USB interface into an internal computer of the robot body;
S42, once everything is ready, the robot starts its initialization program; the robot is placed in the center of the room, and image acquisition begins;
s43, ensuring that the robot stably collects and scans scene information at a constant speed and obtaining image data;
and S44, carrying out smooth denoising processing on the acquired image data by using a bilateral filtering algorithm.
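The bilateral filtering of S44 smooths a depth image while preserving depth discontinuities, because each neighbor is weighted both by spatial distance and by depth difference. A minimal pure-NumPy sketch (the kernel size and sigmas are illustrative choices, not values from the patent):

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a float depth image (sketch of S44).

    sigma_s weights spatial distance, sigma_r weights depth difference, so
    pixels across a depth discontinuity contribute almost nothing.
    """
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
    padded = np.pad(depth, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel: penalize depth values far from the center pixel
            rng = np.exp(-((patch - depth[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

In practice an optimized implementation (e.g. OpenCV's `bilateralFilter`) would be used on the Kinect frames; this loop form only shows the weighting.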
And S5, performing coordinate axis transformation on the processed image information, and transforming the processed image information into a uniform world coordinate, thereby obtaining point cloud information. The method comprises the following specific operation steps:
s51, projecting pixel points mu on each processed frame image to a camera space to obtain a vertex v (mu);
S52, calculating the world coordinate point V_{l,k}(x, y) of each frame of image, where l is the depth information and k is the camera index;
s53, after the world coordinates of each frame of depth map are obtained, calculating the normal vector n of each vertex;
the normal vector n is calculated as follows:
S531, from the Kinect V2 point cloud data, take the 4 neighboring points of V_k(x, y): V_k(x-1, y), V_k(x+1, y), V_k(x, y+1), V_k(x, y-1);
S532, calculating the centroid V_0 of the neighborhood of V_k(x, y);
S533, calculating the third-order covariance matrix C of the neighborhood, C = Σ_i (V_i − V_0)(V_i − V_0)^T;
S534, performing eigendecomposition on C, C v_i = λ_i v_i;
S535, the eigenvector v_1 corresponding to the smallest eigenvalue λ_1 is taken as the normal vector n of the point V_k(x, y).
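The normal-vector computation of S531 to S535 can be sketched with NumPy: form the covariance of the 4-neighborhood about its centroid, eigendecompose it, and take the eigenvector of the smallest eigenvalue as the normal. Interpreting λ_1 as the smallest eigenvalue is an assumption consistent with standard PCA normal estimation:

```python
import numpy as np

def point_normal(neighbors):
    """Estimate the normal at V_k(x, y) from its 4 neighboring 3D points.

    neighbors: (4, 3) array of V_k(x-1,y), V_k(x+1,y), V_k(x,y+1), V_k(x,y-1).
    Returns a unit normal vector (sign is arbitrary).
    """
    v0 = neighbors.mean(axis=0)            # S532: neighborhood centroid
    d = neighbors - v0
    C = d.T @ d                            # S533: 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)   # S534: eigendecomposition, ascending
    return eigvecs[:, 0]                   # S535: eigenvector of smallest eigenvalue
```

For neighbors lying in a plane, the smallest eigenvalue is zero and its eigenvector is exactly the plane normal.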
And S6, computing the six-degree-of-freedom rigid transformation matrix: the current point cloud is aligned with the existing reference point cloud using an ICP algorithm, the relative transformation parameters between two consecutive frames are output, and the result is then used to initialize the camera pose when aligning the next frame. The method comprises the following specific operation steps:
s61, after the coordinate transformation in the step S5, unifying the collected frame image information into world coordinates, and determining the points meeting the depth difference threshold and the vector included angle threshold as matching points;
S62, minimizing the objective function using the ICP (Iterative Closest Point) algorithm to obtain the optimal transformation parameters;
and S63, determining the position and orientation of the current corresponding camera according to the transformation parameters and the reference point cloud information.
The depth difference threshold value is set to be 0.2, and the vector included angle threshold value is set to be 15 degrees.
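A minimal sketch of the alignment step of S61 to S63, assuming matching points have already been selected by the depth-difference and angle thresholds: the point-to-point ICP objective of S62 has a closed-form solution per iteration via SVD (the Kabsch method), which is one standard way to implement it:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of matched points. This is the closed-form
    minimization of one ICP iteration (S62); the resulting pose gives the
    current camera position and orientation (S63).
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this solve with re-matching points under the thresholds until the objective converges.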
And S7, after the registration of the ICP algorithm is completed, performing fusion processing on repeated areas among the image frames, extracting key point cloud information, simplifying the point cloud, and adding color information. The method comprises the following specific operation steps:
S71, using the GPU to open up two 1024³ volume spaces in memory, for storing spatial geometric information and color information respectively;
S72, solving for the pixel point μ corresponding to each voxel center point through the projection equation;
S73, calculating the three-dimensional position point p of the pixel point μ in world coordinates through the coordinate transformation, and calculating the distance d_vpk between the voxel v of the point cloud space and the point p;
S74, finding the intersection points where d_vpk = 0, and recording the color information into the point cloud color volume space.
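S72 to S74 amount to evaluating a signed distance d_vpk per voxel and locating the d_vpk = 0 surface. As an illustration under the simplifying assumption of a single 1D row of voxels, the crossing can be found from a sign change with linear interpolation:

```python
def find_zero_crossing(dists, voxel_size=1.0):
    """Locate where the signed distance d_vpk crosses zero along a voxel row.

    dists: signed distances sampled at voxel centers 0, 1, 2, ... (in voxel
    units). Returns the interpolated surface position, or None if no crossing.
    """
    for i in range(len(dists) - 1):
        a, b = dists[i], dists[i + 1]
        if a == 0.0:                   # surface exactly at a voxel center
            return i * voxel_size
        if a * b < 0:                  # sign change between voxels i and i+1
            frac = a / (a - b)         # linear interpolation to d = 0
            return (i + frac) * voxel_size
    return None
```

In the full method the same test runs per ray through the 1024³ volume, and the color volume is sampled at each recovered crossing.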
S8, rendering the scene in real-time and guiding the user to plan the motion trajectory of the depth camera.
And rendering the scene by using the space geometric information and the color information obtained from the GPU memory, and determining the motion trail of the camera.
S9, triangulating the point cloud information using a surface reconstruction algorithm to generate a visual three-dimensional model.
The scene is reconstructed using FSS surface reconstruction, which allows the reconstructed triangular mesh to carry color information.
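As a simpler stand-in for FSS that illustrates the triangulation of S9, an organized point cloud (one 3D point per depth pixel) can be meshed by splitting each grid cell of adjacent points into two triangles; only the index pattern is shown here:

```python
def grid_triangles(rows, cols):
    """Triangulate an organized point cloud of shape (rows, cols).

    Point (r, c) has index r * cols + c; every quad of adjacent points is
    split into two triangles. Returns a list of vertex-index triples.
    """
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append((i, i + 1, i + cols))             # upper-left triangle
            tris.append((i + 1, i + cols + 1, i + cols))  # lower-right triangle
    return tris
```

Per-vertex colors from the color volume can then be attached directly, since every triangle indexes the original points.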
And S10, placing the generated three-dimensional model into the information storage area of the data interaction module for the functions of the service module to call. The method comprises the following specific operation steps:
s101, storing the three-dimensional model data generated finally into a data storage area of a data interaction module in a ply format;
S102, the robot begins to locate the target user and starts the tracking service, maintaining a safety distance of 1.5 m; the instruction-monitoring process is started and waits for command input.
And S11, determining the current user, performing tracking service on the user by using a Kinect V2 camera, and monitoring an instruction sent by the user, so as to complete the corresponding service function, as shown in FIGS. 7 and 8. The method comprises the following specific operation steps:
s111, when a user inputs an instruction, the monitoring program receives user voice information and converts the voice information into text information;
s112, extracting keywords from the text information, such as searching for a cup, and starting an intelligent object searching mode;
s113, the service module calls a ply file of the data storage area and finds characteristic data of the cup in the three-dimensional information data;
s114, the position of the cup is returned to the service module, and the robot determines the position information of the cup, as shown in the flow chart of FIG. 8.
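The command-handling flow of S111 to S114 can be sketched as follows; the keyword vocabulary and the object-position mapping are assumptions, with the positions standing in for feature data looked up in the stored .ply model:

```python
KNOWN_OBJECTS = {"cup", "chair", "glasses"}   # assumed recognizable vocabulary

def handle_command(text, object_positions):
    """Extract an object keyword from recognized speech text (S112) and
    return its position from the reconstructed-model database (S113-S114).

    object_positions: dict mapping object names to (x, y, z) coordinates
    found in the three-dimensional model.
    """
    for word in text.lower().split():
        if word in KNOWN_OBJECTS:
            return word, object_positions.get(word)
    return None, None

obj, pos = handle_command("please find my cup", {"cup": (1.2, 0.4, 0.8)})
```

The returned position is what the service module hands to the robot's navigation function to locate the cup.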
While the present invention has been described in terms of its functions and operations with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise functions and operations described above, and that the above-described embodiments are illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined by the appended claims.
Claims (10)
1. An indoor scene real-time reconstruction tracking service method based on an elderly people robot is characterized by comprising the following processes:
s1, importing user information of the handheld device, and carrying out denoising and filtering on the imported data information to carry out preliminary analysis processing;
s2, transmitting the data after the preliminary analysis processing to a data storage area in a data interaction module in the robot body through a wireless transmission module;
s3, an image acquisition module acquires images indoors by using a Kinect V2 depth camera built in the robot body, wherein the image acquisition module acquires depth information and color information, a function library compatible with the depth camera needs to be compiled before use, and environment variables are set;
S4, carrying out the reconstruction operation: measuring the image information collected in step S3 to obtain multi-frame depth information, and suppressing noise with an image denoising algorithm;
S5, carrying out coordinate-axis transformation on the processed image information, transforming it into a unified world coordinate system to obtain point cloud information;
S6, calculating a six-degree-of-freedom rigid transformation matrix using the ICP (Iterative Closest Point) algorithm: aligning the current point cloud with the existing reference point cloud, outputting the relative transformation parameters between two consecutive frames, and using them to initialize the camera pose when aligning the next frame;
s7, after the registration of the ICP algorithm is completed, performing fusion processing on repeated areas among the image frames, extracting key point cloud information, simplifying the point cloud, and adding color information;
s8, rendering a scene in real time and guiding a user to plan a motion track of the depth camera;
s9, triangulating the point cloud information by using a surface reconstruction algorithm to generate a visual three-dimensional model;
s10, putting the generated three-dimensional model into an information storage area of the data interaction module for calling each function of the service module;
and S11, determining the current user, performing tracking service on the user by using a Kinect V2 camera, and monitoring an instruction sent by the user so as to complete a corresponding service function.
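Step S5 of the claim above back-projects each depth pixel into camera space before the world-coordinate transform. A minimal sketch of that back-projection under an assumed pinhole camera model (the intrinsics fx, fy, cx, cy below are illustrative placeholders, not values from the patent):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Project each pixel u = (x, y) with depth d into camera space,
    giving the vertex map v(u) used in step S5 (pinhole model)."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    X = (xs - cx) * depth / fx
    Y = (ys - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)  # (h, w, 3) vertex map

# Toy 2x2 depth map, assumed Kinect-like intrinsics
v = backproject(np.ones((2, 2)), fx=365.0, fy=365.0, cx=0.5, cy=0.5)
```

The resulting vertex map is what the per-frame world-coordinate transform of S52 and the normal estimation of S53 operate on.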
2. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S1 include:
S11, importing the user's current basic information, such as name, age, height, weight, and health-condition indexes, via the handheld-device APP;
s12, collecting facial expression information and posture information of the user;
s13, preprocessing information, and filtering facial expression information and posture information;
and S14, comprehensively analyzing the collected information, judging the elderly person's current health condition against network reference data, recording the expression and posture information, and packaging the data into a fixed file format.
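Step S14 packages the analyzed user data into "a fixed file format" without naming one. A hypothetical sketch using JSON as that format (the record layout and field names are assumptions, not taken from the patent):

```python
import json

# Hypothetical record layout; the patent only requires "a fixed file format".
def package_user_record(name, age, height_cm, weight_kg, expression, posture):
    """Bundle basic user info plus filtered expression/posture features
    into a single serialized record for the data storage area."""
    record = {
        "basic": {"name": name, "age": age,
                  "height_cm": height_cm, "weight_kg": weight_kg},
        "expression": expression,   # filtered facial-expression feature
        "posture": posture,         # filtered posture feature
    }
    return json.dumps(record, ensure_ascii=False)

blob = package_user_record("Zhang", 72, 168, 61.5, "neutral", "sitting")
```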
3. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S2 include:
s21, transmitting data to a data interaction module of the robot body through WIFI wireless transmission;
and S22, storing the packaged file into a data storage area in the data interaction module.
4. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S3 include:
S31, first installing the Kinect V2 camera driver, and setting up the depth camera with OpenNI in the image acquisition module;
S32, obtaining the OpenCV and VTK source code, compiling it with CMake, and configuring the environment variables required by Kinect V2;
S33, installing CUDA and its driver, installing the open-source Eigen library, and configuring the required environment variables.
5. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S4 include:
S41, installing the Kinect V2 on the robot body and connecting it via USB to the computer inside the robot body;
S42, once everything is ready, the robot starts the initialization program; the robot is placed in the center of the room and image acquisition begins;
S43, ensuring that the robot scans the scene steadily at a constant speed to obtain image data;
and S44, smoothing and denoising the acquired image data with a bilateral filtering algorithm.
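Step S44's bilateral filter smooths depth noise while preserving edges by weighting each neighbour by both spatial distance and intensity difference. A small, unoptimized sketch of the idea (the radius and sigma values are illustrative, not from the patent):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted mean of
    its neighbours, weighted by spatial AND intensity closeness."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = w_s * w_r
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A flat depth region with one noisy spike is pulled toward its neighbours
depth = np.full((5, 5), 1.0)
depth[2, 2] = 1.05
smoothed = bilateral_filter(depth)
```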
6. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S5 include:
S51, projecting each pixel point μ of the processed frame image into camera space to obtain a vertex v(μ);
S52, calculating the world coordinate point V_{l,k}(x, y) of each frame image, where l denotes the depth information and k the camera index;
S53, after the world coordinates of each frame of the depth map are obtained, calculating the normal vector n of each vertex;
the normal vector n is calculated as follows:
S531, taking the 4 points neighboring the Kinect V2 point-cloud point V_k(x, y), namely V_k(x-1, y), V_k(x+1, y), V_k(x, y+1) and V_k(x, y-1);
S532, calculating the centroid V_0 of this neighborhood;
S533, calculating the 3×3 covariance matrix C = Σ_i (V_i − V_0)(V_i − V_0)^T over the neighborhood;
S534, performing eigendecomposition on C;
S535, taking the eigenvector v_1 corresponding to the eigenvalue λ_1 as the normal vector of the point V_k(x, y).
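Steps S531–S535 above amount to PCA normal estimation: neighborhood centroid, covariance, eigendecomposition. A compact sketch, taking (as is conventional for surface normals) the eigenvector of the smallest eigenvalue as the normal:

```python
import numpy as np

def normal_from_neighbours(points):
    """Estimate the surface normal at a vertex from its neighbouring points:
    centroid -> covariance -> eigendecomposition (steps S531-S535)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)             # S532: neighborhood centroid V_0
    d = pts - centroid
    C = d.T @ d / len(pts)                  # S533: 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # S534: eigendecomposition (ascending)
    return eigvecs[:, 0]                    # S535: eigenvector of smallest eigenvalue

# Four coplanar neighbours in the z = 0 plane -> normal along +/- z
n = normal_from_neighbours([(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)])
```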
7. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S6 include:
S61, after the coordinate transformation of step S5, unifying the collected frame image information into world coordinates, and determining as matching points those points that satisfy the depth-difference threshold and the normal-vector angle threshold;
S62, minimizing the objective function with the ICP (Iterative Closest Point) algorithm to obtain the optimal transformation parameters;
and S63, determining the position and orientation of the current corresponding camera according to the transformation parameters and the reference point cloud information.
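Step S62's objective-function minimization has a closed-form solution per ICP iteration once correspondences are fixed: the SVD-based (Kabsch) alignment. A sketch of that single step, omitting the full ICP loop and the matching thresholds of S61:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """One ICP step with known correspondences: find R, t minimising
    sum ||R p_i + t - q_i||^2 via the SVD (Kabsch) solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Points translated by (1, 2, 3) should be recovered exactly
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src + np.array([1.0, 2.0, 3.0])
R, t = best_rigid_transform(src, dst)
```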
8. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S7 include:
S71, opening up two 1024³ voxel volumes in GPU memory, for storing spatial geometric information and color information respectively;
s72, solving a pixel point mu corresponding to the voxel center point through a projection equation;
S73, calculating the three-dimensional position point p of the pixel point μ in world coordinates by coordinate transformation, and calculating the distance d_vpk between the voxel v of the point-cloud space and the point p;
S74, finding the intersection point where d_vpk = 0, and recording the color information into the point-cloud color volume.
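Step S74 locates the surface at the sign change of the signed distance d_vpk. A one-dimensional sketch of that zero-crossing, found by linear interpolation between two voxel samples:

```python
# The surface lies where the signed distance d_vpk changes sign; a minimal
# sketch: linearly interpolate the zero crossing between two voxel samples.
def zero_crossing(x0, d0, x1, d1):
    """Given signed distances d0, d1 at positions x0, x1 with opposite
    signs, return the interpolated position where d = 0."""
    assert d0 * d1 < 0, "samples must straddle the surface"
    return x0 + (x1 - x0) * d0 / (d0 - d1)

# Signed distance +0.02 at voxel 5 and -0.02 at voxel 6 -> surface halfway
x = zero_crossing(5.0, 0.02, 6.0, -0.02)
```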
9. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S10 include:
S101, storing the finally generated three-dimensional model data, in ply format, into the data storage area of the data interaction module;
S102, the robot starts to locate the target user and begins the tracking service, keeping a safety distance of 1.5 m, while the instruction-monitoring process starts and waits for command input.
10. The indoor scene real-time reconstruction tracking service method based on the endowment robot as claimed in claim 1, wherein the specific operation steps of S11 include:
S111, when a user issues an instruction, the monitoring program receives the user's voice information and converts it into text information;
S112, extracting keywords from the text information, for example "find the cup", and starting the intelligent object-searching mode;
S113, the service module calls the ply file in the data storage area and locates the feature data of the cup within the three-dimensional information data;
S114, the position of the cup is returned to the service module, and the robot determines the position information of the cup.
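Steps S111–S114 turn recognised speech into a service mode via keyword matching. A toy sketch of that dispatch (the keyword table is hypothetical; the patent names only the "cup" example):

```python
# Hypothetical keyword-to-mode table; only "cup" comes from the patent.
KEYWORD_MODES = {
    "cup": "find_object",
    "glasses": "find_object",
    "help": "emergency_call",
}

def parse_command(text):
    """Extract the first known keyword from recognised speech text and
    return (keyword, service mode), or (None, None) if nothing matches."""
    for word in text.lower().replace(",", " ").split():
        if word in KEYWORD_MODES:
            return word, KEYWORD_MODES[word]
    return None, None

kw, mode = parse_command("please find my cup")
```

On a match, the service module would then load the stored ply model and search it for the object's feature data, as in S113.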
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911050511.3A CN110853135A (en) | 2019-10-31 | 2019-10-31 | Indoor scene real-time reconstruction tracking service method based on endowment robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110853135A true CN110853135A (en) | 2020-02-28 |
Family
ID=69598777
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911050511.3A Pending CN110853135A (en) | 2019-10-31 | 2019-10-31 | Indoor scene real-time reconstruction tracking service method based on endowment robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110853135A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107466404A (en) * | 2017-05-11 | 2017-12-12 | 深圳前海达闼云端智能科技有限公司 | Articles search method, apparatus and robot |
CN108765548A (en) * | 2018-04-25 | 2018-11-06 | 安徽大学 | Three-dimensional scenic real-time reconstruction method based on depth camera |
CN110310362A (en) * | 2019-06-24 | 2019-10-08 | 中国科学院自动化研究所 | High dynamic scene three-dimensional reconstruction method, system based on depth map and IMU |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554691A (en) * | 2021-07-22 | 2021-10-26 | 河北农业大学 | Plant height measuring method |
CN113554691B (en) * | 2021-07-22 | 2022-05-10 | 河北农业大学 | Plant height measuring method |
CN115107057A (en) * | 2022-08-12 | 2022-09-27 | 广东海洋大学 | Intelligent endowment service robot based on ROS and depth vision |
CN116958353A (en) * | 2023-07-27 | 2023-10-27 | 深圳优立全息科技有限公司 | Holographic projection method based on dynamic capture and related device |
CN116958353B (en) * | 2023-07-27 | 2024-05-24 | 深圳优立全息科技有限公司 | Holographic projection method based on dynamic capture and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200228 |