CN114353780B - Pose optimization method and device - Google Patents

Pose optimization method and device

Publication number: CN114353780B
Application number: CN202111666849.9A
Authority: CN (China)
Other versions: CN114353780A (application publication)
Other languages: Chinese (zh)
Inventor: 黄玉玺
Current and original assignee: Autonavi Software Co Ltd
Prior art keywords: point cloud data, environment detectors
Legal status: Active (granted; the legal status is an assumption and is not a legal conclusion)
Classification: Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiments of the present application provide a pose optimization method and device. In the embodiments, the fact that different environment detectors can scan the same object at different moments is exploited: feature extraction is performed on each frame of point cloud data collected by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, with the objective of minimizing the distances between the coordinates of the homonymous points in the same coordinate system, the attitude information recorded by the plurality of environment detectors while collecting the point cloud data is corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while the homonymous points were collected. The attitudes of the environment detectors are thereby optimized, and their attitude accuracy is improved.

Description

Pose optimization method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for pose optimization.
Background
With the continuous development and popularization of intelligent terminals, map application software is widely installed and used. Electronic map data are commonly produced from data collected along roads by a data acquisition device. When constructing a map from the collected data, the corresponding geographic elements and their geometric data must be identified, such as roads and their geometry (including the shape, direction, and position of each road). To accurately extract the corresponding geographic elements and their geometric data, the pose information of the data acquisition device is required.
The accuracy of an electronic map depends largely on the trajectory accuracy of the data acquisition device, and trajectory accuracy comprises position accuracy and attitude accuracy. The attitude accuracy of the data acquisition device therefore has a crucial impact on the accuracy of the electronic map. In view of the foregoing, how to improve the attitude accuracy of a data acquisition device is an ongoing concern for those skilled in the art.
Disclosure of Invention
Aspects of the present application provide a method and apparatus for optimizing the pose of an environment detector, which are useful for improving the attitude accuracy of the environment detector.
An embodiment of the present application provides a pose optimization method, which comprises the following steps:
acquiring each frame of point cloud data collected by a plurality of environment detectors;
performing feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors;
matching the feature points corresponding to the plurality of environment detectors to determine the homonymous points among them, a homonymous point being a data point, in the frames of point cloud data collected by the plurality of environment detectors, that corresponds to the same physical point in the real world;
acquiring the pose information recorded by the plurality of environment detectors while the homonymous points were collected; and
optimizing the attitudes recorded by the plurality of environment detectors while collecting the point cloud data, according to the point cloud coordinates of the homonymous points corresponding to the plurality of environment detectors and the pose information recorded while the homonymous points were collected.
The embodiment of the application also provides a computer device, which comprises: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the above-described pose optimization method.
The embodiment of the application also provides a data acquisition device, which comprises: a machine body; the machine body is provided with a memory, a processor and a plurality of environment detectors;
the plurality of environment detectors are used for collecting point cloud data;
the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the above-described pose optimization method.
The embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the above-described pose optimization method.
The embodiments of the present application also provide a computer program product comprising a computer program, wherein execution of the computer program by a processor implements the pose optimization method described above.
In the embodiments of the present application, the fact that different environment detectors can scan the same object at different moments is exploited: feature extraction is performed on each frame of point cloud data collected by the plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, with the objective of minimizing the distances between the coordinates of the homonymous points in the same coordinate system, the attitude information recorded by the plurality of environment detectors while collecting the point cloud data is corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while the homonymous points were collected. The attitudes of the environment detectors are thereby optimized, and their attitude accuracy is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1a is a schematic diagram of a working environment of a data acquisition device according to an embodiment of the present application;
fig. 1b is a schematic structural diagram of a data acquisition device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an arrangement of environment detectors according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of adjacent-frame scanning by dual environment detectors according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of point cloud data features scanned by dual single-line lidars according to an embodiment of the present application;
FIG. 5 is a schematic diagram showing the effect of coordinate transformation of the point cloud data of FIG. 4 using the poses before and after optimization;
FIG. 6 is a schematic flow chart of a pose optimization method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
In the field of electronic maps, in order to produce electronic map data, a data acquisition device is often used to collect environmental data along roads, and the electronic map data are produced from those environmental data. To accurately extract the corresponding geographic elements and their geometric data, the pose information of the data acquisition device is required.
The accuracy of an electronic map depends largely on the trajectory accuracy of the data acquisition device, and trajectory accuracy comprises position accuracy and attitude accuracy. The attitude accuracy of the data acquisition device therefore has a crucial impact on the accuracy of the electronic map. In the prior art, a LiDAR Odometry and Mapping (LOAM) method can be used to correct the attitude of a laser radar, but LOAM requires that consecutive frames of data collected by the laser radar overlap, and it is therefore not suitable for correcting the attitude of a laser radar whose consecutive frames do not overlap.
In view of the foregoing, there is a need for an attitude optimization method that places no restriction on the consecutive frames of data collected by a laser radar. To solve this technical problem, in some embodiments of the present application, the fact that different environment detectors can scan the same object at different moments is exploited: feature extraction is performed on each frame of point cloud data collected by a plurality of environment detectors to obtain the feature points corresponding to the plurality of environment detectors in each frame; the feature points corresponding to the plurality of environment detectors are matched to obtain the homonymous points among them; and, with the objective of minimizing the distances between the coordinates of the homonymous points in the same coordinate system, the attitudes recorded by the plurality of environment detectors while collecting the point cloud data are corrected according to the point cloud coordinates of the homonymous points and the pose information recorded while the homonymous points were collected. The attitudes of the environment detectors are thereby optimized, and their attitude accuracy is improved.
On the other hand, because the embodiments of the present application optimize the attitudes of the environment detectors by exploiting the fact that different environment detectors can scan the same object at different moments, they provide a pose optimization method that places no restriction on the frames collected by the laser radar: the method works regardless of whether consecutive frames of point cloud data collected by the same environment detector overlap.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
It should be noted that: like reference numerals denote like objects in the following figures and embodiments, and thus once an object is defined in one figure or embodiment, further discussion thereof is not necessary in the subsequent figures and embodiments.
Fig. 1a is an exemplary diagram of a working environment of a data acquisition device according to an embodiment of the present application. Fig. 1b is a schematic structural diagram of a data acquisition device according to an embodiment of the present application. As shown in figs. 1a and 1b, the data acquisition device S1 includes a machine body 10 and a plurality of environment detectors 11 provided on the machine body. Here, a plurality means two or more. The plurality of environment detectors 11 may be disposed on the machine body 10 at an angle to one another, for example in the manner shown in fig. 2. Figs. 1a and 2 illustrate the case of 2 environment detectors 11, but this is not limiting.
The machine body 10 is the actuator of the data acquisition device and mainly refers to the body of the device, which can perform specified operations in a given environment. The machine body 10 is the physical form of the data acquisition device; the present embodiment places no limitation on its appearance. For example, the data acquisition device may be a data collection vehicle, a data collection drone, a humanoid or non-humanoid robot, or the like.
In the present embodiment, the environment detector 11 mainly refers to an electronic device that can collect environment information. The environment detector 11 may be a vision sensor, a radar, or the like. The vision sensor can be a camera or the like; the radar may be a microwave radar, a millimeter-wave radar, a laser radar (lidar), or the like. A lidar may in turn be a single-line or a multi-line lidar.
In this embodiment, as shown in fig. 1b, some basic components of the data acquisition device, such as the driving component 12, are further disposed on the machine body 10. Alternatively, the drive assembly 12 may include drive wheels, drive motors, universal wheels, and the like.
In this embodiment, as shown in fig. 1a, the plurality of environment detectors 11 may collect environment information to obtain point cloud data. Each environment detector 11 collects its own point cloud data, so point cloud data are obtained from each of the plurality of environment detectors 11. An environment detector 11 moves along with the data acquisition device and collects environment information during the movement, producing point cloud data. A frame of point cloud data is the data set obtained by one environment-detection pass of the environment detector 11, and contains the spatial coordinates of the detection points in the coordinate system of the environment detector 11. For a vision sensor, the pixels of a collected environment image form the point cloud data: each pixel corresponds to a detection point and can serve as a data point in the point cloud. For a radar, a frame of point cloud data is the data set obtained by one full scanning revolution. The following describes point cloud data using a lidar as an example.
A lidar can scan the surroundings of the data acquisition device's current position to obtain point cloud data. In this embodiment, the lidar includes a laser transmitter, an optical receiver, and an information processing system. The laser transmitter emits a laser detection signal, which is reflected as an optical echo signal when it encounters an obstacle; the optical receiver receives this optical echo signal. From the reflected optical echo signal, the information processing system can then obtain information about the target, such as the distance between the lidar and the target and the target's azimuth, altitude, and shape.
In this embodiment, the laser detection signal encountering an obstacle essentially means that it encounters a point on the obstacle. For convenience of description and distinction, the obstacle point actually encountered by the laser detection signal during propagation is defined as a detection point. A detection point may be a point on the target object, or it may belong to another object, such as dust in the air. Each laser detection signal returns a corresponding optical echo signal when it encounters a detection point, and the radar can obtain the distance between the lidar and the detection point from the difference between the laser detection signal and the optical echo signal. The way this distance is obtained depends on the kind of laser detection signal emitted.
For example, if the laser detection signal emitted by the lidar is a laser pulse signal, the distance between the radar and the detection point may be calculated from the time difference between the emitted laser pulse signal and the received optical echo signal, i.e., using the time-of-flight method. Optionally, knowing the propagation speed of the laser pulse signal and the optical echo signal in the atmosphere, the distance between the lidar and the detection point can be calculated from that time difference and that propagation speed. For another example, if the laser detection signal emitted by the lidar is a continuous optical signal, the distance between the lidar and the detection point may be calculated from the frequency difference between the emitted continuous optical signal and the received optical echo signal. Optionally, the continuous wave is a Frequency Modulated Continuous Wave (FMCW); the frequency modulation mode may be triangular-wave frequency modulation, sawtooth-wave frequency modulation, code modulation, noise frequency modulation, or the like, but is not limited thereto.
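As a concrete illustration of the two ranging principles above, the following sketch (not part of the patent; the function names and numeric values are assumptions for illustration) computes the range from the round-trip time of a pulsed signal and from the beat frequency of a linearly swept FMCW signal:

```python
# Illustrative sketch: lidar ranging from time of flight and from FMCW beat frequency.
C = 299_792_458.0  # propagation speed, here approximated by c in vacuum (m/s)

def range_from_time_of_flight(t_emit_s: float, t_receive_s: float) -> float:
    """Pulsed lidar: distance = speed * round-trip time / 2."""
    return C * (t_receive_s - t_emit_s) / 2.0

def range_from_fmcw_beat(beat_hz: float, bandwidth_hz: float, sweep_s: float) -> float:
    """Linearly swept FMCW lidar: R = c * f_beat * T_sweep / (2 * B)."""
    return C * beat_hz * sweep_s / (2.0 * bandwidth_hz)

print(range_from_time_of_flight(0.0, 400e-9))  # 400 ns round trip -> ~59.96 m
```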
Further, from the laser detection signals emitted by the lidar and the received optical echo signals, the spatial coordinates of the detection points can be obtained; the spatial coordinates of a number of detection points form the point cloud data, i.e., the point cloud data are a set of spatial coordinate points. Alternatively, the data point corresponding to a detection point may be calculated from the distance between the lidar and the detection point and the pose of the lidar. The pose of the lidar refers to its position and attitude; the attitude of the lidar may refer to the direction of the laser beam emitted by the laser transmitter. From the direction of the emitted laser beam, the direction of the detection point relative to the lidar can be obtained; then, from that direction, the distance between the lidar and the detection point, and the position of the lidar, the spatial coordinates of the detection point in the lidar coordinate system can be calculated and used as the data point corresponding to that detection point in the point cloud data.
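A minimal sketch of this step, under the assumption that the beam direction is given as azimuth and elevation angles in the lidar frame (the function name is hypothetical):

```python
# Illustrative sketch: converting a measured range and the beam direction into
# the spatial coordinates of the detection point in the lidar coordinate system.
import math

def detection_point_xyz(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Spherical-to-Cartesian conversion in the lidar frame."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)
```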
In some embodiments, the data acquisition device may simply perform data acquisition and provide the collected point cloud data to other computer devices for processing. In other embodiments, the data acquisition device may also have data processing functionality. For example, the data acquisition device may be an autonomous mobile device, such as an autonomous vehicle, a robot, or an unmanned aerial vehicle. An autonomous mobile device can collect environment information and construct an environment map while moving, and then perform route planning based on the constructed environment map, and so on. Of course, the data acquisition device may also be another movable device, such as a vehicle that requires a driver.
The pose optimization method provided in the embodiments of the present application is described below by taking a data acquisition device with data processing functionality as an example.
As shown in fig. 1b, the data acquisition device S1 may further comprise: a memory 13 and a processor 14 provided on the machine body 10. It should be noted that the memory 13 and the processor 14 may be disposed inside the machine body 10, or may be disposed on a surface of the machine body 10.
In the present embodiment, the memory 13 may store a computer program. The computer program may be executed by the processor 14 to cause the processor 14 to carry out respective functions or to control the data acquisition device to perform respective actions or tasks, etc.
The processor 14 may be regarded as a control system of the data acquisition device and may be used to execute a computer program stored in the memory 13 for controlling the data acquisition device to perform the respective functions, to perform the respective actions or tasks, etc.
In this embodiment, the processor 14 may acquire each frame of point cloud data collected by the plurality of environment detectors 11, each environment detector 11 contributing its own frames. To reduce the amount of calculation and the noise, the processor 14 may perform feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors 11. The feature points obtained from the point cloud data collected by a given environment detector 11 are the feature points corresponding to that detector.
Alternatively, the processor 14 may calculate the curvature of the data points from the point cloud coordinates of the data points in each frame of point cloud data. Optionally, for any data point A, the processor 14 may calculate, from the point cloud coordinates in the frame to which A belongs, the distances between A and the other data points B in the same frame, and select from those data points B the data points C whose distances meet a set requirement. For instance, the data points B whose distance from A is less than or equal to a set distance threshold may be selected as the data points C; or a set number of data points may be selected as the data points C in ascending order of distance from A. The curvature of data point A may then be calculated from the coordinates of the data points C and of data point A.
Based on the curvature of the data points in each frame of point cloud data, the processor 14 may take, from each frame, the data points whose curvature meets a set requirement as the feature points corresponding to the environment detector 11 that collected the frame. Optionally, the processor 14 may take, from each frame, the edge points whose curvature is greater than or equal to a set first curvature threshold as feature points corresponding to the environment detector 11 that collected the frame; and/or take the plane points whose curvature is smaller than a set second curvature threshold as feature points corresponding to that environment detector 11, where the first curvature threshold is greater than the second curvature threshold. A sketch of this procedure follows.
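The sketch below (not from the patent; the neighbor count and thresholds are assumed values) computes a curvature proxy for every point from its nearest neighbors in the same frame and splits the points into edge and plane feature points as described above:

```python
# Illustrative sketch: curvature-based feature extraction from one frame.
import numpy as np

def extract_features(points: np.ndarray, k: int = 5,
                     edge_thresh: float = 0.5, plane_thresh: float = 0.05):
    """points: (N, 3) point cloud coordinates of one frame (lidar frame)."""
    n = len(points)
    curvature = np.zeros(n)
    for a in range(n):
        # distances from data point A to all other data points B in the frame
        d = np.linalg.norm(points - points[a], axis=1)
        neighbors = np.argsort(d)[1:k + 1]  # the k closest points C
        diff = points[neighbors] - points[a]
        # large residual of the neighbor offsets -> sharp structure (edge)
        curvature[a] = np.linalg.norm(diff.sum(axis=0)) / (k * d[neighbors].mean())
    edge_points = points[curvature >= edge_thresh]    # first curvature threshold
    plane_points = points[curvature < plane_thresh]   # second curvature threshold
    return edge_points, plane_points
```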
Further, the plurality of environment detectors 11 are disposed on the machine body 10 at an angle to one another and can observe the same object at different times. In fig. 3, the dashed lines represent the scans of the first environment detector 11a at times T1 and T2, and the solid lines represent the scans of the second environment detector 11b at times T1 and T2. As can be seen from fig. 3, the first environment detector 11a and the second environment detector 11b observe the object P1 at times T1 and T2, respectively, and observe the object P2 at times T2 and T1, respectively. Based on this, in the present embodiment, the processor 14 may match the feature points corresponding to the plurality of environment detectors 11 and determine the homonymous points among them. Homonymous points are data points, in the frames of point cloud data collected by the plurality of environment detectors 11, that correspond to the same physical point in the real world. In the schematic diagram in fig. 4 of the point cloud data features of a tunnel scene collected by 2 environment detectors, the data points A and B are a pair of homonymous points. The point cloud data shown in fig. 4 may be obtained by a dual single-line lidar scan, but this is not limiting.
Because a pair of homonymous points corresponds to the same physical point in the real world, the processor 14 can, when matching the feature points corresponding to the plurality of environment detectors 11, convert the point cloud data collected by the different environment detectors 11 into the same set coordinate system, such as the world coordinate system. For any data point A of the point cloud data collected by any environment detector 11, the following formula (1) may be used to convert its point cloud coordinates into the set coordinate system:

$$P_w = R_l^w P_l + t_l^w \tag{1}$$

Thus, under ideal conditions, a pair of homonymous points A and B satisfies, in the set coordinate system:

$$P_w = P_{w0} \tag{2}$$

In formula (1), $P_l$ represents the point cloud coordinates of data point A in the point cloud data, i.e., the coordinates of A in the coordinate system of the environment detector that collected the frame; $R_l^w$ and $t_l^w$ represent the attitude matrix and the translation matrix between the environment detector coordinate system and the set coordinate system (e.g., the world coordinate system). In formulas (1) and (2), $P_w$ represents the coordinates of data point A in the set coordinate system, and $P_{w0}$ represents the coordinates of B, the homonymous point of A, in the set coordinate system.
In some embodiments, as shown in fig. 1b, an integrated navigation module 15 is provided on the machine body 10 of the data acquisition device S1. The integrated navigation module 15 may acquire the pose (i.e., position and attitude) of the environment detector 11, and may include an inertial measurement unit (IMU), a positioning unit, a wheel speed meter, and the like, but is not limited thereto. The pose of the environment detector 11 acquired by the integrated navigation module 15 takes the form of an attitude matrix and a translation matrix between the IMU coordinate system and the world coordinate system. Therefore, to reduce the amount of calculation, the point cloud data collected by the environment detector 11 may first be converted into the IMU coordinate system, where the conversion formula may be expressed as:

$$P_i = R_l^i P_l + t_l^i \tag{3}$$

In formula (3), $P_l$ represents the point cloud coordinates of data point A in the point cloud data, i.e., the coordinates of A in the coordinate system of the environment detector that collected the frame; $P_i$ represents the coordinates of A in the IMU coordinate system; $R_l^i$ and $t_l^i$ represent the attitude matrix and the translation matrix between the environment detector coordinate system and the IMU coordinate system. Since the relative positional relationship between the rotation plane of the environment detector 11 and the integrated navigation module 15 is fixed, $R_l^i$ and $t_l^i$ can be calibrated in advance.
Based on formula (3), the point cloud data collected by the environment detectors 11 can be converted into the IMU coordinate system. When the feature points corresponding to the environment detectors 11 are matched, the IMU-frame coordinates of the data points collected by the different environment detectors 11 can be converted into the same set coordinate system, such as the world coordinate system; for any data point A, the following formula (4) may be used to convert its coordinates into the set coordinate system:

$$P_w = R_i^w P_i + t_i^w \tag{4}$$

In formula (4), $R_i^w$ and $t_i^w$ represent the attitude matrix and the translation matrix between the IMU coordinate system and the set coordinate system (e.g., the world coordinate system).
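A minimal sketch of the two-step conversion of formulas (3) and (4), assuming the lidar-to-IMU extrinsics have been calibrated in advance and the IMU-to-world pose comes from the integrated navigation module (variable names are assumptions):

```python
# Illustrative sketch: lidar frame -> IMU frame -> set (world) coordinate system.
import numpy as np

def lidar_to_imu(P_l: np.ndarray, R_li: np.ndarray, t_li: np.ndarray) -> np.ndarray:
    """Formula (3): P_i = R_l^i @ P_l + t_l^i (pre-calibrated extrinsics)."""
    return R_li @ P_l + t_li

def imu_to_world(P_i: np.ndarray, R_iw: np.ndarray, t_iw: np.ndarray) -> np.ndarray:
    """Formula (4): P_w = R_i^w @ P_i + t_i^w (pose recorded at acquisition time)."""
    return R_iw @ P_i + t_iw

# Chaining both steps for one data point A:
# P_w = imu_to_world(lidar_to_imu(P_l, R_li, t_li), R_iw, t_iw)
```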
Under theoretical conditions, a pair of homonymous points A and B satisfies:

$$R_{i1}^w P_i + t_{i1}^w = R_{i2}^w P_{i0} + t_{i2}^w \tag{5}$$

In formula (5), $P_i$ and $P_{i0}$ represent the coordinates of data point A and of its homonymous point B in the IMU coordinate system. Let the first environment detector 11a be the environment detector that collected data point A, and the second environment detector 11b be the environment detector that collected B, the homonymous point of A. Then, in formula (5), $R_{i1}^w$ and $t_{i1}^w$ are respectively the attitude and the position (i.e., the pose) recorded while the first environment detector 11a collected data point A, that is, the attitude matrix and the translation matrix between the IMU coordinate system corresponding to the first environment detector 11a and the set coordinate system (e.g., the world coordinate system); $R_{i2}^w$ and $t_{i2}^w$ are respectively the attitude and the position recorded while the second environment detector 11b collected B, that is, the attitude matrix and the translation matrix between the IMU coordinate system corresponding to the second environment detector 11b and the set coordinate system. The left and right sides of formula (5) convert $P_i$ and $P_{i0}$ into the same set coordinate system (the world coordinate system): the coordinates of the homonymous points A and B in the same set coordinate system are equal, because they are the same physical point.
Based on the above analysis, in the present embodiment, the integrated navigation module 15 may collect pose information of the plurality of environment detectors 11 during the process of collecting the point cloud data. The processor 14 may obtain pose information of the plurality of environment detectors 11 during acquisition of the point cloud data.
When matching the feature points corresponding to the plurality of environment detectors 11, the processor 14 could match the feature points of all frames collected by the plurality of environment detectors against each other. However, this is computationally expensive. Considering that point cloud data whose acquisition positions are far apart cover different geographic areas, the degree of matching between feature points of point cloud data collected in different geographic areas, or in different sub-areas of the same geographic area, is low. Based on this, to reduce the amount of computation, the processor 14 may acquire the pose information of the plurality of environment detectors 11 recorded during point cloud acquisition; calculate, from the position information in that pose information, the distances between the acquisition positions of the point cloud data corresponding to different environment detectors; and select, from the point cloud data corresponding to different environment detectors, the target point cloud data whose acquisition positions are within a set distance threshold of each other. The point cloud data corresponding to an environment detector are the point cloud data collected by that detector; a sketch of this preselection follows.
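A minimal sketch of this frame preselection, assuming one acquisition position per frame from the integrated navigation module (names and the threshold are illustrative):

```python
# Illustrative sketch: keep only frame pairs whose acquisition positions are close.
import numpy as np

def select_candidate_pairs(positions_a: np.ndarray, positions_b: np.ndarray,
                           max_dist_m: float = 10.0):
    """positions_*: (N, 3) per-frame acquisition positions of the two detectors."""
    pairs = []
    for i, pa in enumerate(positions_a):
        for j, pb in enumerate(positions_b):
            if np.linalg.norm(pa - pb) <= max_dist_m:
                pairs.append((i, j))  # candidate target point cloud data pair
    return pairs
```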
Further, the target feature points belonging to the target point cloud data may be acquired from the feature points corresponding to the plurality of environment detectors 11, and the position information of the target feature points in the set coordinate system may be calculated according to the pose information recorded by the plurality of environment detectors while collecting the target feature points and the point cloud coordinates of the target feature points. In this embodiment, formula (3) may be used to convert the point cloud coordinates of a target feature point into its coordinates in the IMU coordinate system, and formula (4) may then be used to calculate its position information in the set coordinate system. In this calculation, $R_i^w$ and $t_i^w$ in formula (4) take the attitude and the position information recorded by the environment detector while collecting the target feature point.
Further, the processor 14 may determine homonymous points among the target feature points corresponding to the plurality of environment detectors 11 according to the position information of the target feature points in the set coordinate system.
Alternatively, the processor 14 may calculate the distances between the target feature points corresponding to different environment detectors 11 according to their position information in the set coordinate system, and determine that target feature points whose mutual distance is less than or equal to a set distance threshold are homonymous points among the target feature points corresponding to different environment detectors 11. Or the processor 14 may calculate, from the position information of the target feature points corresponding to the plurality of environment detectors 11 in the set coordinate system, the included angles between the normal vectors of the target feature points corresponding to different environment detectors, and determine that target feature points whose normal vectors subtend an angle less than or equal to a set angle threshold are homonymous points. Or the processor 14 may combine both criteria and determine that target feature points whose mutual distance is less than or equal to the set distance threshold and whose normal vectors subtend an angle less than or equal to the set angle threshold are homonymous points among the target feature points corresponding to different environment detectors, as sketched below.
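The sketch below illustrates the combined variant (distance plus normal-vector angle); the thresholds and the nearest-neighbor tie-breaking are assumptions for illustration:

```python
# Illustrative sketch: homonymous-point matching in the set coordinate system.
import numpy as np

def match_homonymous(pts_a: np.ndarray, nrm_a: np.ndarray,
                     pts_b: np.ndarray, nrm_b: np.ndarray,
                     dist_thresh: float = 0.2,
                     angle_thresh_rad: float = np.deg2rad(10.0)):
    """pts_*: (N, 3) set-frame positions; nrm_*: (N, 3) unit normal vectors."""
    matches = []
    for i in range(len(pts_a)):
        d = np.linalg.norm(pts_b - pts_a[i], axis=1)
        cos_angle = np.clip(nrm_b @ nrm_a[i], -1.0, 1.0)
        ok = (d <= dist_thresh) & (np.arccos(np.abs(cos_angle)) <= angle_thresh_rad)
        j = int(np.argmin(np.where(ok, d, np.inf)))  # closest admissible candidate
        if ok[j]:
            matches.append((i, j))  # pair of homonymous points (A, B)
    return matches
```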
After determining the homonymous points among the feature points corresponding to the plurality of environment detectors 11, the processor 14 may correct the attitudes recorded by the plurality of environment detectors 11 while collecting the point cloud data, according to the association information of the homonymous points corresponding to the plurality of environment detectors 11.
The association information of a homonymous point can comprise: the point cloud coordinates of the homonymous point, the pose information recorded by the plurality of environment detectors 11 while collecting it, and the like. The pose information recorded by the plurality of environment detectors 11 while collecting the homonymous points may be the pose information of the plurality of environment detectors 11 collected by the integrated navigation module 15 during point cloud acquisition.
Accordingly, the processor 14 may correct the attitudes recorded by the plurality of environment detectors 11 while collecting the point cloud data, according to the point cloud coordinates of the homonymous points corresponding to the plurality of environment detectors 11 and the pose information recorded while the homonymous points were collected. A pair of homonymous points A and B satisfies formula (5) under theoretical conditions, i.e., their coordinates in the same set coordinate system are identical. Therefore, with the objective of minimizing the distances between the corresponding coordinates of the homonymous points in the same coordinate system, the attitude information recorded by the plurality of environment detectors during point cloud acquisition can be corrected according to the point cloud coordinates of the homonymous points in each frame and the pose information recorded by the plurality of environment detectors 11 while collecting them. In this way, when the corrected attitude information is used to calculate the position information of the point cloud data in the set coordinate system, the calculated position information is as close to the actual position information as possible, which improves the accuracy of the determined position information.
Optionally, the processor 14 may obtain, from the pose information of the plurality of environment detectors 11 collected by the integrated navigation module during point cloud acquisition, the pose information recorded by the plurality of environment detectors 11 while the homonymous points were collected, and calculate the attitude correction amount for optimizing the attitudes of the plurality of environment detectors 11 according to the point cloud coordinates of the homonymous points in each frame and that pose information.
The present embodiment does not limit the specific way of calculating this attitude correction amount. Optionally, the point cloud coordinates of the homonymous points may be converted into coordinates in the IMU coordinate system according to formula (3), and the attitude correction amount for optimizing the plurality of environment detectors 11 calculated from those IMU-frame coordinates. In some embodiments, because a pair of homonymous points A and B satisfies formula (5) under theoretical conditions, i.e., their coordinates in the same set coordinate system are identical, the calculated attitude correction amount should make the difference between the coordinates of A and B in the same coordinate system after attitude optimization as small as possible.
Based on the above analysis, the attitude correction parameter can be treated as the quantity to be solved, and the coordinate expressions of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be determined from the point cloud coordinates of the homonymous points in each frame and the pose information recorded by the plurality of environment detectors while collecting them. Specifically, based on formula (3), the point cloud coordinates of the homonymous points in each frame can be converted into their coordinates in the IMU coordinate system; then, based on formula (4), with the attitude correction parameter ΔR as the unknown and the IMU-frame coordinates of the homonymous points and the recorded pose information as known quantities, the coordinate expression of a homonymous point i in the set coordinate system is determined as:

$$P_w = R_i^w \, \Delta R \, P_i + t_i^w \tag{6.1}$$

In formula (6.1), ΔR represents the attitude correction parameter to be solved, and $P_w$ represents the coordinates of the homonymous point i in the set coordinate system.
Further, a mathematical model reflecting the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be constructed from these coordinate expressions. The solving model of the mathematical model can be expressed as:

$$\Delta R^{*} = \arg\min_{\Delta R} f(\Delta R), \qquad f(\Delta R) = \sum_{k=1}^{n} \left\| P_w^{(k)} - P_{w0}^{(k)} \right\|^{2} \tag{6.2}$$

In formula (6.2), ΔR represents the attitude correction parameter to be solved; n represents the total number of pairs of homonymous points in the point cloud data collected by the first environment detector 11a and the second environment detector 11b; k denotes the k-th pair of homonymous points, k = 1, 2, ..., n; and arg min f(x) denotes the value of the argument x at which the function f(x) attains its minimum, so formula (6.2) yields the value of ΔR at which f(ΔR) is minimal.
In formula (6.2), $\| P_w^{(k)} - P_{w0}^{(k)} \|$ represents the distance between the set-frame coordinates of the k-th pair of homonymous points, each computed with formula (6.1), and its square is the squared distance for that pair. The sum of these squares over the n pairs is the mathematical model reflecting the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors 11 in the set coordinate system.
Because a pair of homonymous points corresponds, under theoretical conditions, to a single point in the set coordinate system, the smaller the sum of squared distances between the n pairs of homonymous points in the set coordinate system, the higher the accuracy of the set-frame coordinates of the point cloud data calculated with the attitude correction amount ΔR. Therefore, with the objective of minimizing the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system, the mathematical model can be solved, i.e., formula (6.2) can be solved, to obtain the value of ΔR, which is the attitude correction amount.
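A minimal sketch of solving formula (6.2) with an off-the-shelf least-squares solver, parameterizing ΔR as a rotation vector; the per-point pose layout is an assumption for illustration:

```python
# Illustrative sketch: solving formula (6.2) for the attitude correction DeltaR.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_delta_r(P_i_a, P_i_b, R_iw_a, t_iw_a, R_iw_b, t_iw_b):
    """P_i_*: (n, 3) IMU-frame coordinates of the n pairs of homonymous points;
    R_iw_*: (n, 3, 3) and t_iw_*: (n, 3) poses recorded at acquisition time."""
    def residuals(rotvec):
        dR = Rotation.from_rotvec(rotvec).as_matrix()
        # formula (6.1) applied to both members of each homonymous pair
        Pw_a = np.einsum('nij,nj->ni', R_iw_a @ dR, P_i_a) + t_iw_a
        Pw_b = np.einsum('nij,nj->ni', R_iw_b @ dR, P_i_b) + t_iw_b
        return (Pw_a - Pw_b).ravel()  # stacked pair-wise distances to minimize
    sol = least_squares(residuals, x0=np.zeros(3))
    return Rotation.from_rotvec(sol.x).as_matrix()  # the correction DeltaR
```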
Further, after the attitude correction amount ΔR is obtained, it may be used to correct the attitudes recorded by the plurality of environment detectors 11 while collecting the point cloud data, yielding the optimized attitude information of the plurality of environment detectors during point cloud acquisition. The optimization formula can be expressed as:

$$\hat{R}_i^w = R_i^w \, \Delta R \tag{7}$$

In formula (7), $R_i^w$ represents the attitude, acquired by the integrated navigation module, recorded by the environment detector when the i-th data point was collected, i.e., the attitude before optimization; $\hat{R}_i^w$ represents the optimized attitude of the environment detector at the moment the i-th data point was collected.
The data acquisition device provided by this embodiment can exploit the fact that different environment detectors can scan the same object at different moments: it performs feature extraction on the point cloud data collected by the plurality of environment detectors to obtain the feature points, matches the feature points corresponding to the plurality of environment detectors to obtain the homonymous points among them, and corrects the attitudes recorded by the plurality of environment detectors while collecting the point cloud data according to the point cloud coordinates of the homonymous points and the pose information recorded while collecting them. The attitudes of the environment detectors are thereby optimized, and their attitude accuracy is improved.
After correcting the attitudes recorded by the environment detectors 11 during point cloud acquisition, the processor 14 may further calculate the spatial information of the point cloud data collected by the environment detectors 11 in the set coordinate system, according to the optimized attitude information of the environment detectors 11 and the point cloud coordinates in the point cloud data. The spatial information of the point cloud data may be the distribution of its position coordinates in the set coordinate system.
Optionally, the optimized attitude information of the environment detector 11 and the point cloud coordinates corresponding to the point cloud data may be substituted into the following formula (8) to obtain the coordinates of the point cloud data in the set coordinate system:

$$P_w = \hat{R}_i^w P_i + t_i^w \tag{8}$$
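A minimal sketch of formula (8), reusing the correction ΔR from formula (7) (variable names are assumptions):

```python
# Illustrative sketch: recomputing a point's world coordinates with the
# optimized attitude R_hat = R_i^w @ DeltaR from formula (7).
import numpy as np

def point_to_world_optimized(P_i, R_iw, delta_R, t_iw):
    """P_i: IMU-frame point; R_iw, t_iw: recorded pose; delta_R: correction."""
    return (R_iw @ delta_R) @ P_i + t_iw
```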
for the point cloud data of the tunnel scene shown in fig. 4, the effect diagram of the spatial information of the point cloud data under the set coordinate system is shown in the left diagram of fig. 5 by using the posture information before optimization and the point cloud coordinates corresponding to the point cloud data, and the effect diagram of the spatial information of the point cloud data under the set coordinate system is shown in the right diagram of fig. 5 by using the posture information after optimization of the environment detector and the point cloud coordinates corresponding to the point cloud data.
According to the comparison of the scene detection effects before and after the gesture optimization of the environment detector shown in fig. 5, it can be seen that the tunnel shape obtained after the optimization has higher restoration accuracy, and the real scene can be restored more truly.
Further, the processor 14 may also construct an electronic map according to the spatial information of the point cloud data collected by the environment detector 11 under the set coordinate system. Because the posture of the environment detector 11 is used as the optimized posture, the precision of the spatial information of the point cloud data acquired by the environment detector 11 under the set coordinate system is higher, and the precision of the constructed electronic map is also higher.
For the autonomous mobile device, after the electronic map is built, autonomous route planning can be performed based on the electronic map, so that the accuracy of route planning is improved, and the accuracy of subsequent navigation is improved.
It should be noted that data acquisition devices take different implementation forms, and the basic components they include and the configurations of those components differ; the ones described here are only examples from the embodiments of the present application. This does not mean that a data acquisition device must include all the components shown in figs. 1a and 1b, nor that it includes only those components.
It should be further noted that the data processing method executed by the data acquisition device may also be implemented by other computer devices. The pose optimization method provided by the embodiments of the present application is exemplarily described below.
Fig. 6 is a flow chart of a pose optimization method provided in an embodiment of the present application. As shown in fig. 6, the method includes:
601. Acquire each frame of point cloud data collected by a plurality of environment detectors.
602. Perform feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors.
603. Match the feature points corresponding to the plurality of environment detectors to determine the homonymous points among them.
604. Acquire the pose information recorded by the plurality of environment detectors while the homonymous points were collected.
605. With the objective of minimizing the distances between the corresponding coordinates of the homonymous points in the same coordinate system, correct the attitude information recorded by the plurality of environment detectors while collecting the point cloud data, according to the point cloud coordinates of the homonymous points corresponding to the plurality of environment detectors in each frame and the pose information recorded while the homonymous points were collected.
In this embodiment, a plurality of environment detectors are disposed at an angle on the data acquisition device. The plurality of environment detectors can collect environment information to obtain point cloud data. For a description of the implementation and arrangement of the environment detectors, reference may be made to the relevant content of the above embodiments.
The plurality of environment detectors move along with the data acquisition device and collect environment information during the movement, producing point cloud data. A frame of point cloud data is the data set obtained by one environment-detection pass of an environment detector and comprises the spatial coordinates of the detection points in the environment detector coordinate system.
In this embodiment, in order to optimize the poses of the environment detectors, in step 601 each frame of point cloud data collected by the plurality of environment detectors may be acquired, each environment detector contributing its own frames. Considering that a frame of point cloud data is large and contains noise points, in order to reduce the amount of calculation and the noise, in step 602 feature extraction may be performed on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors. The feature points corresponding to an environment detector are the feature points extracted from the point cloud data it collected.
Alternatively, the curvature of the data point in each frame of point cloud data may be calculated from the point cloud coordinates of the data point in each frame of point cloud data. For the calculation of the curvature of the data points, reference may be made to the relevant content of the above embodiments, and details are not repeated here.
Further, data points with curvature meeting the set requirement can be obtained from each frame of point cloud data and used as characteristic points corresponding to an environment detector for collecting the frame of point cloud data. Optionally, an edge point with curvature greater than or equal to a set first curvature threshold value can be obtained from each frame of point cloud data and used as a feature point corresponding to an environment detector for collecting the frame of point cloud data; and/or obtaining a plane point with curvature smaller than a set second curvature threshold value from each frame of point cloud data as a characteristic point corresponding to an environment detector for collecting the frame of point cloud data; wherein the first curvature threshold is greater than the second curvature threshold.
Further, since the plurality of environment detectors are disposed on the data acquisition device at an angle to one another, they can observe the same object at different times. Based on this, in step 603, the feature points corresponding to the plurality of environment detectors may be matched, and the homonymous points among the feature points corresponding to the plurality of environment detectors may be determined. Homonymous points are data points, in the point cloud data collected by the plurality of environment detectors, that correspond to the same physical point in the real world.
Because the physical points of the same name point in the real world are the same physical point, when the characteristic points corresponding to the plurality of environment detectors are matched, the point cloud data collected by the different environment detectors can be converted into the same set coordinate system, for example, the world coordinate system, and the conversion process can refer to the related content of the formula (1) and is not repeated here. Under ideal conditions, the coordinates in the same set coordinate system are equal for a pair of homonymous points.
In some embodiments, the data acquisition device is provided with an integrated navigation module. The integrated navigation module may acquire the pose (i.e., position and attitude) of the environment detectors. The integrated navigation module may include, but is not limited to: an inertial measurement unit (IMU), a positioning unit, a wheel speed meter, and the like. The pose of an environment detector acquired by the integrated navigation module is expressed as an attitude (rotation) matrix and a translation between the IMU coordinate system and the world coordinate system. Therefore, in order to reduce the amount of calculation, the point cloud data acquired by the environment detectors can first be converted into the IMU coordinate system; the specific conversion process can refer to the related content of formula (3) above.
After the point cloud data collected by the environment detectors have been converted into the IMU coordinate system based on formula (3), the coordinates of the data points collected by different environment detectors can be further converted into the same set coordinate system, for example the world coordinate system, when the feature points corresponding to the environment detectors are matched; for any data point, formula (4) can be used to convert its coordinates into the set coordinate system. Based on the above analysis, in this embodiment, the integrated navigation module may collect the pose information of the plurality of environment detectors during the collection of the point cloud data. Accordingly, prior to step 603, the pose information recorded by the plurality of environment detectors during the collection of the point cloud data may be obtained, namely the pose information of the plurality of environment detectors collected by the integrated navigation module during that process.
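As an illustration of this two-stage conversion, the sketch below first applies fixed detector-to-IMU extrinsics (the role played by formula (3)) and then the pose recorded by the integrated navigation module (the role played by formula (4)). All variable names and numeric values here are assumptions for the example.

```python
import numpy as np

def sensor_to_imu(points, R_si, t_si):
    """Fixed extrinsics (R_si, t_si): environment detector frame -> IMU frame."""
    return points @ R_si.T + t_si

def imu_to_world(points_imu, R_wi, t_wi):
    """Recorded pose (R_wi, t_wi): IMU frame -> world (set) coordinate system."""
    return points_imu @ R_wi.T + t_wi

# Usage: express one frame collected by a detector in the world frame.
points = np.random.rand(100, 3)                      # stand-in for a point cloud frame
R_si, t_si = np.eye(3), np.array([0.1, 0.0, 0.5])    # assumed extrinsics
R_wi, t_wi = np.eye(3), np.array([10.0, 5.0, 0.0])   # assumed recorded pose
points_world = imu_to_world(sensor_to_imu(points, R_si, t_si), R_wi, t_wi)
```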
When the feature points corresponding to the plurality of environment detectors are matched, every feature point in the point cloud data acquired by one environment detector could in principle be matched against every feature point acquired by another. However, this is computationally expensive. Frames whose acquisition positions are far apart cover different geographic areas, or different sub-areas of the same geographic area, so the matching degree between their feature points is low. Based on this, in order to reduce the amount of calculation, the pose information recorded by the plurality of environment detectors during point cloud collection can be acquired; the distances between the acquisition positions of the point cloud data corresponding to different environment detectors are calculated from the position information in the pose information; and target point cloud data whose acquisition positions are within a set distance threshold of each other are selected from the point cloud data corresponding to the different environment detectors. The point cloud data corresponding to an environment detector are the point cloud data collected by that environment detector.
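A minimal sketch of this pre-filtering follows, assuming the acquisition positions come from the integrated navigation module; the threshold value and the brute-force double loop are illustrative choices (a spatial index could replace the loop for large datasets).

```python
import numpy as np

def select_candidate_pairs(positions_a, positions_b, max_dist=20.0):
    """positions_a, positions_b: (N, 3) arrays holding the acquisition
    positions of the frames of two detectors. Returns index pairs of
    frames close enough for their feature points to be worth matching."""
    pairs = []
    for i, pa in enumerate(positions_a):
        for j, pb in enumerate(positions_b):
            if np.linalg.norm(pa - pb) <= max_dist:
                pairs.append((i, j))
    return pairs
```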
Further, target feature points belonging to the target point cloud data can be obtained from the feature points corresponding to the plurality of environment detectors, and the position information of the target feature points in a set coordinate system can be calculated from the pose information recorded by the plurality of environment detectors during the collection of the target feature points and the point cloud coordinates of the target feature points. In this embodiment, formula (3) above may be used to convert the point cloud coordinates of a target feature point into its coordinates in the IMU coordinate system; the position information of the target feature point in the set coordinate system is then calculated according to formula (4) above. In this calculation, the rotation and translation terms of formula (4) respectively take the attitude and position information recorded by the environment detector during the collection of the target feature point.
Further, the homonymous points among the target feature points corresponding to the plurality of environment detectors can be determined from the position information of the target feature points in the set coordinate system.
Optionally, the distances between target feature points corresponding to different environment detectors may be calculated from the position information of the target feature points in the set coordinate system, and target feature points whose mutual distance is smaller than or equal to a set second distance threshold are determined to be homonymous points among the target feature points corresponding to different environment detectors. Alternatively, the included angles between the normal vectors of target feature points corresponding to different environment detectors may be calculated from the position information of the target feature points in the set coordinate system, and target feature points whose normal vectors form an included angle smaller than or equal to a set angle threshold are determined to be homonymous points. The two criteria may also be combined: target feature points whose mutual distance is smaller than or equal to the set second distance threshold and whose normal vectors form an included angle smaller than or equal to the set angle threshold are determined to be homonymous points among the target feature points corresponding to different environment detectors.
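The sketch below combines the two criteria just described: a nearest-neighbour search under a distance threshold plus a normal-vector angle check. The use of SciPy's KD-tree, the assumption of unit-length normals, and the threshold values are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_homonymous(pts_a, normals_a, pts_b, normals_b,
                     max_dist=0.2, max_angle_deg=10.0):
    """pts_a, pts_b: (N, 3) target feature points already expressed in the
    same set coordinate system; normals_a, normals_b: unit normals per point."""
    tree = cKDTree(pts_b)
    cos_thresh = np.cos(np.deg2rad(max_angle_deg))
    matches = []
    for i, (p, n) in enumerate(zip(pts_a, normals_a)):
        dist, j = tree.query(p)              # nearest candidate in the other cloud
        if dist > max_dist:
            continue                         # fails the distance criterion
        if abs(np.dot(n, normals_b[j])) >= cos_thresh:
            matches.append((i, j))           # also passes the normal-angle criterion
    return matches
```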
After the homonymous points among the feature points corresponding to the plurality of environment detectors have been determined, in step 604, the pose information recorded during the collection of the homonymous points by the plurality of environment detectors may be obtained. Optionally, this pose information can be taken from the pose information of the plurality of environment detectors collected by the integrated navigation module during point cloud collection.
Further, in step 605, taking the minimization of the distance between the corresponding coordinates of the homonymous points of the plurality of environment detectors in the same coordinate system as the target, the attitude information recorded by the plurality of environment detectors during the collection of the point cloud data is optimized according to the point cloud coordinates of the homonymous points and the pose information recorded by the plurality of environment detectors during the collection of the homonymous points, yielding the optimized attitude of the plurality of environment detectors for each frame of point cloud data.
Optionally, the attitude correction parameter may be treated as the quantity to be solved, and the coordinate expressions of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system are determined from the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors during the collection of the homonymous points. For the specific embodiment, refer to the related content of formula (6.1) above, which is not repeated here.
Further, a mathematical model reflecting the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system can be constructed from their coordinate expressions in that coordinate system; the model is then solved with the minimization of these distances as the target, yielding the value of the attitude correction parameter, i.e., the attitude correction amount. For the specific embodiment and principle, refer to formula (6.2) above, which is not repeated here.
Further, after the attitude correction amount ΔR is obtained, the attitude information recorded by the plurality of environment detectors during the collection of the point cloud data can be corrected with ΔR, yielding the optimized attitude information of the plurality of environment detectors for each frame of point cloud data. The optimization formula is given by formula (7).
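As an illustration of how such a model could be solved, the sketch below parameterizes the attitude correction as a rotation vector and minimizes the world-frame distances between homonymous pairs with a generic least-squares solver. A single shared correction, the residual layout, and the solver choice are simplifying assumptions for the example, not the formulation of formulas (6.1), (6.2) and (7) themselves.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(rvec, pts_imu_a, poses_a, pts_imu_b, poses_b):
    """One 3-vector residual per homonymous pair: the difference between the
    pair's world coordinates after applying the candidate correction dR."""
    dR = Rotation.from_rotvec(rvec).as_matrix()
    res = []
    for p_a, (R_a, t_a), p_b, (R_b, t_b) in zip(pts_imu_a, poses_a,
                                                pts_imu_b, poses_b):
        w_a = dR @ R_a @ p_a + t_a   # corrected attitude applied to recorded pose
        w_b = dR @ R_b @ p_b + t_b
        res.extend(w_a - w_b)
    return res

def solve_attitude_correction(pts_imu_a, poses_a, pts_imu_b, poses_b):
    """Returns the correction matrix dR minimizing the pair distances."""
    sol = least_squares(residuals, x0=np.zeros(3),
                        args=(pts_imu_a, poses_a, pts_imu_b, poses_b))
    return Rotation.from_rotvec(sol.x).as_matrix()

# Applying the correction, in the spirit of formula (7):
# R_optimized = dR @ R_recorded for each recorded attitude.
```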
In this embodiment, the fact that different environment detectors can scan the same object at different times is exploited: feature extraction is performed on the point cloud data collected by the plurality of environment detectors to obtain feature points; the feature points corresponding to the plurality of environment detectors are matched to obtain homonymous points; and, taking the minimization of the distance between the coordinates of the homonymous points in the same coordinate system as the target, the attitudes of the plurality of environment detectors during point cloud collection are optimized according to the point cloud coordinates of the homonymous points and the pose information recorded during their collection. The attitude of the environment detectors is thereby optimized, which helps improve their attitude accuracy.
After the attitude of the environment detectors has been optimized, the spatial information of the point cloud data collected by the environment detectors in the set coordinate system can be calculated from the optimized attitude information and the point cloud coordinates in the point cloud data. The spatial information of the point cloud data may be the distribution of its position coordinates in the set coordinate system. For the specific embodiment of this calculation, refer to the related content of formula (8) above.
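A minimal sketch of this re-projection, assuming each frame is already expressed in the IMU coordinate system and paired with one optimized pose:

```python
import numpy as np

def accumulate_map(frames_imu, optimized_poses):
    """frames_imu: list of (N_i, 3) arrays in the IMU frame; optimized_poses:
    list of (R, t) pairs, where R is the corrected attitude matrix. Returns
    the stacked position-coordinate distribution in the set coordinate system."""
    world = [pts @ R.T + t for pts, (R, t) in zip(frames_imu, optimized_poses)]
    return np.vstack(world)
```

The stacked array is the spatial information from which the electronic map can then be constructed.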
Furthermore, an electronic map can be constructed from the spatial information of the point cloud data in the set coordinate system. Because the optimized attitude of the environment detectors is used, the spatial information of the collected point cloud data in the set coordinate system is more accurate, and so is the constructed electronic map.
For an autonomous mobile device, once the electronic map has been built, autonomous route planning can be performed on it, which improves the accuracy of route planning and of subsequent navigation.
It should be noted that, the execution subjects of each step of the method provided in the above embodiment may be the same device, or the method may also be executed by different devices. For example, the execution subject of steps 601 and 602 may be device a; for another example, the execution body of step 601 may be device a, and the execution body of step 602 may be device B; etc.
In addition, some of the flows described in the above embodiments and the drawings include a plurality of operations appearing in a specific order, but it should be clearly understood that the operations may be performed out of the order in which they appear herein, or in parallel; the sequence numbers of the operations, such as 601 and 602, are merely used to distinguish the operations and do not by themselves represent any order of execution. The flows may also include more or fewer operations, and these operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the above-described pose optimization method.
Embodiments of the present application also provide a computer program product comprising a computer program. When the computer program is executed by a processor, the gesture optimization method described above can be implemented. The computer program product provided in this embodiment may be, for example, map data creation software.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer equipment can be server equipment such as a server, a cloud server array and the like; or can be a terminal device such as a computer. As shown in fig. 7, the computer device includes: a memory 70a and a processor 70b; wherein the memory 70a is for storing a computer program.
The processor 70b is coupled to the memory 70a and executes the computer program to: acquire each frame of point cloud data collected by the plurality of environment detectors; perform feature extraction on each frame of point cloud data to obtain, from each frame, the feature points corresponding to the plurality of environment detectors; match the feature points corresponding to the plurality of environment detectors to determine homonymous points among them, the homonymous points being data points in the acquired point cloud data that correspond to the same physical point in the real world; acquire the pose information recorded by the plurality of environment detectors during the collection of the homonymous points; and, taking the minimization of the distance between the corresponding coordinates of the homonymous points in the same coordinate system as the target, correct the attitude information recorded by the plurality of environment detectors during point cloud collection according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded during the collection of the homonymous points.
Optionally, the processor 70b is specifically configured to, when performing feature extraction on each frame of point cloud data: calculating the curvature of the data points in the point cloud data of each frame according to the point cloud coordinates of the data points in the point cloud data of each frame; and acquiring data points with curvature meeting the set requirement from each frame of point cloud data as characteristic points corresponding to the environment detector for acquiring the frame of point cloud data.
Optionally, the processor 70b is specifically configured to, when acquiring a data point with curvature meeting a set requirement from each frame of point cloud data: acquiring edge points with curvature larger than or equal to a set first curvature threshold value from each frame of point cloud data, and taking the edge points as characteristic points corresponding to an environment detector for acquiring the frame of point cloud data; and/or obtaining a plane point with curvature smaller than a set second curvature threshold value from each frame of point cloud data as a characteristic point corresponding to an environment detector for collecting the frame of point cloud data; the first curvature threshold is greater than the second curvature threshold.
In some embodiments, the processor 70b is further configured to: acquiring pose information of a plurality of environment detectors acquired by the integrated navigation module in the process of acquiring point cloud data of each frame; according to the position information in the pose information, calculating the distance between the acquisition positions of the point cloud data corresponding to different environment detectors; and selecting target point cloud data of which the distance between acquisition positions is smaller than or equal to a set first distance threshold value from the point cloud data corresponding to different environment detectors.
Accordingly, when matching the feature points corresponding to the plurality of environment detectors, the processor 70b is specifically configured to: acquire target feature points belonging to target point cloud data from the feature points corresponding to the plurality of environment detectors, the target point cloud data being frames of point cloud data for which the distance between the acquisition positions of the corresponding environment detectors is smaller than or equal to a set first distance threshold; calculate the position information of the target feature points in a set coordinate system from the pose information of the plurality of environment detectors during the collection of the target feature points and the point cloud coordinates of the target feature points; and determine homonymous points among the target feature points corresponding to the plurality of environment detectors from the position information of the target feature points in the set coordinate system.
Optionally, when determining homonymous points among the target feature points corresponding to the plurality of environment detectors, the processor 70b is specifically configured to: calculate the distances between target feature points corresponding to different environment detectors from the position information of the target feature points in the set coordinate system, and determine target feature points whose mutual distance is smaller than or equal to a set second distance threshold to be homonymous points; and/or calculate the included angles between the normal vectors of target feature points corresponding to different environment detectors from their position information in the set coordinate system, and determine target feature points whose normal vectors form an included angle smaller than or equal to a set angle threshold to be homonymous points among the target feature points corresponding to different environment detectors.
In other embodiments, when acquiring the pose information recorded by the plurality of environment detectors during the collection of the homonymous points, the processor 70b is specifically configured to: acquire, from the pose information of the plurality of environment detectors collected by the integrated navigation module during point cloud collection, the pose information recorded during the collection of the homonymous points. Accordingly, when correcting the attitudes recorded by the plurality of environment detectors during the collection of each frame of point cloud data, the processor 70b is specifically configured to: take the attitude correction amount for attitude optimization of the plurality of environment detectors as the quantity to be solved, and determine the coordinate expressions of the homonymous points corresponding to the plurality of environment detectors in the set coordinate system according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors during the collection of the homonymous points; construct, from these coordinate expressions, a mathematical model reflecting the distances between the coordinates of the homonymous points in the set coordinate system; solve the mathematical model with the minimization of these distances as the target to obtain the attitude correction amount; and correct the attitudes recorded by the plurality of environment detectors during point cloud collection with the solved attitude correction amount, to obtain the optimized attitude information of the plurality of environment detectors for each frame of point cloud data.
Optionally, the processor 70b is further configured to: and calculating the space information of each frame of point cloud data under a set coordinate system according to the optimized posture information of the plurality of environment detectors in the process of collecting each frame of point cloud data and the point cloud coordinates corresponding to each frame of point cloud data.
Optionally, the processor 70b is further configured to: and constructing an electronic map according to the space information of the point cloud data of each frame under the set coordinate system.
In some alternative embodiments, as shown in fig. 7, the computer device may further include: communication component 70c, power component 70d, and the like. In some embodiments, the computer device may be implemented as a terminal device such as a computer, and may further include: optional components such as a display component 70e and an audio component 70 f. Only a part of the components are schematically shown in fig. 7, which does not mean that the computer device must contain all the components shown in fig. 7, nor that the computer device can only contain the components shown in fig. 7.
The computer equipment provided by this embodiment can exploit the fact that different environment detectors can scan the same object at different times: feature extraction is performed on each frame of point cloud data acquired by the plurality of environment detectors to obtain the corresponding feature points; the feature points corresponding to the plurality of environment detectors are matched to obtain homonymous points; and, taking the minimization of the distance between the coordinates of the homonymous points in the same coordinate system as the target, the attitude information of the plurality of environment detectors during point cloud collection is corrected according to the point cloud coordinates of the homonymous points and the pose information recorded during their collection. The attitude of the environment detectors is thereby optimized and their attitude accuracy improved.
In embodiments of the present application, the memory is used to store a computer program and may be configured to store various other data to support operations on the device on which it resides. Wherein the processor may execute a computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In the embodiments of the present application, the processor may be any hardware processing device that may execute the above-described method logic. Alternatively, the processor may be a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU) or a micro control unit (Microcontroller Unit, MCU); programmable devices such as Field programmable gate arrays (Field-Programmable Gate Array, FPGA), programmable array logic devices (Programmable Array Logic, PAL), general array logic devices (General Array Logic, GAL), complex programmable logic devices (Complex Programmable Logic Device, CPLD), and the like; or an advanced Reduced Instruction Set (RISC) processor (Advanced RISC Machines, ARM) or System On Chip (SOC), etc., but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it resides and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may also be implemented based on Near Field Communication (NFC) technology, Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, the display assembly may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display assembly includes a touch panel, the display assembly may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
In embodiments of the present application, the power supply assembly is configured to provide power to the various components of the device in which it is located. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signal may be further stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for a device with language interaction functionality, voice interaction with a user, etc., may be accomplished through an audio component.
It should be noted that, the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, and are not limited to the "first" and the "second" being different types.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (12)

1. A method of gesture optimization, comprising:
acquiring point cloud data of each frame obtained by collecting environmental information by a plurality of environmental detectors;
extracting features of the point cloud data of each frame to obtain feature points corresponding to the plurality of environment detectors from the point cloud data of each frame;
matching the feature points corresponding to the plurality of environment detectors to determine homonymous points among the feature points corresponding to the plurality of environment detectors, wherein the homonymous points are data points, in each frame of point cloud data acquired by the plurality of environment detectors, that correspond to the same physical point in the real world;
acquiring pose information recorded in the process of acquiring the homonymous points by the plurality of environment detectors;
and taking the minimization of the distance between the corresponding coordinates of the homonymous points corresponding to the plurality of environment detectors in the same coordinate system as a target, correcting the attitude information recorded by the plurality of environment detectors during the acquisition of each frame of point cloud data according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors during the acquisition of the homonymous points.
2. The method of claim 1, wherein the extracting features of the point cloud data of each frame to obtain feature points corresponding to the plurality of environment detectors from the point cloud data of each frame includes:
calculating the curvature of the data points in each frame of point cloud data according to the point cloud coordinates of the data points in each frame of point cloud data;
and acquiring, from each frame of point cloud data, data points whose curvature meets the set requirement as the feature points corresponding to the environment detector that acquired the frame of point cloud data.
3. The method according to claim 2, wherein the acquiring, from each frame of point cloud data, a data point whose curvature meets a set requirement as a feature point corresponding to the environment detector that acquires the frame of point cloud data includes:
acquiring, from each frame of point cloud data, edge points with curvature greater than or equal to a set first curvature threshold as feature points corresponding to the environment detector that acquired the frame of point cloud data;
and/or the number of the groups of groups,
obtaining, from each frame of point cloud data, plane points with curvature smaller than a set second curvature threshold as feature points corresponding to the environment detector that acquired the frame of point cloud data;
the first curvature threshold is greater than the second curvature threshold.
4. The method of claim 1, wherein the matching the feature points corresponding to the plurality of environment detectors to determine homonymous points in the feature points corresponding to the plurality of environment detectors comprises:
acquiring target feature points belonging to target point cloud data from the feature points corresponding to the plurality of environment detectors; the target point cloud data being frames of point cloud data for which the distance between the acquisition positions of the corresponding environment detectors is smaller than or equal to a set first distance threshold;
calculating the position information of the target feature points under a set coordinate system according to the pose information of the plurality of environment detectors in the process of collecting the target feature points and the point cloud coordinates of the target feature points;
and determining homonymy points in the target feature points corresponding to the plurality of environment detectors according to the position information of the target feature points under a set coordinate system.
5. The method of claim 4, further comprising:
calculating the distance between the acquisition positions of the point cloud data corresponding to different environment detectors according to the position information in the pose information of the plurality of environment detectors acquired by the integrated navigation module in the process of acquiring the point cloud data of each frame;
and selecting target point cloud data of which the distance between acquisition positions is smaller than or equal to a set first distance threshold from the point cloud data corresponding to the different environment detectors.
6. The method of claim 4, wherein the determining, according to the position information of the target feature point in the set coordinate system, the homonymy point in the target feature points corresponding to the plurality of environment detectors includes:
according to the position information of the target feature points under a set coordinate system, calculating the distances between the target feature points corresponding to different environment detectors;
determining that the target feature points with the distance between the target feature points being smaller than or equal to a set second distance threshold value are homonymous points in the target feature points corresponding to the different environment detectors;
and/or,
calculating included angles between normal vectors of the target feature points corresponding to different environment detectors according to the position information of the target feature points corresponding to the plurality of environment detectors under a set coordinate system;
and determining target feature points with included angles smaller than or equal to a set angle threshold between normal vectors as homonymy points in the target feature points corresponding to different environment detectors.
7. The method of claim 1, wherein the acquiring pose information recorded during the acquisition of the homonym points by the plurality of environmental detectors comprises:
acquiring, from the pose information of the plurality of environment detectors collected by the integrated navigation module during the acquisition of the point cloud data, the pose information recorded by the plurality of environment detectors during the acquisition of the homonymous points;
and wherein the correcting of the attitude information recorded by the plurality of environment detectors during the collection of the point cloud data, according to the point cloud coordinates of the homonymous points in each frame of point cloud data and the pose information recorded by the plurality of environment detectors during the collection of the homonymous points, comprises:
taking the attitude correction amount for carrying out attitude optimization on the plurality of environment detectors as an amount to be solved, and determining a coordinate expression of the corresponding homonymous points of the plurality of environment detectors under a set coordinate system according to the point cloud coordinates of the homonymous points in the point cloud data of each frame and the pose information recorded by the plurality of environment detectors in the process of collecting the homonymous points;
constructing a mathematical model reflecting the distances between the coordinates of the homonymous points corresponding to the plurality of environment detectors under the set coordinate system according to the coordinate expression of the homonymous points corresponding to the plurality of environment detectors under the set coordinate system;
solving the mathematical model by taking the minimum distance between coordinates of the homonymous points corresponding to the plurality of environment detectors under a set coordinate system as a target so as to obtain the attitude correction quantity;
correcting the attitudes recorded by the plurality of environment detectors during the collection of the point cloud data by using the solved attitude correction amount, to obtain the optimized attitude information of the plurality of environment detectors for each frame of point cloud data.
8. The method of any of claims 1-7, further comprising:
and calculating the space information of the point cloud data of each frame under a set coordinate system according to the optimized posture information of the plurality of environment detectors in the process of collecting the point cloud data of each frame and the point cloud coordinates corresponding to the point cloud data of each frame.
9. The method of claim 8, further comprising:
and constructing an electronic map according to the space information of the point cloud data of each frame under the set coordinate system.
10. A computer device, comprising: a memory and a processor; wherein the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the method of any of claims 1-9.
11. A data acquisition device, comprising: a machine body; the machine body is provided with a memory, a processor and a plurality of environment detectors;
The environment detectors are used for collecting environment information to obtain point cloud data;
the memory is used for storing a computer program;
the processor is coupled to the memory for executing the computer program for performing the steps in the method of any of claims 1-9.
12. The device of claim 11, wherein the data acquisition device is an autonomous mobile device.
CN202111666849.9A 2021-12-31 2021-12-31 Gesture optimization method and device Active CN114353780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111666849.9A CN114353780B (en) 2021-12-31 2021-12-31 Gesture optimization method and device

Publications (2)

Publication Number Publication Date
CN114353780A CN114353780A (en) 2022-04-15
CN114353780B true CN114353780B (en) 2024-04-02

Family

ID=81104925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666849.9A Active CN114353780B (en) 2021-12-31 2021-12-31 Gesture optimization method and device

Country Status (1)

Country Link
CN (1) CN114353780B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133325A (en) * 2017-05-05 2017-09-05 南京大学 A kind of internet photo geographical space localization method based on streetscape map
CN107657656A (en) * 2017-08-31 2018-02-02 成都通甲优博科技有限责任公司 Homotopy mapping and three-dimensional rebuilding method, system and photometric stereo camera shooting terminal
CN107767440A (en) * 2017-09-06 2018-03-06 北京建筑大学 Historical relic sequential images subtle three-dimensional method for reconstructing based on triangulation network interpolation and constraint
CN110473239A (en) * 2019-08-08 2019-11-19 刘秀萍 A kind of high-precision point cloud registration method of 3 D laser scanning
CN112241010A (en) * 2019-09-17 2021-01-19 北京新能源汽车技术创新中心有限公司 Positioning method, positioning device, computer equipment and storage medium
CN112862894A (en) * 2021-04-12 2021-05-28 中国科学技术大学 Robot three-dimensional point cloud map construction and expansion method
CN113240740A (en) * 2021-05-06 2021-08-10 四川大学 Attitude measurement method based on phase-guided binocular vision dense marking point matching
WO2021189468A1 (en) * 2020-03-27 2021-09-30 深圳市速腾聚创科技有限公司 Attitude correction method, apparatus and system for laser radar
CN113608170A (en) * 2021-07-07 2021-11-05 云鲸智能(深圳)有限公司 Radar calibration method, radar, robot, medium, and computer program product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6030549B2 (en) * 2011-04-13 2016-11-24 株式会社トプコン 3D point cloud position data processing apparatus, 3D point cloud position data processing system, 3D point cloud position data processing method and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Target automatic tracking and pose measurement technology based on spatial topological relationship; Yan Hui; Hu Binghua; China Measurement & Test (Issue 04); full text *

Also Published As

Publication number Publication date
CN114353780A (en) 2022-04-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant