CN113362328B - Point cloud picture generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113362328B
CN113362328B
Authority
CN
China
Prior art keywords
track
point cloud
cloud picture
offset
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110914790.4A
Other languages
Chinese (zh)
Other versions
CN113362328A (en)
Inventor
胡亘谦
杨超
赵佳南
蔡恩祥
吴志浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Original Assignee
Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xinrun Fulian Digital Technology Co Ltd filed Critical Shenzhen Xinrun Fulian Digital Technology Co Ltd
Priority to CN202110914790.4A priority Critical patent/CN113362328B/en
Publication of CN113362328A publication Critical patent/CN113362328A/en
Application granted granted Critical
Publication of CN113362328B publication Critical patent/CN113362328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a point cloud picture generation method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a plurality of initial point cloud pictures of an object, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks; defining one of the plurality of parallel tracks as a base track; defining the other tracks of the plurality of parallel tracks as second tracks; acquiring a translation transformation relation by which the second track is translated relative to the base track; converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation; and superposing the second conversion point cloud picture on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object. The scheme provided by the invention can acquire a high-precision point cloud picture of a large-size object at low cost.

Description

Point cloud picture generation method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for generating a point cloud chart, an electronic device, and a storage medium.
Background
The rapid development of the automobile industry has greatly increased the demand for wheel hubs. Owing to limitations of the machining process, burrs are inevitably produced on a hub during manufacturing. Burrs are a surface defect of the hub, so hub burrs need to be ground off.
At present there are solutions in which a robot grinds the hub along a taught trajectory. However, hub burrs are tiny and randomly located, and hubs also carry tolerances introduced by the machining precision. Grinding along a fixed taught trajectory therefore means grinding every position where a burr could possibly occur, and, because of the tolerances, the grinding result can differ between hub products of the same model.
In the prior art, either a high-precision point cloud picture of the hub cannot be obtained when acquiring the hub point cloud picture, or the acquisition cost is high; that is, at the present stage there is no method that obtains a high-precision point cloud picture at low cost.
Disclosure of Invention
In order to solve the technical problem that a hub point cloud picture cannot be obtained at low cost and high precision, embodiments of the present invention provide a point cloud picture generation method, an apparatus, an electronic device, and a storage medium.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a point cloud picture generation method, which comprises the following steps:
acquiring a plurality of initial point cloud pictures of an object, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks;
defining one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
acquiring a translation transformation relation of the second track for translating relative to the base track;
converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation;
and superposing the second conversion point cloud picture on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object.
In the foregoing solution, the obtaining a translation transformation relationship of the second track with respect to the base track includes:
acquiring the unit coordinate offset, relative to the base track coordinate system, of the track coordinate system obtained after the base track is translated by one unit amount;
acquiring the track pitch of the second track relative to the base track;
determining a second track offset of the second track coordinate system relative to the base track coordinate system according to the unit coordinate offset and the track spacing;
and taking the second track offset as a translation transformation relation of the second track for translating relative to the base track.
In the foregoing solution, the determining, according to the unit coordinate offset and the track pitch, a second track offset of the second track coordinate system relative to the base track coordinate system includes:
confirming a second track offset of the second track coordinate system relative to the base track coordinate system by using the following formula (1):

V = d · δ    formula (1)

wherein V represents the second track offset; d represents the track pitch; and δ represents the unit coordinate offset.
In the foregoing solution, the obtaining a unit coordinate offset of the track position coordinate system relative to the base track coordinate system after the base track translates by one unit amount includes:
acquiring a first position coordinate of a preset position in the base track point cloud picture;
acquiring a second position coordinate of the preset position in the track position point cloud picture after the basic track translates by one unit amount;
and determining the unit coordinate offset according to the first position coordinate and the second position coordinate.
In the foregoing solution, the determining a unit coordinate offset according to the first position coordinate and the second position coordinate includes:
the unit coordinate offset is determined using the following formula (2):

δ = (P2 − P1) / d    formula (2)

wherein δ represents the unit coordinate offset; P1 represents the first position coordinate; P2 represents the second position coordinate; and d represents one unit amount of translation.
In the foregoing solution, the converting, according to the translational transformation relationship, the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track includes:
acquiring coordinates of each point cloud in the initial point cloud picture corresponding to the second track;
adding the second track offset to the coordinates of each point cloud to obtain the conversion coordinates of each point cloud;
and obtaining the second conversion point cloud picture based on the conversion coordinates of each point cloud.
In the foregoing solution, after the second conversion point cloud image is superimposed on the initial point cloud image corresponding to the base track, the method further includes:
acquiring coordinates of each point cloud in the superposed point cloud picture;
judging whether the coordinate distance of two point clouds in the superposed point cloud image is smaller than or equal to a preset distance;
and when the coordinate distance of two point clouds is smaller than or equal to a preset distance, deleting one point cloud of the two point clouds.
The embodiment of the invention also provides a point cloud picture generating device, which comprises:
the scanning module is used for acquiring a plurality of initial point cloud pictures of an object, obtained by the line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks;
a defining module for defining one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
the acquisition module is used for acquiring a translation transformation relation of the second track for translating relative to the base track;
the conversion module is used for converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation;
and the generating module is used for superposing the second conversion point cloud picture on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object.
An embodiment of the present invention further provides an electronic device, including: a processor and a memory for storing a computer program capable of running on the processor; wherein
the processor is adapted to perform the steps of any of the methods described above when running the computer program.
The embodiment of the invention also provides a storage medium, wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the steps of any one of the methods are realized.
According to the point cloud picture generation method and apparatus, the electronic device, and the storage medium, a plurality of initial point cloud pictures of an object are acquired, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks; one of the plurality of parallel tracks is defined as a base track, and the other tracks of the plurality of parallel tracks are defined as second tracks; a translation transformation relation by which the second track is translated relative to the base track is acquired; the initial point cloud picture corresponding to the second track is converted into a second conversion point cloud picture relative to the base track according to the translation transformation relation; and the second conversion point cloud picture is superposed on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object. The scheme provided by the invention can acquire a high-precision point cloud picture of a large-size object at low cost.
Drawings
FIG. 1 is a schematic flow chart of a method for generating a point cloud chart according to an embodiment of the present invention;
FIG. 2 is a schematic top view of a hub point cloud scanning in accordance with an embodiment of the present invention;
FIG. 3 is a schematic side view of a point cloud scanning of a hub according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first scanning start position according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first scanning end point position according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a second scanning start position according to the embodiment of the present invention;
FIG. 7 is a schematic diagram of a second scanning end point according to the embodiment of the present invention;
FIG. 8 is a schematic diagram of a calibration process according to an embodiment of the present invention before movement;
FIG. 9 is a diagram illustrating a calibration process after movement according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a point cloud image generation apparatus according to an embodiment of the present invention;
fig. 11 is an internal structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
At present, hub point cloud pictures are mainly acquired in one of the following two ways:
1) obtaining the hub point cloud through a single large-field-of-view structured light three-dimensional sensor.
In this way, because the hub is large, a large-field-of-view structured light three-dimensional sensor can cover the whole hub, but its precision rarely reaches 1 mm, while hub grinding requires a precision below 0.2 mm. Such a sensor can therefore only be used to obtain the hub pose (which does allow the hub to be placed in a random orientation); it cannot identify specific burr positions, so the robot still has to grind along the complete taught trajectory.
2) obtaining the hub point cloud by combining a plurality of high-precision three-dimensional sensors.
In this way, several high-precision three-dimensional sensors (line laser or structured light) can be combined to obtain a high-precision hub point cloud picture, but high-precision three-dimensional sensors are expensive, and using several of them greatly increases the cost.
Based on this, and with cost in mind, the embodiment of the invention obtains a complete, high-precision point cloud picture of the upper surface of the hub by scanning twice with a single high-precision line laser sensor.
Specifically, an embodiment of the present invention provides a method for generating a point cloud graph, as shown in fig. 1, where the method includes:
step 101: acquiring a plurality of initial point cloud pictures of an object, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks;
step 102: defining one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
step 103: acquiring a translation transformation relation of the second track for translating relative to the base track;
step 104: converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation;
step 105: superposing the second conversion point cloud picture on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object.
In particular, the present embodiment can be applied to the hub point cloud scanning device shown in figs. 2 and 3, where fig. 2 is a top view and fig. 3 a side view of the device. In the device, the hub is placed on a platform; a track support mounted on the platform is connected to a two-axis motor track, and a line laser three-dimensional sensor mounted on the two-axis motor track can slide along it. Sliding the sensor along the primary axis track completes the acquisition of the object's initial point cloud picture on one parallel track, while sliding it along the secondary axis track switches between parallel tracks.
Further, the number of parallel tracks to switch between when acquiring the initial point cloud pictures of the object may be configured for different objects. Here, for the hub, 2 parallel tracks are used.
Specifically, referring to figs. 4-7, the line laser sensor is first placed at the position shown in fig. 4 and starts scanning along the primary axis track; when it reaches the position shown in fig. 5, scanning stops and the initial point cloud picture of one parallel track is obtained. Next, the line laser sensor is moved along the secondary axis track to switch to the next parallel track; when it reaches the position shown in fig. 6, i.e. the start position of the next parallel track, movement on the secondary axis stops. The sensor is then moved along the primary axis track again for scanning; when it reaches the position shown in fig. 7, scanning stops and the initial point cloud picture of the other parallel track is obtained. In this way, 2 parallel-track acquisitions are completed, matching the structural size of the hub. The number of parallel tracks (i.e. the number of scans) and the spacing between them may be chosen according to actual conditions such as the size of the scanning device and the size of the object to be measured.
In actual application, a plurality of initial point cloud pictures are respectively obtained by scanning on different parallel tracks, and different parallel tracks have different coordinate systems; in order to realize the splicing of the multiple initial point cloud pictures, the multiple initial point cloud pictures need to be subjected to coordinate conversion, so that the multiple initial point cloud pictures are located in the same coordinate system, and the splicing of the multiple initial point cloud pictures is completed.
Further, in an embodiment, the obtaining a translation transformation relation of the second track with respect to the base track includes:
acquiring the unit coordinate offset, relative to the base track coordinate system, of the track coordinate system obtained after the base track is translated by one unit amount;
acquiring the track pitch of the second track relative to the base track;
determining a second track offset of the second track coordinate system relative to the base track coordinate system according to the unit coordinate offset and the track spacing;
and taking the second track offset as a translation transformation relation of the second track for translating relative to the base track.
Here, the unit amount may be set according to circumstances, for example, according to the machining accuracy of the product. The unit amount may be set to 1mm for the hub.
In addition, the second track offset of the second track coordinate system relative to the base track coordinate system can be confirmed using the following formula (1):

V = d · δ    formula (1)

wherein V represents the second track offset; d represents the track pitch; and δ represents the unit coordinate offset.
Further, in an embodiment, the obtaining a unit coordinate offset of the track position coordinate system relative to the base track coordinate system after the base track is translated by a unit amount includes:
acquiring a first position coordinate of a preset position in the base track point cloud picture;
acquiring a second position coordinate of the preset position in the track position point cloud picture after the basic track translates by one unit amount;
and determining the unit coordinate offset according to the first position coordinate and the second position coordinate.
Specifically, the unit coordinate offset can be determined using the following formula (2):

δ = (P2 − P1) / d    formula (2)

wherein δ represents the unit coordinate offset; P1 represents the first position coordinate; P2 represents the second position coordinate; and d represents one unit amount of translation.
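As a concrete illustration of formulas (1) and (2), the arithmetic can be sketched in a few lines of Python/NumPy. The coordinate values and the track pitch below are hypothetical stand-ins for measured calibration data, not figures from the patent.

```python
import numpy as np

# Hypothetical calibration measurements (mm): the same preset position
# observed before and after the base track is translated by d_unit
# along the secondary axis.
p1 = np.array([10.0, 20.0, 5.0])   # first position coordinate P1
p2 = np.array([10.2, 21.0, 5.1])   # second position coordinate P2
d_unit = 1.0                       # one unit amount of translation (mm)

# Formula (2): unit coordinate offset (per-mm shift of the coordinate system)
delta = (p2 - p1) / d_unit

# Formula (1): second track offset for a hypothetical track pitch of d mm
d_pitch = 150.0
v = d_pitch * delta

print(delta)   # unit coordinate offset
print(v)       # offset between the two track coordinate systems
```

Note that formula (1) simply scales the calibrated per-unit offset by however far the second track sits from the base track.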
Further, after obtaining the translation transformation relationship, the initial point cloud picture corresponding to the second track may be transformed into a second transformed point cloud picture relative to the base track according to the translation transformation relationship, and the specific process includes:
acquiring coordinates of each point cloud in the initial point cloud picture corresponding to the second track;
adding the second track offset to the coordinates of each point cloud to obtain the conversion coordinates of each point cloud;
and obtaining the second conversion point cloud picture based on the conversion coordinates of each point cloud.
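When the point cloud is stored as an N x 3 array, the three conversion steps above reduce to a single vectorized addition. A minimal sketch (the offset value and sample points are hypothetical):

```python
import numpy as np

def convert_to_base(points, track_offset):
    """Add the second track offset to every point's coordinates,
    yielding the converted point cloud in the base track frame."""
    points = np.asarray(points, dtype=float)   # N x 3 array of (x, y, z)
    return points + track_offset               # broadcast over all rows

cloud_s2 = np.array([[0.0, 0.0, 0.0],
                     [1.0, 2.0, 3.0]])
offset_v = np.array([30.0, 150.0, 15.0])       # second track offset V
cloud_s2_prime = convert_to_base(cloud_s2, offset_v)
print(cloud_s2_prime)
```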
In addition, after the second conversion point cloud image is superposed on the initial point cloud image corresponding to the base track, the overlapping area contains repeated points; the repeated points in the overlapping area can therefore be removed to achieve a better generation result.
Further, in an embodiment, after the second conversion point cloud image is superimposed on the initial point cloud image corresponding to the base track, the method further includes:
acquiring coordinates of each point cloud in the superposed point cloud picture;
judging whether the coordinate distance of two point clouds in the superposed point cloud image is smaller than or equal to a preset distance;
and when the coordinate distance of two point clouds is smaller than or equal to a preset distance, deleting one point cloud of the two point clouds.
In particular, the preset distance may be determined according to actual conditions; for example, for the hub's accuracy requirement the preset distance may be set to 0.02 mm. When the distance between two point clouds is within 0.02 mm, one of the two is deleted, removing the duplicate.
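Assuming SciPy is available, the distance check and deletion can be sketched with a KD-tree radius query; the sample points are hypothetical, while the 0.02 mm threshold follows the text:

```python
import numpy as np
from scipy.spatial import cKDTree

def dedup(points, min_dist=0.02):
    """Delete one point of every pair lying within min_dist (mm)."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    # query_pairs returns index pairs (i, j), i < j, with distance <= min_dist
    for i, j in tree.query_pairs(min_dist):
        if keep[i] and keep[j]:
            keep[j] = False        # drop one point of the duplicate pair
    return points[keep]

merged = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.01],   # within 0.02 mm of the first point
                   [1.0, 1.0, 1.0]])
print(dedup(merged))
```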
The point cloud picture generation method provided by the embodiment of the invention acquires a plurality of initial point cloud pictures of an object, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks; defines one of the plurality of parallel tracks as a base track and the other tracks as second tracks; acquires a translation transformation relation by which the second track is translated relative to the base track; converts the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation; and superposes the second conversion point cloud picture on the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object. The scheme provided by the invention can acquire a high-precision point cloud picture of a large-size object at low cost.
The present invention will be described in further detail with reference to the following application examples.
The embodiment provides a method for obtaining high-precision point cloud of a hub based on two-axis motor and point cloud splicing. The general structure of the device can be seen in fig. 2 and 3.
Based on the device, the method for obtaining the high-precision point cloud of the hub based on the two-axis motor and the point cloud splicing comprises the following steps:
(1) carrying out system calibration; this step is skipped if the system has already been calibrated;
(2) on receiving a scanning instruction, the line laser moves to the scanning start position shown in fig. 4;
(3) the line laser starts scanning, moves to the position shown in fig. 5, then stops scanning, obtaining a point cloud picture S1;
(4) the line laser moves to the position shown in fig. 6;
(5) the line laser starts scanning, moves to the position shown in fig. 7, then stops scanning, obtaining a point cloud picture S2;
(6) a complete high-precision hub point cloud picture S is obtained through algorithmic calculation;
(7) the complete hub point cloud picture S is returned to the system.
Here, the system calibration method in step (1) specifically includes:
since the movement of the line laser on the two-axis motor track produces only a translation transformation and no rotation transformation, system calibration only requires taking the line laser position of fig. 4 as the final unified coordinate system and calculating the translation transformation relation between the position shown in fig. 4 and the position shown in fig. 7.
Here, the line laser is first moved to the position shown in fig. 4 and a standard calibration sphere is placed roughly as shown in fig. 8. The line laser is controlled to scan a point cloud picture of the calibration sphere, and the sphere center coordinates (x1, y1, z1) at this moment are calculated. The line laser is then translated D millimeters along the secondary axis of the motor track (keeping the calibration sphere within the line laser's scanning range) to the position shown in fig. 9. The line laser is again controlled to scan a point cloud picture of the calibration sphere, and the sphere center coordinates (x2, y2, z2) at this moment are calculated.
Then, for every 1 mm that the line laser moves from its initial position along the secondary axis of the motor track, the whole coordinate system translates by the unit coordinate offset:

δ = ( (x2 − x1)/D, (y2 − y1)/D, (z2 − z1)/D )

Moving the line laser a distance of D mm relative to the position of fig. 4 (moving only along the secondary axis of the motor; the primary-axis position remains the same) to the position of fig. 7 therefore yields the coordinate system offset:

V = D · δ = ( x2 − x1, y2 − y1, z2 − z1 )

Converting this coordinate system offset into a translation matrix T completes the calibration:

T = | 1  0  0  x2 − x1 |
    | 0  1  0  y2 − y1 |
    | 0  0  1  z2 − z1 |
    | 0  0  0  1       |
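A minimal sketch of building this 4x4 homogeneous translation matrix T and applying it to a point, using hypothetical sphere-center coordinates in place of measured ones (the sign convention follows the patent's step (6), where T is added to S2):

```python
import numpy as np

def translation_matrix(c1, c2):
    """Homogeneous translation whose last column is
    (x2 - x1, y2 - y1, z2 - z1, 1), built from the two sphere centers."""
    t = np.eye(4)
    t[:3, 3] = np.asarray(c2, dtype=float) - np.asarray(c1, dtype=float)
    return t

c1 = (10.0, 20.0, 5.0)     # sphere center before the D mm move (hypothetical)
c2 = (10.2, 21.0, 5.1)     # sphere center after the move (hypothetical)

T = translation_matrix(c1, c2)
point = np.array([1.0, 2.0, 3.0, 1.0])   # homogeneous point from S2
print(T @ point)                         # the point shifted into S1's frame
```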
in addition, the step (6) specifically includes:
applying the translation matrix T obtained through system calibration to the point cloud picture S2 to obtain a point cloud picture S2' whose coordinate system, after the translation, coincides with that of the point cloud picture S1;
superposing the point cloud picture S2' on the point cloud picture S1 to obtain a point cloud picture S containing the complete hub;
because the overlapping area of S1 and S2' makes the density of the point cloud picture S uneven, repeated points need to be removed, so a KD-tree is built for the point cloud picture S;
for each point Pi in the point cloud picture S, the KD-tree is searched for points P in S whose Euclidean distance to Pi is smaller than 0.02 mm, and each such point found is deleted; the deduplicated point cloud picture S is then returned to the system.
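Step (6) as a whole (translate S2, merge with S1, deduplicate via a KD-tree) might look like the following sketch; the function name and sample data are illustrative only, not from the patent:

```python
import numpy as np
from scipy.spatial import cKDTree

def stitch(s1, s2, offset, min_dist=0.02):
    """Translate S2 into S1's frame, concatenate both clouds, and delete
    points lying within min_dist (mm) of an earlier-kept point."""
    s2_prime = np.asarray(s2, dtype=float) + offset   # S2' in S1's frame
    s = np.concatenate([np.asarray(s1, dtype=float), s2_prime])
    tree = cKDTree(s)
    keep = np.ones(len(s), dtype=bool)
    for i, j in tree.query_pairs(min_dist):           # pairs closer than min_dist
        if keep[i] and keep[j]:
            keep[j] = False
    return s[keep]

s1 = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
s2 = [[-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]   # overlaps S1 once translated
offset = np.array([1.0, 0.0, 0.0])          # hypothetical calibration offset
print(stitch(s1, s2, offset))
```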
The method provided by this embodiment for obtaining a high-precision hub point cloud based on a two-axis motor and point cloud splicing moves a single high-precision line laser sensor between fixed positions on the two-axis motor and scans twice, obtaining a relatively complete, high-precision point cloud of the upper surface of the hub while keeping the cost low. It can be used in scenarios that require a high-precision hub point cloud, such as hub burr recognition and grinding, and hub tolerance measurement.
In order to implement the method according to the embodiment of the present invention, an embodiment of the present invention further provides a point cloud picture generating apparatus, as shown in fig. 10, where the point cloud picture generating apparatus 1000 includes: a scanning module 1001, a defining module 1002, an obtaining module 1003, a converting module 1004, and a generating module 1005; wherein
the scanning module 1001 is configured to acquire a plurality of initial point cloud pictures of an object, obtained by a line laser three-dimensional sensor scanning the object at the same position on a plurality of parallel tracks;
a defining module 1002, configured to define one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
an obtaining module 1003, configured to obtain a translation transformation relationship that the second track translates relative to the base track;
a converting module 1004, configured to convert the initial point cloud image corresponding to the second track into a second converted point cloud image relative to the base track according to the translation transformation relationship;
a generating module 1005, configured to superimpose the second converted point cloud image and the initial point cloud image corresponding to the base trajectory to generate a complete point cloud image of the object.
In practical applications, the scanning module 1001, the defining module 1002, the obtaining module 1003, the converting module 1004, and the generating module 1005 may be implemented by a processor in the point cloud picture generating apparatus.
It should be noted that: the above-mentioned apparatus provided in the above-mentioned embodiment is only exemplified by the division of the above-mentioned program modules when executing, and in practical application, the above-mentioned processing may be distributed to be completed by different program modules according to needs, that is, the internal structure of the terminal is divided into different program modules to complete all or part of the above-mentioned processing. In addition, the apparatus provided by the above embodiment and the method embodiment belong to the same concept, and the specific implementation process thereof is described in the method embodiment and is not described herein again.
To implement the method of the embodiment of the present invention, the embodiment of the present invention further provides a computer program product, where the computer program product includes computer instructions, and the computer instructions are stored in a computer-readable storage medium. A processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the steps of the above-described method.
Based on the hardware implementation of the program modules, an electronic device (computer device) is also provided in the embodiment of the present invention to implement the method of the embodiment of the present invention. Specifically, in one embodiment, the computer device may be a terminal, and its internal structure may be as shown in fig. 11. The computer device includes a processor a01, a network interface a02, a display screen a04, an input device a05, and a memory (not shown in the figure) connected through a system bus. The processor a01 of the computer device provides computing and control capabilities. The memory of the computer device comprises an internal memory a03 and a non-volatile storage medium a06. The non-volatile storage medium a06 stores an operating system B01 and a computer program B02. The internal memory a03 provides an environment for the operation of the operating system B01 and the computer program B02 in the non-volatile storage medium a06. The network interface a02 of the computer device is used for communication with an external terminal through a network connection. The computer program is executed by the processor a01 to implement the method of any of the above embodiments. The display screen a04 of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device a05 may be a touch layer covering the display screen, a button, a trackball, or a touch pad arranged on the casing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 11 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The device provided by the embodiment of the present invention includes a processor, a memory, and a program stored in the memory and capable of running on the processor, and when the processor executes the program, the method according to any one of the embodiments described above is implemented.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It will be appreciated that the memory of embodiments of the invention may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A point cloud graph generation method, the method comprising:
acquiring a plurality of initial point cloud pictures of an object, obtained by scanning the object at the same position on a plurality of parallel tracks with a line laser three-dimensional sensor;
defining one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
acquiring a translation transformation relation of the second track for translating relative to the base track;
converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation;
superposing the second conversion point cloud picture and the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object; wherein
the acquiring a translation transformation relation of the second track for translating relative to the base track comprises:
acquiring a unit coordinate offset, relative to the base track coordinate system, of the track coordinate system obtained after the base track translates by one unit amount;
acquiring the track pitch of the second track relative to the base track;
determining a second track offset of the second track coordinate system relative to the base track coordinate system according to the unit coordinate offset and the track spacing;
and taking the second track offset as a translation transformation relation of the second track for translating relative to the base track.
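A minimal sketch of the offset computation recited above, assuming the unit coordinate offset is a per-unit-translation coordinate vector and the track pitch is expressed in the same unit; the function and argument names are illustrative assumptions.

```python
def second_track_offset(unit_offset, track_pitch):
    """Scale the unit coordinate offset (coordinate change per unit of
    track translation) by the track pitch to obtain the second track
    offset of the second track coordinate system relative to the base
    track coordinate system."""
    return tuple(track_pitch * c for c in unit_offset)
```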
2. The method of claim 1, wherein determining a second track offset of the second track coordinate system relative to the base track coordinate system based on the unit coordinate offset and the track pitch comprises:
confirming the second track offset of the second track coordinate system relative to the base track coordinate system by using the following formula (1):

V = D × Δ        formula (1)

wherein V represents the second track offset, D represents the track pitch, and Δ represents the unit coordinate offset.
3. The method of claim 1, wherein obtaining the unit coordinate offset, relative to the base track coordinate system, of the track coordinate system after the base track translates by one unit amount comprises:
acquiring a first position coordinate of a preset position in the base track point cloud picture;
acquiring a second position coordinate of the preset position in the track position point cloud picture obtained after the base track translates by one unit amount;
and determining the unit coordinate offset according to the first position coordinate and the second position coordinate.
4. The method of claim 3, wherein determining a unit coordinate offset from the first and second position coordinates comprises:
the unit coordinate offset is determined using the following formula (2):

Δ = (P2 − P1) / d        formula (2)

wherein Δ represents the unit coordinate offset, P1 represents the first position coordinate, P2 represents the second position coordinate, and d represents one unit amount of translation.
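Formula (2) can be sketched as follows; `p1` and `p2` stand for the first and second position coordinates of the preset position and `d` for the unit amount of translation, with all names chosen for illustration only.

```python
def unit_coordinate_offset(p1, p2, d):
    """Per-component coordinate change of a fixed reference position per
    unit of base-track translation: (P2 - P1) / d."""
    return tuple((b - a) / d for a, b in zip(p1, p2))
```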
5. The method of claim 1, wherein converting the initial point cloud picture corresponding to the second track into the second conversion point cloud picture relative to the base track according to the translation transformation relation comprises:
acquiring coordinates of each point cloud in the initial point cloud picture corresponding to the second track;
adding the second track offset to the coordinates of each point cloud to obtain the conversion coordinates of each point cloud;
and obtaining the second conversion point cloud picture based on the conversion coordinates of each point cloud.
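The conversion recited above reduces to adding the second track offset to every point coordinate; a sketch under the same illustrative naming assumptions:

```python
def convert_cloud(cloud, offset):
    """Add the second track offset to each point coordinate, yielding the
    second conversion point cloud expressed in the base-track frame."""
    return [tuple(c + v for c, v in zip(point, offset)) for point in cloud]
```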
6. The method of claim 1, wherein after superposing the second conversion point cloud picture and the initial point cloud picture corresponding to the base track, the method further comprises:
acquiring coordinates of each point cloud in the superposed point cloud picture;
judging whether the coordinate distance between two point clouds in the superposed point cloud picture is smaller than or equal to a preset distance;
and when the coordinate distance between the two point clouds is smaller than or equal to the preset distance, deleting one of the two point clouds.
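The duplicate removal above can be sketched as a pairwise distance check; this O(n²) version is for illustration only (a spatial index such as a k-d tree would be used for large clouds), and the names are assumptions.

```python
def deduplicate(points, preset_distance):
    """Keep a point only if it is farther than `preset_distance` from every
    point already kept; of any pair at distance <= preset_distance, one of
    the two is deleted."""
    kept = []
    limit_sq = preset_distance ** 2       # compare squared distances, no sqrt
    for p in points:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > limit_sq for q in kept):
            kept.append(p)
    return kept
```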
7. A point cloud map generation apparatus, characterized by comprising:
the scanning module is used for acquiring a plurality of initial point cloud pictures of an object, obtained by scanning the object at the same position on a plurality of parallel tracks with the line laser three-dimensional sensor;
a defining module for defining one of the plurality of parallel tracks as a base track; defining the other tracks except the base track in the plurality of parallel tracks as second tracks;
the acquisition module is used for acquiring a translation transformation relation of the second track for translating relative to the base track;
the conversion module is used for converting the initial point cloud picture corresponding to the second track into a second conversion point cloud picture relative to the base track according to the translation transformation relation;
the generating module is used for superposing the second conversion point cloud picture and the initial point cloud picture corresponding to the base track to generate a complete point cloud picture of the object; wherein
the acquisition module is further configured to:
acquiring a unit coordinate offset, relative to the base track coordinate system, of the track coordinate system obtained after the base track translates by one unit amount;
acquiring the track pitch of the second track relative to the base track;
determining a second track offset of the second track coordinate system relative to the base track coordinate system according to the unit coordinate offset and the track spacing;
and taking the second track offset as a translation transformation relation of the second track for translating relative to the base track.
8. An electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor; wherein
the processor is adapted to perform the steps of the method of any one of claims 1 to 6 when running the computer program.
9. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 6.
CN202110914790.4A 2021-08-10 2021-08-10 Point cloud picture generation method and device, electronic equipment and storage medium Active CN113362328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110914790.4A CN113362328B (en) 2021-08-10 2021-08-10 Point cloud picture generation method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113362328A CN113362328A (en) 2021-09-07
CN113362328B (en) 2021-11-09

Family

ID=77540873


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064400A (en) * 2018-07-25 2018-12-21 博众精工科技股份有限公司 Three-dimensional point cloud joining method, apparatus and system
CN110189257A (en) * 2019-06-03 2019-08-30 北京石油化工学院 Method, apparatus, system and the storage medium that point cloud obtains
CN111609811A (en) * 2020-04-29 2020-09-01 北京机科国创轻量化科学研究院有限公司 Machine vision-based large-size plate forming online measurement system and method
CN112700536A (en) * 2020-12-31 2021-04-23 广东美的白色家电技术创新中心有限公司 Tire point cloud completion method, assembly method, control device and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN106354935B (en) * 2016-08-30 2017-07-18 华中科技大学 Complex curved surface parts matching detection method based on electron outside nucleus probability density distribution
CN109425365B (en) * 2017-08-23 2022-03-11 腾讯科技(深圳)有限公司 Method, device and equipment for calibrating laser scanning equipment and storage medium
CN110412616A (en) * 2019-08-07 2019-11-05 山东金软科技股份有限公司 A kind of mining area underground mining stope acceptance method and device
CN112700537A (en) * 2020-12-31 2021-04-23 广东美的白色家电技术创新中心有限公司 Tire point cloud construction method, tire point cloud assembly method, tire point cloud control device, and storage medium


Non-Patent Citations (1)

Title
"Fast point cloud registration algorithm combining image information" (结合图像信息的快速点云拼接算法); Wang Ruiyan et al.; Acta Geodaetica et Cartographica Sinica (《测绘学报》); 31 Jan. 2016; Vol. 45, No. 1; pp. 36-102 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant