CN113744378A - Exhibition article scanning method and apparatus, electronic device, and storage medium - Google Patents

Exhibition article scanning method and apparatus, electronic device, and storage medium

Info

Publication number
CN113744378A
Authority
CN
China
Prior art keywords
point cloud
cloud data
position coordinates
exhibition
scanning
Prior art date
Legal status
Granted
Application number
CN202010481765.7A
Other languages
Chinese (zh)
Other versions
CN113744378B (en)
Inventor
刘宁
唐建波
覃小春
Current Assignee
Chengdu Digital Sky Technology Co., Ltd.
Original Assignee
Chengdu Digital Sky Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chengdu Digital Sky Technology Co., Ltd.
Priority to CN202010481765.7A
Publication of CN113744378A
Application granted
Publication of CN113744378B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 15/00 3D [Three Dimensional] image rendering
                    • G06T 15/005 General purpose rendering architectures
                    • G06T 15/04 Texture mapping
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • B PERFORMING OPERATIONS; TRANSPORTING
        • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J 18/00 Arms
                • B25J 5/00 Manipulators mounted on wheels or on carriages
                    • B25J 5/005 Mounted on endless tracks or belts
                    • B25J 5/007 Mounted on wheels
                • B25J 9/00 Programme-controlled manipulators
                    • B25J 9/16 Programme controls
                        • B25J 9/1602 Characterised by the control system, structure, architecture
                        • B25J 9/1679 Characterised by the tasks executed
                        • B25J 9/1694 Characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
                        • B25J 9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Manipulator (AREA)

Abstract

The present application provides an exhibition article scanning method and apparatus, an electronic device, and a storage medium, which address the time-consuming, labor-intensive, and inefficient process of photographing exhibition articles. The exhibition article scanning method is applied to an electronic device and comprises the following steps: acquiring point cloud data of the exhibition article, the point cloud data being collected from the exhibition article by a robot; performing principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing a three-dimensional coordinate set of a common plane in the point cloud data; determining, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and sending a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, the control command causing the robot to scan the exhibition article according to the plurality of position coordinates and the corresponding orientation angles and to return a plurality of scanned images of the exhibition article.

Description

Exhibition article scanning method and apparatus, electronic device, and storage medium
Technical Field
The present application relates to the technical field of three-dimensional modeling and three-dimensional reconstruction, and in particular to an exhibition article scanning method and apparatus, an electronic device, and a storage medium.
Background
Three-dimensional scanning is a non-contact measurement technique used to acquire and analyze the shape and contour of a physical object. It can be used to three-dimensionally reconstruct an object to be scanned, thereby creating a three-dimensional model of the actual object.
At present, when an exhibition article (such as an ancient cultural relic) in an enclosed space is scanned, or a large component is scanned three-dimensionally in a factory, the article is photographed manually so that the photographs can then be processed for modeling and analysis by software. In practice, photographing an exhibition article manually has proved time-consuming, labor-intensive, and inefficient.
Disclosure of Invention
An object of the embodiments of the present application is to provide an exhibition article scanning method and apparatus, an electronic device, and a storage medium, which address the time-consuming, labor-intensive, and inefficient process of photographing exhibition articles.
An embodiment of the present application provides an exhibition article scanning method, applied to an electronic device, comprising the following steps: acquiring point cloud data of the exhibition article, the point cloud data being collected from the exhibition article by a robot; performing principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing a three-dimensional coordinate set of a common plane in the point cloud data; determining, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and sending a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, the control command causing the robot to scan the exhibition article accordingly and to return a plurality of scanned images of the exhibition article. In this implementation, the point cloud data of the exhibition article is obtained first, and the point cloud data is then subjected to principal component analysis, deletion, fitting, and other computations to obtain a plurality of position coordinates for capturing and scanning the exhibition article together with the corresponding orientation angles, so that the robot can scan the exhibition article according to these position coordinates and orientation angles and return a plurality of scanned images of the exhibition article. This improves the efficiency of photographing the exhibition article to obtain images, and at the same time effectively overcomes the time-consuming and labor-intensive nature of photographing exhibition articles manually.
Optionally, in an embodiment of the present application, performing principal component analysis on the point cloud data to obtain a coplanar point cloud comprises: performing singular value decomposition on a matrix formed by the point cloud data to obtain point cloud vectors; and determining the common plane represented by the central point of the point cloud data and the point cloud vectors as the coplanar point cloud. In this implementation, singular value decomposition of the matrix formed by the point cloud data yields the point cloud vectors, and the common plane represented by the central point of the point cloud data and the point cloud vectors is determined as the coplanar point cloud; this effectively speeds up obtaining the coplanar point cloud.
Optionally, in an embodiment of the present application, determining, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate comprises: deleting from the point cloud data all three-dimensional coordinates whose positions are lower than the coplanar point cloud, to obtain target data; fitting the target data with a spherical model to obtain a fitted sphere-center coordinate and sphere radius; and calculating, according to the sphere-center coordinate and the sphere radius, a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate. In this implementation, deleting the coordinates below the coplanar point cloud yields the target data, fitting the target data with a spherical model yields the fitted sphere-center coordinate and sphere radius, and the position coordinates and corresponding orientation angles for scanning the exhibition article are then calculated from them; this effectively improves the accuracy of the determined position coordinates and orientation angles for capturing the exhibition article.
Optionally, in an embodiment of the present application, after sending the control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, the method further comprises: receiving a plurality of scanned images sent by the robot; and modeling the exhibition article according to the plurality of scanned images to obtain a three-dimensional model. In this implementation, receiving the scanned images sent by the robot and modeling the exhibition article from them effectively increases the speed of three-dimensional modeling from the scanned images of the exhibition article.
Optionally, in an embodiment of the present application, after the three-dimensional model is obtained, the method further comprises: mapping the three-dimensional model (i.e., applying textures) according to the plurality of scanned images to obtain a mapped three-dimensional model. In this implementation, mapping the three-dimensional model according to the plurality of scanned images effectively increases the speed of obtaining the mapped three-dimensional model.
An embodiment of the present application further provides an exhibition article scanning method, applied to a robot, comprising the following steps: capturing the exhibition article with a depth camera to obtain point cloud data, the point cloud data representing a three-dimensional coordinate set of the exhibition article; sending the point cloud data to an electronic device, so that the electronic device computes and sends a control command according to the point cloud data; receiving the control command sent by the electronic device, the control command comprising a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate, these being determined by the electronic device, after receiving and analyzing the point cloud data, from the point cloud data and the coplanar point cloud obtained by the analysis; moving to each of the plurality of position coordinates in turn and performing capture scanning at the orientation angle corresponding to each position coordinate, to obtain a plurality of scanned images; and sending the plurality of scanned images to the electronic device. In this implementation, the exhibition article is captured with the depth camera to obtain point cloud data, which is sent to the electronic device so that it can compute and send back a control command containing the scanning position coordinates and corresponding orientation angles; the robot then moves to each position coordinate in turn, performs capture scanning at the corresponding orientation angle, and finally sends the resulting scanned images to the electronic device. This improves the efficiency of photographing the exhibition article to obtain images, and at the same time effectively overcomes the time-consuming and labor-intensive nature of photographing exhibition articles manually.
Optionally, in an embodiment of the present application, the robot comprises a servo motor, a speed reducer, and an image acquisition device; moving to each of the plurality of position coordinates in turn and performing capture scanning at the corresponding orientation angle comprises: moving to each of the plurality of position coordinates in turn by means of the servo motor and the speed reducer; and adjusting the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate, then performing capture scanning with the image acquisition device. In this implementation, the robot moves to each position coordinate via the servo motor and speed reducer, adjusts the image acquisition device to the corresponding orientation angle, and captures with it; this improves the efficiency of photographing the exhibition article to obtain images, and effectively overcomes the time-consuming and labor-intensive nature of manual photography.
An embodiment of the present application further provides an exhibition article scanning apparatus, applied to an electronic device, comprising: a point cloud data obtaining module, configured to obtain point cloud data of the exhibition article, the point cloud data being collected by a robot; a coplanar point cloud obtaining module, configured to perform principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing a three-dimensional coordinate set of a common plane in the point cloud data; a coordinate-angle determination module, configured to determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and a control command sending module, configured to send a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, the control command causing the robot to scan the exhibition article accordingly and to return a plurality of scanned images of the exhibition article.
Optionally, in an embodiment of the present application, the coplanar point cloud obtaining module comprises: a point cloud vector obtaining module, configured to perform singular value decomposition on a matrix formed by the point cloud data to obtain point cloud vectors; and a coplanar point cloud determination module, configured to determine the common plane represented by the central point of the point cloud data and the point cloud vectors as the coplanar point cloud.
Optionally, in an embodiment of the present application, the coordinate-angle determination module comprises: a target data obtaining module, configured to delete from the point cloud data all three-dimensional coordinates whose positions are lower than the coplanar point cloud, to obtain target data; a target data fitting module, configured to fit the target data with a spherical model to obtain a fitted sphere-center coordinate and sphere radius; and a coordinate-angle calculation module, configured to calculate, according to the sphere-center coordinate and the sphere radius, a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate.
Optionally, in an embodiment of the present application, the exhibition article scanning apparatus further comprises: a scanned image receiving module, configured to receive a plurality of scanned images sent by the robot; and a three-dimensional model obtaining module, configured to model the exhibition article according to the plurality of scanned images to obtain a three-dimensional model.
Optionally, in an embodiment of the present application, the exhibition article scanning apparatus further comprises: a three-dimensional model mapping module, configured to map the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model.
An embodiment of the present application further provides an exhibition article scanning apparatus, applied to a robot, comprising: a point cloud data acquisition module, configured to capture the exhibition article with a depth camera to obtain point cloud data, the point cloud data representing a three-dimensional coordinate set of the exhibition article; a point cloud data sending module, configured to send the point cloud data to an electronic device so that the electronic device computes and sends a control command according to the point cloud data; a control command receiving module, configured to receive the control command sent by the electronic device, the control command comprising a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate, these being determined from the point cloud data and the coplanar point cloud obtained by analysis after the electronic device receives and analyzes the point cloud data; a scanned image obtaining module, configured to move to each of the plurality of position coordinates in turn and perform capture scanning at the corresponding orientation angle, to obtain a plurality of scanned images; and a scanned image sending module, configured to send the plurality of scanned images to the electronic device.
Optionally, in an embodiment of the present application, the robot comprises a servo motor, a speed reducer, and an image acquisition device; the scanned image obtaining module comprises: a robot movement module, configured to move to each of the plurality of position coordinates in turn by means of the servo motor and the speed reducer; and a robot scanning module, configured to adjust the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate and perform capture scanning with the image acquisition device.
An embodiment of the present application further provides an electronic device, comprising a processor and a memory, the memory storing machine-readable instructions executable by the processor; when executed by the processor, the machine-readable instructions perform the method described above.
An embodiment of the present application further provides a storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the method described above.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of an exhibition article scanning method applied to an electronic device, provided by an embodiment of the present application;
Fig. 2 is a schematic view of an exhibition article placed on a display stand, provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a robot collecting point cloud data of an exhibition article, provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of fitting target data with a spherical model, provided by an embodiment of the present application;
Fig. 5 is a schematic flow chart of an exhibition article scanning method applied to a robot, provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an exhibition article scanning apparatus provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings of those embodiments.
Before the exhibition article scanning method provided by the embodiments of the present application is described, some concepts involved in the embodiments are explained:
the Point Cloud (Point Cloud) refers to a Point data set of the product appearance surface obtained by a measuring instrument, and the Point Cloud can represent a target space expressed under the same space reference system; the attributes of the point cloud include: spatial resolution, point location accuracy, etc.; the number of points obtained by using a three-dimensional coordinate measuring machine is small, and the distance between the points is large, so that the method is called sparse point cloud; the point clouds obtained by using the three-dimensional laser scanner or the photographic scanner have larger and denser point quantities, and are called dense point clouds.
A depth camera, also known as a depth sensor or TOF (Time of Flight) camera, performs time-of-flight 3D imaging: it continuously sends light pulses to a target, receives the light returning from the object with a sensor, and obtains the distance to the target object by measuring the round-trip flight time of the light pulses. The principle is broadly similar to that of a 3D laser sensor, except that a 3D laser sensor scans point by point, whereas a TOF camera obtains depth information for the whole image at once. Like an ordinary machine-vision imaging system, a TOF camera consists of a light source, optical components, a sensor, a control circuit, a processing circuit, and other units.
Principal Component Analysis (PCA) is a statistical analysis method in multivariate statistics for simplifying data sets. PCA applies an orthogonal transformation to linearly transform the observations of a set of possibly correlated variables, projecting them onto the values of a set of linearly uncorrelated variables called principal components. Each principal component can be viewed as a linear equation whose linear coefficients indicate a projection direction. PCA is sensitive to the regularization or preprocessing of the raw data.
Singular values are a concept from matrix theory and are generally obtained through the singular value decomposition theorem: let A be an m×n matrix and q = min(m, n); the arithmetic square roots of the q non-negative eigenvalues of AᴴA (where Aᴴ is the conjugate transpose of A) are called the singular values of A.
A Software Development Kit (SDK) is a collection of development tools used by software engineers to build application software for a specific software package, software framework, hardware platform, operating system, and so on. It includes the related documents, examples, and tools that assist in developing a certain class of software; for instance, a data interface in an SDK may be used to connect to a server and obtain corresponding results. SDKs exist for many languages, such as Java and Python.
A server is a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers. In a specific implementation, the server may be a minicomputer or a mainframe: a minicomputer is a closed, dedicated device that mainly provides UNIX computing services and uses processors based on Reduced Instruction Set Computing (RISC) or MIPS; a mainframe, also known as a large host, is a device that provides computing services using a dedicated processor instruction set, operating system, and application software.
It should be noted that the exhibition article scanning method provided by the embodiments of the present application may be executed by an electronic device, where the electronic device is a device terminal capable of executing a computer program, or the server described above; the device terminal may be, for example, a smartphone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), a network switch, or a network router.
Before the exhibition article scanning method is described, the application scenarios to which it applies are introduced. These scenarios include, but are not limited to: using the method to scan and photograph an exhibition article to obtain scanned images, performing three-dimensional reconstruction from the scanned images, and applying textures to obtain a three-dimensional model with mapped detail. The exhibition article here includes, but is not limited to, cultural relics of historical value, aerospace models, automobile models, artworks, architectural models, and the like. The scanned images may also be applied in other fields, for example as training data for deep learning in image recognition and image processing, and the resulting three-dimensional model may be used in the animation industry, teaching demonstrations, and so on.
Please refer to Fig. 1, a schematic flow chart of the exhibition article scanning method applied to an electronic device provided by an embodiment of the present application. The method first obtains point cloud data of the exhibition article and then applies principal component analysis and other processing to the point cloud data to obtain the position coordinates and orientation angles for capturing and scanning the exhibition article, so that the robot can scan the exhibition article according to those position coordinates and orientation angles and return a plurality of scanned images; this improves the efficiency of scanning and photographing the exhibition article to obtain images. The exhibition article scanning method may include the following steps:
step S110: and acquiring point cloud data of the exhibition article, wherein the point cloud data is acquired by the robot for acquiring the exhibition article.
Please refer to Fig. 2, a schematic view of an exhibition article placed on a display stand provided by an embodiment of the present application. The exhibition article is placed on a display stand, which may comprise a stand platform and a stand support below the platform. An exhibition article, sometimes called an exhibit, is one or more items displayed for people to view or visit, for example cultural relics of historical value, aerospace models, automobile models, artworks, or architectural models.
The point cloud data is the three-dimensional coordinate set obtained when the point cloud acquisition device carried by the robot captures the exhibition article. The point cloud acquisition device may be, for example, a depth camera, a three-dimensional coordinate measuring machine, a laser sensor, a three-dimensional laser scanner, or a photographic scanner. A specific way of computing the point cloud data is as follows: the robot controls the depth camera to capture the exhibition article and obtains a sparse point cloud depth map with a resolution of 1280 × 800, the depth map representing a set of distances between the depth camera and the exhibition article; the depth map thus has 1,024,000 pixels in total. The depth camera invokes an SDK toolkit to process the depth map; specifically, the value of each of the 1,024,000 pixels is converted according to the preset parameters of the depth camera, yielding the point cloud data, where the value of each pixel is the distance between the corresponding point on the current exhibition article and the point cloud acquisition device.
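To make the pixel-to-coordinate conversion concrete, the following is a minimal sketch of turning a depth map into point cloud data with numpy. The pinhole intrinsics fx, fy, cx, and cy stand in for the unspecified "preset parameters of the depth camera" and are assumptions rather than values from this application; in practice, the camera's SDK performs this step.

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth map (H x W, metres) into an N x 3 point cloud.

    fx, fy, cx, cy are assumed pinhole intrinsics of the depth camera."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth
    x = (u - cx) * z / fx                           # back-project each pixel
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                 # drop invalid (zero-depth) pixels

# e.g. for the 1280 x 800 depth map described above (intrinsics hypothetical):
# cloud = depth_map_to_point_cloud(depth, fx=600.0, fy=600.0, cx=640.0, cy=400.0)
```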
Please refer to Fig. 3, a schematic diagram of a robot collecting point cloud data of an exhibition article provided by an embodiment of the present application. There are many ways to obtain the point cloud data of the exhibition article in step S110, for example: in a first way, another terminal device collects the point cloud data of the exhibition article and sends it to the electronic device, which receives it; the other terminal device may be a depth camera, a laser sensor, a robot equipped with a depth camera or laser sensor, a robot that uses a mechanical arm to control a depth camera to collect point cloud data, and so on. In a second way, after receiving point cloud data sent by another terminal device, the electronic device stores it in a file system or database, and when the data is needed, the electronic device retrieves the pre-stored point cloud data from the file system or database.
After step S110, step S120 is performed: perform principal component analysis on the point cloud data to obtain a coplanar point cloud.
The coplanar point cloud is the three-dimensional coordinate set representing the largest common plane in the point cloud data. For example, if the point cloud data was acquired by a depth camera capturing an exhibition article placed on the platform of a display stand, the point cloud data can be understood as covering the exhibition article, the stand platform, and the stand support below the platform, and the coplanar point cloud can be understood as the stand platform, i.e., the plane on which the exhibition article is placed.
An embodiment of performing principal component analysis on the point cloud data in step S120 may include:
step S121: and carrying out singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector.
Singular Value Decomposition (SVD) is an important matrix decomposition in linear algebra. In some respects it resembles the eigenvector-based diagonalization of a symmetric or Hermitian matrix, but despite the connection the two decompositions are clearly different: eigenvector decomposition of a symmetric matrix rests on spectral analysis, while singular value decomposition generalizes spectral theory to arbitrary matrices.
An embodiment of step S121 is, for example: convert the point cloud data into matrix form to obtain a matrix formed by the point cloud data, where each row of the matrix corresponds to one coordinate component of the point cloud data and each column corresponds to one point in the point cloud; perform singular value decomposition on this matrix according to X = UΣWᵀ to obtain the decomposed point cloud vectors. The point cloud vectors comprise a first vector and a second vector representing two different directions within the common plane; here X denotes the point cloud data, Σ is a preset coefficient (specifically, Σ may be set to (0, 0, 1)), U denotes the first vector, W denotes the second vector, and ᵀ denotes matrix transposition.
Step S122: determine the common plane represented by the central point of the point cloud data and the point cloud vectors as the coplanar point cloud.
An embodiment of step S122 is, for example: given the point cloud vectors, namely the first vector U and the second vector W, first compute the central point (x, y, z) of the point cloud data, then determine the common plane represented by the central point (x, y, z) together with the first vector U and the second vector W as the coplanar point cloud, which can be understood as the plane on which the exhibition article is placed. In this implementation, singular value decomposition of the matrix formed by the point cloud data yields the point cloud vectors, and the common plane represented by the central point and the point cloud vectors is determined as the coplanar point cloud; this effectively speeds up obtaining the coplanar point cloud.
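As an illustration of steps S121 and S122, the following numpy sketch fits the common plane by centring the point cloud and applying singular value decomposition; the names u_dir and w_dir mirror the first and second vectors U and W in the text. This is one standard realization under those assumptions, not necessarily the exact computation claimed here (in particular, the preset coefficient Σ = (0, 0, 1) is not reproduced literally).

```python
import numpy as np

def fit_common_plane(points):
    """Steps S121-S122 sketch: SVD of the centred N x 3 point cloud.
    The two leading right singular vectors span the common plane; the
    third is its normal (the direction of least variance)."""
    center = points.mean(axis=0)                       # central point (x, y, z)
    _, _, wt = np.linalg.svd(points - center, full_matrices=False)
    u_dir, w_dir = wt[0], wt[1]                        # in-plane directions U and W
    normal = wt[2]                                     # plane normal
    return center, u_dir, w_dir, normal
```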
After step S120, step S130 is performed: determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate.
The plurality of position coordinates and corresponding orientation angles in step S130 are the positions from which the robot's image acquisition device captures the exhibition article, together with the horizontal orientation angle at each position; the vertical orientation angle of the image acquisition device is adjusted in real time according to the height of the stand platform. An embodiment of determining the position coordinates and the corresponding orientation angles in step S130 may include the following steps:
step S131: and deleting all three-dimensional coordinates with positions lower than the coplanar point cloud from the point cloud data to obtain target data.
An embodiment of step S131 is, for example: for each three-dimensional coordinate in the point cloud data, judge whether it lies below the coplanar point cloud, and if so, delete it from the point cloud data. Since the point cloud data can be understood as the exhibition article, the stand platform, and the stand support below the platform, the stand platform and the stand support need to be deleted from the point cloud data; this effectively improves the precision of the located central coordinate of the exhibition article, and hence the accuracy of the obtained position coordinates and corresponding orientation angles, allowing the robot to capture the exhibition article better. In a specific implementation, to further improve data accuracy and validity, noise such as isolated and invalid point clouds may also be deleted from the point cloud data.
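A minimal sketch of step S131, assuming the plane fit above: points whose signed height above the coplanar point cloud is non-positive are deleted. The tolerance eps and the assumption that the positive z direction points away from the floor are illustrative choices, not taken from this application.

```python
import numpy as np

def remove_points_below_plane(points, center, normal, eps=0.005):
    """Step S131 sketch: delete all 3-D coordinates at or below the
    coplanar point cloud (the stand platform), keeping the target data."""
    if normal[2] < 0:                           # assume +z is 'up'; flip if needed
        normal = -normal
    signed_height = (points - center) @ normal  # signed distance to the plane
    return points[signed_height > eps]          # target data: exhibit points only
```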
Step S132: fit the target data with a spherical model to obtain the fitted sphere-center coordinate and sphere radius.
Please refer to Fig. 4, a schematic diagram of fitting the target data with a spherical model provided by an embodiment of the present application. An embodiment of step S132 is, for example: randomly choose a point cloud coordinate in the point cloud data as the sphere-center coordinate of the spherical model and take a random value as its radius; compute the proportion of the point cloud data falling inside the spherical model, and if the proportion exceeds a preset threshold (for example 80% or 90%), take the smallest such radius as the radius of the spherical model, i.e., as the distance between the exhibition article and the robot's image acquisition device, and take the point cloud coordinate corresponding to that smallest radius as the sphere-center coordinate. Of course, in a specific implementation a binary search algorithm may also be used to obtain the fitted sphere-center coordinate and sphere radius; the binary search algorithm, also known as the half-interval search algorithm or logarithmic search algorithm, is a search algorithm that locates a specific element in an ordered array.
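The following is a hedged sketch of step S132 in the same spirit: candidate sphere centres are drawn from the target data, and for each candidate the smallest radius enclosing the preset proportion of the points is found (the binary search over sorted distances collapses to a quantile here). The trial count is an assumption.

```python
import numpy as np

def fit_sphere(points, threshold=0.9, trials=200, seed=0):
    """Step S132 sketch: randomized sphere fit over the target data."""
    rng = np.random.default_rng(seed)
    best_center, best_radius = None, np.inf
    for _ in range(trials):
        center = points[rng.integers(len(points))]   # random cloud point as centre
        dists = np.linalg.norm(points - center, axis=1)
        radius = np.quantile(dists, threshold)       # smallest radius covering `threshold`
        if radius < best_radius:                     # keep the minimum-radius fit
            best_center, best_radius = center, radius
    return best_center, best_radius
```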
Step S133: calculate, according to the sphere-center coordinate and the sphere radius, a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate.
An embodiment of step S133 is, for example: let the sphere-center coordinate be denoted o and taken as the origin of a three-dimensional coordinate system, with the x axis pointing from the sphere centre o toward the robot's image acquisition device, the z axis passing through o perpendicular to the stand platform (i.e., the plane on which the exhibition article is placed) in the point cloud data, and the y axis perpendicular to both the x and z axes. In this coordinate system, suppose the sphere radius is 30 centimetres, i.e., the distance between the exhibition article and the robot's image acquisition device is 30 centimetres, and the orientation angle of the image acquisition device initially makes 0 degrees with the x axis. The exhibition article can then be photographed in turn at the position coordinates where the angle between the image acquisition device's orientation and the x axis is each multiple of a preset interval, so the plurality of position coordinates and corresponding orientation angles can be calculated from this preset interval. The preset interval can be set according to the specific situation, for example 10, 15, 20, 25, 30, or 40 degrees.
In this implementation, all three-dimensional coordinates below the coplanar point cloud are first deleted from the point cloud data, i.e., a three-dimensional coordinate is deleted if its position (z coordinate) is below the coplanar point cloud, yielding the target data; the target data is then fitted with a spherical model to obtain the fitted sphere-center coordinate and sphere radius; finally, the plurality of position coordinates for scanning the exhibition article and the corresponding orientation angles are calculated from the sphere-center coordinate and sphere radius. This effectively improves the accuracy of the determined position coordinates and orientation angles for capturing the exhibition article.
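A sketch of step S133 under the coordinate system just described: capture positions are placed on a circle of the fitted radius around the sphere centre at the preset interval, with the horizontal orientation angle facing back toward the centre. The fixed capture height and the exact angle convention are assumptions; per the text, the vertical angle is adjusted at run time against the platform height.

```python
import numpy as np

def scan_waypoints(center, radius, step_deg=30.0):
    """Step S133 sketch: position coordinates and horizontal orientation
    angles around the fitted sphere, one per preset interval degree."""
    waypoints = []
    for angle in np.arange(0.0, 360.0, step_deg):
        theta = np.radians(angle)
        x = center[0] + radius * np.cos(theta)
        y = center[1] + radius * np.sin(theta)
        z = center[2]                        # vertical angle handled at run time
        yaw_deg = (angle + 180.0) % 360.0    # face back toward the sphere centre
        waypoints.append(((x, y, z), yaw_deg))
    return waypoints
```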
After step S130, step S140 is performed: send a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, the control command causing the robot to scan the exhibition article according to the plurality of position coordinates and the corresponding orientation angles and to return a plurality of scanned images of the exhibition article.
An embodiment of step S140 is, for example: generate a control command in a preset format, such as JavaScript Object Notation (JSON) or eXtensible Markup Language (XML), from the plurality of position coordinates and the corresponding orientation angles, and have the electronic device send the control command to the robot. JSON is a lightweight data-interchange format; it is based on a subset of ECMAScript (the JavaScript specification of the European Computer Manufacturers Association) and stores and represents data in a text format completely independent of any programming language. XML is a subset of the Standard Generalized Markup Language and is a markup language for structuring electronic documents.
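For illustration, a control command serialized in the preset JSON format might look as follows; the field names are hypothetical, since the application does not fix a schema.

```python
import json

# Hypothetical control command carrying position coordinates (metres) and
# the orientation angle (degrees) corresponding to each position coordinate.
command = {
    "command": "scan",
    "waypoints": [
        {"position": [0.30, 0.00, 0.15], "orientation_deg": 180.0},
        {"position": [0.26, 0.15, 0.15], "orientation_deg": 210.0},
    ],
}
payload = json.dumps(command)  # text sent from the electronic device to the robot
```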
Optionally, in an embodiment of the present application, after the control command is sent to the robot according to the plurality of position coordinates and the corresponding orientation angles, the scanned images produced by the robot under that command may also be received and a three-dimensional model constructed from them; the following steps then follow step S140:
step S150: the electronic device receives a plurality of scanned images transmitted by the robot.
An embodiment of step S150 is, for example: the electronic device receives the plurality of scanned images sent by the robot over the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP, also called the network communication protocol, is a connection-oriented, reliable, byte-stream-based transport-layer communication protocol; in the Internet protocol suite, the TCP layer sits above the IP layer and below the application layer. Applications on different hosts often require reliable, pipe-like connections, which the IP layer does not provide, offering only unreliable packet switching. UDP, short for User Datagram Protocol, is a connectionless transport-layer protocol in the Open Systems Interconnection (OSI) reference model.
Step S160: model the exhibition article according to the plurality of scanned images to obtain a three-dimensional model.
An embodiment of step S160 is, for example: model the exhibition article according to the plurality of scanned images using photogrammetry software such as RealityCapture, or OpenCV, to obtain a three-dimensional model. OpenCV, short for Open Source Computer Vision Library, is a cross-platform computer vision library; it can be used to develop real-time image processing, computer vision, and pattern recognition programs. In this implementation, the plurality of scanned images sent by the robot are received and the exhibition article is modeled from them to obtain a three-dimensional model, effectively increasing the speed of three-dimensional modeling from the scanned images of the exhibition article.
Optionally, in an embodiment of the present application, after the three-dimensional model is obtained, the model may be mapped; step S160 may then be followed by:
step S170: and mapping the three-dimensional model according to the plurality of scanning images to obtain the mapped three-dimensional model.
An embodiment of step S170 is, for example: map the three-dimensional model according to the plurality of scanned images using software such as RealityCapture or the Open Graphics Library (OpenGL), to obtain the mapped three-dimensional model. OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics; the interface consists of nearly 350 different function calls for drawing everything from simple primitives to complex three-dimensional scenes. In this implementation, the three-dimensional model is mapped according to the plurality of scanned images to obtain the mapped three-dimensional model, effectively increasing the speed of obtaining the mapped model.
In this implementation, the point cloud data of the exhibition article is obtained first, and the point cloud data is then subjected to principal component analysis, deletion, fitting, and other computations to obtain the plurality of position coordinates for capturing and scanning the exhibition article and the corresponding orientation angles, so that the robot can scan the exhibition article accordingly and return a plurality of scanned images. This improves the efficiency of photographing the exhibition article to obtain images and effectively overcomes the time-consuming and labor-intensive nature of manual photography.
Please refer to Fig. 5, a schematic flow chart of the exhibition article scanning method applied to a robot provided by an embodiment of the present application. This exhibition article scanning method can be applied to a robot, whose specific structure is described in detail below, and may include:
step S210: the robot collects the exhibition articles through the depth camera to obtain point cloud data.
A robot is a machine that performs work automatically: it can accept human commands, run pre-programmed routines, and act according to plans formulated with artificial intelligence techniques. The robot here may be a wheeled mobile robot, for example a single-wheel, two-wheel, or four-wheel mobile robot; the wheels may be fitted with crawler tracks on the outside, the robot moving by friction between the tracks and the ground.
The robot may comprise: a robot body, a servo motor, a speed reducer, a mechanical arm, and an image acquisition device. The image acquisition device may be, for example, a depth camera or a single-lens reflex camera. The robot body is movably connected to the servo motor, the speed reducer, and the mechanical arm, and the mechanical arm is movably connected to the image acquisition device. The servo motor and the speed reducer provide the power for the robot to move and to stop. The mechanical arm controls the angle of the image acquisition device and triggers the capture action; the captured data include point cloud data, scanned color images, and so on, with the depth camera used to acquire the point cloud data or point cloud depth map of the exhibition article and the single-lens reflex camera used to acquire color scanned images of the exhibition article.
In step S210 the robot captures the exhibition article with the depth camera, for example as follows: the robot reaches the desired acquisition position by controlling the servo motor and the speed reducer; at that position it controls the shooting angle and shutter action of the depth camera through the mechanical arm; and the depth camera captures the exhibition article at that shooting angle, yielding the point cloud data of the exhibition article.
Step S220: the robot sends the point cloud data to the electronic equipment, so that the electronic equipment calculates and sends a control command according to the point cloud data.
An embodiment of the robot sending the point cloud data to the electronic device in step S220 is, for example: the robot sends the point cloud data to the electronic device over the Hypertext Transfer Protocol (HTTP) or Hypertext Transfer Protocol Secure (HTTPS). HTTP is a simple request-response protocol that usually runs on top of the Transmission Control Protocol (TCP); HTTPS, also called HTTP Secure, is a transport protocol for secure communication over computer networks.
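A minimal sketch of this upload, assuming the Python requests library on the robot side; the URL and payload layout are hypothetical, as the application names only the protocol.

```python
import requests

def send_point_cloud(points, url="https://controller.example/point-cloud"):
    """Step S220 sketch: upload the N x 3 point cloud over HTTP(S)."""
    payload = {"points": points.tolist()}            # numpy array to JSON-friendly lists
    resp = requests.post(url, json=payload, timeout=30)
    resp.raise_for_status()                          # fail loudly on transport errors
    return resp
```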
Step S230: the robot receives a control command sent by the electronic equipment, wherein the control command comprises a plurality of position coordinates for scanning the displayed object and orientation angles corresponding to the position coordinates.
The position coordinates and corresponding orientation angles in step S230 are determined by the electronic device, after it receives and analyzes the point cloud data, from the point cloud data and the coplanar point cloud obtained by the analysis; the specific determination is given in steps S110 to S130 executed by the electronic device.
An embodiment of step S230 is, for example: the robot receives the control command sent by the electronic device over HTTP, HTTPS, or HTTP/2. HTTP/2, the second major version of the HTTP protocol, was originally named HTTP 2.0 and is abbreviated h2 (encrypted connections based on TLS 1.2 or later) or h2c (unencrypted connections); its standardization is supported by browsers such as Chrome, Opera, Firefox, Internet Explorer 11, Safari, Amazon Silk, and Edge.
Step S240: the robot moves to each position coordinate in the plurality of position coordinates in sequence, and carries out acquisition scanning according to the orientation angle corresponding to each position coordinate to obtain a plurality of scanning images.
An embodiment of obtaining the plurality of scanned images in step S240 is specifically: move to each of the plurality of position coordinates in turn by means of the servo motor and the speed reducer; adjust the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate, and perform capture scanning with the image acquisition device. In this implementation, the robot moves to each position coordinate in turn via the servo motor and the speed reducer, adjusts the image acquisition device to the corresponding orientation angle, and captures with it; this improves the efficiency of photographing the exhibition article to obtain images and effectively overcomes the time-consuming and labor-intensive nature of manual photography.
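A hedged sketch of the robot-side loop of step S240; the robot-control calls (move_to, set_camera_angle, capture) are hypothetical stand-ins for the servo motor, speed reducer, and image acquisition device interfaces named in the text.

```python
def scan_exhibit(robot, waypoints):
    """Step S240 sketch: visit each position coordinate in turn and
    capture one scanned image at its corresponding orientation angle."""
    images = []
    for position, orientation_deg in waypoints:
        robot.move_to(position)                  # drive via servo motor + speed reducer
        robot.set_camera_angle(orientation_deg)  # orient the image acquisition device
        images.append(robot.capture())           # acquire one scanned image
    return images
```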
Step S250: the robot transmits a plurality of scan images to the electronic device.
An embodiment of the robot sending the plurality of scanned images to the electronic device in step S250 is, for example: the robot sends the plurality of scanned images to the electronic device through the HTTP, HTTPS, or HTTP/2 protocol.
In the implementation process, the exhibition item is captured by the depth camera to obtain point cloud data, and the point cloud data is sent to the electronic device so that the electronic device calculates and sends a control command according to the point cloud data, the control command including a plurality of position coordinates for scanning the exhibition item and the orientation angles corresponding to the position coordinates; the robot then moves in sequence to each of the position coordinates, performs acquisition scanning at the orientation angle corresponding to each position coordinate to obtain a plurality of scanned images, and finally sends the plurality of scanned images to the electronic device; this improves the efficiency of photographing the exhibition item to obtain images and effectively avoids the time-consuming and labor-intensive problem of photographing exhibition items manually.
Please refer to fig. 6, which shows a schematic structural diagram of an exhibition article scanning apparatus provided in an embodiment of the present application; the embodiment of the present application provides an exhibition article scanning apparatus 300, applied to an electronic device, including:
the point cloud data obtaining module 310 is configured to obtain point cloud data of the exhibition item, where the point cloud data is acquired by the robot.
The coplanar point cloud obtaining module 320 is configured to perform principal component analysis on the point cloud data to obtain a coplanar point cloud, where the coplanar point cloud represents a three-dimensional coordinate set of a common plane in the point cloud data.

The coordinate angle determining module 330 is configured to determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for acquiring the exhibition article and the orientation angles corresponding to the position coordinates.

The control command sending module 340 is configured to send a control command to the robot according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, where the control command is used to cause the robot to scan the exhibition article according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates and to return a plurality of scanned images of the exhibition article.
Optionally, in an embodiment of the present application, the coplanar point cloud obtaining module includes:
and the point cloud vector obtaining module is used for carrying out singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector.
And the coplanar point cloud determining module is used for determining the central point of the point cloud data and the common plane represented by the point cloud vector as a coplanar point cloud.
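A minimal numpy sketch of what these two modules compute, assuming the common plane is the direction of least variance in the point cloud: the right singular vector associated with the smallest singular value of the centered point matrix serves as the plane normal.

```python
# Sketch of the singular-value-decomposition plane fit described above.
import numpy as np

def fit_common_plane(points):
    """points: N x 3 array; returns (center, normal) describing the coplanar point cloud."""
    center = points.mean(axis=0)               # central point of the point cloud data
    _, _, vt = np.linalg.svd(points - center)  # SVD of the centered point matrix
    normal = vt[-1]                            # least-variance direction = plane normal
    return center, normal
```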
Optionally, in an embodiment of the present application, the coordinate angle determining module includes:
and the target data obtaining module is used for deleting all three-dimensional coordinates with positions lower than the coplanar point cloud from the point cloud data to obtain target data.
And the target data fitting module is used for fitting the target data by using the spherical model to obtain the fitted spherical center coordinate and spherical radius.
And the coordinate angle calculation module is used for calculating a plurality of position coordinates for scanning the exhibition object and orientation angles corresponding to the position coordinates according to the spherical center coordinates and the spherical radius.
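Under stated assumptions, the three modules above could be realized as in the sketch below: points beneath the fitted plane are discarded (assuming the plane normal points away from the support surface), a sphere is fitted by linear least squares, and the position coordinates are sampled on that sphere with orientation angles facing its center. Sampling a single horizontal ring of viewpoints is an illustrative choice, not the patented strategy.

```python
# Hedged sketch of the coordinate/angle pipeline (filter, sphere fit, viewpoints).
import numpy as np

def plan_viewpoints(points, center, normal, n_views=8):
    # 1. Delete all 3D coordinates lying below the coplanar point cloud
    #    (assumes `normal` points upward, away from the exhibition surface).
    above = points[(points - center) @ normal > 0]

    # 2. Algebraic sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2) is linear in (c, k).
    A = np.hstack([2 * above, np.ones((len(above), 1))])
    b = (above ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, k = sol[:3], sol[3]
    r = np.sqrt(k + c @ c)  # fitted sphere radius

    # 3. Position coordinates on a horizontal ring of the sphere; each
    #    orientation angle points back toward the sphere center.
    thetas = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    positions = c + r * np.stack(
        [np.cos(thetas), np.sin(thetas), np.zeros(n_views)], axis=1)
    angles = np.degrees(np.arctan2(c[1] - positions[:, 1],
                                   c[0] - positions[:, 0]))
    return positions, angles
```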
Optionally, in an embodiment of the present application, the exhibition article scanning apparatus further includes:
and the scanned image receiving module is used for receiving a plurality of scanned images sent by the robot.
And the three-dimensional model obtaining module is used for modeling the displayed article according to the plurality of scanning images to obtain a three-dimensional model.
Optionally, in this embodiment of the present application, the exhibition article scanning apparatus may further include:
and the three-dimensional model mapping module is used for mapping the three-dimensional model according to the plurality of scanning images to obtain the mapped three-dimensional model.
An embodiment of the present application further provides an exhibition article scanning apparatus, applied to a robot, including:

The point cloud data acquisition module is configured to acquire the exhibition article through the depth camera to obtain point cloud data, where the point cloud data represents a three-dimensional coordinate set of the exhibition article.

The point cloud data sending module is configured to send the point cloud data to the electronic device so that the electronic device calculates and sends a control command according to the point cloud data.

The control command receiving module is configured to receive the control command sent by the electronic device, where the control command includes a plurality of position coordinates for scanning the exhibition article and the orientation angles corresponding to the position coordinates; the position coordinates and their orientation angles are determined by the electronic device according to the point cloud data and the coplanar point cloud obtained by analyzing the point cloud data after receiving it.

The scanned image obtaining module is configured to move in sequence to each position coordinate of the plurality of position coordinates and perform acquisition scanning at the orientation angle corresponding to each position coordinate to obtain a plurality of scanned images.

The scanned image sending module is configured to send the plurality of scanned images to the electronic device.
Optionally, in an embodiment of the present application, the robot includes a servo motor, a speed reducer, and image acquisition equipment, and the scanned image obtaining module includes:

The robot moving module is configured to move in sequence to each position coordinate of the plurality of position coordinates through the servo motor and the speed reducer.

The robot scanning module is configured to adjust the orientation angle of the image acquisition equipment to the orientation angle corresponding to each position coordinate and perform acquisition scanning with the image acquisition equipment.
It should be understood that this apparatus corresponds to the exhibition article scanning method embodiments described above and can perform the steps involved in those method embodiments; the specific functions of the apparatus can be found in the description above, and a detailed description is omitted here as appropriate to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware or embedded in the operating system (OS) of the device.
Please refer to fig. 7 for a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 400 provided in an embodiment of the present application includes a processor 410 and a memory 420, the memory 420 storing machine-readable instructions executable by the processor 410, which, when executed by the processor 410, perform the method described above.

An embodiment of the present application further provides a storage medium 430, the storage medium 430 storing a computer program which, when executed by the processor 410, performs the method described above.
The storage medium 430 may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.

Claims (10)

1. An exhibition article scanning method, applied to an electronic device, comprising:
acquiring point cloud data of an exhibition article, wherein the point cloud data is acquired by a robot;
performing principal component analysis on the point cloud data to obtain coplanar point clouds which represent a three-dimensional coordinate set of a common plane in the point cloud data;
determining a plurality of position coordinates of the exhibition article and orientation angles corresponding to the position coordinates according to the point cloud data and the coplanar point cloud;
and sending a control command to the robot according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, wherein the control command is used for causing the robot to scan the exhibition article according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates and to return a plurality of scanned images of the exhibition article.
2. The method of claim 1, wherein the principal component analysis of the point cloud data to obtain a coplanar point cloud comprises:
performing singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector;
determining a center point of the point cloud data and a common plane represented by the point cloud vector as the coplanar point cloud.
3. The method of claim 1, wherein the determining, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for acquiring the exhibition article and the orientation angles corresponding to the position coordinates comprises:

deleting, from the point cloud data, all three-dimensional coordinates whose positions are lower than the coplanar point cloud, to obtain target data;

fitting the target data with a spherical model to obtain a fitted sphere center coordinate and a fitted sphere radius;

and calculating, according to the sphere center coordinate and the sphere radius, a plurality of position coordinates for acquisition scanning of the exhibition article and the orientation angles corresponding to the position coordinates.
4. The method according to any one of claims 1-3, further comprising, after said sending a control command to the robot based on said plurality of position coordinates and an orientation angle corresponding to said position coordinates:
receiving the plurality of scanning images sent by the robot;
and modeling the exhibition article according to the plurality of scanning images to obtain a three-dimensional model.
5. The method of claim 4, further comprising, after said obtaining the three-dimensional model:
and mapping the three-dimensional model according to the plurality of scanning images to obtain the mapped three-dimensional model.
6. An exhibition article scanning method, applied to a robot, comprising:
acquiring an exhibition article through a depth camera to obtain point cloud data, wherein the point cloud data represents a three-dimensional coordinate set of the exhibition article;
sending the point cloud data to an electronic device so that the electronic device calculates and sends a control command according to the point cloud data;

receiving the control command sent by the electronic device, wherein the control command comprises a plurality of position coordinates for scanning the exhibition article and orientation angles corresponding to the position coordinates, and the position coordinates and the orientation angles corresponding to them are determined by the electronic device according to the point cloud data and a coplanar point cloud obtained by analyzing the point cloud data after receiving it;
sequentially moving to each position coordinate in the plurality of position coordinates, and carrying out acquisition scanning according to the orientation angle corresponding to each position coordinate to obtain a plurality of scanning images;
transmitting the plurality of scanned images to the electronic device.
7. The method of claim 6, wherein the robot comprises a servo motor, a speed reducer, and image acquisition equipment; and the moving in sequence to each position coordinate of the plurality of position coordinates and performing acquisition scanning at the orientation angle corresponding to each position coordinate comprises:
sequentially moving to each position coordinate of the plurality of position coordinates through the servo motor and the speed reducer;
and adjusting the orientation angle of the image acquisition equipment to the orientation angle corresponding to each position coordinate, and performing acquisition scanning by using the image acquisition equipment.
8. An exhibition article scanning apparatus, applied to an electronic device, comprising:

a point cloud data obtaining module, configured to obtain point cloud data of an exhibition article, wherein the point cloud data is acquired by a robot;

a coplanar point cloud obtaining module, configured to perform principal component analysis on the point cloud data to obtain a coplanar point cloud, wherein the coplanar point cloud represents a three-dimensional coordinate set of a common plane in the point cloud data;

a coordinate angle determining module, configured to determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for acquiring the exhibition article and the orientation angles corresponding to the position coordinates;

and a control command sending module, configured to send a control command to the robot according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, wherein the control command is used for causing the robot to scan the exhibition article according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates and to return a plurality of scanned images of the exhibition article.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 5.
10. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 7.
CN202010481765.7A 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium Active CN113744378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010481765.7A CN113744378B (en) 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010481765.7A CN113744378B (en) 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113744378A true CN113744378A (en) 2021-12-03
CN113744378B CN113744378B (en) 2024-02-20

Family

ID=78727849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010481765.7A Active CN113744378B (en) 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113744378B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114664126A (en) * 2022-03-23 2022-06-24 中国地质大学(武汉) Art design multimedia teaching instrument based on computer network and operation method thereof

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150062120A1 (en) * 2013-08-30 2015-03-05 Qualcomm Incorporated Method and apparatus for representing a physical scene
CN105574812A (en) * 2015-12-14 2016-05-11 深圳先进技术研究院 Multi-angle three-dimensional data registration method and device
CN107782240A (en) * 2017-09-27 2018-03-09 首都师范大学 A kind of two dimensional laser scanning instrument scaling method, system and device
WO2019161558A1 (en) * 2018-02-26 2019-08-29 Intel Corporation Method and system of point cloud registration for image processing
CN110443840A (en) * 2019-08-07 2019-11-12 山东理工大学 The optimization method of sampling point set initial registration in surface in kind
CN111080805A (en) * 2019-11-26 2020-04-28 北京云聚智慧科技有限公司 Method and device for generating three-dimensional block diagram of marked object, electronic equipment and storage medium
CN111028340A (en) * 2019-12-10 2020-04-17 苏州大学 Three-dimensional reconstruction method, device, equipment and system in precision assembly
CN111179433A (en) * 2019-12-31 2020-05-19 杭州阜博科技有限公司 Three-dimensional modeling method and device for target object, electronic device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GUSTAVO SANDRI ET AL.: "Compression of Plenoptic Point Clouds", IEEE Transactions on Image Processing, vol. 28, no. 3, pages 1419-1427, XP011698185, DOI: 10.1109/TIP.2018.2877486 *
PI JIAJING: "3D Point Cloud Stitching Technology for Large-Scale Topography Measurement", China Master's Theses Full-text Database (electronic journal), pages 140-689 *
WEI SHENGBIN ET AL.: "Homography Iterative Closest Point Registration Algorithm for Point Clouds in 3D Reconstruction", Acta Optica Sinica, vol. 35, no. 5, pages 252-258 *

Also Published As

Publication number Publication date
CN113744378B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
US10740694B2 (en) System and method for capture and adaptive data generation for training for machine vision
US9235928B2 (en) 3D body modeling, from a single or multiple 3D cameras, in the presence of motion
KR101791590B1 (en) Object pose recognition apparatus and method using the same
CN109993793B (en) Visual positioning method and device
Sweeney et al. Solving for relative pose with a partially known rotation is a quadratic eigenvalue problem
US20140206443A1 (en) Camera pose estimation for 3d reconstruction
US11182928B2 (en) Method and apparatus for determining rotation angle of engineering mechanical device
JP3880702B2 (en) Optical flow detection apparatus for image and self-position recognition system for moving object
JP5555207B2 (en) 3D posture estimation apparatus, 3D posture estimation method, and program
EP3067658B1 (en) 3d-shape measurement device, 3d-shape measurement method, and 3d-shape measurement program
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
Taryudi et al. Eye to hand calibration using ANFIS for stereo vision-based object manipulation system
Wang et al. Three-dimensional reconstruction based on visual SLAM of mobile robot in search and rescue disaster scenarios
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
KR20080029080A (en) System for estimating self-position of the mobile robot using monocular zoom-camara and method therefor
JP7114686B2 (en) Augmented reality device and positioning method
CN113409397A (en) Storage tray detecting and positioning method based on RGBD camera
Al-Temeemy et al. Laser-based structured light technique for 3D reconstruction using extreme laser stripes extraction method with global information extraction
EP4261789A1 (en) Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
CN113744378B (en) Exhibition article scanning method and device, electronic equipment and storage medium
Badeka et al. Harvest crate detection for grapes harvesting robot based on YOLOv3 model
Zhou et al. Information-efficient 3-D visual SLAM for unstructured domains
CN113066125A (en) Augmented reality method and related equipment thereof
Adinandra et al. A low cost indoor localization system for mobile robot experimental setup
JP3512894B2 (en) Relative moving amount calculating apparatus and relative moving amount calculating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant