CN112037279B - Article position identification method and device, storage medium and electronic equipment - Google Patents


Info

Publication number: CN112037279B
Authority: CN (China)
Prior art keywords: plane, article, point cloud, determining, panoramic
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010920699.9A
Other languages: Chinese (zh)
Other versions: CN112037279A
Inventors: 李林原, 田明哲
Current assignee: Seashell Housing Beijing Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Seashell Housing Beijing Technology Co Ltd
Application filed by Seashell Housing Beijing Technology Co Ltd
Priority to CN202010920699.9A
Publication of application CN112037279A; granted and published as CN112037279B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the disclosure provide an article position identification method and apparatus, a storage medium, and an electronic device. The method includes: performing article detection on a panorama acquired at the current position in a set scene, and determining the position of at least one article of the set scene in the panorama; determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panorama and the panoramic depth map corresponding to the panorama; and mapping the point cloud corresponding to each article onto a plan structure diagram of the set scene to determine the position of each article in the diagram. By combining the panorama with the point cloud recovered from the panoramic depth map, the position of an article in the plan structure diagram is determined automatically, improving both the efficiency and the accuracy of position determination.

Description

Article position identification method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to location identification technologies, and in particular, to a method and an apparatus for identifying a location of an article, a storage medium, and an electronic device.
Background
In the prior art, it is sometimes necessary to determine the position of an article in a plan structure diagram for a corresponding purpose; for example, in the housing field it is desirable to determine the positions of articles such as doors, windows, and furniture in a house layout. To do so, the position is generally surveyed and mapped manually from actual measurements. Manual surveying, however, is very inefficient and can hardly meet the needs of large-scale application.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides an article position identification method and device, a storage medium and an electronic device.
According to an aspect of an embodiment of the present disclosure, there is provided an article position identification method including:
carrying out article detection on a panoramic image acquired at the current position in a set scene, and determining the position information of at least one article in the set scene in the panoramic image;
determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panoramic image and the panoramic depth image corresponding to the panoramic image;
and mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene, and determining the position of each article in the plane structure diagram.
Optionally, the performing article detection on the panorama acquired at the current position in the set scene, and determining the position of at least one article of the set scene in the panorama, includes:
carrying out article detection on the panoramic image by using an article detection model, and determining a mask and an article name of each article in the at least one article in the panoramic image; the article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
Optionally, pixels in the panoramic image correspond to pixels in the panoramic depth image one to one;
the determining at least one piece of point cloud corresponding to the at least one item in three-dimensional space based on the position of each item in the panoramic image and the panoramic depth map corresponding to the panoramic image comprises:
determining a corresponding article area in the panoramic depth map based on the mask of each article to obtain at least one article area;
and recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one point cloud.
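As an illustrative sketch (not the patented implementation), recovering three-dimensional information from a panoramic depth map can be done by equirectangular unprojection: each pixel's row and column give a latitude and longitude, and the depth value scales the corresponding unit ray. The function name, the y-up axis convention, and the equirectangular layout are assumptions.

```python
import numpy as np

def panorama_depth_to_points(depth, mask=None):
    """Unproject an equirectangular panoramic depth map to a 3D point cloud.

    depth: (H, W) array of metric depths; mask: optional (H, W) bool array
    selecting the pixels of one detected article (its item area).
    """
    h, w = depth.shape
    # Each column maps to a longitude in [-pi, pi), each row to a latitude
    # in [pi/2, -pi/2] under the usual equirectangular convention.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Spherical -> Cartesian, scaled by the per-pixel depth.
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)          # vertical axis (assumed y-up)
    z = depth * np.cos(lat) * np.cos(lon)
    pts = np.stack([x, y, z], axis=-1)
    return pts[mask] if mask is not None else pts.reshape(-1, 3)
```

Passing the mask of one article yields exactly the per-article point cloud described above.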
Optionally, the mapping the point cloud corresponding to each article to the plan structure diagram corresponding to the set scene, and determining the position of each article in the plan structure diagram includes:
determining the gravity center of each article in a three-dimensional space according to the coordinates of the point cloud corresponding to each article;
and projecting the gravity center of the object in the three-dimensional space into the plane structure diagram, and determining the position of each object in the plane structure diagram.
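A minimal sketch of the centre-of-gravity projection described above, assuming the point cloud is an (N, 3) array with y as the vertical axis (both are assumptions, not details confirmed by the text):

```python
import numpy as np

def item_position_in_plan(points):
    """Project an article's point cloud to floor-plan coordinates.

    The centre of gravity is taken as the mean of the points; dropping
    the vertical component projects it onto the plan's x-z plane.
    """
    centroid = points.mean(axis=0)
    return centroid[0], centroid[2]   # plan coordinates
```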
Optionally, the plan structure diagram is a house layout and the articles are furniture, wherein the house layout comprises at least one room;
further comprising:
and determining the function of each room in the house floor plan according to a preset rule and the position of each piece of furniture in the floor plan.
Optionally, before determining the corresponding at least one piece of point cloud of the at least one item in the three-dimensional space based on the position of each item in the panoramic image and the panoramic depth map corresponding to the panoramic image, the method further includes:
restoring a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image;
performing normal vector estimation on the panoramic point cloud, and determining a normal vector of each point in the panoramic point cloud;
and carrying out plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space.
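The normal-vector estimation and region-growing plane segmentation above might look like the following sketch, which estimates per-point normals by PCA over the k nearest neighbours and grows regions greedily wherever neighbouring normals agree within an angle threshold. The helper names, k, and the threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normal: the smallest-singular-value direction of the
    local neighbourhood (PCA over k nearest neighbours)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
        _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
        normals[i] = vt[-1]          # direction of least variance
    return normals

def region_grow(points, normals, k=10, angle_thresh_deg=10.0):
    """Greedy region growing: a neighbour joins the seed's plane when
    the normals agree within the angle threshold; returns one label
    per point, one label per planar point cloud."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, labels[seed] = [seed], current
        while stack:
            p = stack.pop()
            for q in idx[p]:
                if labels[q] == -1 and abs(normals[p] @ normals[q]) >= cos_t:
                    labels[q] = current
                    stack.append(q)
        current += 1
    return labels
```

Each resulting label corresponds to one plane point cloud in three-dimensional space.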
Optionally, the setting scene is a house, and the plane structure diagram is a house floor plan;
the determining at least one piece of point cloud corresponding to the at least one item in three-dimensional space based on the position of each item in the panoramic image and the panoramic depth map corresponding to the panoramic image comprises:
determining at least one corresponding plane point cloud for each article in the point cloud corresponding to the panoramic depth map based on the position of each article in the panoramic map to obtain a plurality of plane point clouds;
determining a corresponding item plane point cloud for each of the items based on the plurality of plane point clouds.
Optionally, after performing plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space, the method further includes:
determining a plane equation and a plane normal vector corresponding to each plane point cloud based on coordinates of a plurality of points included in each plane point cloud;
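Determining the plane equation from the coordinates of a plane point cloud can be sketched as an SVD least-squares fit; representing the plane as n·p + d = 0 with unit normal n is an assumption about the form of the "plane equation":

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud.

    Returns (n, d) such that n . p + d = 0 for points p on the plane;
    n is the singular vector of the centred points with the smallest
    singular value, i.e. the direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    n = vt[-1]
    d = -n @ centroid
    return n, d
```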
said determining a corresponding item plane point cloud for each of said items based on said plurality of plane point clouds comprises:
performing fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one article plane point cloud; wherein each item plane point cloud corresponds to one item.
Optionally, the performing a fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one item plane point cloud includes:
determining, based on the plane normal vector corresponding to each plane point cloud, whether at least one parallel plane group exists among the plurality of plane point clouds, each parallel plane group comprising at least two mutually parallel plane point clouds;
in response to at least one parallel plane group existing, determining the distance between the at least two plane point clouds in each parallel plane group according to the plane equation corresponding to each plane point cloud in the group;
and in response to the distance between the at least two plane point clouds in a parallel plane group being smaller than a preset threshold, fusing the at least two plane point clouds in that group into one item plane point cloud.
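The parallelism and distance tests above can be sketched as follows for two planes given in the (n, d) form n·p + d = 0; the angle tolerance and the 0.2 m distance threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def maybe_same_item(n1, d1, n2, d2, angle_deg=5.0, dist_thresh=0.2):
    """Decide whether two fitted planes belong to the same thin article:
    they must be near-parallel and lie close together."""
    if abs(n1 @ n2) < np.cos(np.radians(angle_deg)):
        return False                 # normals disagree: not parallel
    # Distance between parallel planes is the difference of signed
    # offsets, after aligning the sign of n2 with n1.
    d2_aligned = d2 if n1 @ n2 > 0 else -d2
    return abs(d1 - d2_aligned) < dist_thresh
```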
Optionally, said merging at least two plane point clouds of said set of parallel planes into one said item plane point cloud comprises:
determining a normal vector of the item plane point cloud based on normal vectors of the at least two plane point clouds;
determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points;
determining the item plane point cloud based on the midpoints of the at least two plane point clouds and a normal vector of the item plane point cloud.
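One plausible reading of this fusion step is sketched below: the midpoint weights each plane's mean by its point count, and the fused normal averages the two normals after aligning their signs. Both choices are assumptions about the text, not a confirmed implementation.

```python
import numpy as np

def fuse_parallel_planes(pc_a, normal_a, pc_b, normal_b):
    """Fuse two near-parallel plane point clouds (e.g. the two faces of
    one thin article) into a single article plane (midpoint, normal)."""
    if normal_a @ normal_b < 0:
        normal_b = -normal_b         # make the normals point the same way
    n = normal_a + normal_b
    n = n / np.linalg.norm(n)
    w_a, w_b = len(pc_a), len(pc_b)
    midpoint = (w_a * pc_a.mean(axis=0) + w_b * pc_b.mean(axis=0)) / (w_a + w_b)
    return midpoint, n
```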
Optionally, the article comprises a window;
the step of mapping the point cloud corresponding to each article to the plan structure diagram corresponding to the set scene and determining the position of each article in the plan structure diagram comprises the following steps:
mapping at least one item plane point cloud into the house layout to obtain at least one item line segment;
and fusing the item line segments mapped in the house floor plan with the wall surfaces in the house floor plan, and determining the position of the window in the house floor plan.
Optionally, the article comprises a door and/or a doorless wall opening (archway);
the step of mapping the point cloud corresponding to each article to the plan structure diagram corresponding to the set scene and determining the position of each article in the plan structure diagram comprises the following steps:
determining a cross-section point set in the point cloud recovered from the panoramic depth map based on a set longitudinal coordinate value; wherein the ordinate of each point in the cross-section point set is equal to the set ordinate value;
and in response to the shape of the cross-section point set corresponding to at least two singles in the house floor type graph, determining the positions of the gate and/or the bealock connecting the at least two singles according to the position of the singles where the current position is located and the shape of the cross-section point set in the at least two singles.
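Extracting the cross-section point set at a set vertical coordinate might be sketched as below; a small tolerance band replaces exact equality, and the y-up axis convention is an implementation assumption. A door or wall opening then shows up as a gap in the resulting outline.

```python
import numpy as np

def horizontal_cross_section(points, height, tol=0.02):
    """Collect the points whose vertical coordinate lies within `tol`
    of `height` (y assumed vertical), returned as floor-plan (x, z)
    coordinates."""
    sel = np.abs(points[:, 1] - height) < tol
    return points[sel][:, [0, 2]]
```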
According to another aspect of the embodiments of the present disclosure, there is provided an article position recognition apparatus including:
the system comprises an article detection module, a storage module and a display module, wherein the article detection module is used for carrying out article detection on a panoramic image acquired at the current position in a set scene and determining the position of at least one article in the set scene in the panoramic image;
a depth recovery module, configured to determine at least one piece of point cloud corresponding to the at least one item in a three-dimensional space based on a position of each item in the panoramic image and a panoramic depth image corresponding to the panoramic image;
and the position determining module is used for mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene and determining the position of each article in the plane structure diagram.
Optionally, the article detection module is specifically configured to perform article detection on the panoramic image by using an article detection model, and determine a mask and an article name of each article in the at least one article in the panoramic image; the article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
Optionally, pixels in the panoramic image correspond to pixels in the panoramic depth image one to one;
the depth recovery module is specifically configured to determine a corresponding article area in the panoramic depth map based on a mask of each article, so as to obtain at least one article area; and recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one point cloud.
Optionally, the position determining module is specifically configured to determine a center of gravity of the article in a three-dimensional space according to coordinates of a piece of point cloud corresponding to each article; and projecting the gravity center of the object in the three-dimensional space into the plane structure diagram, and determining the position of each object in the plane structure diagram.
Optionally, the plan structure diagram is a house layout and the articles are furniture, wherein the house layout comprises at least one room;
the device further comprises:
and the function determining module is used for determining the function of each room in the house floor plan according to a preset rule and the position of each piece of furniture in the floor plan.
Optionally, the apparatus further comprises:
the point cloud recovery module is used for recovering a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image;
the normal vector estimation module is used for performing normal vector estimation on the panoramic point cloud and determining a normal vector of each point in the panoramic point cloud;
and the plane segmentation module is used for carrying out plane segmentation on the panoramic point cloud according to the normal vector of each point by utilizing a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space.
Optionally, the setting scene is a house, and the plane structure diagram is a house floor plan;
the depth recovery module includes:
the planar point cloud unit is used for determining at least one corresponding planar point cloud for each article in the point cloud corresponding to the panoramic depth map, based on the position of each article in the panorama, to obtain a plurality of planar point clouds;
an item point cloud unit to determine a corresponding item plane point cloud for each of the items based on the plurality of plane point clouds.
Optionally, the plane segmentation module is further configured to determine a plane equation and a plane normal vector corresponding to each of the plane point clouds based on coordinates of a plurality of points included in each of the plane point clouds;
the article point cloud unit is specifically configured to perform fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one article plane point cloud; wherein each item plane point cloud corresponds to one item.
Optionally, the article point cloud unit is specifically configured to determine, based on the plane normal vector corresponding to each plane point cloud, whether at least one parallel plane group exists among the plurality of plane point clouds, each parallel plane group comprising at least two mutually parallel plane point clouds; in response to at least one parallel plane group existing, to determine the distance between the at least two plane point clouds in each parallel plane group according to the plane equation corresponding to each plane point cloud in the group; and, in response to the distance between the at least two plane point clouds in a parallel plane group being smaller than a preset threshold, to fuse the at least two plane point clouds in that group into one item plane point cloud.
Optionally, the item point cloud unit, when fusing at least two plane point clouds of the set of parallel planes into one item plane point cloud, is configured to determine a normal vector of the item plane point cloud based on a normal vector of the at least two plane point clouds; determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points; determining the item plane point cloud based on the midpoints of the at least two plane point clouds and a normal vector of the item plane point cloud.
Optionally, the article comprises a window;
the position determining module is specifically configured to map the at least one item plane point cloud into the house layout to obtain at least one item segment; and fusing the item line segments mapped in the house floor plan with the wall surfaces in the house floor plan, and determining the position of the window in the house floor plan.
Optionally, the article comprises a door and/or a doorless wall opening (archway);
the position determining module is specifically configured to determine a cross-section point set in the point cloud restored from the panoramic depth map based on a set vertical coordinate value, wherein the vertical coordinate of each point in the cross-section point set equals the set value; and, in response to the shape of the cross-section point set spanning at least two rooms in the house layout, to determine the position of the door and/or wall opening connecting the at least two rooms according to the room in which the current position is located and the shape of the cross-section point set within the at least two rooms.
According to still another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the item location identification method according to any of the embodiments.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the article location identification method according to any of the above embodiments.
Based on the article position identification method and apparatus, the storage medium, and the electronic device provided by the above embodiments of the present disclosure, article detection is performed on a panorama acquired at the current position in a set scene, and the position of at least one article of the set scene in the panorama is determined; at least one point cloud corresponding to the at least one article in three-dimensional space is determined based on the position of each article in the panorama and the panoramic depth map corresponding to the panorama; and the point cloud corresponding to each article is mapped onto the plan structure diagram of the set scene to determine the position of each article in the diagram. By combining the panorama with the point cloud recovered from the panoramic depth map, the position of an article in the plan structure diagram is determined automatically, improving both the efficiency and the accuracy of position determination.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic flowchart of an item location identification method according to an exemplary embodiment of the present disclosure.
Fig. 2a is a schematic diagram of a panoramic view acquired from a house according to an alternative example of the present disclosure.
Fig. 2b is a panoramic depth map corresponding to the panoramic view shown in fig. 2a.
Fig. 2c is a house floor plan corresponding to the panorama shown in fig. 2a.
Fig. 2d is a schematic mask diagram of a plurality of articles obtained by performing article detection on the panoramic image provided in fig. 2a.
Fig. 2e is a schematic illustration of the locations of a plurality of items included in the house layout provided in fig. 2c.
Fig. 2f is a segmentation plan view obtained by segmentation processing based on the panoramic depth map shown in fig. 2b.
Fig. 2g is a schematic view of a plurality of planar point clouds resulting from recovering the point clouds from the segmented planar view shown in fig. 2f.
Fig. 2h is a schematic view of the object plane point cloud obtained by fusing the multiple plane point clouds shown in fig. 2g.
Fig. 2i is a projection view of the object obtained by mapping the object plane point cloud shown in fig. 2h to the house layout shown in fig. 2c.
Fig. 2j is a schematic view of a possible opening position of a door obtained by mapping the item plane point cloud to a house layout.
Fig. 2k is a schematic diagram of a set of projected points corresponding to a mid-height cross-section of the panoramic depth map provided in fig. 2b.
Fig. 3 is a schematic flow chart of step 104 in the embodiment shown in fig. 1 of the present disclosure.
Fig. 4 is a schematic flow chart of step 106 in the embodiment shown in fig. 1 of the present disclosure.
Fig. 5 is a flowchart illustrating an item location identification method according to another exemplary embodiment of the present disclosure.
FIG. 6 is a flow chart illustrating step 506 in the embodiment shown in FIG. 5 according to the present disclosure.
FIG. 7 is a schematic flow chart of step 5062 in the embodiment of FIG. 6 of the present disclosure.
Fig. 8 is a schematic structural diagram of an article position identification device according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network pcs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the course of implementing the present disclosure, the inventors found that the prior art generally determines the position of an article by manual surveying, which has at least the following problems: it is time-consuming, wastes manpower and material resources, and is insufficiently accurate.
Exemplary method
Fig. 1 is a schematic flowchart of an item location identification method according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 1, and includes the following steps:
step 102, detecting articles in the panoramic image collected at the current position in the set scene, and determining the position of at least one article in the set scene in the panoramic image.
Optionally, in this embodiment an image capture device such as a camera may be used to acquire a panorama of the set scene at the current position, the panorama being a panoramic color image; for example, fig. 2a shows a panorama acquired in a set scene that is a house. At least one article may appear in the panorama.
And 104, determining at least one point cloud corresponding to at least one article in the three-dimensional space based on the position of each article in the panoramic image and the panoramic depth image corresponding to the panoramic image.
Optionally, the panoramic depth map in this embodiment is in one-to-one correspondence with a panoramic map at a pixel level, where the panoramic depth map is acquired based on an image pickup device such as a depth camera that can obtain depth information at a current position, for example, as shown in fig. 2b, the panoramic depth map corresponds to the panoramic map shown in fig. 2 a; each pixel in the panoramic depth map includes not only planar coordinates in the panoramic depth map, but also depth information determined based on external parameters (known data) of the depth camera; accordingly, the panoramic depth map may be restored to a point cloud in three-dimensional space based on the planar coordinates and depth information of each point in the panoramic depth map.
And 106, mapping the point cloud corresponding to each article to a plane structure diagram corresponding to a set scene, and determining the position of each article in the plane structure diagram.
The plan structure diagram corresponding to the set scene in the embodiment of the present disclosure is known; for example, fig. 2c shows the house floor plan corresponding to the house shown in fig. 2a. Of course, fig. 2a is only a panoramic image collected from the house at one current position and cannot completely cover the house floor plan shown in fig. 2c; in order to determine the positions of all articles in the house, it is necessary to collect panoramic images and panoramic depth maps at a plurality of positions in the house. The panoramic images and panoramic depth maps collected at the other positions are processed in the same way as in the three steps of this embodiment, and the processing is not repeated here.
In this embodiment, the point cloud corresponding to each article is mapped into the planar structure diagram, the three-dimensional coordinates of each point in the point cloud are mapped into the plane where the planar structure is located, the two-dimensional coordinates of each point in the point cloud in the plane are obtained, and the position of the article in the planar structure diagram can be determined according to the two-dimensional coordinates mapped by the point cloud corresponding to each article.
The present embodiment restores an image obtained in one planar coordinate system (e.g., a planar coordinate system composed of an x-axis and a z-axis) to a point cloud in a three-dimensional coordinate system, and then maps the point cloud to another planar coordinate system (e.g., a planar coordinate system composed of an x-axis and a y-axis), thereby determining the position of at least one article in the plan structure diagram through coordinate system transformation.
In the article position identification method provided by the above embodiment of the present disclosure, article detection is performed on a panoramic image acquired at a current position in a set scene, and position information of at least one article in the set scene in the panoramic image is determined; determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panoramic image and the panoramic depth image corresponding to the panoramic image; mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene, and determining the position of each article in the plane structure diagram; in the embodiment, the position of the article in the plane structure diagram is determined by combining the panoramic image with the point cloud recovered by the panoramic depth image, so that the position of the article in the plane structure diagram is automatically determined by using the panoramic image and the panoramic depth image, and the efficiency and the accuracy of determining the position of the article are improved.
In some alternative embodiments, step 102 may include:
and carrying out article detection on the panoramic image by using an article detection model, and determining a mask and an article name of each article in at least one article in the panoramic image.
The article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
Optionally, the article detection model in this embodiment may be a deep neural network, which is widely applied in the field of image processing; this embodiment detects at least one article in the panoramic image by using the trained deep neural network and determines the mask and article name corresponding to each article. For example, as shown in fig. 2d, article detection performed on the panoramic image of fig. 2a yields the masks of a plurality of articles in the panoramic image, and the article name is annotated on the mask of each article.
As shown in fig. 3, based on the embodiment shown in fig. 1, step 104 may include the following steps:
step 1041, determining a corresponding article area in the panoramic depth map based on the mask of each article, obtaining at least one article area.
And 1042, recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one point cloud.
In this embodiment, pixels in the panoramic image correspond one to one to pixels in the panoramic depth map. A mask in an image serves multiple purposes; here it is mainly used to extract the region of interest (the region corresponding to the article). For example, multiplying a pre-made mask of the region of interest by the image to be processed yields an image of the region of interest: the image values inside the region remain unchanged, while the values outside the region are all 0 (in this embodiment, the other regions are not processed after the mask region is determined). A corresponding coordinate area can therefore be determined in the panoramic image based on the mask of each article as the article area of that article; because the panoramic image and the panoramic depth map correspond at pixel level, a corresponding area can be determined in the panoramic depth map for each article area, and by combining the coordinates of each article area with the corresponding depth information, a point cloud of three-dimensional coordinates can be recovered for each article area. By determining the point cloud corresponding to each article through pixel correspondence and depth recovery, this embodiment improves the efficiency of acquiring the three-dimensional coordinate information of the article.
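The mask-multiplication and region-extraction step can be illustrated with a short sketch; the boolean-mask layout and the pixel alignment of the two maps are assumptions for illustration:

```python
import numpy as np

def masked_region(image, mask):
    """Region of interest: values inside the boolean mask survive, the rest become 0."""
    return np.where(mask, image, 0)

def article_region_points(depth_map, mask):
    """Pixel coordinates and depths of one article's region in the aligned depth map.

    Returns rows of (u, v, depth): column, row and depth value for every
    masked pixel — the raw material for recovering the article's point cloud.
    """
    v, u = np.nonzero(mask)                      # rows (v) and columns (u) inside the mask
    return np.stack([u, v, depth_map[v, u]], axis=1)
```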
As shown in fig. 4, based on the embodiment shown in fig. 1, step 106 may include the following steps:
step 1061, determining the gravity center of each article in the three-dimensional space according to the coordinates of the point cloud corresponding to each article.
Step 1062, projecting the gravity center of the article in the three-dimensional space into the planar structure diagram, and determining the position of each article in the planar structure diagram.
In this embodiment, an article in the set scene may have a planar structure (e.g., a door, a window, etc.) or a three-dimensional structure (e.g., furniture, etc.); a planar article yields a line segment after being projected into the plan structure diagram. For an article of three-dimensional structure, in order to improve the accuracy of its position in the plan structure diagram, the center of gravity of the article is obtained by averaging the coordinates of the points in the point cloud corresponding to the article, and this center of gravity is projected into the plan structure diagram as the position of the article. Optionally, when the point cloud corresponding to the article includes m points with coordinates (xi, yi, zi), where i takes the values 1 to m, the coordinates of the center of gravity can be expressed as:

(x̄, ȳ, z̄) = ( (1/m)·Σ xi, (1/m)·Σ yi, (1/m)·Σ zi ), with each sum taken over i = 1, …, m
for example, as shown in FIG. 2e, the locations of the plurality of items included therein are determined in each cell included in the house floor plan provided in FIG. 2 c.
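The center-of-gravity formula and its projection into the plan can be written directly with numpy; treating axis 2 as the vertical axis is an assumption of this sketch:

```python
import numpy as np

def article_plan_position(points, up_axis=2):
    """Centre of gravity of an article's point cloud, projected into the plan.

    Averages the m point coordinates — (1/m) * sum of (x_i, y_i, z_i) — and
    then drops the vertical axis (assumed to be `up_axis`) to obtain the
    article's 2-D position in the plan structure diagram.
    """
    centroid = points.mean(axis=0)
    return np.delete(centroid, up_axis)
```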
In some alternative embodiments, when the plan view diagram is a house layout diagram, the article is furniture; wherein the house floor plan comprises at least one cell;
on the basis of the embodiment shown in fig. 4, the method may further include: and determining the function of each single room in the house floor plan according to a preset rule and the position of each furniture in the plane structure plan.
In this embodiment, after the position and name of the furniture contained in each single room of the house floor plan are determined, the function of each single room can be determined based on the furniture names. The preset rules can be set according to actual conditions; for example, a single room containing a range hood is a kitchen, a single room containing a bed is a bedroom, a single room containing a toilet is a bathroom, a single room containing a sofa and a television is a living room, and so on.
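Such preset rules can be expressed, for instance, as a small rule table; the furniture names and rules below merely mirror the examples above and are not an exhaustive list from the patent:

```python
# Hypothetical rule table following the examples in the text: each entry maps
# a required set of furniture names to a room function.
FURNITURE_RULES = [
    ({"range hood"}, "kitchen"),
    ({"bed"}, "bedroom"),
    ({"toilet"}, "bathroom"),
    ({"sofa", "television"}, "living room"),
]

def room_function(furniture_names):
    """Return the label of the first rule whose required furniture is all present."""
    present = set(furniture_names)
    for required, label in FURNITURE_RULES:
        if required <= present:          # subset test: all required items found
            return label
    return "unknown"
```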
Fig. 5 is a flowchart illustrating an article position identification method according to another exemplary embodiment of the present disclosure. As shown in fig. 5, the method comprises the following steps:
step 502, detecting an article in the panoramic image collected at the current position in the set scene, and determining the position information of at least one article in the set scene in the panoramic image.
And 503, recovering a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image.
In this embodiment, on the premise that the external parameters of the depth camera for shooting the panoramic depth map are known, the depth coordinate of each pixel in the panoramic map can be determined, and the three-dimensional coordinate of each pixel can be recovered and obtained by combining the two-dimensional coordinates in the panoramic map, so as to obtain the panoramic point cloud.
And step 504, performing normal vector estimation on the panoramic point cloud, and determining a normal vector of each point in the panoramic point cloud.
In this embodiment, the panoramic point cloud includes a plurality of points, and a normal vector can be determined for each point under the condition that the three-dimensional coordinates of each point are known.
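One common way to estimate per-point normals — not necessarily the estimator used in the patent — is a PCA of each point's local neighbourhood: the direction of least variance among the k nearest neighbours is taken as the normal. A brute-force numpy sketch:

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point unit normal via PCA of the k nearest neighbours (brute force).

    The right-singular vector of smallest singular value of the centred
    neighbourhood is the direction of least variance, i.e. the normal.
    A k-d tree would replace the O(n^2) neighbour search in practice.
    """
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]            # k nearest (includes the point itself)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]                       # least-variance direction
    return normals
```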
And 505, performing plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space.
Step 506, determining at least one point cloud corresponding to the at least one article in the three-dimensional space based on the plurality of plane point clouds in the three-dimensional space.
Step 507, mapping the point cloud corresponding to each article to a plane structure diagram corresponding to a set scene, and determining the position of each article in the plane structure diagram.
The region growing algorithm is an existing image segmentation technique; in this embodiment it is used to segment the panoramic point cloud into at least one planar point cloud. For example, as shown in fig. 2f, a segmented plane map is obtained by performing segmentation based on the panoramic depth map shown in fig. 2b. The area corresponding to each mask in the panoramic image is then matched against the plane point clouds to determine the point cloud corresponding to each article; for example, fig. 2g shows a schematic diagram of the plurality of plane point clouds obtained by recovering the point cloud according to the plane segmentation shown in fig. 2f.
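A much-simplified region-growing sketch over normals, for illustration only (a production version would use a k-d tree, curvature-ordered seeds and smoothness tests rather than these fixed thresholds):

```python
import numpy as np
from collections import deque

def region_grow(points, normals, radius=1.1, angle_thresh=0.9):
    """Greedy region growing: flood-fill over points whose neighbours within
    `radius` have normals agreeing up to `angle_thresh` (absolute cosine).

    Returns an integer label per point; each label is one planar region.
    """
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in np.nonzero((dist[i] < radius) & (labels == -1))[0]:
                if abs(normals[i] @ normals[j]) > angle_thresh:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels
```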
As shown in fig. 6, based on the embodiment shown in fig. 5, step 506 may include the following steps:
step 5061, determining at least one corresponding plane point cloud for each article in the point cloud corresponding to the panoramic depth map based on the position of each article in the panoramic depth map, and obtaining a plurality of plane point clouds.
At step 5062, a corresponding item plane point cloud is determined for each item based on the plurality of plane point clouds.
In this embodiment, the set scene is a house and the plan structure diagram is a house floor plan. Optionally, the position of the mask determined in the panoramic image for each article is aligned with the at least one planar point cloud; a mask may correspond to multiple planar point clouds, in which case at least one planar point cloud with a larger area among them is taken as the planar point cloud corresponding to the article.
On the basis of the embodiment shown in fig. 5, after step 505, the method may further include:
and determining a plane equation and a plane normal vector corresponding to each plane point cloud based on the coordinates of a plurality of points included in each plane point cloud.
At this time, step 5062 may include:
and performing fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one article plane point cloud.
Wherein each item plane point cloud corresponds to an item.
In this embodiment, each plane point cloud is a plane in three-dimensional space, so each point includes coordinate values on three coordinate axes. Optionally, the plane equation of a plane A may be expressed as: a1·x + b1·y + c1·z + d1 = 0, where a1, b1, c1 and d1 are constants. Since an article (such as a door, a window, and the like) may be captured from a plurality of shooting points, and because of partial data errors, one article may correspond to a plurality of plane point clouds. In this embodiment, the plurality of point clouds corresponding to the same article are fused by using the normal vectors of the plane point clouds and the distances between them, so that a unique article plane point cloud is determined for the article and the accuracy of determining the position of the article is improved.
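The plane equation — i.e. the unit normal (a, b, c) and the offset d — can be fitted to a plane point cloud by least squares; a standard SVD-based sketch:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z + d = 0 through a point cloud.

    The unit normal is the right-singular vector of least variance of the
    centred points; d follows from the plane passing through the centroid.
    Returns (normal, d).
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d
```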
As shown in fig. 7, based on the embodiment shown in fig. 6, step 5062 may include the following steps:
Step 701, determining whether at least one group of parallel planes exists among the plurality of plane point clouds based on the plane normal vector corresponding to each plane point cloud; if so, go to step 702; otherwise, take each plane point cloud as an article plane point cloud.
Each group of parallel planes comprises at least two plane point clouds which are parallel to each other;
step 702, determining the distance between at least two plane point clouds in each parallel plane group according to the plane equation corresponding to each plane point cloud in each parallel plane group.
Step 703, judging whether the distance between the at least two plane point clouds in the parallel plane group is smaller than a preset threshold value; if so, executing step 704; otherwise, taking each plane point cloud as an article plane point cloud.
Step 704, at least two plane point clouds in the parallel plane group are fused into one item plane point cloud.
In this embodiment, whether plane point clouds need to be fused is determined according to two criteria. First, whether the at least two plane point clouds are parallel or approximately parallel is checked (this may be determined based on the plane equations, i.e. the normal vectors, corresponding to the plane point clouds). When parallelism is satisfied, the distance between the two plane point clouds is determined (for example, based on the distance between the midpoints of the two plane point clouds), and only when the distance is smaller than a preset threshold (which may be set according to specific situations or experience) are the at least two plane point clouds fused; the fusion operation may be implemented as described in the following embodiments.
Optionally, step 704 in the above embodiment may include: determining a normal vector of the article plane point cloud based on normal vectors of at least two plane point clouds;
determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points;
and determining the article plane point cloud based on the midpoints of the at least two plane point clouds and the normal vector of the article plane point cloud.
For example, suppose the two planar point clouds to be fused lie on plane A and plane B. The plane equation of plane A is a1·x + b1·y + c1·z + d1 = 0, and the plane equation of plane B is a2·x + b2·y + c2·z + d2 = 0; the normal vectors of A and B are then nA = (a1, b1, c1) and nB = (a2, b2, c2), respectively.

Assuming the point cloud of plane A contains m points and that of plane B contains n points, the normal vector of the fused plane C is the point-count-weighted average

nC = (m·nA + n·nB) / (m + n)

and the midpoint of the two point clouds of plane A and plane B is the corresponding weighted average of their centers of gravity pA and pB:

pC = (m·pA + n·pB) / (m + n)
Determining a plane by combining the normal vector and any point (the middle point in the embodiment), namely obtaining a fused article plane point cloud; in an alternative example, as shown in fig. 2h, a schematic diagram of a planar point cloud of an article obtained by fusing a plurality of planar point clouds shown in fig. 2g is shown.
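The two fusion criteria and the weighted fusion above can be combined in one sketch; the threshold values are illustrative, and consistently oriented unit normals are an assumption of this sketch:

```python
import numpy as np

def maybe_fuse(pts_a, pts_b, n_a, n_b, cos_thresh=0.99, dist_thresh=0.2):
    """Fuse two plane point clouds when they are near-parallel and close.

    Returns (normal, point) of the fused plane C, or None if no fusion
    happens: n_C = (m*n_A + n*n_B)/(m+n), and the fused point is the
    point-count-weighted mean of the two centres of gravity.
    """
    if n_a @ n_b < cos_thresh:                   # criterion 1: (near-)parallel
        return None
    m, n = len(pts_a), len(pts_b)
    c_a, c_b = pts_a.mean(axis=0), pts_b.mean(axis=0)
    if abs(n_a @ (c_b - c_a)) > dist_thresh:     # criterion 2: plane-to-plane distance
        return None
    n_c = (m * n_a + n * n_b) / (m + n)          # weighted normal
    n_c /= np.linalg.norm(n_c)
    p_c = (m * c_a + n * c_b) / (m + n)          # weighted midpoint
    return n_c, p_c
```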
In some alternative embodiments, the article comprises a window;
step 106 may include:
mapping at least one item plane point cloud into a house layout to obtain at least one item line segment;
and fusing the object line segments mapped in the house floor plan with the wall surfaces in the house floor plan to determine the position of the window in the house floor plan.
In this embodiment, after the article plane point cloud corresponding to each article has been determined, the article plane point clouds are mapped into the house floor plan to obtain at least one article line segment; for example, as shown in fig. 2i, mapping the article plane point clouds shown in fig. 2h into the house floor plan shown in fig. 2c yields an article projection diagram. The article line segments generally do not coincide exactly with the wall surfaces in the house floor plan; however, in a house structure a window is always on a wall, so the article line segment corresponding to a window can be attached directly to the closest wall surface (for example, the x-axis or y-axis coordinate of the line segment on the wall is determined according to the x-axis or y-axis coordinate of the line segment corresponding to the window).
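A minimal sketch of attaching a window's projected line segment to the nearest wall, simplified to axis-aligned wall lines (a real floor plan would store finite wall segments rather than infinite lines):

```python
def snap_to_wall(seg, walls):
    """Attach a window's projected segment to the nearest wall line.

    `seg` = ((x1, y1), (x2, y2)); each wall is ('x', c) or ('y', c), meaning
    the line x = c or y = c. The wall closest to the segment's midpoint wins,
    and the segment's corresponding coordinate is pinned to the wall.
    """
    (x1, y1), (x2, y2) = seg
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    axis, c = min(walls, key=lambda w: abs((cx if w[0] == 'x' else cy) - w[1]))
    if axis == 'x':                       # vertical wall: pin the x coordinate
        return ((c, y1), (c, y2))
    return ((x1, c), (x2, c))             # horizontal wall: pin the y coordinate
```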
In further alternative embodiments, the article comprises a door and/or a doorway opening;
step 106 may include:
determining a cross section point set in the point cloud recovered from the panoramic depth map based on a set longitudinal coordinate value; the longitudinal coordinate of each point in the cross section point set is equal to a set longitudinal coordinate value;
and responding to the shape of the cross-section point set corresponding to at least two singles in the house floor type graph, and determining the positions of the gate and/or the bealock connecting the at least two singles according to the position of the singles where the current position is located and the shape of the cross-section point set in the at least two singles.
Because the doors in the house are all in an open state when the panoramic image and the panoramic depth map are collected, each door has two possible opening directions. As shown in fig. 2j, the door inside the circle has possible opening positions indicated by the two short lines at the two ends of the projected line segment, so the actual opening position needs to be determined first. To determine the opening position, optionally, a ring of points at a set longitudinal coordinate value (for example, on the transverse center line) of the panoramic depth map at each current position is taken and projected onto a two-dimensional plane; for example, as shown in fig. 2k, a ring of point cloud on the transverse center line of the panoramic depth map is projected onto the house floor plan to obtain a ring of points. Optionally, multiple rings of points may be obtained based on multiple set longitudinal coordinate values, each ring projected into the house floor plan, and the position of the door determined by synthesizing the rings (e.g., averaging them). After a ring of points is obtained, it is known from the current position (indicated by the larger dot in the example shown in fig. 2k) that the current position lies within the middle single room, while the projected point set spans three single rooms; this means that the panoramic depth map shot at the current position can see the two adjacent single rooms through a door or doorway opening, so the parts connecting the single rooms can be determined as the positions of the doors or doorway openings.
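Extracting the ring of points at a set height can be sketched as a simple slice of the panoramic point cloud; treating axis 1 as the vertical axis and the tolerance value are assumptions of this sketch:

```python
import numpy as np

def cross_section(points, y_value, tol=0.02):
    """2-D 'ring' of a panoramic point cloud at a set height.

    Keeps points whose vertical coordinate (assumed to be axis 1) lies within
    `tol` of `y_value` and projects them onto the floor plan as (x, z) pairs.
    """
    near = np.abs(points[:, 1] - y_value) < tol
    return points[near][:, [0, 2]]
```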
Any of the article location identification methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to: terminal equipment, a server and the like. Alternatively, any of the article location identification methods provided by the embodiments of the present disclosure may be executed by a processor, for example, the processor may execute any of the article location identification methods mentioned by the embodiments of the present disclosure by calling a corresponding instruction stored in a memory. And will not be described in detail below.
Exemplary devices
Fig. 8 is a schematic structural diagram of an article position identification device according to an exemplary embodiment of the present disclosure. The device provided by the embodiment comprises:
and the article detection module 81 is configured to perform article detection on the panoramic image acquired at the current position in the set scene, and determine a position of at least one article in the set scene in the panoramic image.
And the depth recovery module 82 is used for determining at least one piece of point cloud corresponding to at least one article in the three-dimensional space based on the position of each article in the panoramic image and the panoramic depth image corresponding to the panoramic image.
And the position determining module 83 is configured to map the point cloud corresponding to each article into a planar structure diagram corresponding to the set scene, and determine a position of each article in the planar structure diagram.
The article position identification device provided by the above embodiment of the present disclosure performs article detection on a panoramic image acquired at a current position in a set scene, and determines position information of at least one article in the set scene in the panoramic image; determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panoramic image and the panoramic depth image corresponding to the panoramic image; mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene, and determining the position of each article in the plane structure diagram; in the embodiment, the position of the article in the plane structure diagram is determined by combining the panoramic image with the point cloud recovered by the panoramic depth image, so that the position of the article in the plane structure diagram is automatically determined by using the panoramic image and the panoramic depth image, and the efficiency and the accuracy of determining the position of the article are improved.
In some optional embodiments, the article detection module 81 is specifically configured to perform article detection on the panoramic image by using an article detection model, and determine a mask and an article name of each article in at least one article in the panoramic image.
The article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
Optionally, pixels in the panoramic image correspond to pixels in the panoramic depth image one to one;
a depth recovery module 82, configured to determine a corresponding article area in the panoramic depth map based on a mask of each article, to obtain at least one article area; and recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one piece of point cloud.
Optionally, the position determining module 83 is specifically configured to determine a barycenter of the article in the three-dimensional space according to coordinates of a piece of point cloud corresponding to each article; the center of gravity of each article in the three-dimensional space is projected into the plane structure diagram, and the position of each article in the plane structure diagram is determined.
Optionally, the plan structure diagram is a house type diagram, and the article is furniture; wherein the house floor plan comprises at least one cell;
the apparatus provided in this embodiment further includes:
and the function determining module is used for determining the function of each room in the house floor plan according to a preset rule and the position of each piece of furniture in the plane structure diagram.
In some optional embodiments, the apparatus further comprises:
the point cloud recovery module is used for recovering a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image;
the normal vector estimation module is used for performing normal vector estimation on the panoramic point cloud and determining a normal vector of each point in the panoramic point cloud;
and the plane segmentation module is used for performing plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space.
Optionally, setting a scene as a house, and setting a plane structure diagram as a house floor plan;
a depth recovery module 82, comprising:
the planar point cloud unit is used for determining at least one corresponding planar point cloud for each article in the point cloud corresponding to the panoramic depth map based on the position of each article in the panoramic depth map to obtain a plurality of planar point clouds;
and the article point cloud unit is used for determining a corresponding article plane point cloud for each article based on the plurality of plane point clouds.
Optionally, the plane segmentation module is further configured to determine a plane equation and a plane normal vector corresponding to each plane point cloud based on coordinates of a plurality of points included in each plane point cloud;
the article point cloud unit is specifically used for performing fusion operation on the plurality of plane point clouds to obtain at least one article plane point cloud based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds; wherein each item plane point cloud corresponds to one of the items.
Optionally, the article point cloud unit is specifically configured to determine whether at least one parallel plane group exists in the plurality of plane point clouds based on a plane normal vector corresponding to each plane point cloud; each group of parallel planes comprises at least two plane point clouds which are parallel to each other; in response to the existence of at least one set of parallel plane sets, determining a distance between at least two plane point clouds in the set of parallel planes according to a plane equation corresponding to each plane point cloud in each set of parallel plane sets; and in response to the distance between at least two plane point clouds in the parallel plane group being smaller than a preset threshold, at least two plane point clouds in the parallel plane group are fused into one item plane point cloud.
Optionally, the article point cloud unit is configured to determine a normal vector of the article plane point cloud based on a normal vector of at least two plane point clouds when at least two plane point clouds in the parallel plane group are fused into one article plane point cloud; determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points; and determining the article plane point cloud based on the midpoints of the at least two plane point clouds and the normal vector of the article plane point cloud.
In some alternative embodiments, the article comprises a window;
the position determining module 83 is specifically configured to map at least one item plane point cloud into a house layout to obtain at least one item line segment; and fusing the object line segments mapped in the house floor plan with the wall surfaces in the house floor plan to determine the position of the window in the house floor plan.
In further alternative embodiments, the article comprises a door and/or a doorway opening;
the position determining module 83 is specifically configured to determine a cross-section point set in the point cloud restored from the panoramic depth map based on a set longitudinal coordinate value; the longitudinal coordinate of each point in the cross section point set is equal to a set longitudinal coordinate value; and responding to the shape of the cross-section point set corresponding to at least two singles in the house floor type graph, and determining the positions of the gate and/or the bealock connecting the at least two singles according to the position of the singles where the current position is located and the shape of the cross-section point set in the at least two singles.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 9. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device separate from them that may communicate with the first device and the second device to receive the collected input signals therefrom.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 9, the electronic device 90 includes one or more processors 91 and memory 92.
The processor 91 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 90 to perform desired functions.
Memory 92 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 91 to implement the article location identification methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 90 may further include: an input device 93 and an output device 94, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 93 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 93 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
The input device 93 may also include, for example, a keyboard, a mouse, and the like.
The output device 94 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 94 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 90 relevant to the present disclosure are shown in fig. 9, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 90 may include any other suitable components, depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of item location identification according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations for embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of item location identification according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (22)

1. An article position identification method, comprising:
carrying out article detection on a panoramic image acquired at the current position in a set scene, and determining the position of at least one article in the set scene in the panoramic image;
restoring a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image;
performing normal vector estimation on the panoramic point cloud, and determining a normal vector of each point in the panoramic point cloud;
carrying out plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space;
determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panoramic image and the panoramic depth map corresponding to the panoramic image; wherein the set scene is a house and the plan structure diagram is a house layout; the determining comprises: determining at least one corresponding plane point cloud for each article in the point cloud corresponding to the panoramic depth map, based on the position of each article in the panoramic image, to obtain a plurality of plane point clouds; and determining a corresponding item plane point cloud for each of the articles based on the plurality of plane point clouds;
and mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene, and determining the position of each article in the plane structure diagram.
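The first restoration step of claim 1, recovering the panoramic point cloud from the panoramic depth map, can be sketched as follows. This is an illustrative sketch only: the patent does not fix a projection model, so the equirectangular panorama, the camera at the origin, and the y-up axis convention are all assumptions of this example.

```python
import numpy as np

def panorama_to_point_cloud(depth):
    """Back-project an equirectangular panoramic depth map into a 3D
    point cloud, one point per pixel. Depth is assumed to be the ray
    length from a camera at the origin; these conventions are this
    sketch's, not the patent's."""
    h, w = depth.shape
    # longitude in [-pi, pi), latitude in [-pi/2, pi/2], pixel centres
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)          # y is the vertical (ordinate) axis
    z = depth * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each depth pixel becomes one 3D point along its viewing ray, so a constant depth map maps onto a sphere of that radius.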
2. The method of claim 1, wherein performing article detection on the panorama acquired at the current position in the set scene and determining the position of the at least one article in the set scene in the panorama comprises:
carrying out article detection on the panoramic image by using an article detection model, and determining a mask and an article name of each article in the at least one article in the panoramic image; the article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
3. The method of claim 2, wherein the pixels in the panoramic image correspond one-to-one to the pixels in the panoramic depth image;
the determining at least one piece of point cloud corresponding to the at least one item in three-dimensional space based on the position of each item in the panoramic image and the panoramic depth map corresponding to the panoramic image comprises:
determining a corresponding article area in the panoramic depth map based on the mask of each article to obtain at least one article area;
and recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one point cloud.
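Because claim 3 stipulates a one-to-one pixel correspondence between the panorama and the panoramic depth map, recovering an article's point cloud from its mask reduces to boolean indexing of the per-pixel point cloud. A minimal sketch, assuming the points are stored row-major with shape (H*W, 3), a layout this example chooses rather than one the patent specifies:

```python
import numpy as np

def item_points_from_mask(panoramic_cloud, mask):
    """Select one article's 3D points from the per-pixel point cloud
    using its detection mask. `panoramic_cloud` holds one point per
    panorama pixel in row-major order; `mask` is a boolean (H, W)
    array from the article detection model."""
    return panoramic_cloud[np.asarray(mask, bool).ravel()]
```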
4. The method of claim 3, wherein the mapping the point cloud corresponding to each article to the plan structure map corresponding to the setting scene, and determining the position of each article in the plan structure map comprises:
determining the gravity center of each article in a three-dimensional space according to the coordinates of the point cloud corresponding to each article;
and projecting the gravity center of the object in the three-dimensional space into the plane structure diagram, and determining the position of each object in the plane structure diagram.
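Claim 4's two steps, taking the centre of gravity of an article's point cloud and projecting it into the plan, amount to a mean followed by dropping the vertical coordinate. A sketch under assumed conventions: y is up, and `scale`/`origin` for the metres-to-plan mapping are illustrative parameters not given in the patent.

```python
import numpy as np

def item_position_on_plan(item_cloud, scale=1.0, origin=(0.0, 0.0)):
    """Locate an article on the plan structure diagram: centroid of its
    point cloud, with the vertical (y) coordinate dropped and the
    horizontal coordinates mapped into plan coordinates."""
    cx, cy, cz = item_cloud.mean(axis=0)   # centre of gravity in 3D
    # project onto the horizontal plane and map into the plan
    return (origin[0] + scale * cx, origin[1] + scale * cz)
```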
5. The method of claim 4, wherein the plan structure diagram is a house layout and the articles are furniture; wherein the house layout comprises at least one room;
further comprising:
and determining the function of each room in the house floor plan according to a preset rule and the position of each piece of furniture in the floor plan.
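The "preset rule" of claim 5 that infers each room's function from the furniture mapped into it is left unspecified by the patent; a toy rule table might look like the following, where the furniture labels and the rules themselves are invented for illustration.

```python
def room_function(furniture_in_room):
    """Infer a room's function from the furniture located in it.
    The rule table below is purely illustrative; the patent only
    requires that some preset rule be applied."""
    rules = [
        ({"bed"}, "bedroom"),
        ({"sofa", "tv"}, "living room"),
        ({"stove"}, "kitchen"),
        ({"toilet"}, "bathroom"),
    ]
    found = set(furniture_in_room)
    for required, label in rules:
        if required <= found:   # all required furniture present
            return label
    return "unknown"
```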
6. The method according to any one of claims 1 to 5, wherein after performing plane segmentation on the panoramic point cloud according to the normal vector of each point by using a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space, the method further comprises:
determining a plane equation and a plane normal vector corresponding to each plane point cloud based on coordinates of a plurality of points included in each plane point cloud;
said determining a corresponding item plane point cloud for each of said items based on said plurality of plane point clouds comprises:
performing fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one article plane point cloud; wherein each item plane point cloud corresponds to one item.
7. The method of claim 6, wherein said fusing the plurality of plane point clouds to obtain at least one item plane point cloud based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds comprises:
determining whether at least one parallel plane group exists in the plurality of plane point clouds based on the plane normal vector corresponding to each plane point cloud; wherein each parallel plane group comprises at least two plane point clouds that are parallel to each other;
in response to the existence of at least one parallel plane group, determining a distance between the at least two plane point clouds in each parallel plane group according to the plane equation corresponding to each plane point cloud in the group;
and in response to the distance between the at least two plane point clouds in a parallel plane group being smaller than a preset threshold, fusing the at least two plane point clouds in the parallel plane group into one item plane point cloud.
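The two tests of claim 7 (near-parallel normals, then an inter-plane distance below a preset threshold) can be written directly against the plane equations of claim 6. In this sketch each plane is given as n·x + d = 0 with unit normal n; the 5° angle tolerance and 0.1 m distance threshold are illustrative values, not the patent's.

```python
import numpy as np

def should_fuse(n1, d1, n2, d2,
                angle_tol=np.cos(np.deg2rad(5)), dist_tol=0.1):
    """Decide whether two segmented planes belong to one item: their
    unit normals must be nearly parallel, and the distance between the
    parallel planes must be below the preset threshold."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    parallel = abs(np.dot(n1, n2)) >= angle_tol
    # for parallel unit normals, the inter-plane distance is |d1 - d2|
    # (or |d1 + d2| if the normals point in opposite directions)
    distance = min(abs(d1 - d2), abs(d1 + d2))
    return bool(parallel and distance < dist_tol)
```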
8. The method of claim 7, wherein said merging at least two plane point clouds of said set of parallel planes into one said item plane point cloud comprises:
determining a normal vector of the item plane point cloud based on normal vectors of the at least two plane point clouds;
determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points;
determining the item plane point cloud based on the midpoints of the at least two plane point clouds and a normal vector of the item plane point cloud.
9. The method of any of claims 1-5, wherein the article comprises a window;
the step of mapping the point cloud corresponding to each article to the plan structure diagram corresponding to the set scene and determining the position of each article in the plan structure diagram comprises the following steps:
mapping at least one item plane point cloud into the house layout to obtain at least one item line segment;
and fusing the item line segments mapped in the house floor plan with the wall surfaces in the house floor plan, and determining the position of the window in the house floor plan.
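Mapping a window's item plane point cloud to a line segment in the house layout (claim 9) can be sketched by dropping the vertical axis and taking the two extreme points along the footprint's principal direction. The SVD-based direction estimate and the y-up convention are assumptions of this example.

```python
import numpy as np

def plane_cloud_to_segment(cloud):
    """Map a roughly vertical item plane point cloud (e.g. a window)
    to a 2D line segment: drop the vertical (y) axis, then take the
    extreme points along the footprint's dominant direction."""
    pts2d = cloud[:, [0, 2]]                 # horizontal coordinates
    centred = pts2d - pts2d.mean(axis=0)
    # principal direction of the footprint from the SVD of the points
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    t = centred @ vt[0]                      # position along that direction
    return pts2d[np.argmin(t)], pts2d[np.argmax(t)]
```

The resulting segment can then be snapped onto the nearest wall of the layout, which is the fusion step the claim describes.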
10. The method according to any one of claims 1 to 5, wherein the article comprises a door and/or a doorway opening;
the step of mapping the point cloud corresponding to each article to the plan structure diagram corresponding to the set scene and determining the position of each article in the plan structure diagram comprises:
determining a cross-section point set in the point cloud recovered from the panoramic depth map based on a set ordinate value; wherein the ordinate of each point in the cross-section point set is equal to the set ordinate value;
and in response to the shape of the cross-section point set corresponding to at least two rooms in the house layout, determining the position of the door and/or the doorway opening connecting the at least two rooms according to the position of the room containing the current position and the shape of the cross-section point set within the at least two rooms.
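The cross-section point set of claim 10, the points whose ordinate equals a set value, needs a tolerance in practice, since depth-derived coordinates rarely match a value exactly. A sketch with an illustrative 2 cm tolerance and a y-up convention, both assumptions of this example:

```python
import numpy as np

def cross_section(cloud, y_value, tol=0.02):
    """Return all points of the cloud whose vertical (ordinate)
    coordinate equals `y_value` within `tol`. The shape of this set,
    compared across rooms of the layout, locates doors and doorway
    openings."""
    mask = np.abs(cloud[:, 1] - y_value) <= tol
    return cloud[mask]
```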
11. An article position identification device, comprising:
an article detection module, configured to perform article detection on a panoramic image acquired at the current position in a set scene and to determine the position of at least one article in the set scene in the panoramic image;
the point cloud recovery module is used for recovering a panoramic point cloud corresponding to the panoramic image based on the panoramic depth image;
the normal vector estimation module is used for performing normal vector estimation on the panoramic point cloud and determining a normal vector of each point in the panoramic point cloud;
the plane segmentation module is used for carrying out plane segmentation on the panoramic point cloud according to the normal vector of each point by utilizing a region growing algorithm to obtain a plurality of plane point clouds in a three-dimensional space;
the depth recovery module is used for determining at least one point cloud corresponding to the at least one article in three-dimensional space based on the position of each article in the panoramic image and the panoramic depth map corresponding to the panoramic image; wherein the set scene is a house and the plan structure diagram is a house layout; the depth recovery module comprises: a plane point cloud unit, configured to determine at least one corresponding plane point cloud for each article in the point cloud corresponding to the panoramic depth map, based on the position of each article in the panoramic image, to obtain a plurality of plane point clouds; and an item point cloud unit, configured to determine a corresponding item plane point cloud for each article based on the plurality of plane point clouds;
and the position determining module is used for mapping the point cloud corresponding to each article to a plane structure diagram corresponding to the set scene and determining the position of each article in the plane structure diagram.
12. The apparatus according to claim 11, wherein the article detection module is specifically configured to perform article detection on the panoramic image by using an article detection model, and determine a mask and an article name of each article in the at least one article in the panoramic image; the article detection model is obtained through training of a plurality of training panoramas with known masks and article names.
13. The apparatus of claim 12, wherein the pixels in the panoramic image correspond one-to-one to the pixels in the panoramic depth image;
the depth recovery module is specifically configured to determine a corresponding article area in the panoramic depth map based on a mask of each article, so as to obtain at least one article area; and recovering three-dimensional information of at least one article area based on the depth information of each pixel in the panoramic depth map to obtain at least one point cloud.
14. The apparatus according to claim 13, wherein the position determining module is specifically configured to determine the centre of gravity of each item in three-dimensional space according to the coordinates of the point cloud corresponding to that item; and to project the centre of gravity of each item in three-dimensional space into the plan structure diagram, determining the position of each item in the plan structure diagram.
15. The apparatus of claim 14, wherein the plan structure diagram is a house layout and the articles are furniture; wherein the house layout comprises at least one room;
the device further comprises:
and the function determining module is used for determining the function of each room in the house floor plan according to a preset rule and the position of each piece of furniture in the floor plan.
16. The apparatus according to any one of claims 11-15, wherein the plane segmentation module is further configured to determine a plane equation and a plane normal vector corresponding to each of the plane point clouds based on coordinates of a plurality of points included in each of the plane point clouds;
the article point cloud unit is specifically configured to perform fusion operation on the plurality of plane point clouds based on a plurality of plane equations and a plurality of plane normal vectors corresponding to the plurality of plane point clouds to obtain at least one article plane point cloud; wherein each item plane point cloud corresponds to one item.
17. The apparatus of claim 16, wherein the item point cloud unit is configured to determine whether at least one parallel plane group exists in the plurality of plane point clouds based on the plane normal vector corresponding to each plane point cloud, wherein each parallel plane group comprises at least two plane point clouds that are parallel to each other; in response to the existence of at least one parallel plane group, determine a distance between the at least two plane point clouds in each parallel plane group according to the plane equation corresponding to each plane point cloud in the group; and, in response to the distance between the at least two plane point clouds in a parallel plane group being smaller than a preset threshold, fuse the at least two plane point clouds in the parallel plane group into one item plane point cloud.
18. The apparatus according to claim 17, wherein the item point cloud unit, when fusing at least two plane point clouds of the set of parallel planes into one item plane point cloud, is configured to determine a normal vector of the item plane point cloud based on a normal vector of the at least two plane point clouds; determining the middle points of the at least two plane point clouds based on the number of points included in each plane point cloud of the at least two plane point clouds and the coordinates corresponding to the included points; determining the item plane point cloud based on the midpoints of the at least two plane point clouds and a normal vector of the item plane point cloud.
19. The apparatus of any of claims 11-15, wherein the article comprises a window;
the position determining module is specifically configured to map the at least one item plane point cloud into the house layout to obtain at least one item segment; and fusing the item line segments mapped in the house floor plan with the wall surfaces in the house floor plan, and determining the position of the window in the house floor plan.
20. The apparatus according to any of claims 11-15, wherein the item comprises a door and/or a doorway opening;
the position determining module is specifically configured to determine a cross-section point set in the point cloud restored from the panoramic depth map based on a set ordinate value, wherein the ordinate of each point in the cross-section point set is equal to the set ordinate value; and, in response to the shape of the cross-section point set corresponding to at least two rooms in the house layout, to determine the position of the door and/or the doorway opening connecting the at least two rooms according to the position of the room containing the current position and the shape of the cross-section point set within the at least two rooms.
21. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the item location identification method according to any one of claims 1 to 10.
22. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the article location identification method according to any one of claims 1 to 10.
CN202010920699.9A 2020-09-04 2020-09-04 Article position identification method and device, storage medium and electronic equipment Active CN112037279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010920699.9A CN112037279B (en) 2020-09-04 2020-09-04 Article position identification method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010920699.9A CN112037279B (en) 2020-09-04 2020-09-04 Article position identification method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112037279A CN112037279A (en) 2020-12-04
CN112037279B true CN112037279B (en) 2021-11-16

Family

ID=73591463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010920699.9A Active CN112037279B (en) 2020-09-04 2020-09-04 Article position identification method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112037279B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200916B (en) * 2020-12-08 2021-03-19 深圳市房多多网络科技有限公司 Method and device for generating house type graph, computing equipment and storage medium
CN112950759B (en) * 2021-01-28 2022-12-06 贝壳找房(北京)科技有限公司 Three-dimensional house model construction method and device based on house panoramic image
CN113269877B (en) * 2021-05-25 2023-02-21 三星电子(中国)研发中心 Method and electronic equipment for acquiring room layout plan

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971403A (en) * 2017-04-27 2017-07-21 武汉数文科技有限公司 Point cloud image processing method and device
CN110148173A (en) * 2019-05-21 2019-08-20 北京百度网讯科技有限公司 The method and apparatus of target positioning, electronic equipment, computer-readable medium
CN110148206A (en) * 2018-08-30 2019-08-20 杭州维聚科技有限公司 Fusion method for multiple spaces
CN111091594A (en) * 2019-10-17 2020-05-01 贝壳技术有限公司 Multi-point cloud plane fusion method and device
CN111325779A (en) * 2020-02-07 2020-06-23 贝壳技术有限公司 Point cloud registration method and device, electronic equipment and storage medium
CN111369424A (en) * 2020-02-10 2020-07-03 北京城市网邻信息技术有限公司 Method, device, equipment and storage medium for generating three-dimensional space of target house

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110222757A1 (en) * 2010-03-10 2011-09-15 Gbo 3D Technology Pte. Ltd. Systems and methods for 2D image and spatial data capture for 3D stereo imaging
CN109035422B (en) * 2018-06-06 2019-06-07 贝壳找房(北京)科技有限公司 Method and system for extracting planar regions from an indoor three-dimensional model
CN111145352A (en) * 2019-12-20 2020-05-12 北京乐新创展科技有限公司 House live-action picture display method and device, terminal equipment and storage medium
CN111369664A (en) * 2020-02-10 2020-07-03 北京城市网邻信息技术有限公司 Method, device, equipment and storage medium for displaying house type scene

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106971403A (en) * 2017-04-27 2017-07-21 武汉数文科技有限公司 Point cloud image processing method and device
CN110148206A (en) * 2018-08-30 2019-08-20 杭州维聚科技有限公司 Fusion method for multiple spaces
CN110148173A (en) * 2019-05-21 2019-08-20 北京百度网讯科技有限公司 The method and apparatus of target positioning, electronic equipment, computer-readable medium
CN111091594A (en) * 2019-10-17 2020-05-01 贝壳技术有限公司 Multi-point cloud plane fusion method and device
CN111325779A (en) * 2020-02-07 2020-06-23 贝壳技术有限公司 Point cloud registration method and device, electronic equipment and storage medium
CN111369424A (en) * 2020-02-10 2020-07-03 北京城市网邻信息技术有限公司 Method, device, equipment and storage medium for generating three-dimensional space of target house

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ImVoteNet: Boosting 3D Object Detection in Point Clouds with Image Votes; Charles R. Qi et al.; Conference on Computer Vision and Pattern Recognition; 20200805; pp. 4404-4413 *
Learning to Segment 3D Point Clouds in 2D Image Space; Yecheng Lyu et al.; 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 20200805; pp. 12252-12261 *
Detection and geolocation of small traffic signs based on imagery and laser data; Liu Lirong et al.; Chinese Journal of Lasers; 20200514; abstract, sections 1-4 *
Depth-map-based target segmentation method for 3D lidar point clouds; Fan Xiaohui et al.; Chinese Journal of Lasers; 20190731; Vol. 46, No. 7; pp. 1-8 *

Also Published As

Publication number Publication date
CN112037279A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN112037279B (en) Article position identification method and device, storage medium and electronic equipment
CN108229355B (en) Behavior recognition method and apparatus, electronic device, computer storage medium
US20210209797A1 (en) Point-based object localization from images
CN108229322B (en) Video-based face recognition method and device, electronic equipment and storage medium
US7831087B2 (en) Method for visual-based recognition of an object
WO2022095543A1 (en) Image frame stitching method and apparatus, readable storage medium, and electronic device
CN111612842B (en) Method and device for generating pose estimation model
CN111428805B (en) Method for detecting salient object, model, storage medium and electronic device
JP6590609B2 (en) Image analysis apparatus and image analysis method
CN107122743B (en) Security monitoring method and device and electronic equipment
US9865061B2 (en) Constructing a 3D structure
WO2021084972A1 (en) Object tracking device and object tracking method
WO2022237026A1 (en) Plane information detection method and system
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN112507848B (en) Mobile terminal real-time human face attitude estimation method
WO2020137193A1 (en) Human detection device and human detection method
WO2022199360A1 (en) Moving object positioning method and apparatus, electronic device, and storage medium
CN114757301A (en) Vehicle-mounted visual perception method and device, readable storage medium and electronic equipment
WO2023231435A1 (en) Visual perception method and apparatus, and storage medium and electronic device
US10600202B2 (en) Information processing device and method, and program
Jung et al. Object Detection and Tracking‐Based Camera Calibration for Normalized Human Height Estimation
CN112270246A (en) Video behavior identification method and device, storage medium and electronic equipment
WO2022262273A1 (en) Optical center alignment test method and apparatus, and storage medium and electronic device
JP6163732B2 (en) Image processing apparatus, program, and method
CN113379838B (en) Method for generating roaming path of virtual reality scene and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210412

Address after: 100085 Floor 101 102-1, No. 35 Building, No. 2 Hospital, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: Unit 05, room 112, 1st floor, office building, Nangang Industrial Zone, economic and Technological Development Zone, Binhai New Area, Tianjin 300457

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant