WO2021139549A1 - Plane detection method and apparatus, and plane tracking method and apparatus

Plane detection method and apparatus, and plane tracking method and apparatus

Info

Publication number
WO2021139549A1
Authority
WO
WIPO (PCT)
Prior art keywords
plane
image
effective
current frame
frame image
Application number
PCT/CN2020/139837
Other languages
English (en)
Chinese (zh)
Inventor
王宝林
曹世明
周锋宜
吴涛
Original Assignee
青岛小鸟看看科技有限公司
Application filed by 青岛小鸟看看科技有限公司
Publication of WO2021139549A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection

Definitions

  • the present disclosure relates to the field of image processing technology, and more specifically, the present disclosure relates to a plane detection method and device, and a plane tracking method and device.
  • Augmented reality (AR) on mobile devices has become a reality and is favored by major mobile device manufacturers and users. In AR applications, plane detection is an important function.
  • An object of the present disclosure is to provide a new technical solution for plane detection and tracking.
  • According to a first aspect of the present disclosure, a plane detection method is provided, including: dividing a current frame image of an environment scene collected by a binocular camera, and detecting feature points corresponding to each sub-image block obtained after the division; obtaining a sparse point data set of the current frame image according to three-dimensional coordinate information of the feature points in a world coordinate system, the sparse point data set including the three-dimensional coordinate information of a plurality of the feature points; and performing plane detection on the current frame image based on the sparse point data set to obtain an optimal effective plane of the current frame image, where the optimal effective plane is the plane containing the most feature points.
  • Optionally, the current frame image includes a first target image and a second target image, and the feature points include a first feature point and a second feature point. Dividing the current frame image of the environment scene collected by the binocular camera and detecting the feature points corresponding to each sub-image block obtained after the division includes: dividing the first target image to obtain a plurality of first sub-image blocks; detecting the first feature point corresponding to each first sub-image block; performing feature point matching in the second target image by an epipolar matching method to obtain the second feature point matching the first feature point; and calculating the three-dimensional coordinate information of the first feature point using the first feature point and the second feature point.
  • Optionally, before dividing the first target image to obtain the plurality of first sub-image blocks, the method further includes: taking the center of the first target image as the origin, setting the pixel state of points whose Euclidean distance from the origin is greater than a first set pixel to a second state, and setting the pixel state of the remaining points in the first target image to a first state.
  • Optionally, detecting the first feature point corresponding to each first sub-image block includes: performing feature point detection on each first sub-image block to determine a target point in the first sub-image block; if the pixel state of the target point in the first sub-image block is the first state, determining the target point in the first sub-image block as the first feature point; and if the pixel state of the target point is the second state, performing feature point detection on the next first sub-image block, until each first sub-image block is traversed.
  • Optionally, the method further includes: taking the first feature point obtained from a first sub-image block as the origin, and setting the pixel state of points whose Euclidean distance from it is less than a second set pixel to the second state.
  • Optionally, performing plane detection on the current frame image based on the sparse point data set to obtain the optimal effective plane of the current frame image includes: S1, determining a current plane according to three feature points randomly selected from the sparse point data set; S2, determining a current effective plane according to the normal vector of the current plane and the number of first interior points, where a first interior point is a feature point whose distance from the current plane is less than a first set distance; and S3, repeating S1-S2 until a preset number of iterations is satisfied, and taking the current effective plane obtained in the last iteration as the optimal effective plane of the current frame image.
  • Optionally, S3 further includes: after each iteration of S1-S2, updating the number of iterations according to the proportion of first interior points of the current effective plane, where the proportion of first interior points is the ratio of the number of first interior points to the total number of feature points in the sparse point data set.
  • Optionally, the method further includes: S4, arbitrarily selecting a feature point from the feature points contained in the optimal effective plane as a starting point; S5, using the starting point as a seed point, and determining the neighboring points of the starting point among the feature points contained in the optimal effective plane according to a set radius; S6, using the neighboring points as seed points, and determining their neighboring points from the feature points contained in the optimal effective plane, until an end point is determined, the end point being a feature point without neighboring points; S7, counting the number of seed points, and outputting an effective plane when the number of seed points is greater than a preset minimum number of clustering points; and S8, re-selecting a feature point from the remaining feature points of the optimal effective plane as the starting point and executing S5-S7, until all feature points contained in the optimal effective plane are traversed and multiple effective planes of the current frame image are output.
  • According to a second aspect of the present disclosure, a plane tracking method is provided, including: acquiring continuous multiple frames of images of the environment scene collected by the binocular camera; obtaining multiple effective planes of the current frame image according to the optimal effective plane of the current frame image, where the plane frame number of each effective plane is consistent with the frame number of the corresponding image; and fusing the multiple effective planes of the current frame image with multiple effective planes of the previous frame image to determine the tracking plane of the continuous multiple frames of images.
  • Optionally, fusing the multiple effective planes of the current frame image with the multiple effective planes of the previous frame image to determine the tracking plane of consecutive multiple frames of images includes: fusing a plane of the current frame image that satisfies a plane fusion condition with the corresponding plane of the previous frame image to obtain a first plane, and updating the plane frame number of the first plane to the frame number of the current frame image; taking a plane of the current frame image that does not satisfy the plane fusion condition as a second plane; and updating the effective planes of the previous frame image according to the first plane and the second plane to determine the tracking plane of the continuous multiple frames of images.
  • Optionally, the plane fusion condition includes: the height difference between the effective plane of the current frame image and any effective plane of the previous frame image is less than a first set height; the angle between the normal vector of the effective plane of the current frame image and the normal vector of any effective plane of the previous frame image is less than a first set angle; and the ratio of the points falling in any effective plane of the previous frame image to all points in the effective plane of the current frame image is greater than a first set ratio.
  • Optionally, fusing a plane that meets the plane fusion condition in the current frame image with the corresponding plane of the previous frame image to obtain the first plane includes: calculating the distances from all points in the plane that meets the plane fusion condition to any point in the corresponding plane of the previous frame image; and adding the points whose distance to any point in the corresponding plane of the previous frame image is greater than a second set distance to the corresponding plane to obtain the first plane.
  • Optionally, before updating the effective planes of the previous frame image according to the first plane and the second plane to determine the tracking plane of the continuous multiple frames of images, the method further includes: eliminating invalid planes from the first plane and the second plane, where an invalid plane is a plane whose aspect ratio is less than a preset aspect ratio threshold.
  • Optionally, before updating the effective planes of the previous frame image according to the first plane and the second plane to determine the tracking plane of the continuous multiple frames of images, the method further includes: eliminating lost planes from the first plane and the second plane, where a lost plane is a plane for which the difference between the plane frame number and the frame number of the current frame image is greater than a set number of frames.
  • Optionally, the method further includes: if the first plane and any unfused effective plane in the current frame image meet the plane fusion condition, performing plane fusion on the first plane and the corresponding effective plane in the current frame image.
  • Optionally, the method further includes: if the number of points in the tracking plane exceeds a first set number, eliminating the center points of the tracking plane and retaining the edge points of the tracking plane.
  • a plane detection device comprising:
  • a memory configured to store computer instructions
  • a processor configured to call the computer instructions from the memory, and execute the method according to any one of the first aspects of the present disclosure under the control of the computer instructions.
  • a planar tracking device comprising:
  • a memory configured to store computer instructions
  • a processor configured to call the computer instructions from the memory, and execute the method according to any one of the second aspects of the present disclosure under the control of the computer instructions.
  • In the embodiments of the present disclosure, plane detection is performed on the current frame image based on a sparse point data set, without storing a large amount of point cloud data. This increases the speed of plane detection and thus improves its real-time performance, and detection based on the sparse point data set can be applied to mobile devices with limited computing power, avoiding display latency and improving the user experience.
  • FIG. 1 shows a schematic diagram of the hardware configuration of an electronic device provided by an embodiment of the present disclosure
  • FIG. 2 shows a first schematic flowchart of a plane detection method provided by an embodiment of the present disclosure
  • Fig. 3 shows a schematic flow chart of a plane detection method according to an example of the present disclosure
  • FIG. 4 shows a second schematic flowchart of a plane detection method provided by an embodiment of the present disclosure
  • FIG. 5 shows a first schematic flowchart of a plane tracking method provided by an embodiment of the present disclosure
  • FIG. 6 shows a second schematic flowchart of a plane tracking method provided by an embodiment of the present disclosure
  • FIG. 7 shows a block diagram of a plane detection device provided by an embodiment of the present disclosure
  • Fig. 8 shows a block diagram of a planar tracking device provided by an embodiment of the present disclosure.
  • FIG. 1 is a block diagram of a hardware configuration of an electronic device 100 that can be used to implement an embodiment of the present disclosure.
  • the electronic device may be an intelligent device such as a virtual reality (VR) device, an augmented reality (Augmented Reality, AR) device, or a mixed reality (Mixed Reality) device.
  • the electronic device 100 may include a processor 101, a memory 102, an interface device 103, a communication device 104, a display device 105, an input device 106, an audio device 107, a sensor 108, a camera 109, and so on.
  • the processor 101 may include, but is not limited to, a central processing unit (CPU), a microcontroller unit (MCU), and the like.
  • the processor 101 may also include a GPU (Graphics Processing Unit) and the like.
  • the memory 102 may include, but is not limited to, ROM (Read Only Memory), RAM (Random Access Memory), non-volatile memory such as a hard disk, and the like.
  • the interface device 103 may include, but is not limited to, a USB interface, a serial interface, a parallel interface, an infrared interface, and the like.
  • the communication device 104 can perform wired or wireless communication, for example, and specifically may include WiFi communication, Bluetooth communication, 2G/3G/4G/5G communication, and the like.
  • the display device 105 is, for example, a liquid crystal display, an LED display, a touch display, or the like.
  • the input device 106 may include, but is not limited to, a touch screen, a keyboard, a somatosensory input, and the like.
  • the audio device 107 may be configured to input/output voice information.
  • the sensor 108 is, for example, an image sensor, an infrared sensor, a laser sensor, a pressure sensor, a gyroscope sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, an ambient light sensor, a fingerprint sensor, a touch sensor, a temperature sensor, etc.
  • the sensor 108 can be configured to measure posture changes of the electronic device 100.
  • the camera 109 may be configured to obtain image information, and the camera 109 may be, for example, a binocular camera.
  • the electronic device 100 is used to obtain an image of an environmental scene to perform plane detection and plane tracking on the image.
  • the electronic device 100 shown in FIG. 1 is only illustrative and does not mean any limitation to the embodiment of the present specification, its application or use. Those skilled in the art should understand that although multiple devices of the electronic device 100 are described above, the embodiments of this specification may only involve some of them. Those skilled in the art can design instructions according to the scheme disclosed in the embodiments of this specification, and how the instructions control the processor to operate is a well-known technology in the art, so it will not be described in detail here.
  • Fig. 2 is a schematic diagram of a plane detection method provided by an embodiment of the present specification.
  • the plane detection method provided in this embodiment is implemented by computer technology, and can be implemented by the electronic device 100 described in FIG. 1.
  • the plane detection method provided in this embodiment includes steps S2100-S2300.
  • step S2100 the current frame image of the environment scene collected by the binocular camera is divided, and the feature points corresponding to each sub-image block obtained by the division are detected.
  • the environmental scene may be, for example, a scene including the ground, a table, a platform, and the like.
  • the binocular camera is used to shoot the environment scene, the current frame image collected by the binocular camera is segmented, the feature extraction is performed on each sub-image block obtained after the segmentation, and the feature point corresponding to each sub-image block is obtained.
  • the feature point may be a relatively prominent point in the current frame image, such as a contour point in the current frame image, a bright spot in a darker area, a dark point in a brighter area, etc.
  • the feature point is a point used for plane detection.
  • the binocular camera may be provided on a head-mounted display device, for example, and there is no limitation here.
  • the binocular camera can be a high-resolution camera, a low-resolution camera, and the binocular camera can also be a fisheye camera.
  • the current frame image includes a first target image and a second target image
  • the feature points include a plurality of first feature points corresponding to the first target image and a plurality of second feature points corresponding to the second target image.
  • the first feature point is used for plane detection
  • the second feature point is used for calculating the three-dimensional coordinate information of the first feature point.
  • a binocular camera includes a left-eye camera and a right-eye camera. The current frame image collected by the left-eye camera is used as the first target image, and the current frame image collected by the right-eye camera is used as the second target image.
  • the step of dividing the current frame image of the environment scene collected by the binocular camera, and detecting the feature point corresponding to each sub-image block obtained by the division may further include: steps S2110-S2140.
  • Step S2110 Divide the first target image to obtain multiple first sub-image blocks.
  • the divided first sub-image blocks may be image blocks of the same size, or image blocks of different sizes.
  • the first target image is divided according to a preset division method to obtain a plurality of first sub-image blocks.
  • the first target image is divided into i*j sub-image blocks of the same size.
  • Step S2120 Detect the first feature point corresponding to each first sub-image block.
  • Feature point detection may use, for example, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), or ORB (Oriented FAST and Rotated BRIEF) feature extraction.
  • FAST corner detection is performed on the first sub-image block, and the corner point with the highest Harris response in the sub-image block is selected as the first feature point corresponding to the first sub-image block.
  • Step S2130 Use the epipolar matching method to perform feature point matching in the second target image, and obtain a second feature point in the second target image that matches the first feature point.
  • Step S2140 using the first feature point and the second feature point to calculate the three-dimensional coordinate information of the first feature point.
  • Optionally, before dividing the first target image to obtain a plurality of first sub-image blocks, the method further includes: taking the center of the first target image as the origin, setting the pixel state of points whose Euclidean distance from the origin is greater than the first set pixel to the second state, and setting the pixel state of the remaining points in the first target image to the first state.
  • the pixel state of the dot is set according to the position of the dot in the first target image.
  • the pixel state includes a first state and a second state. According to the degree of distortion of the points in the first target image, set the pixel state of the less distorted point in the first target image to the first state, and set the pixel state of the more distorted point in the first target image to the second state . For example, the first state is true and the second state is false.
  • the first setting pixel can be set according to engineering experience or experimental simulation results. For example, taking the center of the first target image as the origin, the pixel status of the points whose Euclidean distance is greater than m pixels is set to false, and the pixel status of the remaining points in the first target image is set to true.
  • Setting the pixel state of the points in the first target image in this way eliminates severely distorted points in the image and avoids large errors in the plane parameters calculated from the three-dimensional coordinate information of such points; combined with the subsequent steps, the accuracy of plane detection can be improved.
  • the severely distorted points in the image are excluded, so that this method can be applied to a fisheye camera, and a larger field of view can be obtained.
  • the step of detecting the first feature point corresponding to each first sub-image block may further include: steps S2121-S2124.
  • Step S2121 Perform feature point detection on the first sub-image block, and determine the target point in each first sub-image block.
  • FAST corner detection is performed on the first sub-image block, and the corner point with the highest Harris response in the sub-image block is selected as the target point corresponding to the first sub-image block.
  • Step S2122 Determine the pixel state of the target point in the first sub-image block, and the pixel state includes a first state and a second state.
  • Step S2123 if the pixel state of the target point in the first sub-image block is the first state, determine the target point in the first sub-image block as the first feature point;
  • Step S2124 if the pixel state of the target point in the first sub-image block is in the second state, perform feature point detection on the next first sub-image block until each first sub-image block is traversed.
  • In this embodiment, the first feature point corresponding to the first sub-image block is determined according to the pixel state of the points in the first target image, which prevents severely distorted points from affecting the accuracy of the calculated plane parameters and thus improves the accuracy of plane detection.
  • Optionally, after the first feature point of a first sub-image block is determined, the method further includes: taking the first feature point obtained from the first sub-image block as the origin, and setting the pixel state of points whose Euclidean distance from it is less than the second set pixel to the second state.
  • the second setting pixel can be set according to engineering experience or experimental simulation results. For example, taking the first feature point corresponding to a first sub-image block as the origin, the pixel state of the point with the Euclidean distance less than n pixels is set to false.
  • In this embodiment, the pixel state of the points in the first target image is set again, i.e., the pixel state of the points around the first feature point is set to the second state. This prevents the detected first feature points from being too dense and makes them more evenly distributed, thereby improving the accuracy of detection.
  • the step of dividing the current frame image of the environment scene collected by the binocular camera and detecting the feature points corresponding to each sub-image block obtained after the division may further include the following steps S301 to S308.
  • step S301 the center of the first target image is used as the origin, the pixel state of the points whose Euclidean distance is greater than m pixels is set to false, and the pixel state of the remaining points in the first target image is set to true.
  • Step S302 Divide the first target image into i*j first sub-image blocks of the same size, and number the first sub-image blocks, where the number num of a first sub-image block ranges from 0 to i*j-1.
  • Step S303 Perform FAST corner detection on the first sub-image block numbered num, and select the corner point with the highest Harris response in the first sub-image block as the target point corresponding to the first sub-image block.
  • Step S304 It is judged whether the pixel state of the target point in the first sub-image block is true, if yes, go to step S305, otherwise, go to step S307.
  • Step S305 Determine the target point as the first feature point corresponding to the first sub-image block numbered num.
  • step S306 the first feature point corresponding to the first sub-image block numbered num is used as the origin, and the pixel state of the point with the Euclidean distance less than n pixels is set to false.
  • step S307 it is judged whether the number num of the first sub-image block is less than i*j-1; if yes, go to step S303 for the next first sub-image block; otherwise, go to step S308.
  • step S308 the epipolar line matching method is used to perform feature point matching in the second target image, and a second feature point matching the first feature point in the second target image is obtained.
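  • As a non-authoritative illustration of the grid-based feature detection in steps S301-S307, the following Python sketch uses OpenCV's FAST detector together with the Harris response, assuming a grayscale left (first target) image. The grid size and the set pixels m and n are assumed example values, not values taken from the disclosure.

```python
import cv2
import numpy as np

def detect_grid_features(left_img, grid=(8, 6), m=300, n=20, fast_thresh=20):
    """Grid-based FAST feature detection with a distortion mask (S301-S307)."""
    h, w = left_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # S301: pixel state is True (first state) within m pixels of the image center.
    state = (xs - w / 2.0) ** 2 + (ys - h / 2.0) ** 2 <= m ** 2

    fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
    harris = cv2.cornerHarris(np.float32(left_img), blockSize=2, ksize=3, k=0.04)

    features = []
    bw, bh = w // grid[0], h // grid[1]
    for j in range(grid[1]):                      # S302: traverse the i*j sub-image blocks
        for i in range(grid[0]):
            x0, y0 = i * bw, j * bh
            block = np.ascontiguousarray(left_img[y0:y0 + bh, x0:x0 + bw])
            kps = fast.detect(block, None)        # S303: FAST corners in the block
            if not kps:
                continue
            # Keep the corner with the highest Harris response as the block's target point.
            kp = max(kps, key=lambda k: harris[y0 + int(k.pt[1]), x0 + int(k.pt[0])])
            px, py = x0 + int(kp.pt[0]), y0 + int(kp.pt[1])
            if not state[py, px]:                 # S304: skip points masked as distorted
                continue
            features.append((px, py))             # S305: first feature point of this block
            # S306: suppress points within n pixels of the selected feature point.
            state[(xs - px) ** 2 + (ys - py) ** 2 < n ** 2] = False
    return features
```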
  • Step S2200 Obtain a sparse point data set of the current frame image according to the three-dimensional coordinate information of the feature point in the world coordinate system.
  • the sparse point data set includes three-dimensional coordinate information of multiple feature points.
  • the feature points of the current frame image include multiple first feature points in the first target image.
  • obtaining the sparse point data set of the current frame image according to the three-dimensional coordinate information of the feature point in the world coordinate system may further include: steps S2210-S2230.
  • step S2210 the three-dimensional coordinate information of the first feature point of the first target image in the reference frame of the first-eye camera is calculated from the first feature point and the second feature point by triangulation.
  • Step S2220 Obtain the pose of the binocular camera when the current frame of image is collected.
  • the pose of the binocular camera may be the 6DoF information of the binocular camera.
  • the pose of the binocular camera can be acquired through a SLAM (Simultaneous Localization and Mapping) system or an optical tracking system.
  • Step S2230 Perform coordinate system conversion according to the pose of the binocular camera when the current frame image is collected, and obtain the three-dimensional coordinate information of the first feature point of the first eye image in the current frame image in the world coordinate system.
  • the three-dimensional coordinate information of the first feature points of the first target image in the world coordinate system constitutes the sparse point data set.
  • the pose of the binocular camera when collecting the current frame image is acquired through the SLAM system, and the coordinate system conversion is performed according to the pose of the binocular camera, combined with subsequent steps, the horizontal plane or the vertical plane can be accurately detected.
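  • A minimal sketch of steps S2210-S2230 follows, assuming a rectified stereo pair with known intrinsics (fx, fy, cx, cy), a baseline b, and a camera-to-world pose (R_wc, t_wc) provided by the SLAM system; the parameter names and the pinhole/disparity model are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def triangulate_point(u_l, v_l, u_r, fx, fy, cx, cy, b):
    """S2210: back-project one matched feature pair using the stereo disparity."""
    disparity = u_l - u_r                  # horizontal pixel disparity (rectified pair)
    z = fx * b / disparity                 # depth in the first-eye camera frame
    x = (u_l - cx) * z / fx
    y = (v_l - cy) * z / fy
    return np.array([x, y, z])

def to_world(p_cam, R_wc, t_wc):
    """S2230: camera-frame point -> world frame using the SLAM pose (R_wc, t_wc)."""
    return R_wc @ p_cam + t_wc

# S2220/S2230: the sparse point data set is the set of world coordinates of all
# first feature points, e.g.
# sparse_points = np.array([to_world(triangulate_point(uL, vL, uR, fx, fy, cx, cy, b),
#                                    R_wc, t_wc) for (uL, vL, uR) in matches])
```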
  • Step S2300 Perform plane detection on the current frame image based on the sparse point data set, and obtain the optimal effective plane of the current frame image.
  • the optimal effective plane is the plane containing the most feature points.
  • performing plane detection on the current frame image based on the sparse point data set increases the speed of plane detection, thereby improving the real-time performance of the plane detection; detection based on the sparse point data set is suitable for mobile devices with limited computing power, avoiding display delays and improving the user experience.
  • the step of performing plane detection on the current frame image based on the sparse point data set to obtain the optimal effective plane of the current frame image may further include: S2310-S2330.
  • Step S2310 Determine the current plane according to three feature points randomly selected from the sparse point data set.
  • Three feature points are randomly selected from the sparse point data set. According to the three-dimensional coordinate information of these three feature points, it is judged whether they are collinear. If the three selected feature points are not collinear, the current plane is determined from them; if they are collinear, feature points are re-extracted from the sparse point data set.
  • Step S2320 Determine the current effective plane according to the normal vector of the current plane and the number of first interior points, where the first interior point is a feature point whose distance from the current plane is less than a first set distance.
  • the normal vector of the current plane can be calculated from the three-dimensional coordinate information of the feature points. For example, the covariance matrix of all interior points of the current plane is calculated, and the eigenvector corresponding to its smallest eigenvalue is used as the normal vector of the current plane.
  • the current effective plane is determined according to the normal vector of the current plane. Specifically, it is determined whether the current plane is the current effective plane according to the calculated angle between the normal vector of the current plane and the reference normal vector.
  • the reference normal vector can be determined according to the type of plane to be detected. For example, when detecting a horizontal plane, the reference normal vector is the gravity direction vector.
  • Determining the current effective plane according to the number of first interior points of the current plane specifically means: determining the first interior points according to the distances from the points in the current frame image to the current plane, and determining whether the current plane is the current effective plane according to the number of first interior points.
  • Step S2330 Iterate steps S2310-S2320 repeatedly until the preset number of iterations is satisfied, and update the current effective plane obtained in the last iteration to the optimal effective plane of the current frame image.
  • step S2330 further includes: after each iteration of steps S2310-S2320, updating the number of iterations according to the proportion of first interior points of the current effective plane, where the proportion of first interior points is the ratio of the number of first interior points to the total number of feature points in the sparse point data set.
  • the number of iterations is updated according to the proportion of the first interior point of the current effective plane, which can improve the efficiency of plane detection and quickly detect the effective plane with the most feature points.
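  • One common way to update the number of iterations from the proportion of first interior points (the disclosure does not prescribe a specific formula) is the standard RANSAC bound N = log(1 - p) / log(1 - w^3), where w is the proportion of first interior points, 3 is the number of points needed to define a plane, and p is a desired confidence such as 0.99; as w increases, N decreases, so the loop can terminate earlier.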
  • the step of performing plane detection on the current frame image based on the sparse point data set to obtain the optimal effective plane point of the current frame image may further include the following steps S401 to S408.
  • Step S401 randomly extract three feature points from the sparse point data set of the current frame image
  • Step S402 according to the three-dimensional coordinate information of the three feature points, judge whether the selected three feature points are collinear; if not, go to step S403; if yes, go to step S401;
  • Step S403 Determine the current plane according to the selected three characteristic points, and calculate the normal vector of the current plane
  • Step S404 it is judged whether the normal vector of the current plane meets the requirements, if yes, go to step S405, if not, go to step S401;
  • the angle between the normal vector of the current plane and the reference normal vector is calculated. If the angle between the normal vector of the current plane and the reference normal vector is less than the second set angle, it is judged that the normal vector of the current plane meets the requirements If the angle between the normal vector of the current plane and the reference normal vector is not less than the second set angle, it is judged that the normal vector of the current plane does not meet the requirements.
  • the reference normal vector can be determined according to the type of plane to be detected, and the second set angle can be set according to engineering experience or experimental simulation results. For example, when detecting a horizontal plane, the reference normal vector is the gravity direction vector and the second set angle is 10°; if the angle between the normal vector of the current plane and the gravity direction vector is less than 10°, the normal vector of the current plane is judged to meet the requirements. When detecting a vertical plane, the reference normal vector is the gravity direction vector and the second set angle is 80°; if the angle between the normal vector of the current plane and the gravity direction vector is greater than 80°, the normal vector of the current plane is judged to meet the requirements.
  • Step S405 For all the feature points in the sparse point data set other than the selected three feature points, calculate the distance to the current plane, and take the feature points whose distance from the current plane is less than the first set distance as the first interior points of the current plane;
  • the first set distance can be set based on engineering experience or experimental simulation results. For example, 2cm.
  • Step S406 Determine whether the number of first interior points of the current plane is greater than the number of first interior points of the last effective plane; if yes, go to step S407; if not, perform step S401;
  • Step S407 recalculate the normal vector of the current plane according to the first interior point
  • Step S408 it is judged whether the normal vector of the current plane meets the requirements, if it is, the current plane is taken as the current effective plane, if not, step S401 is executed;
  • Step S409 Repeat steps S401-S408 until the preset number of iterations is met, and update the current effective plane to the optimal effective plane, which is the plane containing the most feature points.
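  • The following Python sketch illustrates steps S401-S409 as a RANSAC-style search over the sparse point data set, shown here for horizontal-plane detection (angle to the reference normal below a threshold). The thresholds, confidence, and the iteration-update rule are assumed example choices consistent with the description above, not values fixed by the disclosure.

```python
import numpy as np

def detect_best_plane(points, ref_normal, max_angle_deg=10.0,
                      dist_thresh=0.02, confidence=0.99, max_iters=1000):
    """RANSAC-style search for the plane containing the most feature points."""
    best_inliers = np.zeros(0, dtype=int)
    best_normal, best_point = None, None
    n_iters, i = max_iters, 0
    rng = np.random.default_rng()
    while i < n_iters:
        i += 1
        idx = rng.choice(len(points), 3, replace=False)             # S401: sample 3 points
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:                           # S402: reject (near-)collinear samples
            continue
        normal /= np.linalg.norm(normal)                            # S403: current plane normal
        cos = np.clip(abs(normal @ ref_normal), -1.0, 1.0)
        if np.degrees(np.arccos(cos)) > max_angle_deg:              # S404: check against reference normal
            continue
        dists = np.abs((points - p0) @ normal)                      # S405: point-to-plane distances
        inliers = np.flatnonzero(dists < dist_thresh)               # first interior points
        if len(inliers) <= len(best_inliers):                       # S406: keep only improvements
            continue
        # S407: refit the normal from the interior points (eigenvector of the
        # covariance matrix with the smallest eigenvalue, computed via SVD).
        centered = points[inliers] - points[inliers].mean(axis=0)
        refined = np.linalg.svd(centered, full_matrices=False)[2][-1]
        cos = np.clip(abs(refined @ ref_normal), -1.0, 1.0)
        if np.degrees(np.arccos(cos)) > max_angle_deg:              # S408: re-check the refined normal
            continue
        best_inliers, best_normal = inliers, refined
        best_point = points[inliers].mean(axis=0)
        # Update the iteration budget from the interior-point proportion (standard RANSAC rule).
        w = len(inliers) / len(points)
        n_iters = min(max_iters, int(np.log(1 - confidence) / np.log(1 - w**3 + 1e-12)) + 1)
    return best_inliers, best_normal, best_point                    # S409: optimal effective plane
```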
  • Optionally, after plane detection is performed on the current frame image based on the sparse point data set and the optimal effective plane of the current frame image is obtained, the method further includes: performing plane clustering on the obtained optimal effective plane, and outputting multiple effective planes of the current frame image. As shown in Figure 5, this specifically includes steps S2400-S2800.
  • step S2400 a feature point is selected arbitrarily from the feature points included in the optimal effective plane as a starting point.
  • step S2500 the starting point is used as a seed point, and the adjacent points of the starting point are determined among the characteristic points contained in the optimal effective plane according to the set radius.
  • step S2600 the neighboring points are used as seed points, and the neighboring points of the neighboring points are determined from the feature points included in the optimal effective plane until the end point is determined, and the end point is a feature point without neighboring points.
  • Step S2700 Calculate the number of seed points until the number of seed points is greater than the preset minimum number of clustering points, and output an effective plane.
  • Step S2800 reselect a feature point from the remaining feature points of the optimal effective plane as the starting point, and execute S2500-S2700 until all feature points included in the optimal effective plane are traversed, and multiple effective planes of the current frame image are output.
  • the remaining feature points are the feature points that were not selected when outputting the last effective plane.
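  • A minimal sketch of the region-growing clustering in steps S2400-S2800 is shown below; the search radius and minimum cluster size are assumed example values, and the k-d tree neighbor query is one possible way to find the neighboring points within the set radius.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_plane_points(points, radius=0.15, min_cluster_points=10):
    """Split the optimal effective plane into connected effective planes (S2400-S2800)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    planes = []
    while unvisited:
        start = unvisited.pop()                      # S2400: arbitrary starting point
        seeds, cluster = [start], [start]
        while seeds:                                 # S2500-S2600: grow from the seed points
            seed = seeds.pop()
            for nb in tree.query_ball_point(points[seed], radius):
                if nb in unvisited:
                    unvisited.discard(nb)
                    seeds.append(nb)
                    cluster.append(nb)
        if len(cluster) > min_cluster_points:        # S2700: enough seed points -> effective plane
            planes.append(cluster)
    return planes                                    # S2800: multiple effective planes
```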
  • In this embodiment, plane detection is performed on the current frame image based on a sparse point data set, without storing a large amount of point cloud data. This improves the speed of plane detection and thus its real-time performance, and detection based on the sparse point data set can be applied to mobile devices with limited computing power, avoiding display latency and improving the user experience.
  • Fig. 5 is a schematic diagram of a plane tracking method provided by an embodiment of the present specification.
  • the plane tracking method provided in this embodiment is implemented by computer technology, and can be implemented by the electronic device 100 described in FIG. 1.
  • the plane tracking method provided in this embodiment is based on the plane detection method provided above, and the method includes steps S3100-S3300.
  • Step S3100 Acquire continuous multiple frames of images of the environment scene collected by the binocular camera.
  • step S3200 according to the optimal effective plane of the current frame image, multiple effective planes of the current frame image are obtained, and the plane frame number of the effective plane is consistent with the frame number of the corresponding image.
  • plane clustering is performed on the optimal effective plane of the current frame image to obtain multiple effective planes of the current frame image, and the plane frame number of the effective plane of the current frame image is consistent with the frame number of the corresponding image.
  • step S3300 the multiple effective planes of the current frame of image and the multiple effective planes of the previous frame of image are merged to determine the tracking plane of the continuous multiple frames of image.
  • In this embodiment, the multiple effective planes of each frame of image are detected, and plane fusion between the multiple frames of images is performed based on the effective planes of each frame of image. This improves the stability of plane fusion and expansion and gives the edges of the fused plane a better fit. Further, as the environment changes, the plane can be tracked based on the fused plane, so that as the user's viewing angle changes, the plane expands steadily in all directions, thereby improving the user experience.
  • the step of fusing multiple effective planes of the current frame image with multiple effective planes of the previous frame image to determine the tracking plane of the continuous multiple frame images may further include steps S3310-S3350.
  • step S3310 the effective plane of the current frame of image is sequentially compared with multiple effective planes of the previous frame of image.
  • Step S3320 It is judged whether there is a plane that meets the plane fusion condition among the multiple effective planes of the current frame image.
  • the plane fusion conditions include:
  • the height difference between the effective plane of the current frame image and any effective plane of the previous frame image is less than the first set height; the angle between the normal vector of the effective plane of the current frame image and the normal vector of any effective plane of the previous frame image is smaller than the first set angle; and the ratio of the points falling in any effective plane of the previous frame image to all points in the effective plane of the current frame image is greater than the first set ratio.
  • the first set height can be set based on engineering experience or experimental simulation results.
  • the first setting angle can be set according to engineering experience or experimental simulation results.
  • the first setting ratio can be set according to engineering experience or experimental simulation results. If the above three plane fusion conditions are met at the same time, it is determined that there are planes that meet the plane fusion conditions among the multiple effective planes of the current frame image.
  • Step S3330 in the case that there are planes that meet the plane fusion condition in the multiple effective planes of the current frame image, merge the plane that meets the plane fusion condition in the current frame image with the corresponding plane of the previous frame image to obtain the first plane , And update the plane frame number of the first plane to the frame number of the current frame image.
  • the step of fusing the plane that meets the plane fusion condition in the current frame of image with the corresponding plane of the previous frame of image may further include: steps S3331-S3332.
  • Step S3331 Calculate the distance from all points in the plane that meets the plane fusion condition in the current frame of image to any point in the corresponding plane of the previous frame of image.
  • step S3332 a point whose distance to any point in the corresponding plane of the previous frame of image is greater than the second set distance is added to the corresponding plane to obtain the first plane.
  • A point in the plane of the current frame image that satisfies the plane fusion condition and whose distance to any point in the corresponding plane of the previous frame image is not greater than the second set distance is considered to be a coincident point in the two frames of images. Fusing the two planes by adding only the points whose distance to any point in the corresponding plane of the previous frame image is greater than the second set distance avoids too many repeated points.
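  • The sketch below illustrates the plane fusion condition and the point merge of steps S3331-S3332. A plane is represented as a dict with "points", "normal", "height" and "frame" fields; the thresholds are assumed example values, and "falling in" a previous-frame plane is approximated here by proximity to its existing points, which is one possible interpretation.

```python
import numpy as np

def can_fuse(cur, prev, max_height_diff=0.05, max_angle_deg=10.0,
             overlap_dist=0.05, min_overlap_ratio=0.3):
    """Check the plane fusion conditions for a current-frame plane against a previous-frame plane."""
    if abs(cur["height"] - prev["height"]) >= max_height_diff:        # height difference
        return False
    cos = np.clip(abs(cur["normal"] @ prev["normal"]), -1.0, 1.0)
    if np.degrees(np.arccos(cos)) >= max_angle_deg:                   # normal-vector angle
        return False
    # Ratio of current-plane points that fall in the previous plane (approximated by proximity).
    d = np.linalg.norm(cur["points"][:, None, :] - prev["points"][None, :, :], axis=2)
    return np.mean(d.min(axis=1) < overlap_dist) > min_overlap_ratio

def fuse(cur, prev, current_frame_no, second_set_distance=0.05):
    """S3331-S3332: add only the non-coincident points of the current plane to the previous plane."""
    d = np.linalg.norm(cur["points"][:, None, :] - prev["points"][None, :, :], axis=2)
    new_points = cur["points"][d.min(axis=1) > second_set_distance]
    prev["points"] = np.vstack([prev["points"], new_points])
    prev["frame"] = current_frame_no           # update the plane frame number (first plane)
    return prev
```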
  • Step S3340 in the case that there is no plane that meets the plane fusion condition among the multiple effective planes of the current frame image, use the plane that does not meet the plane fusion condition in the current frame image as the second plane.
  • step S3350 the effective plane of the previous frame of image is updated according to the first plane and the second plane, and the tracking plane of the continuous multiple frames of image is determined.
  • the tracking planes of consecutive multiple frames of images include the first plane, the second plane, and the plane in the previous frame of image that is not fused with any plane in the current frame of image.
  • For example, the previous frame image includes effective planes a1 and a2, and the current frame image includes effective planes b1 and b2. If the effective plane b1 of the current frame image is fused with the effective plane a1 of the previous frame image into an effective plane c1, and the effective plane b2 of the current frame image is fused with the effective plane a2 of the previous frame image into an effective plane c2, then the tracking planes of the continuous multiple frames of images include the effective planes c1 and c2.
  • As another example, the previous frame image includes effective planes a1 and a2, and the current frame image includes effective planes b1 and b2. If the effective plane b1 of the current frame image is fused with the effective plane a1 of the previous frame image into an effective plane c1, and the effective plane b2 of the current frame image is not fused with any effective plane of the previous frame image, then the tracking planes of the continuous multiple frames of images include the effective planes c1, a2, and b2.
  • the plane tracking method may further include: steps S4100-S4200 .
  • Step S4100 Calculate the plane parameters of the first plane and the second plane according to the three-dimensional coordinate information of the feature points in the sparse point data set.
  • the plane parameters include plane height, plane length-to-width ratio, and so on.
  • the plane parameters are calculated according to the three-dimensional coordinate information of all points in the plane.
  • the plane height is the average plane height. Specifically, the height corresponding to each point contained in the plane is calculated, then the average of the heights corresponding to all the points is calculated, and this average value is taken as the height of the plane.
  • the aspect ratio of the plane can be calculated according to the three-dimensional coordinate information of the four corner points of the plane.
  • Step S4200 eliminate invalid planes in the first plane and the second plane; wherein the invalid plane is a plane whose aspect ratio is less than a preset aspect ratio threshold.
  • the invalid plane may be, for example, a long and narrow plane or a too small plane, and the invalid plane affects the generation of virtual objects.
  • Invalid plane can be judged according to the ratio of plane length to width.
  • the preset aspect ratio threshold can be set according to engineering experience or experimental simulation results.
  • invalid planes are eliminated according to the plane parameters to avoid the influence of invalid planes on the generation of virtual objects and ensure the effectiveness of the tracking plane.
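  • As an illustration of steps S4100-S4200, the sketch below derives the plane height and an aspect ratio from the plane's points and then filters invalid planes. The vertical-axis convention, the bounding-box approximation of the four corner points, and the threshold value are assumptions made for the example.

```python
import numpy as np

def plane_parameters(points):
    """S4100: average plane height and a width/length aspect ratio from the bounding box."""
    height = points[:, 1].mean()                      # assuming y is the vertical (gravity) axis
    extent = points.max(axis=0) - points.min(axis=0)
    short, long_ = sorted((extent[0], extent[2]))     # horizontal footprint of the plane
    aspect = short / max(long_, 1e-9)                 # in (0, 1]; small values mean a narrow plane
    return height, aspect

def remove_invalid_planes(planes, min_aspect=0.2):
    """S4200: drop planes whose aspect ratio is below the preset threshold."""
    return [p for p in planes if plane_parameters(p["points"])[1] >= min_aspect]
```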
  • the plane tracking method may further include: step S5100.
  • Step S5100 according to the plane frame number, remove the lost planes from the first plane and the second plane; wherein a lost plane is a plane for which the difference between the plane frame number and the frame number of the current frame image is greater than a set number of frames.
  • As the environment scene changes, the detected planes also change.
  • the lost plane is eliminated according to the plane frame number, which reduces the amount of data processing and improves the effectiveness of tracking the plane.
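  • A minimal sketch of step S5100, using the plane frame numbers maintained above; the allowed number of lost frames is an assumed example value.

```python
def remove_lost_planes(planes, current_frame_no, max_lost_frames=30):
    """S5100: drop planes whose frame number lags the current frame by more than the set number of frames."""
    return [p for p in planes if current_frame_no - p["frame"] <= max_lost_frames]
```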
  • the plane tracking method may further include: steps S6100-S6200 .
  • Step S6100 Calculate the number of points in the tracking plane.
  • step S6200 if the number of points in the tracking plane exceeds the first set number, the center point of the tracking plane is removed, and the edge points of the tracking plane are retained.
  • the first set number can be set based on engineering experience or experimental simulation results.
  • As plane tracking proceeds, the number of points contained in a plane keeps increasing, and the storage space occupied by the data grows larger and larger.
  • Removing the center points of the plane and retaining only its edge points reduces the storage space occupied by the data, thereby increasing the processing speed, keeping plane detection, fusion, and expansion fast, and improving the user experience.
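  • The sketch below illustrates steps S6100-S6200. The disclosure does not specify how edge points are identified, so the convex hull of the points projected onto the plane's horizontal footprint is used here as one possible way to keep edge points and drop center points; the point limit is an assumed example value.

```python
import numpy as np
from scipy.spatial import ConvexHull

def thin_tracking_plane(plane, max_points=500):
    """S6100-S6200: once a tracking plane holds too many points, keep only its edge points."""
    pts = plane["points"]
    if len(pts) <= max_points:                 # S6100: count the points in the tracking plane
        return plane
    hull = ConvexHull(pts[:, [0, 2]])          # project onto the x-z footprint (y assumed vertical)
    plane["points"] = pts[hull.vertices]       # S6200: retain edge points, discard center points
    return plane
```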
  • the plane tracking method may further include steps S7100-S7200.
  • Step S7100 It is judged whether the first plane and any unfused effective plane in the current frame of image meet the plane fusion condition.
  • Step S7200 in the case that the first plane and any unfused effective plane in the current frame image meet the plane fusion condition, perform plane fusion on the first plane and the corresponding effective plane in the current frame image.
  • After the effective planes are fused, the effective planes of the current frame image that meet the plane fusion condition may change. It is therefore judged again whether any of the updated multiple effective planes of the current frame image can be fused with each other, and the fusible planes are merged to avoid plane overlap.
  • In this embodiment, the multiple effective planes of each frame of image are detected, and plane fusion between the multiple frames of images is performed based on the effective planes of each frame of image, which improves the stability of plane fusion and expansion and gives the edges of the fused plane a better fit.
  • Further, the plane can be tracked based on the merged plane, so that as the user's viewing angle changes, the plane expands steadily in all directions, thereby improving the user experience.
  • a plane detection device 700 is provided, and the plane detection device 700 may be the electronic device 100 shown in FIG. 1.
  • the plane detection device 700 includes a processor 710 and a memory 720.
  • the memory 720 may be configured to store executable instructions
  • the processor 710 may be configured to execute the plane detection method as provided in the foregoing embodiment of the present disclosure according to the control of the executable instruction.
  • the processor 710 may be configured to execute, under the control of the executable instructions, steps S2100-S2300, steps S2110-S2140, steps S2121-S2124, steps S2210-S2230, steps S2310-S2330, and/or steps S2400-S2800 in the foregoing embodiments of the present disclosure.
  • In this embodiment, plane detection is performed on the current frame image based on a sparse point data set, without storing a large amount of point cloud data. This improves the speed of plane detection and thus its real-time performance, and detection based on the sparse point data set can be applied to mobile devices with limited computing power, avoiding display latency and improving the user experience.
  • A plane tracking device 800 is provided, and the plane tracking device 800 may be the electronic device 100 shown in FIG. 1.
  • the plane tracking device 800 includes a processor 810 and a memory 820.
  • the memory 820 may be configured to store executable instructions
  • the processor 810 may be configured to execute the plane tracking method as provided in the foregoing embodiment of the present disclosure according to the control of the executable instructions.
  • the processor 810 may be configured to execute, under the control of the executable instructions, steps S3100-S3300, steps S3310-S3350, steps S3331-S3332, steps S4100-S4200, step S5100, steps S6100-S6200, and/or steps S7100-S7200 in the foregoing embodiments of the present disclosure.
  • In this embodiment, the multiple effective planes of each frame of image are detected, and plane fusion between the multiple frames of images is performed based on the effective planes of each frame of image, which improves the stability of plane fusion and expansion and gives the edges of the fused plane a better fit.
  • Further, the plane can be tracked based on the merged plane, so that as the user's viewing angle changes, the plane expands steadily in all directions, thereby improving the user experience.
  • This embodiment provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the method described in any one of the above method embodiments is implemented.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, and mechanically encoded devices such as punch cards with instructions stored thereon.
  • the computer-readable storage medium used here should not be interpreted as transient signals themselves, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber optic cables), or electrical signals transmitted through wires.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
  • the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that when the instructions are executed by the processor of the computer or other programmable data processing apparatus, an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical functions. In some alternative implementations, the functions may also occur in an order different from the order marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation through hardware, implementation through software, and implementation through a combination of software and hardware are all equivalent.
  • Plane detection is performed on the current frame image based on the sparse point data set, without storing a large amount of point cloud data. This increases the speed of plane detection and thereby improves its real-time performance; because detection relies only on a sparse point data set, the method can also be applied to mobile devices with limited computing power, avoiding display delays and improving the user experience. The present disclosure therefore has strong practicability.
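For illustration only (not part of the published description or claims), selecting the "optimal effective plane" as the plane containing the most feature points can be realized with a standard RANSAC-style plane fit over the sparse point data set. The sketch below is a minimal example under that assumption; the function name fit_plane_ransac and the parameters iterations and inlier_threshold are illustrative and do not come from the disclosure.

```python
import numpy as np

def fit_plane_ransac(points, iterations=200, inlier_threshold=0.01, seed=None):
    """Fit one plane to sparse 3D feature points and keep the candidate
    that covers the largest number of points (the 'optimal effective plane').

    points: (N, 3) array of feature-point coordinates in the global frame.
    Returns (normal, d, inlier_mask) for the plane n . x + d = 0.
    """
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    best_plane, best_inliers = None, None
    for _ in range(iterations):
        # Sample three distinct feature points; they define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # collinear sample, skip it
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Feature points close to the candidate plane count as inliers.
        inliers = np.abs(points @ normal + d) < inlier_threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (normal, d), inliers
    if best_plane is None:
        raise ValueError("no valid plane hypothesis found")
    return best_plane[0], best_plane[1], best_inliers
```

Because the data set is sparse (a handful of feature points per image sub-block rather than a dense point cloud), each hypothesis is evaluated against only a small array, which is consistent with the real-time, low-memory behaviour described in the bullet above.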

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a plane detection method and apparatus, and a plane tracking method and apparatus. The plane detection method comprises: dividing a current frame image of an environment scene collected by a dual camera, and detecting feature points corresponding to each image sub-block obtained after the division (S2100); obtaining a sparse point data set of the current frame image according to three-dimensional coordinate information of the feature points in a global coordinate system (S2200), the sparse point data set comprising the three-dimensional coordinate information of multiple feature points; and performing plane detection on the current frame image on the basis of the sparse point data set to obtain an optimal effective plane of the current frame image (S2300), the optimal effective plane being the plane that contains the most feature points. The plane tracking method comprises: obtaining multiple effective planes of the current frame image according to the optimal effective plane of the current frame image, the plane frame number of each effective plane being consistent with the frame number of the corresponding image (S3200); and merging the multiple effective planes of the current frame image with multiple effective planes of a previous frame image to determine a tracking plane across multiple consecutive frame images (S3300).
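As an informal sketch of the tracking step summarized above (S3200-S3300), merging the effective planes of the current frame with those of the previous frame can be pictured as matching planes whose normals and offsets are close, and carrying each matched plane forward with the parameters and frame number of the current frame. The function below is illustrative only; the thresholds, the dictionary layout, and the name merge_effective_planes are assumptions and are not taken from the disclosure.

```python
import numpy as np

def merge_effective_planes(prev_planes, curr_planes,
                           angle_thresh_deg=10.0, dist_thresh=0.05):
    """Merge the effective planes of the current frame into those kept from
    the previous frame, yielding planes tracked over consecutive frames.

    Each plane is a dict {"normal": unit 3-vector, "d": plane offset,
    "frame": frame number of the image the plane was detected in}.
    """
    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    tracked = [dict(p) for p in prev_planes]          # copy previous planes
    for cur in curr_planes:
        matched = False
        for old in tracked:
            same_direction = abs(np.dot(cur["normal"], old["normal"])) > cos_thresh
            same_offset = abs(cur["d"] - old["d"]) < dist_thresh
            if same_direction and same_offset:
                # Same physical plane observed again: refresh its parameters
                # and advance its frame number to the current frame.
                old.update(cur)
                matched = True
                break
        if not matched:
            tracked.append(dict(cur))                 # new plane starts a new track
    return tracked
```

A plane that keeps finding a match over consecutive frames corresponds to what the abstract calls a tracking plane; unmatched current-frame planes simply start new tracks.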
PCT/CN2020/139837 2020-01-07 2020-12-28 Plane detection method and apparatus, and plane tracking method and apparatus WO2021139549A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010014970.2A CN111242908B (zh) 2020-01-07 2020-01-07 Plane detection method and apparatus, and plane tracking method and apparatus
CN202010014970.2 2020-01-07

Publications (1)

Publication Number Publication Date
WO2021139549A1 true WO2021139549A1 (fr) 2021-07-15

Family

ID=70866393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139837 WO2021139549A1 (fr) Plane detection method and apparatus, and plane tracking method and apparatus

Country Status (2)

Country Link
CN (1) CN111242908B (fr)
WO (1) WO2021139549A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610004A (zh) * 2021-08-09 2021-11-05 上海擎朗智能科技有限公司 Image processing method, robot, and medium
CN113762266A (zh) * 2021-09-01 2021-12-07 北京中星天视科技有限公司 Target detection method and apparatus, electronic device, and computer-readable medium
CN115810100A (zh) * 2023-02-06 2023-03-17 阿里巴巴(中国)有限公司 Method, device, storage medium, and program product for determining an object placement plane

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242908B (zh) * 2020-01-07 2023-09-15 青岛小鸟看看科技有限公司 Plane detection method and apparatus, and plane tracking method and apparatus
CN112017300A (zh) * 2020-07-22 2020-12-01 青岛小鸟看看科技有限公司 Mixed reality image processing method, apparatus, and device
CN111967342B (zh) * 2020-07-27 2024-04-12 杭州易现先进科技有限公司 Plane parameter setting method and apparatus, electronic apparatus, and storage medium
EP4318407A1 (fr) * 2021-03-22 2024-02-07 Sony Group Corporation Information processing device, information processing method, and program
CN113052977A (zh) * 2021-03-30 2021-06-29 联想(北京)有限公司 Processing method and apparatus
CN113689466B (zh) * 2021-07-30 2022-07-12 稿定(厦门)科技有限公司 Feature-point-based plane tracking method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8203582B2 (en) * 2009-08-24 2012-06-19 Samsung Electronics Co., Ltd. Subpixel rendering with color coordinates' weights depending on tests performed on pixels
CN103617624B (zh) * 2013-12-13 2016-05-25 哈尔滨工业大学 Real-time global search method based on cooperative targets for high-speed visual measurement
US20150227792A1 (en) * 2014-02-10 2015-08-13 Peter Amon Methods and Devices for Object Detection
CN105399020A (zh) * 2015-12-31 2016-03-16 徐州重型机械有限公司 Three-dimensional space plane tracking control method and system, and aerial work equipment
CN106023256B (zh) * 2016-05-19 2019-01-18 石家庄铁道大学 State observation method for particle filter tracking of planar targets in an augmented reality assisted maintenance system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106659A (zh) * 2013-01-28 2013-05-15 中国科学院上海微系统与信息技术研究所 Open area target detection and tracking method based on binocular vision sparse point matching
WO2018112788A1 (fr) * 2016-12-21 2018-06-28 华为技术有限公司 Image processing method and device
CN110120098A (zh) * 2018-02-05 2019-08-13 浙江商汤科技开发有限公司 Scene scale estimation and augmented reality control method and apparatus, and electronic device
CN108898661A (zh) * 2018-05-31 2018-11-27 深圳先进技术研究院 Three-dimensional image construction method and apparatus, and apparatus with storage function
CN109741240A (zh) * 2018-12-25 2019-05-10 常熟理工学院 Multi-plane image stitching method based on hierarchical clustering
CN111242908A (zh) * 2020-01-07 2020-06-05 青岛小鸟看看科技有限公司 Plane detection method and apparatus, and plane tracking method and apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610004A (zh) * 2021-08-09 2021-11-05 上海擎朗智能科技有限公司 Image processing method, robot, and medium
CN113610004B (zh) * 2021-08-09 2024-04-05 上海擎朗智能科技有限公司 Image processing method, robot, and medium
CN113762266A (zh) * 2021-09-01 2021-12-07 北京中星天视科技有限公司 Target detection method and apparatus, electronic device, and computer-readable medium
CN113762266B (zh) * 2021-09-01 2024-04-26 北京中星天视科技有限公司 Target detection method and apparatus, electronic device, and computer-readable medium
CN115810100A (zh) * 2023-02-06 2023-03-17 阿里巴巴(中国)有限公司 Method, device, storage medium, and program product for determining an object placement plane

Also Published As

Publication number Publication date
CN111242908A (zh) 2020-06-05
CN111242908B (zh) 2023-09-15

Similar Documents

Publication Publication Date Title
WO2021139549A1 (fr) Plane detection method and apparatus, and plane tracking method and apparatus
US11625896B2 (en) Face modeling method and apparatus, electronic device and computer-readable medium
US11263816B2 (en) Method and device for a placement of a virtual object of an augmented or mixed reality application in a real-world 3D environment
JP2018507476A (ja) コンピュータビジョンに関する遮蔽処理
US10950036B2 (en) Method and apparatus for three-dimensional (3D) rendering
US10032082B2 (en) Method and apparatus for detecting abnormal situation
CN105518712B (zh) 基于字符识别的关键词通知方法及设备
US10482648B2 (en) Scene-based foveated rendering of graphics content
WO2017107537A1 (fr) Dispositif de réalité virtuelle et procédé d'évitement d'obstacle
JP2017529620A (ja) 姿勢推定のシステムおよび方法
CN112487979B (zh) 目标检测方法和模型训练方法、装置、电子设备和介质
TWI550551B (zh) 景深網格化
US10748000B2 (en) Method, electronic device, and recording medium for notifying of surrounding situation information
US11763479B2 (en) Automatic measurements based on object classification
US10607069B2 (en) Determining a pointing vector for gestures performed before a depth camera
WO2021042638A1 (fr) Procédé et appareil d'extraction d'une image cible de test d'un verre d'inclinaison xpr de projecteur, et dispositif électronique
JP6240706B2 (ja) グラフマッチングおよびサイクル検出による自動モデル初期化を用いた線トラッキング
JP2018067106A (ja) 画像処理装置、画像処理プログラム、及び画像処理方法
JP7262530B2 (ja) 位置情報の生成方法、関連装置及びコンピュータプログラム製品
CN114998433A (zh) 位姿计算方法、装置、存储介质以及电子设备
CN113129249A (zh) 基于深度视频的空间平面检测方法及其系统和电子设备
WO2017112036A2 (fr) Détection de régions d'ombre dans des données de profondeur d'image provoquées par des capteurs d'images multiples
US20230260211A1 (en) Three-Dimensional Point Cloud Generation Method, Apparatus and Electronic Device
WO2020001016A1 (fr) Procédé et appareil de génération d'image animée et dispositif électronique et support d'informations lisible par ordinateur
JP2017016202A (ja) 画像処理装置、画像処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20911357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.11.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20911357

Country of ref document: EP

Kind code of ref document: A1