WO2023168957A1 - Pose determination method and apparatus, electronic device, storage medium, and program - Google Patents

Pose determination method and apparatus, electronic device, storage medium, and program

Info

Publication number
WO2023168957A1
WO2023168957A1 (PCT/CN2022/129083, CN2022129083W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
point
feature point
target
feature
Prior art date
Application number
PCT/CN2022/129083
Other languages
French (fr)
Chinese (zh)
Inventor
甄佳楠
周晓巍
孙佳明
张思宇
Original Assignee
上海商汤智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023168957A1 publication Critical patent/WO2023168957A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to a pose determination method, apparatus, electronic device, storage medium, and program.
  • one of the important uses of augmented reality technology is to interact with real objects in the real world and render virtual effects based on them.
  • Accurately estimating and tracking the six-dimensional (6D) pose of an object is a prerequisite for interactive rendering and is also a very important research issue in the field of computer vision.
  • the 6D pose of an object is specifically defined as three degrees of freedom of translation plus three degrees of freedom of rotation.
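As a minimal illustration (not part of the patent text), such a 6D pose can be held as a 6-vector; the class and field names below are hypothetical:

```python
from dataclasses import dataclass, astuple

# Hypothetical container: 3 translation DOF + 3 rotation DOF = 6D pose.
@dataclass
class Pose6D:
    tx: float  # translation along x
    ty: float  # translation along y
    tz: float  # translation along z
    rx: float  # rotation about x (e.g. an axis-angle component, radians)
    ry: float  # rotation about y
    rz: float  # rotation about z

    def as_vector(self):
        return list(astuple(self))

pose = Pose6D(0.1, -0.2, 1.5, 0.0, 0.3, 0.0)
```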
  • the pose estimation results of the related art exhibit certain deviations when the background of the object changes; at the same time, it is difficult to distinguish the pose difference between object images collected from two nearby viewpoints.
  • embodiments of the present disclosure propose a pose determination method, apparatus, electronic device, storage medium and program, aiming to improve the accuracy of the determined target pose.
  • embodiments of the present disclosure provide a pose determination method, including: determining at least one first feature point on a target item in an image to be recognized; determining at least one target image from pre-stored images according to the target item in the image to be recognized, wherein the target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point has a corresponding point in the three-dimensional point cloud; determining, according to the at least one first feature point, the second feature points, and the corresponding points of the second feature points in the three-dimensional point cloud, a target point of the at least one first feature point in the three-dimensional point cloud; and determining, based on the target point, a target pose corresponding to the image to be recognized.
  • an embodiment of the present disclosure provides a pose determination apparatus, including: a first determination part configured to determine at least one first feature point on a target item in an image to be recognized; a second determination part configured to determine at least one target image from pre-stored images according to the target item in the image to be recognized, wherein the target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point has a corresponding point in the three-dimensional point cloud; a target point matching part configured to determine, according to the at least one first feature point, the second feature points, and the corresponding points of the second feature points in the three-dimensional point cloud, a target point of the at least one first feature point in the three-dimensional point cloud; and a pose determination part configured to determine, based on the target point, a target pose corresponding to the image to be recognized.
  • an embodiment of the present disclosure provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to call the instructions stored in the memory to perform the above pose determination method.
  • embodiments of the present disclosure provide a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above pose determination method is implemented.
  • an embodiment of the present disclosure provides a computer program. The computer program includes computer-readable code; when the code runs on an electronic device, the processor of the electronic device executes the above pose determination method.
  • in the embodiments of the present disclosure, the two-dimensional second feature point in the pre-stored images corresponding to each two-dimensional first feature point in the image to be recognized is determined; then, based on the three-dimensional feature point corresponding to the second feature point matched with each first feature point, the pose of the image acquisition device at the time the image to be recognized was collected is determined, which can improve the accuracy of the determined target pose.
  • Figure 1A is a schematic diagram of determining the six-dimensional pose of an object using a geometry-based method in the related art;
  • Figure 1B is a schematic diagram of determining the six-dimensional pose of an object using a template-matching direct regression method in the related art;
  • Figure 2 shows a flow chart of a pose determination method according to an embodiment of the present disclosure;
  • Figure 3 shows a schematic diagram of a three-dimensional point cloud according to an embodiment of the present disclosure;
  • Figure 4 shows a schematic diagram of a second feature point matching process according to an embodiment of the present disclosure;
  • Figure 5 shows a schematic diagram of determining a target pose according to an embodiment of the present disclosure;
  • Figure 6A shows a schematic diagram of determining a reference pose according to an embodiment of the present disclosure;
  • Figure 6B shows a schematic diagram of determining the 6D pose of an object in practical application based on the pose determination method provided by an embodiment of the present application;
  • Figure 7 shows a schematic diagram of a pose determination apparatus according to an embodiment of the present disclosure;
  • Figure 8 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure;
  • FIG. 9 shows a schematic diagram of another electronic device according to an embodiment of the present disclosure.
  • herein, "exemplary" means "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred over or superior to other embodiments.
  • herein, "A and/or B" can mean three situations: A exists alone, A and B exist simultaneously, or B exists alone.
  • "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" can mean including any one or more elements selected from the set composed of A, B, and C.
  • according to their mechanism of action, object 6D pose estimation methods can be divided into methods that directly regress the 6D pose of the object based on template matching, and geometry-based methods. Geometry-based methods usually define three-dimensional (3D) feature points in the object's own coordinate system, and annotate the two-dimensional (2D) feature points corresponding to these 3D feature points on images of the object from each viewing angle. These images and annotations are then used to train a 2D feature point detection neural network.
  • during inference, the neural network detects the predefined 2D feature points of the object from the input image, and the 6D pose of the object is solved through the Perspective-n-Point (PnP) algorithm by combining the known matching relationship between the 2D feature points and the 3D feature points. As shown in Figure 1A, which is a schematic diagram of the geometry-based method in the related art for determining the six-dimensional pose of an object: first, 2D feature points of the object are detected on the input image, that is, 101 is executed; secondly, the image of the area near the object is cropped from the input image, that is, 102 is executed; then, the probability distribution of the object's 2D feature points is estimated on the cropped image, that is, 103 is executed; next, the pixel positions of the object's 2D feature points are estimated based on that probability distribution, that is, 104 is executed; finally, the known relationship between the 2D and 3D feature points is used to determine the 6D pose of the object.
  • for the method of directly regressing the 6D pose of the object based on template matching, it is necessary to obtain in advance images of the object at each viewing angle and their corresponding 6D poses, and to either generate a matching template by encoding the images from each viewing angle, or directly encode the information of the object's 6D pose into a neural network through a learning-based method. The 6D pose of the target object is then directly output by matching the input image, directly or indirectly, with the template.
  • as shown in FIG. 1B, which is a schematic diagram of determining the six-dimensional pose of an object based on the template-matching direct regression method in the related art: as in Figure 1A above, first, 2D feature point detection is performed on the input image, that is, 106 is executed; then, the image of the area near the object is cropped from the input image, that is, 107 is executed; finally, the cropped image of the area near the object is fed to a learning-based neural network, which directly outputs the 6D pose of the object, that is, 108 is executed.
  • the above two types of methods have obvious problems. First, they are sensitive to the object background: during template generation or neural network training, they are inevitably interfered with by background information in the image, so a new background may affect the results during actual use. Second, they are insensitive to subtle changes in object pose, so a sufficiently accurate object pose cannot be estimated; whether template matching or a neural network method, both essentially encode and memorize image information. Therefore, if the pose of the object in the input image at inference time has not appeared in the previously generated training data, the algorithm will estimate it as the pose closest to the image in the training data, and will thus be unable to estimate the pose accurately enough.
  • the usual solution of existing methods is to collect a large amount of data under different backgrounds and to cover the viewing angles as densely as possible. This increases the cost of data collection and acquisition; and because the memory capacity of the neural network itself is limited, it cannot essentially solve the problem, thereby limiting the application scenarios of object 6D pose estimation algorithms in augmented reality.
  • in addition, the representation used by existing object 6D pose estimation methods cannot be directly extended to a 6D pose tracking algorithm. They usually can only smooth between detection frames using filtering methods, or additionally use other 6D pose tracking methods (such as contour tracking) to complete 6D pose tracking; that is, a unified representation cannot be used to complete both 6D pose detection and tracking of objects.
  • the pose determination method in the embodiments of the present disclosure can be executed by an electronic device such as a terminal device or a server.
  • the terminal device can be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a fixed or mobile device such as a wearable.
  • the server can be a single server or a server cluster composed of multiple servers. Any electronic device can implement the pose determination method of the embodiments of the present disclosure by calling, through its processor, the computer-readable instructions stored in the memory.
  • the embodiments of the present disclosure are used to accurately determine, after acquiring an image to be recognized, the pose of the image acquisition device at the time the image to be recognized was collected, based on a three-dimensional point cloud constructed from multiple pre-stored images that include the target item.
  • FIG. 2 shows a flow chart of a pose determination method according to an embodiment of the present disclosure.
  • the pose determination method according to the embodiment of the present disclosure may include the following steps S10 to S40:
  • Step S10: Determine at least one first feature point on the target item in the image to be recognized.
  • the pose determination method of the embodiments of the present disclosure is used to determine the target pose corresponding to the two-dimensional image to be recognized, that is, the pose of the image acquisition device at the time the image to be recognized was collected.
  • the image to be recognized is obtained by the image acquisition device capturing the target item; it can be a single image acquired separately, or one frame of an image sequence continuously acquired by a moving image acquisition device.
  • the electronic device determines at least one first feature point on the target item in the image to be recognized.
  • the target item is a static object that does not change its pose during the image collection process; it can be an inanimate still object or an animate object that remains fixed during the image collection process.
  • the process of determining the first feature points on the image to be recognized may be to first obtain the target item in the image through object recognition, and then determine on the target item at least one first feature point used to characterize a local area of the target item.
  • each first feature point may also have a first descriptor used to describe the local features of the first feature point; that is, the first descriptor is used to characterize the local-area features of the target item in the image to be recognized.
  • the first descriptor may be determined by extracting preset features at the location of the first feature point, and determining a feature vector as the first descriptor based on the distribution of the preset features.
  • the preset features can be set according to actual needs and can include at least one of color features and texture features.
  • the first descriptor can be any feature descriptor, for example, it can be a Histogram of Oriented Gradients (HOG) feature descriptor.
  • the determination process of the first descriptor may be to determine the gradient of each pixel in the horizontal and vertical directions within the neighborhood of the first feature point, determine the magnitude and direction of each pixel's gradient based on those gradients, and generate a histogram of gradient magnitudes and directions.
  • the feature vector obtained through histogram normalization can then be used as the first descriptor.
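The gradient-histogram computation described above can be sketched as follows. This is a simplified, illustrative HOG-style descriptor assuming a small grayscale patch given as a list of rows, not the disclosure's exact descriptor:

```python
import math

def gradient_histogram(patch, bins=8):
    """Per-pixel central-difference gradients are binned by orientation,
    weighted by gradient magnitude, then the histogram is L2-normalised.
    patch: square grayscale patch as a list of rows of numbers."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]    # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]    # vertical gradient
            mag = math.hypot(gx, gy)                  # gradient magnitude
            ang = math.atan2(gy, gx) % (2 * math.pi)  # direction in [0, 2*pi)
            h[min(int(ang / (2 * math.pi) * bins), bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in h)) or 1.0
    return [v / norm for v in h]                      # normalised feature vector

# a patch with a vertical edge: all gradient energy falls in the 0-rad bin
desc = gradient_histogram([[0, 0, 10, 10]] * 4)
```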
  • Step S20: Determine at least one target image from the pre-stored images according to the target item in the image to be recognized.
  • here, a three-dimensional point cloud of the target item may also be determined along with the at least one target image.
  • each pre-stored image is an image obtained by pre-collecting the target item through an image acquisition device, and can be stored in a database locally on, or remotely connected to, the electronic device.
  • the at least one target image determined by the electronic device may be all pre-stored images, or part of the pre-stored images.
  • the electronic device can obtain at least one target image from the pre-stored images according to any preset rule. For example, the electronic device can first determine the pose information corresponding to each pre-stored image and the initial pose corresponding to the image to be recognized, and then determine at least one target image among the pre-stored images based on the initial pose of the image to be recognized and the pose information corresponding to each pre-stored image. In this way, pre-stored images whose pose relationship with the target item in the image to be recognized satisfies a preset relationship can be determined; for example, a pre-stored image whose pose similarity with the image to be recognized satisfies a preset similarity can be used as the target image. This provides a basis for subsequently determining the second feature points of the target item in the target image.
  • the initial pose corresponding to the image to be recognized can be determined by any existing method. For example, when the image to be recognized is one frame in a continuously collected image sequence, the previous poses corresponding to the multiple frames before it in the sequence, i.e., the poses of the image acquisition device when collecting those previous frames, can be determined, and extrapolation is then performed based on the multiple previous poses to obtain the initial pose corresponding to the image to be recognized.
  • alternatively, the image to be recognized can also be directly input into a pre-trained pose estimation model, which outputs the corresponding initial pose.
  • the initial posture can be represented by a vector, including three displacement parameters and three rotation parameters of the image acquisition device in the target three-dimensional coordinate system when collecting the image to be recognized.
  • in this way, the resulting initial pose corresponding to the image to be recognized is more accurate, and the initial pose is more coherent with the previous poses.
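One plausible reading of the extrapolation step, sketched under the assumption that poses are 6-element vectors and that motion between consecutive frames is roughly constant (linearly extrapolating the rotation parameters is a simplification of this sketch, not something the disclosure specifies):

```python
def extrapolate_initial_pose(prev_poses):
    """Constant-velocity extrapolation over the last two 6D pose vectors:
    predicted = p2 + (p2 - p1).
    prev_poses: list of 6-element pose vectors, oldest first."""
    p1, p2 = prev_poses[-2], prev_poses[-1]
    return [b + (b - a) for a, b in zip(p1, p2)]

# e.g. a camera translating steadily along x keeps moving along x
init = extrapolate_initial_pose([
    [0.0, 0, 0, 0, 0, 0],
    [0.1, 0, 0, 0, 0, 0],
    [0.2, 0, 0, 0, 0, 0],
])
```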
  • the initial pose corresponding to the image to be recognized and the pose information corresponding to each pre-stored image can both be represented by vectors; therefore, how well the pose of the image to be recognized matches that of a pre-stored image can be determined directly by calculating the vector distance, and the pre-stored images whose vector distance is smaller than a preset distance threshold are filtered out as target images.
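A minimal sketch of this screening, assuming 6-element pose vectors and an illustrative `(image_id, pose)` storage layout (both names are assumptions of the example):

```python
import math

def select_target_images(init_pose, prestored, dist_threshold):
    """Keep pre-stored images whose pose vector lies within
    dist_threshold (Euclidean distance) of the initial pose.
    prestored: list of (image_id, pose_vector) pairs."""
    targets = []
    for image_id, pose in prestored:
        if math.dist(init_pose, pose) < dist_threshold:
            targets.append(image_id)
    return targets

sel = select_target_images(
    [0, 0, 0, 0, 0, 0],
    [("near_view", [0.1, 0, 0, 0, 0, 0]),
     ("far_view",  [5.0, 0, 0, 0, 0, 0])],
    dist_threshold=1.0,
)
```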
  • the target image has second feature points and a corresponding three-dimensional point cloud.
  • each second feature point of the target image has a corresponding point in the three-dimensional point cloud.
  • the three-dimensional point cloud of the target item can be predetermined based on at least one pre-stored image.
  • multiple pre-stored images including the target item are obtained through screening in the database, and then the three-dimensional point cloud of the target item is determined based on the multiple pre-stored images.
  • the three-dimensional point cloud of the target item can be generated by the electronic device based on at least one pre-stored image, or by another device such as a server based on at least one pre-stored image.
  • the electronic device may generate the three-dimensional point cloud before executing the pose determination method of the embodiments of the present disclosure, or generate the three-dimensional point cloud of the target item during execution of the pose determination method.
  • the process of determining the three-dimensional point cloud of the target item based on multiple pre-stored images may be as follows: first, determine at least one pre-stored image and obtain at least one second feature point of each pre-stored image; secondly, match the second feature points of the pre-stored images to obtain multiple second feature point groups, wherein the at least one second feature point in each second feature point group is used to characterize the same position on the target item; finally, determine the three-dimensional point cloud corresponding to the pre-stored images based on the multiple second feature point groups corresponding to the target item, wherein each point in the three-dimensional point cloud corresponds to at least one second feature point group.
  • multiple pre-stored images can be screened by pose to obtain images similar to the pose of the object in the image to be recognized as target images; in this way, the three-dimensional feature point corresponding to each two-dimensional feature point in the image to be recognized can be determined based on at least one pre-stored image through local feature matching. Because each local feature point in an image only contains information about its nearby local area, the process of determining the three-dimensional feature point corresponding to each two-dimensional feature point is, compared with existing methods, relatively less affected by the background and the image viewing angle, which makes the determined three-dimensional feature points more accurate.
  • Each pre-stored image may include a target item, and at least one second feature point is located on the target item in the corresponding pre-stored image, and the second feature point is used to characterize a local area of the target item.
  • each second feature point may also have a corresponding second descriptor for describing local features, that is, the second descriptor is used to characterize the local area features of the target item in the pre-stored image.
  • the second feature point matching process can be implemented based on the second descriptors; that is, the electronic device matches the second feature points of the pre-stored images according to the second descriptor of each second feature point of each pre-stored image, to obtain multiple second feature point groups.
  • at least one second feature point in each second feature point group is used to represent the same position on the target item. Then, a three-dimensional point cloud is determined based on the multiple second feature point groups corresponding to the target item; each point in the three-dimensional point cloud corresponds to the second feature points in one second feature point group.
  • the determination process of the multiple second feature point groups may be to randomly select one of the multiple pre-stored images as the target pre-stored image, match each target second feature point in the target pre-stored image against the other pre-stored images for local features based on the second descriptors, and combine the multiple matched second feature points together with the target feature point to form a second feature point group.
  • then, a pre-stored image containing an unmatched second feature point is re-determined as the target pre-stored image, and its unmatched target second feature points are matched by second descriptor against the other unmatched second feature points to determine further second feature point groups, until all second feature points have been matched or the matching process has been completed for every second feature point.
  • during matching, the distance between the second descriptors of two second feature points can be calculated to obtain the matching degree of the local features; the closer the distance, the higher the degree of matching.
  • the second descriptor is a vector, and the distance between two second descriptors can be obtained by directly taking the inner product of the vectors.
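A sketch of inner-product matching: for unit-length descriptor vectors, a larger inner product means a closer match, so the best candidate maximises it. The `(point_id, descriptor)` layout is an assumption of this example:

```python
def inner(u, v):
    """Inner (dot) product of two descriptor vectors."""
    return sum(a * b for a, b in zip(u, v))

def best_match(query_desc, candidates):
    """Return the id of the candidate whose descriptor has the largest
    inner product with the query descriptor (highest similarity).
    candidates: list of (point_id, descriptor) pairs."""
    return max(candidates, key=lambda c: inner(query_desc, c[1]))[0]

# an identical unit descriptor scores 1.0; an orthogonal one scores 0.0
m = best_match([1.0, 0.0], [("p1", [0.0, 1.0]), ("p2", [1.0, 0.0])])
```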
  • the second feature points can also be matched using two parameters: local features and spatial location.
  • the multiple second feature point groups can be solved through a structure-from-motion algorithm to obtain the three-dimensional point cloud of the target item. That is, feature point tracking trajectories are determined based on the acquisition times of the images in which the second feature points of each second feature point group are located, and the three-dimensional points of the target item are determined from the multiple feature point tracking trajectories to form the three-dimensional point cloud. Each three-dimensional point in the three-dimensional point cloud therefore corresponds to all the second feature points in one second feature point group; that is, during construction of the three-dimensional point cloud, the correspondence between each three-dimensional point and at least one two-dimensional second feature point is obtained. In this way, based on the structure-from-motion algorithm, image acquisition device tracking and motion matching can be completed more flexibly, making the obtained three-dimensional point cloud of the target item more accurate.
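The grouping of matched second feature points into groups (tracks) can be sketched with a small union-find over globally indexed feature points; this is an illustrative reconstruction of the track-forming step, not the disclosure's algorithm, and the flat indexing scheme is an assumption:

```python
def group_feature_tracks(n_points, pairwise_matches):
    """Merge feature points matched across images into groups (tracks).
    n_points: total number of 2D feature points, indexed 0..n_points-1.
    pairwise_matches: list of (i, j) matched index pairs."""
    parent = list(range(n_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in pairwise_matches:
        parent[find(i)] = find(j)           # union the two sets

    groups = {}
    for i in range(n_points):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# points 0, 3, 4 depict the same physical spot; 1 and 2 are unmatched
tracks = group_feature_tracks(5, [(0, 3), (3, 4)])
```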
  • Figure 3 shows a schematic diagram of a three-dimensional point cloud according to an embodiment of the present disclosure.
  • the three-dimensional point cloud 20 includes a plurality of three-dimensional points in the three-dimensional coordinate system, and each three-dimensional point has at least one corresponding two-dimensional second feature point.
  • FIG. 4 shows a schematic diagram of a second feature point matching process according to an embodiment of the present disclosure.
  • the first pre-stored image 30 and the second pre-stored image 31 both include a target item 32 .
  • the target item 32 in the first pre-stored image 30 and the second pre-stored image 31 each has a plurality of second feature points.
  • the second feature points representing the same position of the target item 32 can be matched together based on the local features.
  • each pre-stored image also has corresponding pose information, which is used to characterize the pose of the image acquisition device when collecting the pre-stored image.
  • the pose information can be represented by a vector, including three displacement parameters and three rotation parameters of the image acquisition device in the target three-dimensional coordinate system when collecting the pre-stored image.
  • the target three-dimensional coordinate system can be preset or selected.
  • the pose information may be determined when the pre-stored image is acquired and stored together with the pre-stored image.
  • the pose information can also be determined after the three-dimensional point cloud is constructed, based on the three-dimensional points corresponding to the multiple second feature points included in each pre-stored image. That is to say, for each pre-stored image, the Perspective-n-Point algorithm can be executed based on the correspondence between each second feature point included in the pre-stored image and the three-dimensional points in the three-dimensional point cloud, to obtain the pose information corresponding to the pre-stored image, i.e., the pose of the image acquisition device when collecting each pre-stored image.
  • Step S30: Determine the target point of the at least one first feature point in the three-dimensional point cloud based on the at least one first feature point, the second feature points, and the corresponding points of the second feature points in the three-dimensional point cloud.
  • the electronic device can determine, based on the multiple first feature points, the second feature points, and the corresponding points of the second feature points in the three-dimensional point cloud, the target point of at least one first feature point in the three-dimensional point cloud. That is to say, the 2D-3D matching between the first feature points and the target points is achieved indirectly through the 2D-2D matching between the first feature points and the second feature points.
  • the electronic device can perform feature point matching on the image to be recognized and the target image to obtain the second feature point matching each first feature point. Then, the point in the three-dimensional point cloud corresponding to the second feature point matched with each first feature point is determined as the target point.
  • the first feature point and the second feature point can be matched using descriptors.
• the electronic device may perform feature point matching based on the first descriptor of each first feature point in the image to be recognized and the second descriptor of each second feature point in the at least one target image, to determine the second feature point matched by each of the plurality of first feature points; based on the correspondence between the second feature points and the three-dimensional points in the point cloud, the point corresponding to each first feature point in the three-dimensional point cloud is then indirectly determined as the target point.
  • reference may be made to the process of matching the two second feature points in step S20.
• the second feature point closest to each first feature point can be determined by calculating the distance between the first descriptor and the second descriptor of each second feature point on each candidate image, and the point corresponding, in the three-dimensional point cloud, to the second feature point matched by each first feature point is determined as the target point.
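• As an illustrative sketch only (not the claimed implementation), nearest-neighbour descriptor matching with a ratio test can be expressed as follows; the descriptor values and the 0.8 threshold are hypothetical:

```python
import math

def match_descriptors(first_descs, second_descs, ratio=0.8):
    """Match each first descriptor to its nearest second descriptor.

    A match is kept only if the nearest distance is clearly smaller than
    the second-nearest (a ratio test). Returns (first_index, second_index)
    pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for i, d1 in enumerate(first_descs):
        ranked = sorted(range(len(second_descs)),
                        key=lambda j: dist(d1, second_descs[j]))
        best, second_best = ranked[0], ranked[1]
        if dist(d1, second_descs[best]) < ratio * dist(d1, second_descs[second_best]):
            matches.append((i, best))
    return matches

# Hypothetical 3-element descriptors for illustration only.
first = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
second = [(0.0, 0.9, 0.1), (0.9, 0.1, 0.0), (0.5, 0.5, 0.5)]
print(match_descriptors(first, second))  # [(0, 1), (1, 0)]
```

In practice the descriptors would come from a local feature extractor, and matching on real images typically relies on a library implementation rather than this brute-force search.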
• Step S40: Determine the target posture corresponding to the image to be recognized according to the target point.
• each target point represents the position in three-dimensional space of the location on the target item that is represented by the corresponding first feature point in the image to be recognized.
• the target posture corresponding to the image to be recognized can be determined based on the correspondence between the at least one first feature point and each target point. For example, an N-point perspective algorithm can be performed based on the correspondence between the at least one first feature point and the target points in the three-dimensional point cloud, to obtain the target posture corresponding to the image to be recognized. That is, based on the correspondence between the multiple target points and the first feature points, a system of equations can be constructed and solved to obtain the target posture.
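• A full N-point perspective (PnP) solver is usually taken from a library, but the 2D-3D constraint it solves can be sketched with a pinhole projection model: under the correct target posture, each target point reprojects onto its matched first feature point. The intrinsics, pose and points below are all hypothetical:

```python
def project(point_3d, rotation, translation, fx, fy, cx, cy):
    """Project a 3-D target point into the image with a pinhole model.

    `rotation` is a 3x3 matrix (list of rows) and `translation` a
    3-vector; the pose maps world coordinates into the camera frame.
    """
    x, y, z = (sum(rotation[r][c] * point_3d[c] for c in range(3)) + translation[r]
               for r in range(3))
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(points_3d, points_2d, rotation, translation, fx, fy, cx, cy):
    """Mean distance between the first feature points and the
    reprojections of their target points; a PnP solver minimises this."""
    total = 0.0
    for p3, p2 in zip(points_3d, points_2d):
        u, v = project(p3, rotation, translation, fx, fy, cx, cy)
        total += ((u - p2[0]) ** 2 + (v - p2[1]) ** 2) ** 0.5
    return total / len(points_3d)

# Identity pose: the camera frame coincides with the world frame.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
pts3 = [(0.0, 0.0, 2.0), (0.5, -0.5, 4.0)]
# 2-D observations generated with assumed intrinsics fx=fy=500, cx=cy=320.
pts2 = [project(p, R, t, 500, 500, 320, 320) for p in pts3]
print(reprojection_error(pts3, pts2, R, t, 500, 500, 320, 320))  # 0.0
```

With real data the observations are noisy, so the error is minimised rather than driven to zero, typically with a RANSAC-wrapped library solver.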
  • Figure 5 shows a schematic diagram of determining a target posture according to an embodiment of the present disclosure.
• posture matching is first performed based on the image 41 to be recognized and the pre-stored images 40, and at least one target image 42 is obtained from the multiple pre-stored images 40; then local feature matching is performed between the first feature points in the image 41 to be recognized and the second feature points in each target image 42, to obtain at least one second feature point matching each first feature point as a matching feature point 43.
• the point matched by each matching feature point 43 in the three-dimensional point cloud 44 is determined as a target point 45, and an N-point perspective algorithm is then performed based on the target points 45 matched by the first feature points in the image 41 to be recognized, to obtain the target pose 46 corresponding to the image to be recognized.
• when the image to be recognized is a frame in a continuously collected image sequence, it is necessary to determine the posture of the image acquisition device at the time each frame in the image sequence was collected; the posture of the image acquisition device when the current frame was collected can be determined based on its posture when the previous frame was collected. For example, when acquiring a continuously collected image sequence, image frame 1 can be extracted from the image sequence as the image to be recognized, and frames 2 to N (N being an integer greater than 2) in the image sequence can be determined as reference images.
• after determining the correspondence between each first feature point in the image to be recognized and the points in the three-dimensional point cloud, the reference points corresponding, in the three-dimensional point cloud, to the feature points in each reference image can be determined based on the target point corresponding to each first feature point in the three-dimensional point cloud, and the reference pose of each reference image frame can be determined based on the correspondence between the third feature points and the reference points.
• since the reference poses determined in these successive transfers are all derived from one accurately determined target pose, the accuracy becomes lower and lower as the reference image gets farther away from the image to be recognized in the image sequence.
  • the N+1th frame is extracted as the next image to be recognized.
• the target posture of the current image to be recognized is determined through the posture determination method of the embodiments of the present disclosure, so as to continue to determine the reference postures of frames N+2 to 2N through the transfer method.
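• The chunked scheme described above — re-localize every N-th frame, then transfer the pose to the frames in between — can be sketched as follows; `localize` and `transfer` stand in for the PnP-based pose determination and the tracking-based transfer, and are hypothetical placeholders:

```python
def process_sequence(frames, n, localize, transfer):
    """Every n-th frame (frame 1, n+1, 2n+1, ... in 1-based numbering)
    is treated as an image to be recognized and localized against the
    point cloud; each frame in between gets its pose by transfer from
    the most recently determined pose.
    """
    poses = []
    latest_pose = None
    for i, frame in enumerate(frames):
        if i % n == 0:
            latest_pose = localize(frame)               # full pose determination
        else:
            latest_pose = transfer(latest_pose, frame)  # tracked pose
        poses.append(latest_pose)
    return poses

# Toy stand-ins: a "pose" is just a label recording how it was obtained.
localize = lambda f: ("pnp", f)
transfer = lambda pose, f: ("tracked", f)
print(process_sequence(["f1", "f2", "f3", "f4", "f5"], 3, localize, transfer))
# [('pnp', 'f1'), ('tracked', 'f2'), ('tracked', 'f3'), ('pnp', 'f4'), ('tracked', 'f5')]
```

The periodic re-localization bounds the drift that would otherwise accumulate as poses are transferred farther from the image to be recognized.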
• the process of determining the reference pose of the adjacent next-frame reference image based on the target pose of the image to be recognized may include the following steps: first, the next frame image of the image to be recognized in the image sequence is determined as the reference image, where the reference image includes at least one third feature point on the target item; then, according to the target point corresponding to each first feature point on the image to be recognized, the target point corresponding to each third feature point on the reference image is determined; finally, the reference posture corresponding to the reference image is determined based on the correspondence between each third feature point and its target point.
• the correspondence between each third feature point on the reference image and the target points can be determined based on a sparse optical flow algorithm; that is, according to the sparse optical flow algorithm, each first feature point on the image to be identified is tracked to obtain the third feature point matched by each first feature point on the reference image.
• the third feature point in the reference image that matches each first feature point is then determined to correspond to the same target point in the three-dimensional point cloud as that first feature point.
• an N-point perspective algorithm is executed based on each third feature point and the corresponding target point to obtain the reference pose corresponding to the reference image.
  • the reference pose of the next frame image in the image sequence can be determined sequentially based on the correspondence between the third feature point and the target point in the current reference image.
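• The transfer of 2D-3D correspondences from the image to be recognized to a reference image can be sketched as follows; the tracking result is assumed to be given (for example by a sparse optical-flow tracker), and all coordinates are hypothetical:

```python
def transfer_correspondences(first_to_target, tracked):
    """Given the 2D-3D correspondences of the image to be recognized
    (first feature point -> target point) and the tracking result
    (first feature point -> third feature point in the reference image),
    associate each third feature point with the same target point as
    the first feature point it was tracked from. Points lost by the
    tracker are dropped.
    """
    return {tracked[fp]: target
            for fp, target in first_to_target.items()
            if fp in tracked}

# Hypothetical pixel coordinates and 3-D target points.
first_to_target = {(100, 120): (0.1, 0.2, 1.5),
                   (200, 240): (0.4, -0.1, 2.0)}
tracked = {(100, 120): (103, 118)}   # the second point was lost between frames
print(transfer_correspondences(first_to_target, tracked))
# {(103, 118): (0.1, 0.2, 1.5)}
```

The resulting third-feature-point-to-target-point pairs are exactly the 2D-3D input an N-point perspective solver needs to produce the reference pose.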
  • FIG. 6A shows a schematic diagram of determining a reference posture according to an embodiment of the present disclosure.
• for the image 51 to be identified determined in the image sequence 50, the second feature point 53 matched by each first feature point in the image 51 can be determined, based on local feature matching, in the target image 52 filtered from the pre-stored images; the target point 55 corresponding to each first feature point in the three-dimensional point cloud is then obtained, and this correspondence determines the target pose 56 of the image to be recognized.
• further, the reference pose of the reference image 57 may be determined based on the image 51 to be recognized, the target point 55 and the reference image 57 using a tracking algorithm. In some embodiments of the present disclosure, the correspondence between each first feature point in the image 51 to be recognized and each third feature point in the reference image 57 can be determined according to a sparse optical flow algorithm, and the correspondence between each third feature point and the target point 55 can be determined based on the correspondence between each first feature point and the target point 55.
  • the reference pose 58 of the reference image 57 is determined according to the corresponding relationship between each third feature point and the target point 55 .
• Figure 6B shows a schematic diagram of determining the 6D posture of an object in an actual application process based on the posture determination method provided by the embodiments of the present application. As shown in Figure 6B, for the posture determination method provided by the embodiments of the present application, that is, a method based on local feature point matching, the prerequisite for estimating the object pose is to construct an object point cloud map and associate the local feature points on the data images with the objects in the three-dimensional world. As shown in Figure 6B, for a frame of input image 59, first, through initial pose estimation, data images 510 with similar viewing angles are found among the multiple pre-stored images; then, 2D feature point extraction and matching are performed between the data image 510 and the input image 59 to obtain, for each 2D feature point in the input image 59, the matched 2D feature point in the data image 510, thereby determining the 2D-2D matching relationship between the input image 59 and the data image 510, as shown at 511 in Figure 6B; at the same time, by performing a structure-from-motion algorithm on the data image
• the embodiments of the present disclosure can determine, through local feature matching, the correspondence between each two-dimensional feature point in the image to be identified and each two-dimensional feature point in the target image, and generate a three-dimensional point cloud based on multiple target images, thereby obtaining the correspondence between each two-dimensional feature point in the image to be identified and the three-dimensional feature points in the three-dimensional point cloud, so as to accurately determine the posture of the image acquisition device when the image to be recognized was collected, which can improve the accuracy of the determined target posture.
  • embodiments of the present disclosure can also perform preliminary screening of target images by estimating the preliminary posture of the image to be recognized, which can reduce the calculation amount of the two-dimensional feature point matching process, thereby improving the efficiency of the posture determination process.
  • the posture determination method provided by the embodiment of the present disclosure can determine the target posture corresponding to the image to be recognized based on at least one first feature point on the target item in the image to be recognized.
• the at least one first feature point is usually located only on the target item rather than on other background content; therefore, compared with existing methods, the method is relatively less affected by the background and the image perspective, so that in actual use only a small amount of training data is needed to generalize to different scenes.
• there are many local feature points that can be extracted, which results in higher fault tolerance and redundancy when solving the 6D posture corresponding to the image to be recognized.
  • the same set of 2D-3D matching representations is used, which is more concise, elegant and efficient at the algorithm framework level.
  • the two modules can support and help each other.
• the three-dimensional feature points corresponding to each two-dimensional feature point on the image to be recognized can be obtained indirectly through two-dimensional feature point matching, so as to determine the target attitude.
• the sparse optical flow algorithm is used to track the two-dimensional feature points and determine the correspondence between the two-dimensional feature points of the subsequent multi-frame images and the three-dimensional feature points determined for the image to be identified. In this way, the reference attitudes of the multi-frame images collected after the image to be identified can be determined quickly and accurately. At the same time, this method can also accurately perceive posture changes when the image collection position changes slightly.
  • embodiments of the present disclosure also provide posture determination devices, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any posture determination method provided by embodiments of the present disclosure.
• for the corresponding technical solutions and descriptions, reference may be made to the corresponding records in the method part.
• FIG. 7 shows a schematic diagram of a posture determination device according to an embodiment of the present disclosure.
  • the attitude determination device according to the embodiment of the present disclosure includes:
  • the first determining part 60 is configured to determine at least one first feature point on the target item in the image to be recognized
  • the second determination part 61 is configured to determine at least one target image from a pre-stored image according to the target item in the image to be recognized, wherein the target image has a second feature point and a corresponding three-dimensional point cloud, and the target The second feature point of the image has a corresponding point in the three-dimensional point cloud;
• the target point matching part 62 is configured to determine the target point of the at least one first feature point in the three-dimensional point cloud according to the at least one first feature point, the second feature point, and the corresponding point of the second feature point in the three-dimensional point cloud;
  • the posture determination part 63 is configured to determine the target posture corresponding to the image to be recognized according to the target point.
  • the device further includes: a pre-stored image determining part configured to determine at least one of the pre-stored images and obtain at least one second feature point of the pre-stored image; the first feature point matching The part is configured to match the second feature points of each of the pre-stored images to obtain a plurality of second feature point groups, wherein at least one second feature point in each of the second feature point groups Used to characterize the same position on the target item; the three-dimensional point matching part is configured to determine the three-dimensional point cloud corresponding to the pre-stored image according to the plurality of second feature point groups, wherein the three-dimensional point cloud Each point in the cloud corresponds to at least one second feature point in one of the second feature point groups.
• the three-dimensional point matching part includes: a point cloud generation sub-part configured to obtain the three-dimensional point cloud of the target item by solving a plurality of the second feature point groups through a structure-from-motion algorithm.
• the second determination part 61 includes: a first posture determination sub-part configured to determine posture information corresponding to each of the pre-stored images; a second posture determination sub-part configured to determine the initial posture corresponding to the image to be recognized; and a posture screening sub-part configured to determine the at least one target image from the pre-stored images according to the initial posture and the posture information.
• the first posture determination sub-part is further configured to, for each of the pre-stored images, execute an N-point perspective algorithm based on the corresponding points, in the three-dimensional point cloud, of the second feature points included in the pre-stored image, to obtain the posture information.
  • the image to be recognized is a frame in a continuously collected image sequence
• the second posture determination sub-part includes: a second posture determination part configured to determine, in the image sequence, a plurality of previous postures corresponding to the frames preceding the image to be recognized; and a third posture determination part configured to perform an extrapolation method based on the plurality of previous postures to obtain the initial posture.
  • the target point matching part 62 includes: a first feature point matching sub-part configured to perform feature point matching between the image to be recognized and the target image to obtain each a second feature point matched by the first feature point; a first target point matching sub-part configured to match the corresponding point of each second feature point matched by the first feature point in the three-dimensional point cloud , determined as the target point.
  • the posture determination part 63 includes: a third posture determination sub-part configured to perform an N-point perspective algorithm according to the target point to obtain the target posture corresponding to the image to be recognized.
  • the image to be recognized is a frame in a continuously collected image sequence
• the device further includes: a reference image determining part configured to determine the next frame image of the image to be recognized in the image sequence as a reference image, wherein the reference image includes at least one third feature point on the target item;
• the second feature point matching part is configured to determine the target point corresponding to each third feature point in the reference image based on the target point corresponding to each first feature point in the image to be identified;
• the reference posture determination part is configured to determine the reference posture corresponding to the reference image based on the correspondence between each third feature point and the corresponding target point.
  • the second feature point matching part includes: a second feature point matching sub-part configured to track each of the first features in the image to be identified according to a sparse optical flow algorithm points to obtain third feature points matching each of the first feature points in the reference image; a second target point matching sub-part configured to determine that each of the first feature points matches in the reference image The matched third feature point corresponds to the target point corresponding to the first feature point in the three-dimensional point cloud.
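• The extrapolation performed by the third posture determination part above can be illustrated, under a constant-velocity assumption, by extrapolating the camera position from the two previous postures; this is a hypothetical sketch, not the claimed implementation (a full pose would also extrapolate rotation):

```python
def extrapolate_translation(prev, curr):
    """Constant-velocity extrapolation of a camera position: predict
    the next position by extending the step from `prev` to `curr`.
    Only the translation component of the posture is shown.
    """
    return tuple(2 * c - p for p, c in zip(prev, curr))

# Positions of the image acquisition device at the two previous frames.
print(extrapolate_translation((0.0, 0.0, 1.0), (0.1, 0.0, 1.2)))
# approximately (0.2, 0.0, 1.4)
```

The extrapolated position serves only as the initial posture for screening target images, so a coarse prediction of this kind is sufficient; the accurate target posture is still obtained from the subsequent feature matching and N-point perspective step.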
  • the functions or included parts of the device provided by the embodiments of the present disclosure can be used to perform the methods described in the above method embodiments.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above method is implemented.
  • the computer-readable storage medium may be a volatile or non-volatile computer-readable storage medium.
• An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to call the instructions stored in the memory to execute the attitude determination method provided by the above embodiments.
• Embodiments of the present disclosure also provide a computer program, the computer program including computer readable code; when the computer readable code is run in an electronic device, the processor of the electronic device executes the attitude determination method provided in the above embodiments.
  • Embodiments of the present disclosure also provide a computer program product, including computer readable code, or a non-volatile computer readable storage medium carrying the computer readable code.
• when the computer readable code runs in a processor of an electronic device, the processor in the electronic device executes the posture determination method provided by the above embodiments.
  • the electronic device may be provided as a terminal, a server, or other forms of equipment.
  • FIG. 8 shows a schematic diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a PDA and other terminals.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (Input/Output, I/O) interface 812, Sensor component 814, and communication component 816.
  • Processing component 802 generally controls the overall operations of electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operations at electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, etc.
• Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
  • Power supply component 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (Liquid Crystal Display, LCD) and a touch panel (TouchPanel, TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 808 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have a focal length and optical zoom capabilities.
  • Audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (Microphone, MIC).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal may be stored in memory 804 or sent via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 814 includes one or more sensors for providing various aspects of status assessment for electronic device 800 .
• the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby items without any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge-coupled Device (CCD) image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
• the electronic device 800 can access a wireless network based on a communication standard, such as wireless fidelity (WiFi), the second generation mobile communication technology (2G) or the third generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
• the NFC module can be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
• the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, used to perform the above method.
  • a non-volatile computer-readable storage medium is also provided, such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
  • the present disclosure relates to the field of augmented reality.
• in augmented reality, by acquiring image information of target items in the real environment and then using various vision-related algorithms to detect or identify the relevant features, states and attributes of the target items, information matching specific applications can be obtained, thereby presenting an AR effect that combines virtuality and reality.
  • the target items may involve faces, limbs, gestures, actions, etc. related to the human body, or objects, markers, or sandboxes, display areas, or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, three-dimensional reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • Convolutional neural networks can be used to detect or identify the relevant features, status and attributes of target items.
  • the above-mentioned convolutional neural network is a network model obtained through model training based on a deep learning framework.
  • FIG. 9 shows a schematic diagram of another electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes a processing component 1922 , including one or more processors, and memory resources represented by memory 1932 for storing instructions, such as application programs, executable by processing component 1922 .
  • the application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
• the processing component 1922 is configured to execute instructions to perform the above-described posture determination method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as a Microsoft server operating system (Windows Server TM ), a graphical user interface operating system (Mac OS X TM ) launched by Apple, a multi-user multi-process computer operating system (Unix TM ), a free and open source Unix-like operating system (Linux TM ), an open source Unix-like operating system (FreeBSD TM ) or similar.
  • a non-volatile computer-readable storage medium is also provided, such as a memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
  • the present disclosure may be a system, method, and/or computer program product.
  • a computer program product may include a computer-readable storage medium having thereon computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
  • Computer-readable storage media may be tangible devices that can retain and store instructions for use by an instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above.
• computer-readable storage media include: a portable computer disk, a hard disk, random access memory (RAM), ROM, EPROM or flash memory, SRAM, portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or an in-groove raised structure with instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to various computing/processing devices, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage on a computer-readable storage medium in the respective computing/processing device.
  • Computer program instructions for performing operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it can be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • an electronic circuit, such as a programmable logic circuit, an FPGA, or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, thereby producing a machine such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus that implements the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause the computer, programmable data processing apparatus, and/or other equipment to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operating steps to be performed on the computer, other programmable data processing apparatus, or other equipment so as to produce a computer-implemented process, so that the instructions executed on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, program segment, or portion of instructions that contains one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two consecutive blocks may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
  • the computer program product can be implemented specifically through hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium.
  • the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • products applying the disclosed technical solution will clearly inform users of the personal information processing rules and obtain the individual's independent consent before processing personal information.
  • a product applying the disclosed technical solution must obtain the individual's separate consent before processing sensitive personal information, and at the same time meet the requirement of "express consent"; for example, by setting up clear and conspicuous signs on personal information collection devices such as cameras to inform individuals that they have entered the scope of personal information collection and that their personal information will be collected.
  • the personal information processing rules may include information such as the personal information processor, the purpose of processing, the method of processing, and the types of personal information processed.
  • Embodiments of the present disclosure relate to a pose determination method, apparatus, electronic device, storage medium and program, which determine at least one first feature point on a target object in an image to be recognized, and determine at least one target image from pre-stored images according to the target object in the image to be recognized.
  • the target image has a second feature point and a corresponding three-dimensional point cloud
  • the second feature point has a corresponding point in the three-dimensional point cloud
  • a target point of the at least one first feature point in the three-dimensional point cloud is determined, and the target pose corresponding to the image to be recognized is determined based on the target point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure relate to a pose determination method and apparatus, an electronic device, a storage medium, and a program. The method comprises: determining at least one first feature point in an image to be recognized on a target object; determining at least one target image from among pre-stored images according to the target object in the image, wherein the target image has a second feature point and a corresponding three-dimensional point cloud, and the second feature point has a corresponding point in the three-dimensional point cloud; determining a target point of the at least one first feature point in the three-dimensional point cloud according to the at least one first feature point, the second feature point, and the point corresponding to the second feature point in the three-dimensional point cloud; and according to the target point, determining a target pose corresponding to the image. In the embodiments of the present disclosure, a three-dimensional feature point corresponding to each two-dimensional feature point in an image to be recognized can be determined by means of local feature matching, so that the pose of the image collection apparatus at the time the image was collected can be determined according to the three-dimensional feature points, improving the accuracy of the determined target pose.

Description

Pose determination method and apparatus, electronic device, storage medium, and program
Cross-reference to related applications
The present disclosure claims priority to Chinese patent application No. 202210239443.0, filed on March 11, 2022 by Zhejiang Shangtang Technology Development Co., Ltd. and entitled "Pose Determination Method and Apparatus, Electronic Device and Storage Medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to the field of computer technology, and in particular to a pose determination method, apparatus, electronic device, storage medium and program.
Background
One of the important uses of augmented reality technology is to interact with real objects in the real world and render virtual effects on top of them. Accurately estimating and tracking the six-dimensional (6D) pose of an object is a prerequisite for such interactive rendering, and is also a very important research problem in the field of computer vision. The 6D pose of an object is defined as three translational degrees of freedom plus three rotational degrees of freedom. The pose estimation results of related technologies exhibit certain deviations when, for example, the background of the object changes. It is also difficult to distinguish the difference in an object's pose between images of the object captured from two nearby viewpoints.
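The six degrees of freedom mentioned above (three translational plus three rotational) are commonly represented together as a single rigid transform. The following is only an illustrative sketch, not part of the disclosure; the Euler-angle convention and function name are assumptions:

```python
import numpy as np

def pose_6d_to_matrix(rx, ry, rz, tx, ty, tz):
    """Assemble a 4x4 rigid transform from three rotation angles (radians,
    applied as Rz @ Ry @ Rx) and three translation components."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotational degrees of freedom
    T[:3, 3] = [tx, ty, tz]    # translational degrees of freedom
    return T
```

Estimating a pose of this form from an image is exactly the 6D pose estimation problem discussed below.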
Summary
Embodiments of the present disclosure provide a pose determination method, apparatus, electronic device, storage medium and program, aiming to improve the accuracy of the determined target pose.
Embodiments of the present disclosure provide a pose determination method, including: determining at least one first feature point on a target item in an image to be recognized; determining at least one target image from pre-stored images according to the target item in the image to be recognized, wherein the target image has a second feature point and a corresponding three-dimensional point cloud, and the second feature point has a corresponding point in the three-dimensional point cloud; determining a target point of the at least one first feature point in the three-dimensional point cloud according to the at least one first feature point, the second feature point, and the point corresponding to the second feature point in the three-dimensional point cloud; and determining a target pose corresponding to the image to be recognized according to the target point.
Embodiments of the present disclosure provide a pose determination apparatus, including: a first determination part configured to determine at least one first feature point on a target item in an image to be recognized; a second determination part configured to determine at least one target image from pre-stored images according to the target item in the image to be recognized, wherein the target image has a second feature point and a corresponding three-dimensional point cloud, and the second feature point has a corresponding point in the three-dimensional point cloud; a target point matching part configured to determine a target point of the at least one first feature point in the three-dimensional point cloud according to the at least one first feature point, the second feature point, and the point corresponding to the second feature point in the three-dimensional point cloud; and a pose determination part configured to determine a target pose corresponding to the image to be recognized according to the target point.
Embodiments of the present disclosure provide an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to perform the above pose determination method.
Embodiments of the present disclosure provide a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above pose determination method is implemented.
Embodiments of the present disclosure provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor of the electronic device executes the code to implement the above pose determination method.
In the embodiments of the present disclosure, the two-dimensional second feature point corresponding to each two-dimensional first feature point of the image to be recognized is first determined in the pre-stored images by means of local feature matching; then, the pose of the image acquisition device when the image to be recognized was captured is determined according to the three-dimensional feature point corresponding to the second feature point matched with each first feature point, which can improve the accuracy of the determined target pose.
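The local feature matching described above can be sketched, purely for illustration, as nearest-neighbor matching between descriptor sets followed by a Lowe-style ratio test, after which each matched 2D point can be looked up in the point cloud. The function name, toy descriptors, and ratio threshold are assumptions, not part of the disclosure:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """For each first-image descriptor, find the nearest second-image
    descriptor; keep the match only if it passes the ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# toy descriptors: desc1 reuses three of desc2's four descriptors
desc2 = np.eye(4)
desc1 = desc2[[2, 0, 3]]
matches = match_features(desc1, desc2)
# each matched second feature point j could then be lifted to its 3D
# target point, e.g. target_points = [cloud[j] for (_, j) in matches]
```

The ratio test discards ambiguous matches whose nearest and second-nearest descriptors are similarly close, which is what makes the subsequent 2D-to-3D lifting reliable.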
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
Figure 1A is a schematic diagram of determining the six-dimensional pose of an object with a geometry-based method in the related art;
Figure 1B is a schematic diagram of determining the six-dimensional pose of an object with a direct-regression method based on template matching in the related art;
Figure 2 shows a flowchart of a pose determination method according to an embodiment of the present disclosure;
Figure 3 shows a schematic diagram of a three-dimensional point cloud according to an embodiment of the present disclosure;
Figure 4 shows a schematic diagram of a second feature point matching process according to an embodiment of the present disclosure;
Figure 5 shows a schematic diagram of determining a target pose according to an embodiment of the present disclosure;
Figure 6A shows a schematic diagram of determining a reference pose according to an embodiment of the present disclosure;
Figure 6B shows a schematic diagram of determining the 6D pose of an object based on the pose determination method provided by the embodiments of the present application in a practical application;
Figure 7 shows a schematic diagram of a pose determination apparatus according to an embodiment of the present disclosure;
Figure 8 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure;
Figure 9 shows a schematic diagram of another electronic device according to an embodiment of the present disclosure.
Detailed description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of multiple items, or any combination of at least two of multiple items; for example, including at least one of A, B and C may mean including any one or more elements selected from the set composed of A, B and C.
In addition, in order to better explain the present disclosure, numerous specific details are given in the following detailed description. Those skilled in the art should understand that the present disclosure may also be practiced without certain specific details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail, in order to highlight the subject matter of the present disclosure.
In the related art, object 6D pose estimation methods can be divided, according to their working mechanisms, into methods that directly regress the object's 6D pose based on template matching and geometry-based methods. Geometry-based methods usually define three-dimensional (3D) feature points in the three-dimensional space of the object's own coordinate system, and annotate the two-dimensional (2D) feature points corresponding to these 3D feature points on images of the object from various viewpoints. These images and annotations are used to train a 2D feature point detection neural network. In actual use, the neural network detects the predefined 2D feature points of the object from the input image, and the 6D pose of the object is solved with the Perspective-n-Point (PnP) algorithm by combining the known matching relationship between the 2D feature points and the 3D feature points. As shown in Figure 1A, a schematic diagram of determining the six-dimensional pose of an object with a geometry-based method in the related art: first, 2D feature point detection of the object is performed on the input image (101); second, the image of the region near the object is cropped from the input image (102); third, the probability distribution of the object's 2D feature points is estimated on the cropped image (103); then, the pixel positions of the object's 2D feature points are determined based on this probability distribution (104); finally, the 6D pose of the object is determined with the PnP algorithm using the known relationship between the 2D and 3D feature points (105).
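The PnP step referred to here recovers the camera's view of the object from known 2D-3D correspondences. As a hedged sketch of the underlying principle only (using the direct linear transform rather than a production PnP solver; the intrinsics, pose, and point values below are synthetic assumptions):

```python
import numpy as np

def dlt_pnp(points_3d, points_2d):
    """Recover a 3x4 projection matrix from 2D-3D correspondences by the
    direct linear transform (needs at least 6 points in general position)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # null vector of the stacked constraints

def project(P, points_3d):
    """Project 3D points with a 3x4 matrix and dehomogenize."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ pts_h.T).T
    return proj[:, :2] / proj[:, 2:3]

# synthetic camera: intrinsics K and pose [R|t] chosen arbitrarily
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.0], [2.0]])])
P_true = K @ Rt

rng = np.random.default_rng(0)
pts3d = rng.uniform(-1.0, 1.0, size=(8, 3))   # 3D feature points
pts2d = project(P_true, pts3d)                # their observed 2D pixels

P_est = dlt_pnp(pts3d, pts2d)
reproj_err = np.abs(project(P_est, pts3d) - pts2d).max()
```

Practical PnP solvers additionally decompose the recovered matrix into rotation and translation and handle noise and outliers, but the 2D-3D correspondence constraint is the same.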
Meanwhile, in methods that directly regress the object's 6D pose based on template matching, images of the object at various viewpoints and their corresponding 6D poses must be obtained in advance; matching templates are generated by encoding the images of the various viewpoints, or a neural network is used directly to encode the information of the object's 6D pose into the network through a learning-based method. In actual use, the 6D pose of the target object is output directly by matching the input image, directly or indirectly, against the templates. As shown in Figure 1B, a schematic diagram of determining the six-dimensional pose of an object with a direct-regression method based on template matching in the related art: as in Figure 1A above, 2D feature point detection is first performed on the input image (106); then the image of the region near the object is cropped from the input image (107); finally, from the cropped image, the object's 6D pose is output directly by a neural network through a learning-based method (108).
Both of the above types of methods have notable problems. First, they are sensitive to the object's background: both template generation and neural network training are inevitably disturbed by the background information in the images, so performance may degrade when a new background is encountered in actual use. Second, they are insensitive to subtle changes in the object's pose and cannot estimate a sufficiently accurate pose: whether based on template matching or a neural network, these methods essentially encode and memorize image information. Therefore, if the pose of the object in the input image at inference time did not appear in the previously generated training data, the algorithm will estimate it as the pose of the closest image in the training data, and thus cannot perform pose estimation accurately enough.
To address these problems in the related art, the usual solution of existing methods is to collect a large amount of data under different backgrounds and to cover the viewpoints as densely as possible. This increases the cost of data collection and acquisition; moreover, since the memory capacity of the neural network itself is limited, it cannot solve the problem fundamentally, which in turn limits the application scenarios of object 6D pose estimation algorithms in augmented reality. In addition, the representations used by existing object 6D pose estimation methods cannot be directly extended to 6D pose tracking algorithms; they usually can only smooth between detection frames by filtering, or additionally use other 6D pose tracking methods (such as contour tracking) to complete 6D pose tracking. That is, the 6D pose detection and tracking of an object cannot be completed with a single unified representation.
The pose determination method of the embodiments of the present disclosure can be executed by an electronic device such as a terminal device or a server. The terminal device may be a fixed or mobile apparatus such as user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. The server may be a stand-alone server or a server cluster composed of multiple servers. Any electronic device can implement the pose determination method of the embodiments of the present disclosure by having its processor call computer-readable instructions stored in a memory.
In some embodiments of the present disclosure, after an image to be recognized that includes a target item is acquired, the pose of the image acquisition device at the time the image was captured is accurately determined according to a three-dimensional point cloud constructed from multiple pre-stored images that include the target item.
Figure 2 shows a flowchart of a pose determination method according to an embodiment of the present disclosure. As shown in Figure 2, the pose determination method of the embodiment of the present disclosure may include the following steps S10 to S40:
Step S10: determine at least one first feature point on the target item in the image to be recognized.
In some embodiments of the present disclosure, the pose determination method of the embodiments of the present disclosure is used to determine the target pose corresponding to a two-dimensional image to be recognized, that is, the pose of the image acquisition device when the image to be recognized was captured. The image to be recognized is obtained by capturing the target item with an image acquisition device; it may be a single image acquired separately, or one frame of an image sequence continuously acquired by a moving image acquisition device during its movement. After determining the image to be recognized, the electronic device determines at least one first feature point on the target item in the image to be recognized. The target item is a static item whose pose does not change during image capture; it may be an inanimate still object, or an animate object that remains stationary during image capture.
In some embodiments of the present disclosure, the first feature point on the image to be recognized may be determined by first obtaining the target item in the image through object recognition, and then determining on the target item at least one first feature point used to characterize a local region of the target item. Each first feature point may further have a first descriptor used to describe the local feature of the first feature point; that is, the first descriptor is used to characterize a local region feature of the target item in the image to be recognized.
In some embodiments of the present disclosure, the first descriptor may be determined by extracting preset features at the location of the first feature point, and determining a feature vector as the first descriptor according to the distribution of the preset features. The preset features may be set according to actual needs, and may include at least one of a color feature and a texture feature. The first descriptor may be any feature descriptor, for example, a Histogram of Oriented Gradients (HOG) feature descriptor. The first descriptor may be determined by computing the gradients of each pixel at the location of the first feature point in the horizontal and vertical directions, determining the magnitude and orientation of each pixel from these gradients, and generating a histogram from the gradient magnitudes and orientations of the pixels. The feature vector obtained by normalizing the histogram may be used as the first descriptor.
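The histogram construction described in this paragraph can be sketched as follows. This is a simplified single-cell HOG for illustration only, not the implementation of the disclosure; the patch content and bin count are assumptions:

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Build an L2-normalized orientation histogram of gradients over a
    grayscale patch (a single-cell simplification of HOG)."""
    patch = patch.astype(float)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]    # horizontal gradient
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]    # vertical gradient
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)  # normalized feature vector

# a horizontal intensity ramp: every gradient points along +x (0 degrees),
# so all histogram mass should fall into the first orientation bin
patch = np.tile(np.arange(16.0), (16, 1))
desc = hog_descriptor(patch)
```

A full HOG descriptor tiles the region into cells and concatenates block-normalized cell histograms, but each cell follows the gradient-magnitude-orientation-histogram steps described above.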
步骤S20、根据所述待识别图像中的目标物品从预存图像中确定至少一个目标图像。Step S20: Determine at least one target image from pre-stored images according to the target item in the image to be recognized.
In some embodiments of the present disclosure, after the target item in the image to be recognized is determined, a three-dimensional point cloud of the target item and at least one target image may also be determined. Each pre-stored image is an image of the target item previously captured by an image acquisition device, and may be stored locally on the electronic device or in a remotely connected database. The at least one target image determined by the electronic device may be all of the pre-stored images or only some of them.
In some embodiments of the present disclosure, the electronic device may select at least one target image from the pre-stored images according to any preset rule. For example, the electronic device may first determine the pose information corresponding to each pre-stored image and the initial pose corresponding to the image to be recognized, and then determine at least one target image from the pre-stored images based on the initial pose of the image to be recognized and the pose information of the pre-stored images. In this way, pre-stored images whose pose relationship with the target item in the image to be recognized satisfies a preset relationship can be identified; for example, pre-stored images whose pose similarity with respect to the target item in the image to be recognized satisfies a preset similarity may be taken as target images. This provides the basis for subsequently determining the second feature points of the target item in the target images.
In some embodiments of the present disclosure, the initial pose corresponding to the image to be recognized may be determined in any existing manner. For example, when the image to be recognized is one frame of a continuously captured image sequence, the prior poses corresponding to the frames preceding the image to be recognized, i.e., the poses of the image acquisition device when those earlier frames were captured, may be determined, and extrapolation may be performed on multiple prior poses to obtain the initial pose corresponding to the image to be recognized. Alternatively, the image to be recognized may be fed directly into a pre-trained pose estimation model, which performs pose estimation and outputs the corresponding initial pose. The initial pose may be represented by a vector comprising three translation parameters and three rotation parameters of the image acquisition device in a target three-dimensional coordinate system at the moment the image to be recognized was captured. By extrapolating from the prior poses of the frames preceding the image to be recognized, the resulting initial pose is more accurate and more consistent with those prior poses.
In some embodiments of the present disclosure, both the initial pose corresponding to the image to be recognized and the pose information corresponding to each pre-stored image may be represented by vectors. The pose match between the image to be recognized and a pre-stored image can therefore be evaluated directly by computing the distance between the two vectors, and the pre-stored images whose vector distance is smaller than a preset distance threshold are selected as target images.
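The threshold-based screening just described can be sketched as follows; this is a minimal illustration assuming 6-DoF pose vectors (three translations, three rotations) and a Euclidean vector distance, neither of which the text mandates.

```python
import math

def select_target_images(initial_pose, prestored_poses, threshold):
    """Select pre-stored images whose 6-DoF pose vector
    (tx, ty, tz, rx, ry, rz) lies within `threshold` Euclidean
    distance of the initial pose of the image to be recognized."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [idx for idx, pose in prestored_poses.items()
            if dist(initial_pose, pose) < threshold]
```

Only pre-stored images captured from a similar viewpoint survive the filter, which keeps the later 2D feature matching cheap.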
In some embodiments of the present disclosure, each target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point of the target image has a corresponding point in the three-dimensional point cloud. The three-dimensional point cloud of the target item may be predetermined based on at least one pre-stored image. Alternatively, after the target item in the image to be recognized is determined, multiple pre-stored images containing the target item may be retrieved from the database, and the three-dimensional point cloud of the target item may then be determined from those pre-stored images. In some embodiments, the three-dimensional point cloud of the target item may be generated by the electronic device from at least one pre-stored image, or by another device such as a server from at least one pre-stored image. When the electronic device generates the three-dimensional point cloud, it may do so before executing the pose determination method of the embodiments of the present disclosure, or during its execution.
In some embodiments of the present disclosure, when the three-dimensional point cloud of the target item is generated by the electronic device, the process of determining it from multiple pre-stored images may be as follows: first, determine at least one pre-stored image and obtain at least one second feature point of each pre-stored image; second, match the second feature points across the pre-stored images to obtain multiple second feature point groups, where the second feature points within each group all characterize the same position on the target item; finally, determine the three-dimensional point cloud corresponding to the pre-stored images from the multiple second feature point groups, where each point of the three-dimensional point cloud corresponds to at least one second feature point of one second feature point group. In other words, pose-based screening can select, from the multiple pre-stored images, images in which the item's pose is similar to that in the image to be recognized as target images, and local feature matching based on at least one pre-stored image can then determine the three-dimensional feature point corresponding to each two-dimensional feature point of the image to be recognized. Because each local feature point of an image only encodes information about its immediate neighborhood, this process is relatively insensitive to the background and the viewing angle compared with existing methods, so the three-dimensional feature points determined for the two-dimensional feature points are more accurate.
Each pre-stored image may contain the target item, with at least one second feature point located on the target item in the corresponding pre-stored image; a second feature point characterizes a local region of the target item. In some embodiments, each second feature point may further have a corresponding second descriptor describing its local features; that is, the second descriptor characterizes a local region feature of the target item in the pre-stored image. Second feature point matching may then be performed based on the second descriptors: the electronic device matches the second feature points of the pre-stored images according to the second descriptor of each second feature point of each pre-stored image, obtaining multiple second feature point groups, where the second feature points within each group characterize the same position on the target item. The three-dimensional point cloud is then determined from the multiple second feature point groups corresponding to the target item, each point of the cloud corresponding to at least one second feature point of one group.
In some embodiments of the present disclosure, the multiple second feature point groups may be determined as follows: one of the multiple pre-stored images is selected at random as the target pre-stored image; based on the second descriptors, each target second feature point of the target pre-stored image is locally matched against the other pre-stored images, and the matched second feature points together with the target feature point form one second feature point group. In some embodiments of the present disclosure, a pre-stored image containing a still-unmatched second feature point is then selected as the new target pre-stored image, and its unmatched target second feature points are matched by second descriptor against the other unmatched second feature points to form further groups, until all second feature points have been matched or every second feature point has gone through the matching process.
In some embodiments of the present disclosure, when two second feature points are matched based on their second descriptors, the distance between the two second descriptors may be computed to measure how well the local features match; the smaller the distance, the better the match. Since a second descriptor is a vector, the distance between two second descriptors can be obtained directly from the inner product of the vectors. In addition, second feature points may also be matched using two parameters together: local features and spatial position.
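The inner-product shortcut mentioned above can be made concrete: for unit-normalized descriptor vectors, the squared Euclidean distance expands to ‖a − b‖² = 2 − 2⟨a, b⟩, so the distance follows directly from one inner product. A minimal sketch, assuming normalized descriptors:

```python
def descriptor_distance(d1, d2):
    """Squared Euclidean distance between two unit-normalized
    descriptor vectors, computed from their inner product:
    ||d1 - d2||^2 = 2 - 2 * <d1, d2>."""
    inner = sum(a * b for a, b in zip(d1, d2))
    return 2.0 - 2.0 * inner
```

Identical descriptors yield distance 0; orthogonal ones yield the maximum value 2, so ranking candidate matches by this distance is equivalent to ranking by inner product.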
In some embodiments of the present disclosure, after the multiple second feature point groups are obtained, a structure-from-motion algorithm may be applied to them to solve for the three-dimensional point cloud of the target item. That is, a feature point track is determined according to the acquisition time of the image in which each second feature point of a group is located, and one three-dimensional point of the target item is determined from the multiple feature point tracks; these points together form the three-dimensional point cloud. Each three-dimensional point of the cloud therefore corresponds to all the second feature points of one second feature point group, so the correspondence between each three-dimensional point and at least one two-dimensional second feature point is obtained during the construction of the cloud itself. In this way, camera tracking and motion matching can be carried out flexibly based on the structure-from-motion algorithm, making the resulting three-dimensional point cloud of the target item more accurate.
Figure 3 shows a schematic diagram of a three-dimensional point cloud according to an embodiment of the present disclosure. As shown in Figure 3, the three-dimensional point cloud 20 includes multiple three-dimensional points in a three-dimensional coordinate system, and each three-dimensional point has at least one corresponding two-dimensional second feature point.
Figure 4 shows a schematic diagram of a second feature point matching process according to an embodiment of the present disclosure. As shown in Figure 4, the first pre-stored image 30 and the second pre-stored image 31 both contain the target item 32. The target item 32 carries multiple second feature points in each of the two images; when matching based on the corresponding second descriptors, second feature points that characterize the same position on the target item 32 can be matched together according to their local features.
In some embodiments of the present disclosure, each pre-stored image also has corresponding pose information characterizing the pose of the image acquisition device when the pre-stored image was captured. The pose information may be represented by a vector comprising three translation parameters and three rotation parameters of the image acquisition device in a target three-dimensional coordinate system at the moment of capture; the target three-dimensional coordinate system may be preset or selected.
In some embodiments of the present disclosure, the pose information may be determined when the pre-stored image is captured and stored together with it. Alternatively, the pose information may be determined after the three-dimensional point cloud is built, from the three-dimensional points of the cloud corresponding to the second feature points contained in the pre-stored image. That is, for each pre-stored image, a perspective-n-point (PnP) algorithm may be executed based on the correspondences between the second feature points of that image and the points of the three-dimensional point cloud, yielding the pose information corresponding to the pre-stored image, i.e., the pose of the image acquisition device when each pre-stored image was captured.
Step S30: determine, according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud, the target points of the at least one first feature point in the three-dimensional point cloud.
In some embodiments of the present disclosure, after determining at least one first feature point of the target item in the image to be recognized and the second feature points of the at least one target image corresponding to the image to be recognized, the electronic device may determine the target point of each first feature point in the three-dimensional point cloud from the first feature points, the second feature points, and the points corresponding to the second feature points in the cloud. In other words, the 2D-to-3D matching between first feature points and target points is achieved indirectly through 2D-to-2D matching between first and second feature points.
In some embodiments of the present disclosure, the electronic device may perform feature point matching between the image to be recognized and the target images to obtain the second feature point matching each first feature point, and then determine the point corresponding to that second feature point in the three-dimensional point cloud as the target point.
Illustratively, embodiments of the present disclosure may match first and second feature points through their descriptors. For example, the electronic device may perform feature point matching based on the first descriptor of each first feature point of the image to be recognized and the second descriptor of each second feature point of the at least one target image, determine the second feature point matching each first feature point, and, based on the correspondence between second feature points and the three-dimensional points of the target point cloud, indirectly determine the point of the three-dimensional point cloud corresponding to the first feature point as the target point. Optionally, the process of matching a second feature point to a first feature point using the first and second descriptors may follow the matching process between two second feature points described in step S20.
In some embodiments of the present disclosure, for each first feature point of the image to be recognized, the distance between its first descriptor and the second descriptor of each second feature point of each candidate image may be computed; the second feature point at the smallest distance is taken as the match, and the point corresponding to that second feature point in the three-dimensional point cloud is determined as the target point.
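The indirect 2D-2D-3D matching of step S30 can be sketched as a nearest-neighbor search over descriptors followed by a lookup into the second-feature-point-to-3D correspondence built with the point cloud. This is an illustrative sketch only; the feature point ids and the dictionary representation of the correspondences are assumptions for the example.

```python
def match_to_point_cloud(first_descs, second_descs, second_to_3d):
    """For each first feature point, find the nearest second feature
    point by descriptor distance (2D-2D matching), then map that
    second feature point to its 3D point in the point cloud
    (2D-3D matching), yielding the correspondences used for pose
    estimation."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches = {}
    for fid, fdesc in first_descs.items():
        best = min(second_descs,
                   key=lambda sid: dist2(fdesc, second_descs[sid]))
        matches[fid] = second_to_3d[best]  # target point of this first feature point
    return matches
```

The returned 2D-3D correspondences are exactly what a PnP solver consumes in step S40.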
Step S40: determine the target pose corresponding to the image to be recognized according to the target points.
In some embodiments of the present disclosure, once the target point matching each first feature point of the image to be recognized has been determined in the three-dimensional point cloud, the position in three-dimensional space of the part of the target item characterized by each first feature point is known.
In some embodiments of the present disclosure, the target pose corresponding to the image to be recognized may be determined from the correspondences between the at least one first feature point and the target points; for example, a perspective-n-point algorithm may be executed on the target points of the at least one first feature point in the three-dimensional point cloud to obtain the target pose corresponding to the image to be recognized. That is, equations may be constructed and solved from the correspondences between the multiple target points and the first feature points to obtain the target pose.
Figure 5 shows a schematic diagram of determining a target pose according to an embodiment of the present disclosure. As shown in Figure 5, when the target pose corresponding to the image to be recognized 41 needs to be determined, pose matching is first performed between the image to be recognized 41 and the pre-stored images 40, and at least one target image 42 is selected from the multiple pre-stored images 40. Local feature matching is then performed between the first feature points of the image to be recognized 41 and the second feature points of each target image 42, yielding at least one second feature point matching a first feature point as a matching feature point 43. In some embodiments of the present disclosure, the point matching each matching feature point 43 in the three-dimensional point cloud 44 is determined as a target point 45, and a perspective-n-point algorithm is executed on the target points 45 matched by the first feature points of the image to be recognized 41 to obtain the target pose 46 corresponding to the image to be recognized.
In some embodiments of the present disclosure, when the image to be recognized is one frame of a continuously captured image sequence and the pose of the image acquisition device at the capture of every frame of the sequence needs to be determined, the pose at the current frame may be determined based on the pose at the previous frame. For example, when a continuously captured image sequence is obtained, frame 1 may be extracted from the sequence as the image to be recognized, and frames 2 through N (N being an integer greater than 2) may be taken as reference images. After the correspondence between each first feature point of the image to be recognized and a point of the three-dimensional point cloud is determined, the reference points corresponding to the feature points of each reference image in the three-dimensional point cloud are determined from the target point of each first feature point, and the reference pose of each reference image frame is determined from the correspondences between its third feature points and the reference points. In some embodiments of the present disclosure, since each successively determined reference pose is derived from the accurate target pose, the accuracy decreases as the reference image gets farther from the image to be recognized in the sequence. To avoid this, after frame N has been determined as a reference image, frame N+1 is extracted as the next image to be recognized, and its target pose is determined by the pose determination method of the embodiments of the present disclosure, so that the reference poses of frames N+2 through N+N can continue to be determined by propagation.
In some embodiments of the present disclosure, determining the reference pose of the next adjacent reference frame from the target pose of the image to be recognized may include the following steps. First, the frame following the image to be recognized in the image sequence is determined as the reference image, which contains at least one third feature point located on the target item. Then, the target point corresponding to each third feature point of the reference image is determined from the target point corresponding to each first feature point of the image to be recognized, and the reference pose corresponding to the reference image is determined from the correspondences between the third feature points and the target points. The correspondence between each third feature point of the reference image and a target point may be established by a sparse optical flow algorithm: each first feature point of the image to be recognized is tracked by sparse optical flow to obtain the third feature point matching it in the reference image, and the third feature point matching each first feature point in the reference image is determined to correspond to the target point of that first feature point in the three-dimensional point cloud. In some embodiments of the present disclosure, a perspective-n-point algorithm is then executed on the third feature points and their corresponding target points to obtain the reference pose corresponding to the reference image. Moreover, after the reference pose of the current reference image is determined, the reference pose of the next frame of the image sequence may be determined in turn from the correspondences between the third feature points of the current reference image and the target points.
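The propagation step described above amounts to carrying the 2D-3D correspondences forward along the optical flow tracks: a third feature point inherits the 3D target point of the first feature point it was tracked from. A minimal sketch, in which the track mapping and the feature point ids are illustrative assumptions:

```python
def propagate_correspondences(first_to_3d, flow_track):
    """Propagate 2D-3D correspondences from the image to be
    recognized to the next (reference) frame. `flow_track` maps the
    id of each first feature point to the id of the third feature
    point that sparse optical flow tracked it to; each third feature
    point inherits the 3D target point of its first feature point."""
    third_to_3d = {}
    for first_id, point3d in first_to_3d.items():
        if first_id in flow_track:  # tracking may lose some points
            third_to_3d[flow_track[first_id]] = point3d
    return third_to_3d
```

The resulting third-feature-point-to-3D correspondences feed the same PnP solve as before, which is why detection and tracking can share one 2D-3D matching representation.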
Figure 6A shows a schematic diagram of determining a reference pose according to an embodiment of the present disclosure. As shown in Figure 6A, for the image to be recognized 51 determined in the image sequence 50, local feature matching may be used to determine, for each first feature point of the image to be recognized 51, the matching second feature point 53 in the target image 52 selected from the pre-stored images. In some embodiments of the present disclosure, the three-dimensional point corresponding in the three-dimensional point cloud 54 to the second feature point 53 matched by each first feature point is determined as the target point 55, and the target pose 56 of the image to be recognized is determined from the correspondences between the first feature points and the target points 55.
In some embodiments of the present disclosure, for the reference image 57 at the frame position following the image to be recognized 51 in the image sequence 50, the reference pose of the reference image 57 may be determined by a tracking algorithm from the image to be recognized 51, the target points 55, and the reference image 57. In some embodiments of the present disclosure, the correspondence between each first feature point of the image to be recognized 51 and each third feature point of the reference image 57 may be determined by a sparse optical flow algorithm, the correspondence between each third feature point and a target point 55 may be determined from the correspondence between the first feature points and the target points 55, and the reference pose 58 of the reference image 57 may then be determined from the correspondences between the third feature points and the target points 55.
Figure 6B is a schematic diagram of determining the 6D pose of an object in an actual application based on the pose determination method provided by the embodiments of the present application. As shown in Figure 6B, the pose determination method provided by the embodiments of the present application, i.e., a method based on local feature point matching, estimates the object pose on the premise that an object point cloud map is constructed, associating the local feature points on the data images with the object in the three-dimensional world. As shown in Figure 6B, for one frame of input image 59: first, through initial pose estimation, data images 510 with similar viewing angles are found among multiple pre-stored images; then, 2D feature point extraction and matching are performed between the data images 510 and the input image 59 to obtain, for each 2D feature point in the input image 59, the matching 2D feature point in the data images 510, i.e., the 2D-2D matching relationship between the input image 59 and the data images 510 is determined, shown as 511 in Figure 6B. At the same time, a structure-from-motion algorithm is performed on the data images 510 to obtain the corresponding object point cloud map 512. Finally, based on the 2D-2D matching relationship between the input image 59 and the data images 510, together with the object point cloud map 512, the matching relationship between the 2D feature points extracted from the input image 59 and the object point cloud map 512 is determined, i.e., the corresponding 2D-3D matching relationship is constructed, shown as 513 in Figure 6B; the 2D-3D matching relationship 513 is then solved based on PnP to obtain the object pose 514 in the input image 59, i.e., the 6D pose of the object.
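The last step of the pipeline, solving the 2D-3D matching relationship 513 with PnP, can be illustrated by a minimal Direct Linear Transform solver. This is a sketch under stated assumptions (noiseless correspondences in normalized image coordinates, at least six non-coplanar points); it is not the embodiment's production solver, which would typically pair PnP with RANSAC for outlier rejection:

```python
import numpy as np

def pnp_dlt(pts3d, pts2d):
    """Minimal Direct Linear Transform PnP: estimate the pose [R | t]
    from n >= 6 2D-3D correspondences, where the 2D points are given in
    normalized image coordinates (i.e., K already removed)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)           # projection matrix, up to scale
    U, S, Vt2 = np.linalg.svd(P[:, :3])
    R = U @ Vt2                        # closest rotation to the 3x3 block
    t = P[:, 3] / S.mean()             # undo the arbitrary scale
    if np.linalg.det(R) < 0:           # resolve the global sign ambiguity
        R, t = -R, -t
    return R, t
```

The returned (R, t) is the object pose 514 in the camera frame; with exact correspondences the recovery is exact up to numerical precision.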
Embodiments of the present disclosure can determine, through local feature matching, the correspondence between each two-dimensional feature point in the image to be recognized and each two-dimensional feature point in the target images, and generate a three-dimensional point cloud from multiple target images, thereby obtaining the correspondence between each two-dimensional feature point in the image to be recognized and the three-dimensional feature points in the three-dimensional point cloud, so as to accurately determine the pose of the image acquisition apparatus at the time the image to be recognized was captured, which improves the accuracy of the determined target pose. In addition, embodiments of the present disclosure can pre-screen the target images by estimating a preliminary pose of the image to be recognized, which reduces the amount of computation in the two-dimensional feature point matching process and thereby improves the efficiency of the pose determination process.
The pose determination method provided by the embodiments of the present disclosure can determine the target pose corresponding to the image to be recognized from at least one first feature point on the target item in the image to be recognized. Since each first feature point usually carries only information about its local neighborhood, the method is, compared with existing methods, relatively insensitive to the background and to the viewing angle of the image, and therefore, in actual use, only a small amount of training data is needed to generalize to different scenes. At the same time, many local feature points can be extracted from a single input image, which gives the solving of the 6D pose corresponding to the image to be recognized higher fault tolerance and redundancy. In addition, in the embodiments of the present disclosure, object 6D pose detection and tracking use the same 2D-3D matching representation, which is more concise, elegant, and efficient at the algorithm framework level, and the two modules can support and assist each other.
In some embodiments of the present disclosure, when determining poses for multiple continuously captured images, the three-dimensional feature point corresponding to each two-dimensional feature point in the image to be recognized may first be obtained indirectly through two-dimensional feature point matching, and the target pose determined accordingly. Subsequently, a sparse optical flow algorithm is used to track and determine the correspondence between the two-dimensional feature points of the subsequently acquired frames and the three-dimensional feature points associated with the image to be recognized. In this way, the reference poses of the frames captured after the image to be recognized can be determined quickly and accurately. At the same time, this approach can accurately perceive pose changes even when the image acquisition position changes only slightly.
It can be understood that the method embodiments mentioned above in the present disclosure can be combined with one another to form combined embodiments without violating principle or logic. Those skilled in the art will understand that, in the above methods of the specific implementations, the specific execution order of the steps should be determined by their functions and possible internal logic.
In addition, embodiments of the present disclosure further provide a pose determination apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any pose determination method provided by the embodiments of the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section.
FIG. 7 shows a schematic diagram of a pose determination apparatus according to an embodiment of the present disclosure. As shown in FIG. 7, the pose determination apparatus of the embodiment of the present disclosure includes:
a first determination part 60, configured to determine at least one first feature point on a target item in an image to be recognized;
a second determination part 61, configured to determine at least one target image from pre-stored images according to the target item in the image to be recognized, wherein the target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point of the target image has a corresponding point in the three-dimensional point cloud;
a target point matching part 62, configured to determine, according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud, the target point of each of the at least one first feature point in the three-dimensional point cloud; and
a pose determination part 63, configured to determine the target pose corresponding to the image to be recognized according to the target points.
In some embodiments of the present disclosure, the apparatus further includes: a pre-stored image determination part, configured to determine at least one pre-stored image and obtain at least one second feature point of each pre-stored image; a first feature point matching part, configured to match the second feature points of the pre-stored images to obtain multiple second feature point groups, wherein the second feature points in each second feature point group represent the same position on the target item; and a three-dimensional point matching part, configured to determine, according to the multiple second feature point groups, the three-dimensional point cloud corresponding to the pre-stored images, wherein each point in the three-dimensional point cloud corresponds to at least one second feature point in one of the second feature point groups.
In some embodiments of the present disclosure, the three-dimensional point matching part includes: a point cloud generation sub-part, configured to solve the multiple second feature point groups through a structure-from-motion algorithm to obtain the three-dimensional point cloud of the target item.
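At the core of any structure-from-motion solution of the second feature point groups is triangulation: each group of matched 2D observations across views determines one 3D point. A minimal two-view DLT triangulation (a hedged sketch under the assumption of known projection matrices and noiseless matches, not the embodiment's full structure-from-motion pipeline, which also estimates the cameras and bundle-adjusts) can be written as:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) observations
    of the same feature point group in normalized image coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to a Euclidean 3D point
```

Running this over every second feature point group yields the point cloud in which each 3D point keeps a link back to its 2D observations, exactly the 2D-3D association the method relies on.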
In some embodiments of the present disclosure, the second determination part 61 includes: a first pose determination sub-part, configured to determine the pose information corresponding to each pre-stored image; a second pose determination sub-part, configured to determine an initial pose corresponding to the image to be recognized; and a pose screening sub-part, configured to determine the at least one target image from the pre-stored images according to the initial pose and the pose information.
In some embodiments of the present disclosure, the first pose determination sub-part is further configured to, for each pre-stored image, execute a perspective-n-point algorithm based on the point corresponding in the three-dimensional point cloud to each second feature point included in the pre-stored image, to obtain the pose information.
In some embodiments of the present disclosure, the image to be recognized is one frame of a continuously captured image sequence, and the second pose determination sub-part includes: a second pose determination part, configured to determine the prior poses corresponding to the multiple frames preceding the image to be recognized in the image sequence; and a third pose determination part, configured to perform extrapolation based on the multiple prior poses to obtain the initial pose.
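The extrapolation of prior poses can be sketched under a constant-velocity assumption on SE(3): the relative motion between the two most recent frames is applied once more to predict the initial pose. This is a hypothetical illustration; the embodiment does not specify the exact extrapolation formula:

```python
import numpy as np

def extrapolate_pose(T_prev2, T_prev1):
    """Predict the initial pose of the next frame from the two preceding
    4x4 camera poses, assuming constant inter-frame motion:
    T_next = (T_prev1 @ inv(T_prev2)) @ T_prev1."""
    T_delta = T_prev1 @ np.linalg.inv(T_prev2)  # motion over the last step
    return T_delta @ T_prev1                    # apply it once more
```

The predicted pose is only a coarse initial guess used to shortlist pre-stored images with similar viewing angles; it is refined by the subsequent feature matching and PnP solve.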
In some embodiments of the present disclosure, the target point matching part 62 includes: a first feature point matching sub-part, configured to perform feature point matching between the image to be recognized and the target image to obtain the second feature point matched by each first feature point; and a first target point matching sub-part, configured to determine, as the target point, the point corresponding in the three-dimensional point cloud to the second feature point matched by each first feature point.
In some embodiments of the present disclosure, the pose determination part 63 includes: a third pose determination sub-part, configured to execute a perspective-n-point algorithm based on the target points to obtain the target pose corresponding to the image to be recognized.
In some embodiments of the present disclosure, the image to be recognized is one frame of a continuously captured image sequence, and the apparatus further includes: a reference image determination part, configured to determine the frame following the image to be recognized in the image sequence as a reference image, wherein the reference image includes at least one third feature point on the target item; a second feature point matching part, configured to determine the target point corresponding to each third feature point in the reference image according to the target point corresponding to each first feature point in the image to be recognized; and a reference pose determination part, configured to determine the reference pose corresponding to the reference image according to the correspondence between each third feature point and the target points.
In some embodiments of the present disclosure, the second feature point matching part includes: a second feature point matching sub-part, configured to track each first feature point in the image to be recognized according to a sparse optical flow algorithm to obtain the third feature point matched by each first feature point in the reference image; and a second target point matching sub-part, configured to determine that the third feature point matched by each first feature point in the reference image corresponds to the target point corresponding to that first feature point in the three-dimensional point cloud.
In some embodiments, the functions of, or the parts included in, the apparatus provided by the embodiments of the present disclosure can be used to perform the methods described in the method embodiments above; for specific implementation, refer to the descriptions of the method embodiments above.
Embodiments of the present disclosure further provide a computer-readable storage medium on which computer program instructions are stored; the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a volatile or non-volatile computer-readable storage medium.
Embodiments of the present disclosure further provide an electronic device, including: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to invoke the instructions stored in the memory to perform the pose determination method provided by the above embodiments.
Embodiments of the present disclosure further provide a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor of the electronic device performs the pose determination method provided by the above embodiments.
Embodiments of the present disclosure further provide a computer program product, including computer-readable code or a non-volatile computer-readable storage medium carrying the computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device performs the pose determination method provided by the above embodiments.
The electronic device may be provided as a terminal, a server, or a device of another form.
FIG. 8 shows a schematic diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a PDA.
Referring to FIG. 8, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording operation. The processing component 802 may include one or more processors 820 to execute instructions to complete all or some of the steps of the above method. In addition, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power supply component 806 provides power to the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. In some embodiments, the received audio signals may be stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. The buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the on/off state of the electronic device 800 and the relative positioning of components, for example the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which are executable by the processor 820 of the electronic device 800 to complete the above method.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target item in the real environment, and then detecting or recognizing the relevant features, states, and attributes of the target item with various vision-related algorithms, an AR effect combining the virtual and the real that matches a specific application can be obtained. For example, the target item may involve a face, limbs, gestures, or actions related to the human body, or markers and signs related to objects, or sand tables, display areas, or display items related to venues or places. Vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, object key point extraction and tracking, and object pose or depth detection. Specific applications may involve not only interactive scenarios related to real scenes or objects, such as guided tours, navigation, explanation, reconstruction, and virtual effect overlay display, but also special effects processing related to people, such as makeup beautification, body beautification, special effect display, and virtual model display. The relevant features, states, and attributes of the target item can be detected or recognized through a convolutional neural network, which is a network model obtained through model training based on a deep learning framework.
FIG. 9 shows a schematic diagram of another electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 9, the electronic device 1900 includes a processing component 1922, which includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The application stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above pose determination method.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) launched by Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which are executable by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having thereon computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: a portable computer diskette, a hard disk, a random-access memory (RAM), a ROM, an EPROM or flash memory, an SRAM, a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the above. As used herein, a computer-readable storage medium is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, for example, a programmable logic circuit, an FPGA, or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions; the electronic circuit may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
The computer program product may be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is specifically embodied as a computer storage medium; in another optional embodiment, the computer program product is specifically embodied as a software product, for example, a software development kit (SDK).
The above descriptions of the various embodiments tend to emphasize the differences between them; for their identical or similar aspects, the embodiments may be referred to one another.
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
If the technical solutions of the present disclosure involve personal information, a product applying the technical solutions of the present disclosure shall clearly inform individuals of the personal information processing rules and obtain their independent consent before processing the personal information. If the technical solutions of the present disclosure involve sensitive personal information, a product applying the technical solutions of the present disclosure shall obtain the individual's separate consent before processing the sensitive personal information, and shall also meet the requirement of "express consent". For example, at a personal information collection device such as a camera, a clear and conspicuous sign is set up to inform individuals that they have entered the scope of personal information collection and that personal information will be collected; if an individual voluntarily enters the collection scope, the individual is deemed to have consented to the collection of his or her personal information. Alternatively, on a device that processes personal information, where conspicuous signs or notices are used to convey the personal information processing rules, personal authorization is obtained by means of a pop-up message or by asking the individual to upload his or her personal information. The personal information processing rules may include information such as the personal information processor, the purpose of the personal information processing, the processing methods, and the types of personal information processed.
The embodiments of the present disclosure have been described above. The foregoing description is illustrative rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen to best explain the principles of the embodiments, their practical applications, or improvements over technologies on the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Industrial Applicability
Embodiments of the present disclosure relate to a pose determination method and apparatus, an electronic device, a storage medium, and a program. At least one first feature point on a target object in an image to be recognized is determined; at least one target image is determined from pre-stored images according to the target object in the image to be recognized, the target image having second feature points and a corresponding three-dimensional point cloud, with each second feature point having a corresponding point in the three-dimensional point cloud; a target point of the at least one first feature point in the three-dimensional point cloud is determined according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud; and a target pose corresponding to the image to be recognized is determined according to the target point.
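The 2D-to-3D correspondence step described above — matching first feature points against a target image's second feature points and reading off the 3D points associated with those second feature points — can be sketched roughly as follows. This is an illustrative sketch only: the descriptor format, the brute-force nearest-neighbour matcher, and the `max_dist` threshold are assumptions for demonstration, not part of the claimed method.

```python
import numpy as np

def match_descriptors(desc_query, desc_target, max_dist=0.7):
    """Brute-force nearest-neighbour matching of feature descriptors.

    Returns a list of (query_idx, target_idx) pairs whose descriptor
    distance falls below max_dist.
    """
    # Pairwise Euclidean distances between all query/target descriptors.
    dists = np.linalg.norm(desc_query[:, None, :] - desc_target[None, :, :], axis=2)
    matches = []
    for qi in range(dists.shape[0]):
        ti = int(np.argmin(dists[qi]))
        if dists[qi, ti] < max_dist:
            matches.append((qi, ti))
    return matches

def lookup_target_points(matches, target_to_cloud, cloud_points):
    """Map each matched first feature point to its 3D target point.

    target_to_cloud[i] is the index of the 3D point associated with the
    i-th second feature point of the target image (each second feature
    point has a corresponding point in the point cloud).
    """
    return {qi: cloud_points[target_to_cloud[ti]] for qi, ti in matches}

# Toy data: 3 query descriptors, 3 target descriptors, 3 cloud points.
desc_q = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
desc_t = np.array([[0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
cloud = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
t2c = [0, 1, 2]  # second feature point i corresponds to cloud point t2c[i]

m = match_descriptors(desc_q, desc_t)
targets = lookup_target_points(m, t2c, cloud)
print(sorted(m))            # [(0, 2), (1, 0), (2, 1)]
print(targets[0].tolist())  # [1.0, 0.0, 0.0]
```

The resulting 2D feature point / 3D target point pairs are exactly the input a perspective-n-point solver would need to recover the target pose.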

Claims (14)

  1. A pose determination method, the method comprising:
    determining at least one first feature point on a target object in an image to be recognized;
    determining at least one target image from pre-stored images according to the target object in the image to be recognized, wherein the target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point has a corresponding point in the three-dimensional point cloud;
    determining, according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud, a target point of the at least one first feature point in the three-dimensional point cloud; and
    determining, according to the target point, a target pose corresponding to the image to be recognized.
  2. The method according to claim 1, wherein the method further comprises:
    determining at least one pre-stored image, and acquiring at least one second feature point of each pre-stored image;
    matching the second feature points of the pre-stored images to obtain a plurality of second feature point groups, wherein the at least one second feature point in each second feature point group represents a same position on the target object; and
    determining, according to the plurality of second feature point groups, the three-dimensional point cloud corresponding to the pre-stored images, wherein each point in the three-dimensional point cloud corresponds to the at least one second feature point in one second feature point group.
  3. The method according to claim 2, wherein determining the three-dimensional point cloud corresponding to the pre-stored images according to the plurality of second feature point groups comprises:
    solving the plurality of second feature point groups by a structure-from-motion algorithm to obtain the three-dimensional point cloud of the target object.
  4. The method according to any one of claims 1 to 3, wherein determining at least one target image from the pre-stored images according to the target object in the image to be recognized comprises:
    determining pose information corresponding to each pre-stored image;
    determining an initial pose corresponding to the image to be recognized; and
    determining the at least one target image from the pre-stored images according to the initial pose and the pose information.
  5. The method according to claim 4, wherein determining the pose information corresponding to each pre-stored image comprises:
    for each pre-stored image, performing a perspective-n-point algorithm based on the point corresponding, in the three-dimensional point cloud, to each second feature point included in the pre-stored image, to obtain the pose information.
  6. The method according to claim 4 or 5, wherein the image to be recognized is one frame in a continuously acquired image sequence, and determining the initial pose corresponding to the image to be recognized comprises:
    determining prior poses corresponding to a plurality of frames preceding the image to be recognized in the image sequence; and
    performing extrapolation based on the plurality of prior poses to obtain the initial pose.
  7. The method according to any one of claims 1 to 6, wherein determining the target point of the at least one first feature point in the three-dimensional point cloud according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud comprises:
    performing feature point matching between the image to be recognized and the target image to obtain the second feature point matching each first feature point; and
    determining, as the target point, the point corresponding, in the three-dimensional point cloud, to the second feature point matching each first feature point.
  8. The method according to any one of claims 1 to 7, wherein determining the target pose corresponding to the image to be recognized according to the target point comprises:
    performing a perspective-n-point algorithm according to the target point to obtain the target pose corresponding to the image to be recognized.
  9. The method according to any one of claims 1 to 8, wherein the image to be recognized is one frame in a continuously acquired image sequence, and the method further comprises:
    determining, as a reference image, the frame following the image to be recognized in the image sequence, wherein the reference image includes at least one third feature point on the target object;
    determining, according to the target point corresponding to each first feature point in the image to be recognized, a target point corresponding to each third feature point in the reference image; and
    determining, according to the correspondence between each third feature point and the target point, a reference pose corresponding to the reference image.
  10. The method according to claim 9, wherein determining the target point corresponding to each third feature point in the reference image according to the target point corresponding to each first feature point in the image to be recognized comprises:
    tracking each first feature point in the image to be recognized according to a sparse optical flow algorithm to obtain the third feature point matching each first feature point in the reference image; and
    determining that the third feature point matching each first feature point in the reference image corresponds to the target point corresponding, in the three-dimensional point cloud, to the first feature point.
  11. A pose determination apparatus, the apparatus comprising:
    a first determining part, configured to determine at least one first feature point on a target object in an image to be recognized;
    a second determining part, configured to determine at least one target image from pre-stored images according to the target object in the image to be recognized, wherein the target image has second feature points and a corresponding three-dimensional point cloud, and each second feature point has a corresponding point in the three-dimensional point cloud;
    a target point matching part, configured to determine, according to the at least one first feature point, the second feature points, and the points corresponding to the second feature points in the three-dimensional point cloud, a target point of the at least one first feature point in the three-dimensional point cloud; and
    a pose determination part, configured to determine, according to the target point, a target pose corresponding to the image to be recognized.
  12. An electronic device, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to invoke the instructions stored in the memory to execute the pose determination method according to any one of claims 1 to 10.
  13. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the pose determination method according to any one of claims 1 to 10.
  14. A computer program comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor of the electronic device executes the code to implement the pose determination method according to any one of claims 1 to 10.
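Claim 6 above states only that an extrapolation is performed over the prior poses of preceding frames to obtain the initial pose. One plausible reading is a constant-velocity (first-order) extrapolation of the camera translation, sketched below; the first-order model and the translation-only treatment are assumptions for illustration, not the claimed method itself.

```python
import numpy as np

def extrapolate_pose(prev_translations):
    """Constant-velocity extrapolation of the next camera translation
    from the translations of preceding frames.

    With fewer than two prior poses, the last known translation is
    reused as the prediction.
    """
    t = np.asarray(prev_translations, dtype=float)
    if len(t) < 2:
        return t[-1]
    velocity = t[-1] - t[-2]   # per-frame displacement (first-order model)
    return t[-1] + velocity    # predicted translation for the next frame

# A camera moving 0.1 units along x per frame.
history = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]]
print(extrapolate_pose(history))  # approximately [0.3, 0.0, 0.0]
```

The predicted initial pose would then be compared against the pose information of the pre-stored images to select the target images, as in claim 4.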
PCT/CN2022/129083 2022-03-11 2022-11-01 Pose determination method and apparatus, electronic device, storage medium, and program WO2023168957A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210239443.0 2022-03-11
CN202210239443.0A CN114581525A (en) 2022-03-11 2022-03-11 Attitude determination method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023168957A1 true WO2023168957A1 (en) 2023-09-14

Family

ID=81775356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129083 WO2023168957A1 (en) 2022-03-11 2022-11-01 Pose determination method and apparatus, electronic device, storage medium, and program

Country Status (2)

Country Link
CN (1) CN114581525A (en)
WO (1) WO2023168957A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114581525A (en) * 2022-03-11 2022-06-03 浙江商汤科技开发有限公司 Attitude determination method and apparatus, electronic device, and storage medium
CN115131507B (en) * 2022-07-27 2023-06-16 北京百度网讯科技有限公司 Image processing method, image processing device and meta space three-dimensional reconstruction method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114175A1 (en) * 2010-11-05 2012-05-10 Samsung Electronics Co., Ltd. Object pose recognition apparatus and object pose recognition method using the same
US20190026922A1 (en) * 2017-07-24 2019-01-24 Visom Technology, Inc. Markerless augmented reality (ar) system
CN111652103A (en) * 2020-05-27 2020-09-11 北京百度网讯科技有限公司 Indoor positioning method, device, equipment and storage medium
CN112465903A (en) * 2020-12-21 2021-03-09 上海交通大学宁波人工智能研究院 6DOF object attitude estimation method based on deep learning point cloud matching
CN114581525A (en) * 2022-03-11 2022-06-03 浙江商汤科技开发有限公司 Attitude determination method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114581525A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
Sharp et al. Accurate, robust, and flexible real-time hand tracking
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
WO2023168957A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
US11748888B2 (en) End-to-end merge for video object segmentation (VOS)
JP7387202B2 (en) 3D face model generation method, apparatus, computer device and computer program
CN111062263B (en) Method, apparatus, computer apparatus and storage medium for hand gesture estimation
CN111783986A (en) Network training method and device and posture prediction method and device
US11842514B1 (en) Determining a pose of an object from rgb-d images
TW202029125A (en) Method, apparatus and electronic device for image processing and storage medium thereof
JP7181375B2 (en) Target object motion recognition method, device and electronic device
US9224064B2 (en) Electronic device, electronic device operating method, and computer readable recording medium recording the method
CN112036331A (en) Training method, device and equipment of living body detection model and storage medium
CN112906484B (en) Video frame processing method and device, electronic equipment and storage medium
US20140232748A1 (en) Device, method and computer readable recording medium for operating the same
WO2022088819A1 (en) Video processing method, video processing apparatus and storage medium
CN114445562A (en) Three-dimensional reconstruction method and device, electronic device and storage medium
CN107977636B (en) Face detection method and device, terminal and storage medium
CN115035158B (en) Target tracking method and device, electronic equipment and storage medium
WO2023015938A1 (en) Three-dimensional point detection method and apparatus, electronic device, and storage medium
CN116824533A (en) Remote small target point cloud data characteristic enhancement method based on attention mechanism
CN113870413A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN115049819A (en) Watching region identification method and device
KR101189043B1 (en) Service and method for video call, server and terminal thereof
WO2023155393A1 (en) Feature point matching method and apparatus, electronic device, storage medium and computer program product
WO2023155350A1 (en) Crowd positioning method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930580

Country of ref document: EP

Kind code of ref document: A1