WO2022222379A1 - Position determination method and apparatus, electronic device, and storage medium - Google Patents

Position determination method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022222379A1
WO2022222379A1 (PCT/CN2021/120916)
Authority
WO
WIPO (PCT)
Prior art keywords
target
homography matrix
reference points
homography
target image
Prior art date
Application number
PCT/CN2021/120916
Other languages
English (en)
French (fr)
Inventor
邓文钧
吴华栋
张展鹏
成慧
Original Assignee
深圳市商汤科技有限公司 (Shenzhen SenseTime Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司 (Shenzhen SenseTime Technology Co., Ltd.)
Publication of WO2022222379A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to a position determination method and apparatus, an electronic device, and a storage medium.
  • In applications such as image ranging and machine vision, camera parameters are typically calibrated. Calibration of camera parameters is a critical step, and the accuracy of the calibration results directly affects the accuracy of the results produced with the camera.
  • Through calibration, a homography matrix indicating the mapping relationship between the world coordinate system and the image coordinate system can be obtained.
  • The coordinates of a point in the image are multiplied by the homography matrix to obtain the actual position of that point in real space; based on the actual position, applications such as ranging and machine vision can be completed.
  • the present disclosure proposes a technical solution for location determination.
  • A position determination method, comprising: determining a first position of a target point in an image collected by a vision sensor; determining, from homography matrices corresponding to at least two target image areas of the image, a target homography matrix corresponding to the first position; and determining, based on the target homography matrix and the first position, the position of the target point in the world coordinate system.
  • Determining the target homography matrix corresponding to the first position includes:
  • in a case where the first position is located in any of the at least two target image areas, using the homography matrix corresponding to the target image area where the first position is located as the target homography matrix.
  • Determining the target homography matrix corresponding to the first position includes:
  • in a case where the first position is located outside the at least two target image areas, determining first distances between the first position and the at least two target image areas, and using the homography matrix corresponding to the target image area whose first distance satisfies a distance condition as the target homography matrix.
  • Determining the target homography matrix corresponding to the first position includes:
  • in a case where the first position is located in an overlapping area, using the homography matrices corresponding to the at least two target image areas that correspond to the overlapping area as target homography matrices, the overlapping area being determined by the at least two target image areas.
  • Determining the position of the target point in the world coordinate system based on the target homography matrix and the first position includes:
  • obtaining a second position of the target point in the world coordinate system based on each target homography matrix and the first position respectively;
  • determining the position of the target point in the world coordinate system based on the obtained second positions.
  • Before the target homography matrix corresponding to the first position is determined, the method further includes:
  • determining, using a plurality of reference points on a preset calibration object, the homography matrix corresponding to each of the at least two target image areas in the image collected by the vision sensor.
  • Using the plurality of reference points on the preset calibration object to determine, in the image collected by the vision sensor, the homography matrix corresponding to each of the at least two target image areas includes:
  • determining the homography matrices corresponding to different reference point combinations, wherein the image area where a single reference point combination is located is a single target image area, and a single reference point combination includes some of the plurality of reference points.
  • The reference point combination includes N adjacent reference points, where N is an integer greater than 1;
  • among the plurality of reference points on the calibration object, N adjacent reference points are sequentially selected in a preset order to form reference point combinations, obtaining multiple combinations; when the number of remaining reference points is less than N, reference points adjacent to the remaining ones are selected to make up N reference points and form a combination;
  • a homography matrix is determined for each reference point combination respectively.
  • a position determination device comprising:
  • a first position determination part configured to determine a first position of the target point in the image captured by the vision sensor;
  • a target homography matrix determining part configured to determine a target homography matrix corresponding to the first position from the respective homography matrices corresponding to at least two target image regions of the image;
  • a second position determination part is configured to determine the position of the target point in the world coordinate system based on the target homography matrix and the first position.
  • The target homography matrix determination part is further configured to, in a case where the first position is located in any of the at least two target image areas, use the homography matrix corresponding to the target image area where the first position is located as the target homography matrix.
  • The target homography matrix determination part is further configured to, in a case where the first position is located outside the at least two target image areas, determine first distances between the first position and the at least two target image areas, and use the homography matrix corresponding to the target image area whose first distance satisfies the distance condition as the target homography matrix.
  • The target homography matrix determination part is further configured to, in a case where the first position is located in an overlapping area, use the homography matrices corresponding to the at least two target image areas that correspond to the overlapping area as target homography matrices, the overlapping area being determined by the at least two target image areas.
  • The second position determination part is further configured to, in a case where the first position corresponds to at least two target homography matrices, obtain a second position of the target point in the world coordinate system based on each target homography matrix and the first position respectively, and determine the position of the target point in the world coordinate system based on the obtained at least two second positions.
  • the apparatus further includes:
  • The homography matrix determination part is configured to use a plurality of reference points on the preset calibration object to determine the homography matrix corresponding to each of the at least two target image areas in the image collected by the vision sensor.
  • the homography matrix determination part includes:
  • a first coordinate value determination subsection configured to determine first coordinate values of a plurality of reference points on the calibration object in the world coordinate system;
  • a second coordinate value determination subsection configured to determine second coordinate values of the plurality of reference points in the image captured by the vision sensor;
  • a homography matrix determination subsection configured to determine homography matrices corresponding to different reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, wherein the image area where a single reference point combination is located is a single target image area, and a single reference point combination includes some of the plurality of reference points.
  • The reference point combination includes N adjacent reference points, where N is an integer greater than 1;
  • the homography matrix determination subsection is further configured to sequentially select, in a preset order, N adjacent reference points from the plurality of reference points on the calibration object to form reference point combinations, obtaining multiple combinations, wherein, when the number of remaining reference points is less than N, reference points adjacent to the remaining ones are selected to make up N reference points and form a combination; and to determine, according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, a homography matrix for each reference point combination respectively.
  • an electronic device comprising: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above method when executed by a processor.
  • A computer program comprising computer-readable code which, when executed in an electronic device, is run by a processor in the electronic device to implement the above method.
  • The first position of the target point in the image collected by the vision sensor is determined; a target homography matrix corresponding to the first position is determined from the homography matrices corresponding to at least two target image areas of the image; and the position of the target point in the world coordinate system is determined based on the target homography matrix and the first position. Because nonlinear characteristics such as camera lens distortion cause the mapping relationship between different target image areas and the world coordinate system to differ, a corresponding homography matrix can be set for each target image area.
  • When determining the position of the target point in the world coordinate system, the homography matrix corresponding to the target image area where the first position is located can be selected for the calculation, reducing the interference of nonlinear features such as camera lens distortion and improving the accuracy of the determined position of the target point in the world coordinate system.
  • FIG. 1 shows a flowchart of a location determination method according to an embodiment of the present disclosure.
  • FIG. 2 shows a schematic structural diagram of a calibration object according to an embodiment of the present disclosure.
  • FIG. 3 shows a block diagram of a position determination apparatus according to an embodiment of the present disclosure.
  • FIG. 4 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the homography matrix represents the mapping relationship between the world coordinate system and the image coordinate system.
  • The coordinates of a point in the image collected by the vision sensor can be multiplied by the homography matrix to obtain the actual location of that point expressed in world coordinates.
  • However, the multiplication is a linear operation, while the production process and quality of the camera lens may introduce nonlinear characteristics such as distortion; as a result, the accuracy of the target point's position in the world coordinate system determined from the image is low, and it is difficult to meet the high-precision requirements of applications such as ranging and machine vision.
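The linear mapping described above can be sketched as follows. This is an illustration only, not part of the disclosure; the function name `image_to_world` is hypothetical, and the homography is assumed to map image coordinates to world-plane coordinates via homogeneous coordinates:

```python
import numpy as np

def image_to_world(h_matrix: np.ndarray, image_point) -> np.ndarray:
    """Map a pixel coordinate (x, y) to world-plane coordinates via a
    3x3 homography, using homogeneous coordinates."""
    x, y = image_point
    p = np.array([x, y, 1.0])   # lift the pixel to homogeneous coordinates
    q = h_matrix @ p            # the linear mapping performed by the homography
    return q[:2] / q[2]         # dehomogenize: divide out the scale factor

# Sanity check: the identity homography maps a point to itself.
H = np.eye(3)
print(image_to_world(H, (320.0, 240.0)))  # -> [320. 240.]
```

The division by the third homogeneous component is what distinguishes a homography from a plain affine transform; the mapping is linear only in homogeneous coordinates.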
  • An embodiment of the present disclosure provides a position determination method: a first position of a target point in an image collected by a vision sensor is determined; a target homography matrix corresponding to the first position is determined from the homography matrices corresponding to at least two target image areas of the image; and the position of the target point in the world coordinate system is determined based on the target homography matrix and the first position. Because nonlinear characteristics such as camera lens distortion cause the mapping relationship between different target image areas and the world coordinate system to differ, a corresponding homography matrix can be set for each target image area.
  • When determining the position of the target point in the world coordinate system, the homography matrix corresponding to the target image area where the first position is located can be selected for the calculation, reducing the interference of nonlinear features such as camera lens distortion and improving the accuracy of the determined position of the target point in the world coordinate system.
  • The position determination method may be performed by an electronic device such as a terminal device or a server.
  • The terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless telephone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method can be implemented by the processor calling the computer-readable instructions stored in the memory.
  • the method may be performed by a server.
  • FIG. 1 shows a flowchart of a method for determining a location according to an embodiment of the present disclosure. As shown in FIG. 1 , the method for determining a location includes:
  • In step S11, the first position of the target point in the image collected by the vision sensor is determined.
  • the first position may be the position of the target point in the image coordinate system
  • the image coordinate system may be a two-dimensional xy coordinate system
  • The origin of the coordinate system may be a certain point in the image; for example, the coordinate system may be constructed with the upper left corner of the image as the origin, the positive x-axis pointing horizontally to the right, and the positive y-axis pointing vertically downward.
  • the first position of the target point can be represented by coordinates (x, y).
  • In step S12, a target homography matrix corresponding to the first position is determined from the homography matrices corresponding to at least two target image areas of the image.
  • the target image area is a pre-divided area.
  • the basis of the division may be the position of the reference point when calibrating the vision sensor.
  • A homography matrix can be constructed for each target image area. Different target image areas may have overlapping areas, and the image may also contain areas other than the target image areas. For the process of determining the homography matrices corresponding to the at least two target image areas, reference may be made to the possible implementations provided by the present disclosure.
  • the target image area can be in various shapes such as rectangle and circle, and the position of the target image area can also be represented by coordinates in the image coordinate system.
  • the position of the target image area in the image coordinate system can be represented by the coordinates of the upper left corner and the lower right corner of the rectangle.
  • the coordinates of the rectangle can be expressed as (x1, y1; x2, y2), where (x1, y1) represents the coordinates of the upper left corner of the rectangle, and (x2, y2) represents the coordinates of the lower right corner of the rectangle.
  • The target image area corresponding to the first position can be determined by comparing the coordinates of the first position with the coordinates of each target image area;
  • the homography matrix corresponding to the first position can then be determined.
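The coordinate comparison above can be sketched as follows, assuming rectangular target image areas given by their upper-left and lower-right corners. This is a simplified illustration; the function name `find_target_regions` is hypothetical:

```python
def find_target_regions(first_position, regions):
    """Return the indices of every target image area containing the point.
    Each area is a rectangle (x1, y1, x2, y2): upper-left and lower-right
    corners in image coordinates. One index in the simple case, several if
    the point lies in an overlapping area, an empty list if it is outside
    all areas."""
    x, y = first_position
    return [i for i, (x1, y1, x2, y2) in enumerate(regions)
            if x1 <= x <= x2 and y1 <= y <= y2]

regions = [(0, 0, 100, 100), (80, 0, 200, 100)]   # two overlapping rectangles
print(find_target_regions((50, 50), regions))   # -> [0]
print(find_target_regions((90, 50), regions))   # -> [0, 1] (overlapping area)
print(find_target_regions((300, 50), regions))  # -> [] (outside all areas)
```

Returning a list covers the three cases discussed in the text: one containing area, an overlapping area (several indices), and a position outside all areas (empty list, handled by the distance-condition branch).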
  • Determining a target homography matrix corresponding to the first position includes: in a case where the first position is located in any of the at least two target image areas, using the homography matrix corresponding to the target image area where the first position is located as the target homography matrix.
  • That is, when it is determined that the first position is located in a certain target image area, the homography matrix corresponding to that target image area is used as the target homography matrix.
  • the determining the target homography matrix corresponding to the first position includes: when the first position is located outside the at least two target image areas, determining The first distance between the first position and the at least two target image areas; the homography matrix corresponding to the target image area where the first distance satisfies the distance condition is used as the target homography matrix.
  • The homography matrix of a target image area near the first position can be selected as the target homography matrix.
  • the selection process may be determined according to the first distance between the first position and the target image area.
  • the first distance between the first position and the target image area may be the distance between the first position and the closest pixel in the target image area. For example, the distance between the first position and each pixel in the boundary of the target image area may be determined, and the smallest distance may be used as the first distance between the first position and the target image area.
  • The target image area is often a regular geometric shape, such as a polygon like a rectangle;
  • the distance from the first position to the boundary of the target image area then reduces to the distance from a point to a line segment.
  • Let l1 be the signed length of the projection of vector ap onto line segment ab. If l1 is negative, point p projects before point a, and the distance between point p and line segment ab is the distance between points a and p; if l1 is greater than or equal to the length of line segment ab, point p projects beyond point b, and the distance between point p and line segment ab is the distance between points b and p; if l1 is positive and less than the length of line segment ab, the distance l2 from point a to point p is found, and then, using the Pythagorean theorem, the distance between point p and line segment ab is obtained as √(l2² - l1²).
  • the smallest distance among the distances between the first position and each side of the target image area may be used as the first distance between the first position and the target image area.
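The three projection cases above can be sketched as follows (an illustration; `point_segment_distance` is a hypothetical helper, not the disclosure's implementation):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to line segment ab, following the three
    projection cases in the text: l1 is the signed projection of ap onto ab."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    seg_len = math.hypot(abx, aby)
    # l1: signed length of the projection of vector ap onto ab
    l1 = ((px - ax) * abx + (py - ay) * aby) / seg_len
    if l1 <= 0:                       # p projects before a
        return math.hypot(px - ax, py - ay)
    if l1 >= seg_len:                 # p projects beyond b
        return math.hypot(px - bx, py - by)
    l2 = math.hypot(px - ax, py - ay)              # distance from a to p
    return math.sqrt(max(l2 * l2 - l1 * l1, 0.0))  # Pythagorean theorem

print(point_segment_distance((0, 1), (0, 0), (2, 0)))   # -> 1.0
print(point_segment_distance((-3, 4), (0, 0), (2, 0)))  # -> 5.0
print(point_segment_distance((5, 0), (0, 0), (2, 0)))   # -> 3.0
```

The first distance to a rectangular area is then the minimum of this quantity over the rectangle's four sides.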
  • The distance condition here may include at least one of the following: the distance is the smallest, or the distance is less than a distance threshold.
  • the distance conditions may also include other various conditions, which may be determined according to practical applications.
  • When the distance condition is that the distance is the smallest, the homography matrix corresponding to the target image area with the smallest first distance can be used as the target homography matrix; when the distance condition is that the distance is less than the distance threshold, the homography matrices corresponding to the at least one target image area whose first distance is less than the distance threshold are used as target homography matrices.
  • Determining the target homography matrix corresponding to the first position includes: in a case where the first position is located in an overlapping area, using the homography matrices corresponding to the at least two target image areas that correspond to the overlapping area as target homography matrices, the overlapping area being determined by the at least two target image areas.
  • That is, when the first position is located in an overlapping area, the homography matrices corresponding to the at least two overlapping target image areas are all used as target homography matrices.
  • In step S13, the position of the target point in the world coordinate system is determined based on the target homography matrix and the first position.
  • the homography matrix indicates the transformation relationship between the world coordinate system and the image coordinate system
  • the first position is the position in the image coordinate system
  • Through the target homography matrix, the first position in the image coordinate system can be converted into the corresponding position in the world coordinate system.
  • the distance between the position and the vision sensor can be measured, thereby realizing image ranging.
  • machine vision applications such as obstacle detection, obstacle avoidance, etc., can be implemented.
  • The first position of the target point in the image collected by the vision sensor is determined; a target homography matrix corresponding to the first position is determined from the homography matrices corresponding to at least two target image areas of the image; and the position of the target point in the world coordinate system is determined based on the target homography matrix and the first position. Because nonlinear characteristics such as camera lens distortion cause the mapping relationship between different target image areas and the world coordinate system to differ, a corresponding homography matrix can be set for each target image area.
  • When determining the position of the target point in the world coordinate system, the homography matrix corresponding to the target image area where the first position is located can be selected for the calculation, reducing the interference of nonlinear features such as camera lens distortion and improving the accuracy of the determined position of the target point in the world coordinate system.
  • In this way, the homography transformation from the image plane of the vision sensor to the plane where the object is located can be better fitted.
  • Distortion correction alone cannot completely remove the nonlinearity of the image, and it is difficult to perfectly fit the homography transformation from the image plane to the plane where the object is located using a single homography matrix; using multiple homography matrices can therefore improve ranging accuracy.
  • The number of target homography matrices corresponding to the first position may be one or at least two.
  • In the case of one target homography matrix, determining the position of the target point in the world coordinate system based on the target homography matrix and the first position includes: mapping, based on that homography matrix, the first position in the image coordinate system to the corresponding position in the world coordinate system, to obtain the position of the target point in the world coordinate system.
  • In the case of at least two target homography matrices, determining the position of the target point in the world coordinate system based on the target homography matrices and the first position includes: obtaining a second position of the target point in the world coordinate system based on each target homography matrix and the first position respectively; and determining the position of the target point in the world coordinate system based on the at least two second positions.
  • That is, one position of the target point in the world coordinate system can be obtained through each target homography matrix; through at least two target homography matrices, at least two second positions in the world coordinate system are obtained.
  • the coordinate values of the at least two second positions may be averaged to obtain the averaged coordinates, for example, the value obtained by averaging the x-axis coordinates is used as the target point The x-axis coordinate in the world coordinate system, and the value obtained by averaging the y-axis coordinates is used as the y-axis coordinate of the target point in the world coordinate system.
  • The coordinate values of the at least two second positions may also be weighted and averaged to obtain the position of the target point in the world coordinate system, where the weight values may be set according to the actual situation, for example, according to the distance between the first position and the target image area corresponding to each target homography matrix.
  • Alternatively, the center point of the geometric figure formed by the plurality of second positions may be determined; the center point here may include, for example, one of the centroid, circumcenter, incenter, orthocenter, or excenter of the geometric figure. The center point of the geometric figure formed by the multiple second positions is then taken as the position of the target point in the world coordinate system.
  • In this way, a second position of the target point in the world coordinate system is obtained based on each target homography matrix and the first position respectively; the position of the target point in the world coordinate system is then determined based on the at least two second positions.
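The plain-average and weighted-average strategies above can be sketched as follows (an illustrative helper with a hypothetical name, not the disclosure's implementation):

```python
import numpy as np

def fuse_second_positions(second_positions, weights=None):
    """Combine the second positions produced by several target homography
    matrices into one world-coordinate position: a plain per-axis average
    (the centroid of the points) without weights, a weighted average with."""
    pts = np.asarray(second_positions, dtype=float)
    if weights is None:
        return pts.mean(axis=0)                  # average x and y separately
    w = np.asarray(weights, dtype=float)
    return (pts * w[:, None]).sum(axis=0) / w.sum()

print(fuse_second_positions([(1.0, 2.0), (3.0, 4.0)]))          # -> [2. 3.]
print(fuse_second_positions([(0.0, 0.0), (4.0, 0.0)], [3, 1]))  # -> [1. 0.]
```

Note that the unweighted average coincides with the centroid of the geometric figure formed by the second positions, one of the center-point choices mentioned above.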
  • In a possible implementation, before determining the target homography matrix corresponding to the first position, the method further includes: using a plurality of reference points on a preset calibration object to determine, in the image collected by the vision sensor, the homography matrix corresponding to each of the at least two target image areas.
  • The calibration object is placed in the world coordinate system in real space and typically contains reference points. Using the correspondence between each reference point's position in the world coordinate system and its position in the image coordinate system, the homography matrices of the vision sensor can be determined.
  • the calibration object may be a pattern on a plane, for example, a pattern on a plane such as paper.
  • the pattern on the calibration object can be one or more of circles, rectangles, squares, etc.
  • The pattern can be divided into different areas, with adjacent areas shown in different colors, and the reference points can be located at the junctions of adjacent areas to facilitate identifying the reference points in the image collected by the vision sensor.
  • The homography matrix corresponding to a target image area is determined based on the reference points of that target image area. At least two reference points can be used to calibrate the homography matrix of a target image area; by selecting the reference points of different target image areas and using at least two reference points of each, the homography matrix of each target image area can be obtained. For the process of obtaining a homography matrix from reference points, refer to the related art of camera calibration.
  • In this way, the homography matrix corresponding to each of the at least two target image areas in the image collected by the vision sensor is determined. Because nonlinear characteristics such as camera lens distortion cause the mapping relationship between different target image areas and the world coordinate system to differ, a corresponding homography matrix can be set for each target image area. When determining the position of the target point in the world coordinate system, the homography matrix corresponding to the target image area where the position is located can be selected for the calculation, improving the accuracy of the determined position of the target point in the world coordinate system.
  • Using the plurality of reference points on the preset calibration object to determine, in the image collected by the vision sensor, the homography matrix corresponding to each of the at least two target image areas includes: determining first coordinate values of the plurality of reference points on the calibration object in the world coordinate system; determining second coordinate values of the plurality of reference points in the image collected by the vision sensor; and determining, according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, the homography matrices corresponding to different reference point combinations, wherein the image area where a single reference point combination is located is a single target image area, and a single reference point combination includes some of the plurality of reference points.
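One standard way to compute a homography from such first/second coordinate correspondences is the direct linear transform (DLT). The sketch below is a generic illustration, not the disclosure's specific method; it assumes at least four non-collinear correspondences per reference point combination, and the function name is hypothetical:

```python
import numpy as np

def homography_from_points(image_pts, world_pts):
    """Estimate the 3x3 homography mapping image points (second coordinate
    values) to world points (first coordinate values) with the direct linear
    transform: each correspondence contributes two rows to A, and h is the
    null vector of A obtained from the SVD. Needs >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(image_pts, world_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)  # right-singular vector of the smallest singular value
    return h / h[2, 2]        # fix the arbitrary overall scale

# Sanity check: four correspondences related by a pure translation (+10, +5).
img = [(0, 0), (1, 0), (0, 1), (1, 1)]
wld = [(10, 5), (11, 5), (10, 6), (11, 6)]
H = homography_from_points(img, wld)
mapped = H @ np.array([0.5, 0.5, 1.0])
print(mapped[:2] / mapped[2])  # -> approximately [10.5, 5.5]
```

In practice, per-combination estimation like this is exactly what lets each target image area carry its own local homography rather than fitting one global matrix.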
  • the homography matrix of the visual sensor is determined by the correspondence between the first coordinate value of the same reference point in the world coordinate system and the second coordinate value in the image.
  • one target image area corresponds to one reference point combination
  • the homography matrix corresponding to the target image area is determined by this one reference point combination.
  • The reference point combination includes N adjacent reference points, where N is an integer greater than 1. Determining the homography matrices corresponding to different reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point includes: among the plurality of reference points on the calibration object, sequentially selecting N adjacent reference points in a preset order to form reference point combinations, obtaining multiple combinations, wherein, when the number of remaining reference points is less than N, reference points adjacent to the remaining ones are selected to make up N reference points and form a combination; and determining, according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, a homography matrix for each reference point combination respectively.
  • For example, N adjacent reference points may be selected from the reference points of the calibration object in a certain order, such as from left to right, to form a reference point combination, and then the next N adjacent reference points are selected to the right, until all reference points on the calibration object have been traversed.
  • Alternatively, the center of the calibration object can be used as the starting point: N reference points are selected there first, and combinations of N adjacent reference points are then selected outward, until all reference points on the calibration object have been traversed. It should be noted that, when the number of remaining reference points is less than N, reference points adjacent to the remaining ones may be selected to make up N reference points and form a combination.
  • FIG. 2 is a calibration object provided in an embodiment of the present disclosure, in which the symbols A, B, C, D, E...U indicate the positions of the reference points.
  • If fewer than 9 reference points remain, the adjacent reference points on the left are used to make up the difference; for example, E-F-G-L-M-N-S-T-U is selected as a reference point combination, and the remaining reference points in the calibration object then continue to form further reference point combinations.
  • The first coordinate values and second coordinate values of the reference points in a reference point combination are used to calculate the homography matrix corresponding to that combination, and the image area enclosed by a single reference point combination is a target image area.
  • N adjacent reference points are sequentially selected to form a reference point combination.
  • reference points adjacent to the remaining reference points may also be selected to complement N reference points to form a reference point combination.
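The selection rule above (step an N-column window across the calibration object, and shift the last window back over adjacent, already-selected points when fewer than N columns remain) can be sketched as follows. This is an illustrative reconstruction rather than code from the application: the grid layout, the row-major point ordering, and the name `make_combinations` are assumptions.

```python
def make_combinations(grid, n=3):
    """Slide an n-column window across a grid of reference points,
    stepping n columns at a time. Each combination takes an n-column
    block across every row (the calibration object in FIG. 2 has n
    rows, so each block holds n x n points). When fewer than n
    columns remain, the window is shifted left so that adjacent
    reference points make up the difference."""
    rows, cols = len(grid), len(grid[0])
    combinations = []
    col = 0
    while col < cols:
        # make up with the adjacent columns on the left when < n remain
        start = col if col + n <= cols else cols - n
        combinations.append([grid[r][c]
                             for r in range(rows)
                             for c in range(start, start + n)])
        col += n
    return combinations
```

Applied to the 3×7 grid A...U of FIG. 2, this yields A-B-C-H-I-J-O-P-Q first and E-F-G-L-M-N-S-T-U last, matching the fill-up example in the text (the middle combination D-E-F-K-L-M-R-S-T is inferred from the stated step).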
  • the homography matrix corresponding to the target image area where the position is located can be selected to calculate the position of the target point in the world coordinate system, which improves the accuracy of the determined position of the target point in the world coordinate system.
  • Step S21: determine the first coordinate value of each reference point in the calibration object in the world coordinate system.
  • the first coordinate value can be obtained by manual measurement, and the world coordinate system can be a two-dimensional coordinate system constructed with the position of the robot as the origin and the ground as the plane.
  • Step S22: determine the second coordinate value of each reference point in the calibration object in the image collected by the vision sensor.
  • This process may be implemented based on image detection technology, with the second coordinate value of each reference point in the image determined through image detection, or it may be determined based on manual labeling operations.
  • Step S23: starting from the left side of the calibration object and proceeding to the right, sequentially select 3×3 combinations of the detected reference points to obtain multiple reference point combinations.
  • When the remaining reference points are insufficient, the adjacent reference points can be used to fill in.
  • Step S24: for each reference point combination, determine the corresponding homography matrix according to the correspondence between the first coordinate value and the second coordinate value of the same reference point.
  • the image area where a single reference point combination is located is a single target image area.
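Step S24 can be sketched with a standard direct linear transform (DLT), which solves for the 3×3 homography from the coordinate correspondences of one reference point combination. This is a minimal illustration, not the application's implementation; the use of numpy and the name `fit_homography` are assumptions.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image points (u, v) to
    world points (x, y) via the direct linear transform: each pair
    contributes two linear equations in the 9 entries of H, and the
    solution is the right singular vector of the stacked system with
    the smallest singular value (a least-squares fit when more than
    4 pairs are given, e.g. the 9 points of a 3x3 combination)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1
```

In practice a library routine such as OpenCV's `findHomography` would typically be used; the explicit SVD form is shown only to make the per-combination fitting concrete.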
  • Step S31: determine the first position of the target point in the image collected by the vision sensor.
  • Step S32: in the case that the first position is located in a single target image area, use the homography matrix corresponding to the target image area where the first position is located to determine the position of the target point in the world coordinate system.
  • For example, when the first position is located in the target image area where the A-B-C-H-I-J-O-P-Q reference points are located, the homography matrix corresponding to that target image area is used to determine the position of the target point in the world coordinate system.
  • Step S33: in the case that the first position is located in an overlapping area, use the homography matrices respectively corresponding to the at least two target image areas corresponding to the overlapping area to obtain second positions of the target point in the world coordinate system, and then determine the position of the target point in the world coordinate system using the obtained at least two second positions.
  • For example, when the first position is at point J, which lies in the overlapping area of two target image areas, the homography matrix of each of these areas is used to determine a second position of the target point in the world coordinate system, and the midpoint of the two second positions is taken as the position of the target point in the world coordinate system.
  • Step S34: when the first position is located outside the target image areas, determine the first distance between the first position and each target image area, and use the homography matrix corresponding to the target image area with the smallest first distance to determine the position of the target point in the world coordinate system.
  • For example, when the first position is located at a point to the left of point A, the distance between the first position and each target image area is calculated, the target image area where the A-B-C-H-I-J-O-P-Q reference points are located is determined to be the closest target image area, and the homography matrix corresponding to that target image area is then used to determine the position of the target point in the world coordinate system.
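Steps S31 to S34 amount to a small dispatch: use the homography of the containing region, average the results over an overlap, or fall back to the nearest region. The sketch below assumes axis-aligned rectangular target image areas for simplicity (the areas in the disclosure are those enclosed by the reference point combinations); the names `apply_h` and `locate` are made up for illustration.

```python
import numpy as np

def apply_h(H, point):
    """Map an image point (u, v) to world coordinates with homography H."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

def locate(point, regions, homographies):
    """regions: axis-aligned boxes (xmin, ymin, xmax, ymax), one per
    target image area; homographies: the matrix fitted for each area."""
    inside = [i for i, (x0, y0, x1, y1) in enumerate(regions)
              if x0 <= point[0] <= x1 and y0 <= point[1] <= y1]
    if inside:
        # one containing region (S32), or an overlap of several (S33):
        # average the second positions produced by each matrix
        return np.mean([apply_h(homographies[i], point) for i in inside],
                       axis=0)
    # outside all regions (S34): use the region with the smallest distance
    def dist(box):
        x0, y0, x1, y1 = box
        dx = max(x0 - point[0], 0.0, point[0] - x1)
        dy = max(y0 - point[1], 0.0, point[1] - y1)
        return np.hypot(dx, dy)
    nearest = min(range(len(regions)), key=lambda i: dist(regions[i]))
    return apply_h(homographies[nearest], point)
```

Averaging over an overlap reduces to the midpoint rule in the example above when exactly two areas overlap.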
  • the homography transformation from the image plane of the vision sensor to the plane where the object is located can be better fitted.
  • Distortion correction of the image cannot completely remove the nonlinearity of the image, and it is difficult to perfectly fit the homography transformation from the image plane to the plane where the object is located using a single homography matrix; using multiple homography matrices can therefore improve the ranging accuracy.
  • Image ranging can be performed based on the calibrated vision sensor, and the distance between a target and the vision sensor, or between the target and a robot equipped with the vision sensor, can be determined according to the position of the target in the image. Image ranging can be used in applications such as automatic driving and the cleaning work of sweeping robots, and therefore has high application value.
  • the present disclosure also provides a position determination apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any one of the position determination methods provided by the present disclosure.
  • FIG. 3 shows a block diagram of a position determination apparatus according to an embodiment of the present disclosure.
  • the position determination apparatus 40 includes:
  • the first position determination part 41 is configured to determine the first position of the target point in the image collected by the vision sensor;
  • the target homography matrix determination part 42 is configured to determine the target homography matrix corresponding to the first position from the corresponding homography matrices of at least two target image regions of the image;
  • the second position determination part 43 is configured to determine the position of the target point in the world coordinate system based on the target homography matrix and the first position.
  • the target homography matrix determination part 42 is further configured to, when the first position is located in any one of the at least two target image areas, use the homography matrix corresponding to the target image area where the first position is located as the target homography matrix.
  • the target homography matrix determination part 42 is further configured to, when the first position is located outside the at least two target image areas, determine first distances between the first position and the at least two target image areas, and use the homography matrix corresponding to the target image area whose first distance satisfies the distance condition as the target homography matrix.
  • the target homography matrix determination part 42 is further configured to, in the case that the first position is located in an overlapping area, use the homography matrices respectively corresponding to the at least two target image areas corresponding to the overlapping area as target homography matrices, the overlapping area being determined by the at least two target image areas.
  • the second position determination part 43 is further configured to, in the case that the first position corresponds to at least two target homography matrices, obtain second positions of the target point in the world coordinate system based on each target homography matrix and the first position respectively, and determine the position of the target point in the world coordinate system based on the at least two obtained second positions.
  • the apparatus further includes:
  • the homography matrix determination part is configured to use multiple reference points in a preset calibration object to determine the homography matrix corresponding to each of the at least two target image areas in the image collected by the vision sensor.
  • the homography matrix determination part includes:
  • a first coordinate value determination subsection configured to determine first coordinate values of multiple reference points in the calibration object in the world coordinate system;
  • a second coordinate value determination subsection configured to determine second coordinate values of the multiple reference points in the image captured by the vision sensor;
  • the homography matrix determination subsection is configured to determine homography matrices corresponding to different reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, wherein the image area where a single reference point combination is located is a single target image area, and a single reference point combination includes some of the multiple reference points.
  • the reference point combination includes N adjacent reference points, where N is an integer greater than 1;
  • the homography matrix determination subsection is further configured to select, from the multiple reference points of the calibration object, N adjacent reference points in sequence according to a preset order to form reference point combinations, so as to obtain multiple reference point combinations, wherein, when the number of remaining reference points is less than N, the reference points adjacent to the remaining reference points are selected to make up N reference points to form a reference point combination; and to determine a homography matrix for each reference point combination according to the correspondence between the first coordinate value and the second coordinate value of the same reference point.
  • the functions of, or parts included in, the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for their implementation and technical effects, reference may be made to the descriptions in the above method embodiments.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • Computer-readable storage media can be volatile or non-volatile computer-readable storage media.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • Embodiments of the present disclosure also provide a computer program product, including computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the position determination method provided by any of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the position determination method provided by any of the foregoing embodiments.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 , and the communication component 816 .
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above. Additionally, processing component 802 may include one or more sections that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800 . Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Ferroelectric Random Access Memory (FRAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, magnetic surface memory, optical disk, or CD-ROM; it can also be any device including one of, or any combination of, the above memories.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as wireless fidelity (WiFi), second-generation mobile communication technology (2G), or third-generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • a non-volatile computer-readable storage medium is also provided, such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method.
  • FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more portions each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system (Mac OS X™) introduced by Apple, the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or similar.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punched cards or raised structures in grooves on which instructions are stored, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • custom electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of computer-readable program instructions, and these electronic circuits can execute computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a section, program segment, or portion of instructions that includes one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be implemented in hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium in an optional embodiment, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
  • the present disclosure relates to a method and device for determining a position, an electronic device, and a storage medium.
  • the method includes: determining a first position of a target point in an image collected by a vision sensor; determining, from homography matrices respectively corresponding to at least two target image regions of the image, a target homography matrix corresponding to the first position; and determining, based on the target homography matrix and the first position, the position of the target point in the world coordinate system.
  • the embodiments of the present disclosure can improve the accuracy of the determined position of the target point in the world coordinate system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A position determination method and apparatus, an electronic device, and a storage medium. The method includes: determining a first position of a target point in an image collected by a vision sensor (S11); determining, from homography matrices respectively corresponding to at least two target image regions of the image, a target homography matrix corresponding to the first position (S12); and determining, based on the target homography matrix and the first position, the position of the target point in the world coordinate system (S13).

Description

Position determination method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATION
The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202110442278.4, filed on April 23, 2021 and entitled "Position determination method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of computer technology, and relates to a position determination method and apparatus, an electronic device, and a storage medium.
BACKGROUND
In applications such as vision-sensor-based image ranging and machine vision, the parameters of the camera are often calibrated in order to determine the correspondence between the position of a point on a surface in space and the position of its image point in the image. Whether in image ranging or machine vision, the calibration of camera parameters is a very critical step, and the accuracy of the calibration result directly affects the accuracy of the results produced by the camera.
In the related art, after the parameters of a vision sensor are calibrated, a homography matrix indicating the mapping relationship between the world coordinate system and the image coordinate system can be obtained. The coordinates of a point in an image collected by the vision sensor can be multiplied by the homography matrix to obtain the actual position of that point in real space, and applications such as ranging and machine vision can then be completed based on this actual position.
SUMMARY
The present disclosure provides a technical solution for position determination.
According to an aspect of the present disclosure, a position determination method is provided, including:
determining a first position of a target point in an image collected by a vision sensor;
determining, from homography matrices respectively corresponding to at least two target image regions of the image, a target homography matrix corresponding to the first position; and
determining, based on the target homography matrix and the first position, the position of the target point in the world coordinate system.
In a possible implementation, determining the target homography matrix corresponding to the first position includes:
in a case where the first position is located in any one of the at least two target image regions, taking the homography matrix corresponding to the target image region in which the first position is located as the target homography matrix.
In a possible implementation, determining the target homography matrix corresponding to the first position includes:
in a case where the first position is located outside the at least two target image regions, determining first distances between the first position and the at least two target image regions; and
taking the homography matrix corresponding to the target image region whose first distance satisfies a distance condition as the target homography matrix.
In a possible implementation, determining the target homography matrix corresponding to the first position includes:
in a case where the first position is located in an overlapping region, taking the homography matrices respectively corresponding to at least two target image regions corresponding to the overlapping region as target homography matrices, the overlapping region being determined by the at least two target image regions.
In a possible implementation, determining the position of the target point in the world coordinate system based on the target homography matrix and the first position includes:
in a case where the first position corresponds to at least two target homography matrices, obtaining second positions of the target point in the world coordinate system based on each of the target homography matrices and the first position respectively; and
determining the position of the target point in the world coordinate system based on the at least two second positions thus obtained.
In a possible implementation, before determining the target homography matrix corresponding to the first position, the method further includes:
determining, by using a plurality of reference points in a preset calibration object, the homography matrix corresponding to each of at least two target image regions in the image collected by the vision sensor.
In a possible implementation, determining, by using the plurality of reference points in the preset calibration object, the homography matrix corresponding to each of the at least two target image regions in the image collected by the vision sensor includes:
determining first coordinate values of the plurality of reference points in the calibration object in the world coordinate system;
determining second coordinate values of the plurality of reference points in the image collected by the vision sensor; and
determining homography matrices corresponding to different reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point, where the image region in which a single reference point combination is located is a single target image region, and a single reference point combination includes some of the plurality of reference points.
In a possible implementation, a reference point combination includes N adjacent reference points, N being an integer greater than 1; and
determining the homography matrices corresponding to different reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point includes:
selecting, from the plurality of reference points of the calibration object, N adjacent reference points in sequence according to a preset order to form reference point combinations, so as to obtain a plurality of reference point combinations, where, in a case where the number of remaining reference points is less than N, reference points adjacent to the remaining reference points are selected to make up N reference points to form a reference point combination; and
determining a homography matrix for each of the reference point combinations according to the correspondence between the first coordinate value and the second coordinate value of the same reference point.
According to an aspect of the present disclosure, a position determination apparatus is provided, including:
a first position determination part configured to determine a first position of a target point in an image collected by a vision sensor;
a target homography matrix determination part configured to determine, from homography matrices respectively corresponding to at least two target image regions of the image, a target homography matrix corresponding to the first position; and
a second position determination part configured to determine, based on the target homography matrix and the first position, the position of the target point in the world coordinate system.
在一种可能的实现方式中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于所述至少两个目标图像区域中的任一目标图像区域的情况下,将所述第一位置所在的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于所述至少两个目标图像区域外的情况下,确定所述第一位置与所述至少两个目标图像区域之间的第一距离;将所述第一距离满足距离条件的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于重叠区域的情况下,将所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,作为目标单应性矩阵,所述重叠区域由所述至少两个目标图像区域确定。
在一种可能的实现方式中,所述第二位置确定部分,还被配置为在所述第一位置对应至少两个目标单应性矩阵的情况下,分别基于各所述目标单应性矩阵和所述第一位置,得到所述目标点在世界坐标系中的第二位置;基于分别得到的至少两个所述第二位置,确定所述目标点在世界坐标系中的位置。
在一种可能的实现方式中,所述装置还包括:
单应性矩阵确定部分,被配置为利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。
在一种可能的实现方式中,所述单应性矩阵确定部分,包括:
第一坐标值确定子部分,被配置为确定所述标定物中的多个参考点在世界坐标系中的第一坐标值;
第二坐标值确定子部分,被配置为确定所述多个参考点在所述视觉传感器采集的图像中的第二坐标值;
单应性矩阵确定子部分,被配置为根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,其中,单个所述参考点组合所在的图像区域为单个目标图像区域,单个所述参考点组合包括所述多个参考点中的部分参考点。
在一种可能的实现方式中,所述参考点组合中包括N个相邻的参考点,N为大于1的整数;
所述单应性矩阵确定子部分,还被配置为在所述标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成参考点组合,得到多个参考点组合,其中,在剩余的参考点数量不足N个的情况下,选取与所述剩余的参考点相邻的参考点补齐N个参考点,构成参考点组合;根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,分别针对各所述参考点组合,确定单应性矩阵。
根据本公开的一方面,提供了一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述方法。
根据本公开的一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。
根据本公开的一方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述方法。
在本公开实施例中,通过确定视觉传感器采集的图像中目标点的第一位置;从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。由此,由于相机镜头存在畸变等非线性特征导致不同的目标图像区域与世界坐标系的映射关系不同,那么,可以针对各目标图像区域分别设置对应的单应性矩阵,在确定目标点在世界坐标系中的位置的情况下,可以选取与该位置所处的目标图像区域对应的单应性矩阵,来计算目标点在世界坐标系中的位置,减少了相机镜头畸变等非线性特征的干扰,提高了确定的目标点在世界坐标系中位置的准确性。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。根据下面参考附图对示例性实施例的详细说明,本公开的其它特征及方面将变得清楚。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1示出根据本公开实施例的位置确定方法的流程图。
图2示出根据本公开实施例的一种标定物的结构示意图。
图3示出根据本公开实施例的一种位置确定装置的框图。
图4示出根据本公开实施例的一种电子设备的框图。
图5示出根据本公开实施例的一种电子设备的框图。
具体实施方式
以下将参考附图详细说明本公开的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本公开,在下文的具体实施方式中给出了众多的实施细节。本领域技术人员应当理解,没有某些实施细节,本公开同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本公开的主旨。
在基于视觉传感器的图像测距、机器视觉、三维场景重建等应用中,为了校正视觉传感器的镜头畸变、确定三维空间中的物理尺寸与图像的像素尺寸之间的换算关系、以及确定空间物体(或空间物体表面某点)的三维几何位置与该空间物体(或空间物体表面某点)在图像中对应的像素点的坐标之间的对应关系,往往会建立视觉传感器成像的几何模型,该几何模型的参数即为视觉传感器参数。
在视觉传感器参数中,单应性矩阵表征了世界坐标系和图像坐标系之间的映射关系,视觉传感器所采集的图像中某个点的坐标可以通过与单应性矩阵进行相乘操作,得到图像中该点在世界坐标系中的实际位置。该相乘操作是一个线性操作,而由于相机镜头生产工艺和出厂质量等问题,镜头本身可能存在一定的畸变等非线性特性,导致基于图像确定的目标点在世界坐标系中的位置准确性较低,难以满足测距、机器视觉等应用的高精度需求。
本公开实施例提供一种位置确定方法,通过确定视觉传感器采集的图像中目标点的第一位置;从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。由此,由于相机镜头存在畸变等非线性特征导致不同的目标图像区域与世界坐标系的映射关系不同,那么,可以针对各目标图像区域分别设置对应的单应性矩阵,在确定目标点在世界坐标系中的位置的情况下,可以选取与该位置所处的目标图像区域对应的单应性矩阵,来计算目标点在世界坐标系中的位置,减少了相机镜头畸变等非线性特征的干扰,提高了确定的目标点在世界坐标系中位置的准确性。
在一种可能的实现方式中,所述位置确定方法可以由终端设备或服务器等电子设备执行,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字助理(Personal Digital Assistant,PDA)、手持设备、计算设备、车载设备、可穿戴设备等,所述方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。或者,可通过服务器执行所述方法。
图1示出根据本公开实施例的位置确定方法的流程图,如图1所示,所述位置确定方法包括:
在步骤S11中,确定视觉传感器采集的图像中目标点的第一位置。
该第一位置可以是目标点在图像坐标系中的位置,图像坐标系可以是一个二维的xy坐标系,该坐标系的原点可以是图像中的某一点,例如,可以以图像的左上角为原点,以横向向右为x轴正方向,以纵向向下为y轴正方向,构建坐标系。那么目标点的第一位置可以通过坐标(x,y)来表示。
在步骤S12中,从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵。
在本公开实施例中,图像中会存在至少两个目标图像区域,目标图像区域是预先划分的区域,划分的依据可以是对视觉传感器标定时参考点的位置,详细的实施方式可参见本公开后文可能的实现方式。
在构建单应性矩阵的过程中,可以对每一目标图像区域构建一个单应性矩阵,不同的目标图像区域可以存在重叠区域,当然,在图像中,也会存在目标图像区域以外的其它图像区域。确定至少两个目标图像区域各自对应的单应性矩阵的过程,可参见本公开提供的可能的实现方式。
目标图像区域可以是矩形、圆形等各种形状,目标图像区域的位置也可以通过图像坐标系中的坐标来表示。例如,在目标图像区域是矩形的情况下,可以通过矩形左上角角点的坐标和右下角角点的坐标,来表示目标图像区域在图像坐标系中的位置,例如,矩形的坐标可以表示为(x1,y1;x2,y2),其中,(x1,y1)表示矩形左上角角点的坐标,(x2,y2)表示矩形右下角角点的坐标。
在第一位置在图像坐标系中的位置确定后,即可根据第一位置和目标图像区域的位置,确定与第一位置相对应的目标图像区域,可以根据第一位置的坐标和目标图像区域的坐标,来确定与第一位置相对应的目标图像区域,而目标图像区域又与单应性矩阵相对应,因此,即可确定与第一位置对应的单应性矩阵。
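上述根据第一位置的坐标与目标图像区域的坐标判断所属区域的过程,可以用如下Python代码示意(假设目标图像区域均为矩形,以(x1, y1, x2, y2)表示其左上角与右下角坐标;函数与变量命名仅为示例假设):

```python
def regions_containing(point, regions):
    """返回包含 point 的所有目标图像区域的下标。

    point:   (x, y),图像坐标系中的第一位置
    regions: [(x1, y1, x2, y2), ...],满足 x1 <= x2、y1 <= y2
    返回多个下标时,说明第一位置位于重叠区域。
    """
    x, y = point
    return [i for i, (x1, y1, x2, y2) in enumerate(regions)
            if x1 <= x <= x2 and y1 <= y <= y2]
```

返回的下标列表为空时,说明第一位置位于所有目标图像区域之外,可转而按第一距离选取目标单应性矩阵;列表包含多个下标时,说明第一位置位于重叠区域,对应至少两个目标单应性矩阵。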
本公开实施例中,可以有多种可能的实现方式确定与第一位置相对应的目标单应性矩阵:
在一种可能的实现方式中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:在所述第一位置位于所述至少两个目标图像区域中的任一目标图像区域的情况下,将所述第一位置所在的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
也就是说,根据第一位置的坐标和目标图像区域的坐标,在确定第一位置位于某一目标图像区域中的情况下,则将第一位置所在的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:在所述第一位置位于所述至少两个目标图像区域外的情况下,确定所述第一位置与所述至少两个目标图像区域之间的第一距离;将所述第一距离满足距离条件的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
也就是说,根据第一位置的坐标和目标图像区域的坐标,在确定第一位置不在任何目标图像区域中的情况下,则可以选取第一位置附近的目标图像区域的单应性矩阵作为目标单应性矩阵。选取过程可以是根据第一位置与目标图像区域之间的第一距离来确定的。
第一位置与目标图像区域之间的第一距离,可以是第一位置与目标图像区域中距离最近的像素点之间的距离。例如,可以确定第一位置与目标图像区域的边界中各像素点之间的距离,将最小的距离作为第一位置与目标图像区域之间的第一距离。
此外,由于目标图像区域往往是规则的几何形状,例如是矩形等多边形,那么,第一位置到目标图像区域的边界的距离,即点到线段的距离。在确定点p到线段ab(端点为a和b)的距离时,可以利用向量点乘求出向量ap在向量ab上的投影长度l₁:若l₁的值小于等于0,则表明点p投影在线段ab的反向延长线上,点p和线段ab之间的距离即为点a和点p的距离;若l₁的值大于等于线段ab的长度,则表明点p投影在线段ab的正向延长线上,点p和线段ab之间的距离即为点b和点p的距离;若l₁的值为正数且小于线段ab的长度,则再求出点a到点p的距离l₂,然后利用勾股定理,即可求出点p和线段ab之间的距离为√(l₂² - l₁²)。
在确定出第一位置到目标图像区域的各边的距离后,可以将第一位置与目标图像区域的各边的距离中最小的距离,作为第一位置与目标图像区域的第一距离。
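上文描述的先用向量投影与勾股定理求点到线段的距离、再取各边距离的最小值作为第一距离的过程,可按如下Python代码示意(按正文步骤整理的示例实现,矩形区域以(x1, y1, x2, y2)表示):

```python
import math

def point_to_segment_distance(p, a, b):
    """按正文描述计算点 p 到线段 ab 的距离。"""
    apx, apy = p[0] - a[0], p[1] - a[1]
    abx, aby = b[0] - a[0], b[1] - a[1]
    ab_len = math.hypot(abx, aby)
    # l1:向量 ap 在向量 ab 上的投影长度(点乘除以 |ab|)
    l1 = (apx * abx + apy * aby) / ab_len
    if l1 <= 0:            # 投影落在 ab 的反向延长线上
        return math.hypot(apx, apy)
    if l1 >= ab_len:       # 投影落在 ab 的正向延长线上
        return math.hypot(p[0] - b[0], p[1] - b[1])
    l2 = math.hypot(apx, apy)            # 点 a 到点 p 的距离
    return math.sqrt(max(l2 * l2 - l1 * l1, 0.0))  # 勾股定理

def point_to_rect_distance(p, rect):
    """第一距离:点到矩形目标图像区域各边距离中的最小值。"""
    x1, y1, x2, y2 = rect
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    return min(point_to_segment_distance(p, corners[i], corners[(i + 1) % 4])
               for i in range(4))
```

在此基础上,对各目标图像区域分别计算第一距离后,即可按"距离最小"或"距离小于距离阈值"等距离条件选出目标单应性矩阵。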
这里的距离条件可以包括以下至少一种:距离小于距离阈值,距离最小。此外,距离条件还可以包括其它多种,可根据实际应用来确定。
在距离条件是距离最小的情况下,可以将第一距离中距离最小的目标图像区域对应的单应性矩阵,作为目标单应性矩阵;在距离条件是距离小于距离阈值的情况下,可以将第一距离中距离小于距离阈值的至少一个目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:在所述第一位置位于重叠区域的情况下,将所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,作为目标单应性矩阵,所述重叠区域由所述至少两个目标图像区域确定。
也就是说,在第一位置位于至少两个目标图像区域中的情况下,则将该至少两个目标图像区域各自对应的单应性矩阵,均作为目标单应性矩阵。
在步骤S13中,基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。
由于单应性矩阵指示世界坐标系和图像坐标系之间的转换关系,而第一位置是图像坐标系中的位置,那么,根据该目标单应性矩阵,可以将图像坐标系中的该第一位置对应至世界坐标系中的位置。
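基于目标单应性矩阵将第一位置映射到世界坐标系的计算,即把第一位置写成齐次坐标后与单应性矩阵相乘,再除以结果的第三个分量,示意如下(假设H为3×3的、从图像坐标映射到世界平面坐标的单应性矩阵):

```python
import numpy as np

def image_to_world(H, point):
    """利用单应性矩阵 H(3x3)将图像坐标系中的第一位置映射到世界坐标系。"""
    u, v = point
    x, y, w = H @ np.array([u, v, 1.0])  # 齐次坐标与单应性矩阵相乘
    return (x / w, y / w)                # 除以第三个分量,得到世界坐标
```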
在一种可能的实现方式中,根据确定的世界坐标系中的位置,可以衡量该位置距离视觉传感器的距离,从而实现图像测距。
在一种可能的实现方式中,根据确定的世界坐标系中的位置,可以实现机器视觉应用,例如障碍物检测、避障等。
在本公开实施例中,通过确定视觉传感器采集的图像中目标点的第一位置;从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。由此,由于相机镜头存在畸变等非线性特征导致不同的目标图像区域与世界坐标系的映射关系不同,那么,可以针对各目标图像区域分别设置对应的单应性矩阵,在确定目标点在世界坐标系中的位置的情况下,可以选取与该位置所处的目标图像区域对应的单应性矩阵,来计算目标点在世界坐标系中的位置,减少了相机镜头畸变等非线性特征的干扰,提高了确定的目标点在世界坐标系中位置的准确性。
在本公开实施例中,通过至少两个单应性矩阵,可以更好地拟合视觉传感器的图像平面到物体所在平面的单应性变换。尤其是对于低成本相机而言,图像的畸变校正无法完全去除图像的非线性,使用单一单应性矩阵很难完美拟合图像平面到物体所在平面的单应性变换,而使用多个单应性矩阵可以提高测距精度。
如前文所述,与第一位置对应的目标单应性矩阵可能是一个,也可能是至少两个。
在一种可能的实现方式中,在第一位置对应于一个目标单应性矩阵的情况下,所述基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置,包括:基于该一个单应性矩阵,将图像坐标系中的第一位置对应至世界坐标系中的位置,得到目标点在世界坐标系中的位置。
在一种可能的实现方式中,在所述第一位置对应至少两个目标单应性矩阵的情况下,所述基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置,包括:分别基于各所述目标单应性矩阵和所述第一位置,得到所述目标点在世界坐标系中的第二位置;基于分别得到的至少两个所述第二位置,确定所述目标点在世界坐标系中的位置。
在本实现方式中,分别基于各目标单应性矩阵和所述第一位置,得到目标点在世界坐标系中的第二位置,也就是说,可以通过每个目标单应性矩阵得到目标点在世界坐标系中的一个位置,那么通过至少两个单应性矩阵,就可以得到至少两个世界坐标系中的第二位置。
针对得到的至少两个第二位置,可以对该至少两个第二位置的坐标值取平均值,得到取平均值后的坐标,例如,对x轴坐标取平均值后得到的值作为目标点在世界坐标系中的x轴坐标,对y轴坐标取平均值后得到的值作为目标点在世界坐标系中的y轴坐标。在其他示例中,针对得到的至少两个第二位置,可以对该至少两个第二位置的坐标值进行加权平均,得到目标点在世界坐标系中的位置,其中权值可以根据实际情况设置,例如根据第一位置与各目标单应性矩阵对应的目标图像区域之间的第一距离设置(通常距离越近,权值越大)。
针对得到的至少两个第二位置,在第二位置的数量多于两个的情况下,可以确定由这多个第二位置构成的几何图形的中心点,这里的中心点例如可以包括几何图形的重心、外心、内心、垂心、旁心中的一种。然后将这多个第二位置构成的几何图形的中心点,作为目标点在世界坐标系中的位置。
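对至少两个第二位置取平均值或加权平均以确定最终位置的做法,可用如下代码示意(权值的具体设置仅为假设示例):

```python
import numpy as np

def fuse_positions(positions, weights=None):
    """基于至少两个第二位置确定目标点在世界坐标系中的位置。

    positions: [(x, y), ...],各目标单应性矩阵分别得到的第二位置
    weights:   可选权值;为 None 时退化为简单取平均值
    """
    pts = np.asarray(positions, dtype=float)
    if weights is None:
        return tuple(pts.mean(axis=0))       # 简单平均
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # 归一化权值
    return tuple((pts * w[:, None]).sum(axis=0))  # 加权平均
```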
在本公开实施例中,在第一位置对应至少两个目标单应性矩阵的情况下,通过分别基于各目标单应性矩阵和第一位置,得到目标点在世界坐标系中的第二位置;然后基于至少两个第二位置,确定目标点在世界坐标系中的位置。由此,减少了相机镜头畸变等非线性特征的干扰,提高了确定的目标点在世界坐标系中位置的准确性。
此外,在实际应用中,例如,在测距应用中,在分别基于各目标单应性矩阵和第一位置,得到目标点在世界坐标系中的至少两个位置后,可以分别基于这至少两个位置得到目标点的距离,然后对得到的多个距离取平均值,将平均值作为最终的测距结果。
以上描述了本公开实施例中利用至少两个单应性矩阵确定目标点在世界坐标系中位置的过程,下面将会对构建至少两个单应性矩阵的实现方式进行描述。
在一种可能的实现方式中,在确定与所述第一位置相对应的目标单应性矩阵前,所述方法还包括:利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。
标定物置于真实空间中的世界坐标系中,标定物上面往往会包含参考点,利用该参考点在世界坐标系中的位置和在图像坐标系中的位置之间的对应关系,即可确定视觉传感器的单应性矩阵。该标定物可以是位于平面上的图案,例如,可以是位于纸张等平面上的图案。
标定物上的图案可以是圆形、长方形、正方形等图案中的一种或多种,图案中可以划分为不同的区域,相邻的区域可以用不同的颜色来表示,而参考点可以位于相邻区域的交界处,以便于在视觉传感器的图像上对参考点进行识别。
目标图像区域对应的单应性矩阵是基于目标图像区域的参考点来确定的,至少两个参考点即可用来标定一个目标图像区域的单应性矩阵,通过选取不同目标图像区域的参考点,利用各目标图像区域的至少两个参考点,即可得到各目标图像区域的单应性矩阵。对于根据参考点得到单应性矩阵的过程,可以参见相机标定的相关技术。
在本公开实施例中,通过利用预设的标定物中的多个参考点,确定视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。由此,由于相机镜头存在畸变等非线性特征导致不同的目标图像区域与世界坐标系的映射关系不同,那么,可以针对各目标图像区域分别设置对应的单应性矩阵,在确定目标点在世界坐标系中的位置的情况下,可以选取与该位置所处的目标图像区域对应的单应性矩阵,来计算目标点在世界坐标系中的位置,提高了确定的目标点在世界坐标系中位置的准确性。
在一种可能的实现方式中,所述利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵,包括:确定所述标定物中的多个参考点在世界坐标系中的第一坐标值;确定所述多个参考点在所述视觉传感器采集的图像中的第二坐标值;根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,其中,单个所述参考点组合所在的图像区域为单个目标图像区域,单个所述参考点组合包括所述多个参考点中的部分参考点。
在视觉传感器标定的过程中,会通过同一参考点在世界坐标系中的第一坐标值和图像中的第二坐标值之间的对应关系,来确定视觉传感器的单应性矩阵。在本公开实施例中,一个目标图像区域对应一个参考点组合,通过这一个参考点组合来确定目标图像区域对应的单应性矩阵。
对参考点组合的方式可以有多种,例如,可以将图像中相邻的N个参考点组合在一起,构成参考点组合。在一种可能的实现方式中,所述参考点组合中包括N个相邻的参考点,N为大于1的整数;所述根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,包括:在所述标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成参考点组合,得到多个参考点组合,其中,在剩余的参考点数量不足N个的情况下,选取与所述剩余的参考点相邻的参考点补齐N个参考点,构成参考点组合;根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,分别针对各所述参考点组合,确定单应性矩阵。
作为一种可能的实现方式,可以从图像中标定物的参考点中,按照一定的顺序,例如从左往右,选取N个相邻的参考点构成参考点组合,然后再继续往右选取N个相邻的参考点组合,直至遍历完整个标定物中的参考点。或者也可以以标定物的中心为起始点,选取N个参考点,然后分别向外依次选取N个相邻的参考点组合,直至遍历完整个标定物中的参考点。需要说明的是,在剩余的参考点数量不足N个的情况下,可以选取与剩余的参考点相邻的参考点补齐N个参考点,构成参考点组合。
请参阅图2,为本公开实施例提供的一种标定物,其中,符号A、B、C、D、E…U用来指示参考点的位置,以参考点的数量为9为例,从左到右依次选取3×3的9个点作为一个参考点组合,第一次选取A-B-C-H-I-J-O-P-Q作为一个参考点组合,第二次选取D-E-F-K-L-M-R-S-T作为一个参考点组合,剩余的G、N、U三个参考点不足9个,则用左侧的相邻参考点补齐,选取E-F-G-L-M-N-S-T-U作为一个参考点组合,后续还会再选取标定物中剩余的参考点继续构成参考点组合。
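以图2的参考点布局(3行×7列,A至U)为例,上述依次选取3×3参考点组合、剩余不足时向左补齐的选取方式,可按如下Python代码示意(网格的组织方式为示例假设):

```python
def select_reference_groups(grid, width=3):
    """按正文方式从参考点网格中依次选取相邻列构成参考点组合。

    grid:  参考点的行列布局,例如 3 行 x 7 列
    width: 每个组合包含的相邻列数(组合大小 N = 行数 x width)
    当剩余列不足 width 时,用左侧相邻列补齐。
    """
    num_cols = len(grid[0])
    groups = []
    for start in range(0, num_cols, width):
        if start + width > num_cols:          # 剩余参考点不足,向左补齐
            start = num_cols - width
        groups.append([row[c] for row in grid
                       for c in range(start, start + width)])
    return groups
```

对图2中3行×7列的参考点依次选取,即得到正文中A-B-C-H-I-J-O-P-Q、D-E-F-K-L-M-R-S-T以及补齐后的E-F-G-L-M-N-S-T-U三个参考点组合。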
针对选取的各参考点组合,分别利用参考点组合中的参考点的第一坐标值和第二坐标值,来计算参考点组合对应的单应性矩阵,单个参考点组合所围成的图像区域,即为一个目标图像区域。
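根据参考点组合中各参考点的第一坐标值与第二坐标值的对应关系求解单应性矩阵,常见做法之一是直接线性变换(DLT)结合SVD求解,示意如下(此处为通用示例,并非本公开限定的求解方式;实践中也可使用OpenCV的cv2.findHomography等现成接口):

```python
import numpy as np

def compute_homography(src_pts, dst_pts):
    """用 DLT 从至少 4 组对应点求解 src -> dst 的 3x3 单应性矩阵。"""
    assert len(src_pts) == len(dst_pts) >= 4
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # 每组对应点贡献两个线性约束 A h = 0
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)      # 最小奇异值对应的右奇异向量即为解
    return H / H[2, 2]            # 归一化,使 H[2, 2] = 1
```

对每个参考点组合分别调用一次(src为图像中的第二坐标值,dst为世界坐标系中的第一坐标值),即可得到各目标图像区域对应的单应性矩阵。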
在本公开实施例中,还可以通过其它方式来对参考点进行组合。
在本公开实施例中,通过在标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成参考点组合,此外,在剩余的参考点数量不足N个的情况下,也可以选取与剩余的参考点相邻的参考点补齐N个参考点,构成参考点组合。由此,得到的多个参考点组合构建的各单应性矩阵能够较为全面地覆盖图像区域,以尽可能地减少不同的图像区域由于相机镜头存在畸变等非线性特征造成的影响。在确定目标点在世界坐标系中的位置的情况下,可以选取与该位置所处的目标图像区域对应的单应性矩阵,来计算目标点在世界坐标系中的位置,提高了确定的目标点在世界坐标系中位置的准确性。
下面对本公开实施例的一个应用场景进行说明,该应用场景中未详尽描述之处可参见前文的相应记载。在该应用场景中,通过如图2所示的标定物来确定视觉传感器的多个单应性矩阵,标定物放置于地面上。在该应用场景中,本公开提供的位置确定方法的实现过程包括:
步骤S21,确定标定物中的各参考点在世界坐标系中的第一坐标值。
该第一坐标值可以由人工测量得到,世界坐标系可以是以机器人所在位置为原点,以地面为平面构建的二维坐标系。
步骤S22,确定标定物中的各参考点在视觉传感器采集的图像中的第二坐标值。
该过程可以基于图像检测技术来实现,通过图像检测来确定图像中的参考点的第二坐标值,或者该过程也可以是基于人工标注操作来确定的。
步骤S23,针对检测到的各参考点,从标定物左侧往右侧依次选取3×3的参考点组合,得到多个参考点组合。
在剩下的参考点不足3×3的情况下,可以利用相邻的参考点补齐。选取的过程可参见前文的相关描述。
步骤S24,针对各参考点组合,根据同一参考点的第一坐标值和第二坐标值的对应关系,确定各参考点组合对应的单应性矩阵。
其中,单个参考点组合所在的图像区域为单个目标图像区域。
以上为确定各目标图像区域的单应性矩阵的过程,下面对该应用场景中基于确定的各单应性矩阵进行位置确定的过程进行描述。
步骤S31,确定视觉传感器采集的图像中目标点的第一位置。
步骤S32,在第一位置位于单个目标图像区域中的情况下,利用第一位置所在的目标图像区域对应的单应性矩阵,确定目标点在世界坐标系中的位置。
例如,若第一位置处于点I处,则使用A-B-C-H-I-J-O-P-Q参考点所在目标图像区域对应的单应性矩阵确定目标点在世界坐标系中的位置。
步骤S33,在第一位置位于重叠区域的情况下,利用所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,分别得到目标点在世界坐标系中的第二位置,然后利用得到的至少两个第二位置,确定目标点在世界坐标系中的位置。
例如,若第一位置处于点J处,则使用A-B-C-H-I-J-O-P-Q参考点所在目标图像区域对应的单应性矩阵,确定目标点在世界坐标系中的第二位置;然后使用D-E-F-K-L-M-R-S-T参考点所在目标图像区域对应的单应性矩阵,确定目标点在世界坐标系中的第二位置。将两个第二位置的中点,作为目标点在世界坐标系中的位置。
步骤S34,在第一位置位于目标图像区域外的情况下,确定第一位置与目标图像区域之间的第一距离,并利用第一距离最小的目标图像区域对应的单应性矩阵,确定目标点在世界坐标系中的位置。
例如,若第一位置位于A点左侧,则计算第一位置与各目标图像区域之间的距离,确定A-B-C-H-I-J-O-P-Q参考点所在目标图像区域为距离最近的目标图像区域,然后利用该目标图像区域对应的单应性矩阵,确定目标点在世界坐标系中的位置。
在本公开实施例中,通过至少两个单应性矩阵,可以更好地拟合视觉传感器的图像平面到物体所在平面的单应性变换。尤其是对于低成本相机而言,图像的畸变校正无法完全去除图像的非线性,使用单一单应性矩阵很难完美拟合图像平面到物体所在平面的单应性变换,而使用多个单应性矩阵可以提高测距精度。
在本公开实施例中,基于标定好的视觉传感器的参数,即可进行后续的应用,例如,可以基于标定后的视觉传感器进行图像测距,根据图像中的目标物的位置,确定该目标物距离视觉传感器的距离,或是距离搭载视觉传感器的机器人的距离,图像测距可应用于自动驾驶、扫地机器人的清扫工作等应用中,具有较高的应用价值。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的执行顺序应当以其功能和可能的内在逻辑确定。
此外,本公开还提供了位置确定装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开提供的任一种位置确定方法,相应技术方案和描述和参见方法部分的相应记载。
图3示出根据本公开实施例的位置确定装置的框图,如图3所示,所述位置确定装置40包括:
第一位置确定部分41,被配置为确定视觉传感器采集的图像中目标点的第一位置;
目标单应性矩阵确定部分42,被配置为从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;
第二位置确定部分43,被配置为基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。
在一种可能的实现方式中,所述目标单应性矩阵确定部分42,还被配置为在所述第一位置位于所述至少两个目标图像区域中的任一目标图像区域的情况下,将所述第一位置所在的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述目标单应性矩阵确定部分42,还被配置为在所述第一位置位于所述至少两个目标图像区域外的情况下,确定所述第一位置与所述至少两个目标图像区域之间的第一距离;将所述第一距离满足距离条件的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
在一种可能的实现方式中,所述目标单应性矩阵确定部分42,还被配置为在所述第一位置位于重叠区域的情况下,将所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,作为目标单应性矩阵,所述重叠区域由所述至少两个目标图像区域确定。
在一种可能的实现方式中,所述第二位置确定部分43,还被配置为在所述第一位置对应至少两个目标单应性矩阵的情况下,分别基于各所述目标单应性矩阵和所述第一位置,得到所述目标点在世界坐标系中的第二位置;基于分别得到的至少两个所述第二位置,确定所述目标点在世界坐标系中的位置。
在一种可能的实现方式中,所述装置还包括:
单应性矩阵确定部分,被配置为利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。
在一种可能的实现方式中,所述单应性矩阵确定部分,包括:
第一坐标值确定子部分,被配置为确定所述标定物中的多个参考点在世界坐标系中的第一坐标值;
第二坐标值确定子部分,被配置为确定所述多个参考点在所述视觉传感器采集的图像中的第二坐标值;
单应性矩阵确定子部分,被配置为根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,其中,单个所述参考点组合所在的图像区域为单个目标图像区域,单个所述参考点组合包括所述多个参考点中的部分参考点。
在一种可能的实现方式中,所述参考点组合中包括N个相邻的参考点,N为大于1的整数;
所述单应性矩阵确定子部分,还被配置为在所述标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成参考点组合,得到多个参考点组合,其中,在剩余的参考点数量不足N个的情况下,选取与所述剩余的参考点相邻的参考点补齐N个参考点,构成参考点组合;根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,分别针对各所述参考点组合,确定单应性矩阵。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的部分可以被配置为执行上文方法实施例描述的方法,其实现方式和技术效果可以参照上文方法实施例的描述。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。计算机可读存储介质可以是易失性或非易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述方法。
本公开实施例还提供了一种计算机程序产品,包括计算机可读代码,当计算机可读代码在设备上运行时,设备中的处理器执行用于实现如上任一实施例提供的位置确定方法的指令。
本公开实施例还提供了另一种计算机程序产品,用于存储计算机可读指令,指令被执行时使得计算机执行上述任一实施例提供的位置确定方法的操作。
电子设备可以被提供为终端、服务器或其它形态的设备。
图4示出根据本公开实施例的一种电子设备800的框图。例如,电子设备800可以是移动电话,计算机,数字广播终端,消息收发设备,游戏控制台,平板设备,医疗设备,健身设备,个人数字助理等终端。
参照图4,电子设备800可以包括以下一个或多个组件:处理组件802,存储器804,电源组件806,多媒体组件808,音频组件810,输入/输出(I/O)的接口812,传感器组件814,以及通信组件816。
处理组件802通常控制电子设备800的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个部分,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体模块,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在电子设备800的操作。这些数据的示例包括用于在电子设备800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如铁电存储器(Ferroelectric Random Access Memory,FRAM)、只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable Read-Only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)、带电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、闪存、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备。
电源组件806为电子设备800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述电子设备800和用户之间提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(Liquid Crystal Display,LCD)和触摸面板(Touch Panel,TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统,或具有焦距和光学变焦能力的光学透镜系统。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为电子设备800提供各个方面的状态评估。例如,传感器组件814可以检测到电子设备800的打开/关闭状态,组件的相对定位,例如所述组件为电子设备800的显示器和小键盘,传感器组件814还可以检测电子设备800或电子设备800一个组件的位置改变,用户与电子设备800接触的存在或不存在,电子设备800方位或加速/减速和电子设备800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)或电荷耦合装置(Charge coupled Device,CCD)图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。
通信组件816被配置为便于电子设备800和其他设备之间有线或无线方式的通信。电子设备800可以接入基于通信标准的无线网络,如无线网络(WiFi),第二代移动通信技术(2-Generation wireless telephone technology,2G)或第三代移动通信技术(3-Generation wireless telephone technology,3G),或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(Near Field Communication,NFC)模块,以促进短程通信。例如,NFC模块可基于射频识别(Radio Frequency Identification,RFID)技术,红外数据协会(Infrared Data Association,IrDA)技术,超宽带(Ultra Wide Band,UWB)技术,蓝牙(Bluetooth,BT)技术和其他技术来实现。
在示例性实施例中,电子设备800可以被一个或多个应用专用集成电路(Application Specific Integrated Circuit,ASIC)、数字信号处理器(Digital Signal Processor,DSP)、数字信号处理设备(Digital Signal Processing Device,DSPD)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述方法。
图5示出根据本公开实施例的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图5,电子设备1900包括处理组件1922,其进一步包括一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理组件1922执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的部分,每一个部分对应于一组指令。此外,处理组件1922被配置为执行指令,以执行上述方法。
电子设备1900还可以包括一个电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932的操作系统,例如微软服务器操作系统(Windows ServerTM),苹果公司推出的基于图形用户界面操作系统(Mac OS XTM),多用户多进程的计算机操作系统(UnixTM),自由和开放原代码的类Unix操作系统(LinuxTM),开放原代码的类Unix操作系统(FreeBSDTM)或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是(但不限于)电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个部分、程序段或指令的一部分,所述部分、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
该计算机程序产品可以通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品体现为计算机存储介质,在另一个可选实施例中,计算机程序产品体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。
工业实用性
本公开涉及一种位置确定方法及装置、电子设备和存储介质,所述方法包括:确定视觉传感器采集的图像中目标点的第一位置;从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。本公开实施例可提高确定的目标点在世界坐标系中位置的准确性。

Claims (19)

  1. 一种位置确定方法,包括:
    确定视觉传感器采集的图像中目标点的第一位置;
    从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;
    基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。
  2. 根据权利要求1所述的方法,其中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:
    在所述第一位置位于所述至少两个目标图像区域中的任一目标图像区域的情况下,将所述第一位置所在的目标图像区域对应的单应性矩阵,作为所述目标单应性矩阵。
  3. 根据权利要求1所述的方法,其中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:
    在所述第一位置位于所述至少两个目标图像区域外的情况下,确定所述第一位置与所述至少两个目标图像区域之间的第一距离;
    将所述第一距离满足距离条件的目标图像区域对应的单应性矩阵,作为所述目标单应性矩阵。
  4. 根据权利要求1所述的方法,其中,所述确定与所述第一位置相对应的目标单应性矩阵,包括:
    在所述第一位置位于重叠区域的情况下,将所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,作为目标单应性矩阵,所述重叠区域由所述至少两个目标图像区域确定。
  5. 根据权利要求1至4任一项所述的方法,其中,所述基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置,包括:
    在所述第一位置对应至少两个所述目标单应性矩阵的情况下,分别基于各所述目标单应性矩阵和所述第一位置,得到所述目标点在所述世界坐标系中的第二位置;
    基于分别得到的至少两个所述第二位置,确定所述目标点在世界坐标系中的位置。
  6. 根据权利要求1至5任一项所述的方法,其中,在确定与所述第一位置相对应的目标单应性矩阵前,所述方法还包括:
    利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。
  7. 根据权利要求6所述的方法,其中,所述利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵,包括:
    确定所述标定物中的多个参考点在所述世界坐标系中的第一坐标值;
    确定所述多个参考点在所述视觉传感器采集的图像中的第二坐标值;
    根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,其中,单个所述参考点组合所在的图像区域为单个目标图像区域,单个所述参考点组合包括所述多个参考点中的部分参考点。
  8. 根据权利要求7所述的方法,其中,所述参考点组合中包括N个相邻的参考点,N为大于1的整数;
    所述根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,包括:
    在所述标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成所述参考点组合,得到多个所述参考点组合,其中,在剩余的参考点数量不足N个的情况下,选取与所述剩余的参考点相邻的参考点补齐N个参考点,构成所述参考点组合;
    根据所述同一参考点的所述第一坐标值和所述第二坐标值的对应关系,分别针对各所述参考点组合,确定单应性矩阵。
  9. 一种位置确定装置,包括:
    第一位置确定部分,被配置为确定视觉传感器采集的图像中目标点的第一位置;
    目标单应性矩阵确定部分,被配置为从所述图像的至少两个目标图像区域各自对应的单应性矩阵中,确定与所述第一位置相对应的目标单应性矩阵;
    第二位置确定部分,被配置为基于所述目标单应性矩阵和所述第一位置,确定所述目标点在世界坐标系中的位置。
  10. 根据权利要求9所述的装置,其中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于所述至少两个目标图像区域中的任一目标图像区域的情况下,将所述第一位置所在的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
  11. 根据权利要求9所述的装置,其中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于所述至少两个目标图像区域外的情况下,确定所述第一位置与所述至少两个目标图像区域之间的第一距离;将所述第一距离满足距离条件的目标图像区域对应的单应性矩阵,作为目标单应性矩阵。
  12. 根据权利要求9所述的装置,其中,所述目标单应性矩阵确定部分,还被配置为在所述第一位置位于重叠区域的情况下,将所述重叠区域对应的至少两个目标图像区域各自对应的单应性矩阵,作为目标单应性矩阵,所述重叠区域由所述至少两个目标图像区域确定。
  13. 根据权利要求9至12任一项所述的装置,其中,所述第二位置确定部分,还被配置为在所述第一位置对应至少两个所述目标单应性矩阵的情况下,分别基于各所述目标单应性矩阵和所述第一位置,得到所述目标点在所述世界坐标系中的第二位置;基于分别得到的至少两个所述第二位置,确定所述目标点在世界坐标系中的位置。
  14. 根据权利要求9至13任一项所述的装置,其中,所述装置还包括:
    单应性矩阵确定部分,被配置为利用预设的标定物中的多个参考点,确定所述视觉传感器所采集的图像中,至少两个目标图像区域中的每一个目标图像区域对应的单应性矩阵。
  15. 根据权利要求14所述的装置,其中,所述单应性矩阵确定部分,包括:
    第一坐标值确定子部分,被配置为确定所述标定物中的多个参考点在世界坐标系中的第一坐标值;
    第二坐标值确定子部分,被配置为确定所述多个参考点在所述视觉传感器采集的图像中的第二坐标值;
    单应性矩阵确定子部分,被配置为根据同一参考点的所述第一坐标值和所述第二坐标值的对应关系,确定不同的参考点组合对应的单应性矩阵,其中,单个所述参考点组合所在的图像区域为单个目标图像区域,单个所述参考点组合包括所述多个参考点中的部分参考点。
  16. 根据权利要求15所述的装置,其中,所述参考点组合中包括N个相邻的参考点,N为大于1的整数;
    所述单应性矩阵确定子部分,还被配置为在所述标定物的多个参考点中,按照预设的顺序,依次选取N个相邻的参考点构成所述参考点组合,得到多个所述参考点组合,其中,在剩余的参考点数量不足N个的情况下,选取与所述剩余的参考点相邻的参考点补齐N个参考点,构成所述参考点组合;根据所述同一参考点的所述第一坐标值和所述第二坐标值的对应关系,分别针对各所述参考点组合,确定单应性矩阵。
  17. 一种电子设备,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求1至8中任一项所述的方法。
  18. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至8中任一项所述的方法。
  19. 一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至8中任一项所述的方法。
PCT/CN2021/120916 2021-04-23 2021-09-27 一种位置确定方法及装置、电子设备和存储介质 WO2022222379A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110442278.4A CN113052900A (zh) 2021-04-23 2021-04-23 一种位置确定方法及装置、电子设备和存储介质
CN202110442278.4 2021-04-23

Publications (1)

Publication Number Publication Date
WO2022222379A1 true WO2022222379A1 (zh) 2022-10-27

Family

ID=76520102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/120916 WO2022222379A1 (zh) 2021-04-23 2021-09-27 一种位置确定方法及装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN113052900A (zh)
WO (1) WO2022222379A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052900A (zh) * 2021-04-23 2021-06-29 深圳市商汤科技有限公司 一种位置确定方法及装置、电子设备和存储介质
CN115275876B (zh) * 2022-08-09 2023-02-03 江阴市浩盛电器线缆制造有限公司 电缆敷设动态释放系统及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567989A (zh) * 2011-11-30 2012-07-11 重庆大学 基于双目立体视觉的空间定位方法
CN103578109A (zh) * 2013-11-08 2014-02-12 中安消技术有限公司 一种监控摄像机测距方法及装置
JP2019121176A (ja) * 2018-01-05 2019-07-22 オムロン株式会社 位置特定装置、位置特定方法、位置特定プログラムおよびカメラ装置
CN111598956A (zh) * 2020-04-30 2020-08-28 商汤集团有限公司 标定方法、装置和系统
CN113052900A (zh) * 2021-04-23 2021-06-29 深圳市商汤科技有限公司 一种位置确定方法及装置、电子设备和存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012086285A (ja) * 2010-10-15 2012-05-10 Seiko Epson Corp 追跡ロボット装置、追跡ロボット制御方法、追跡ロボット制御プログラム、ホモグラフィー行列取得装置、ホモグラフィー行列取得方法、およびホモグラフィー行列取得プログラム
JP2014102746A (ja) * 2012-11-21 2014-06-05 Nippon Telegr & Teleph Corp <Ntt> 被写体認識装置及び被写体認識プログラム
CN105046657B (zh) * 2015-06-23 2018-02-09 浙江大学 一种图像拉伸畸变自适应校正方法
CN107578376B (zh) * 2017-08-29 2021-06-22 北京邮电大学 基于特征点聚类四叉划分和局部变换矩阵的图像拼接方法
CN109978760B (zh) * 2017-12-27 2023-05-02 杭州海康威视数字技术股份有限公司 一种图像拼接方法及装置
CN110223222B (zh) * 2018-03-02 2023-12-05 株式会社理光 图像拼接方法、图像拼接装置和计算机可读存储介质
CN109285136B (zh) * 2018-08-31 2021-06-08 清华-伯克利深圳学院筹备办公室 一种图像的多尺度融合方法、装置、存储介质及终端
CN112488914A (zh) * 2019-09-11 2021-03-12 顺丰科技有限公司 图像拼接方法、装置、终端及计算机可读存储介质
CN110930336B (zh) * 2019-11-29 2023-11-28 深圳市商汤科技有限公司 图像处理方法及装置、电子设备和存储介质
CN111833405B (zh) * 2020-07-27 2023-12-08 北京大华旺达科技有限公司 基于机器视觉的标定识别方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567989A (zh) * 2011-11-30 2012-07-11 重庆大学 基于双目立体视觉的空间定位方法
CN103578109A (zh) * 2013-11-08 2014-02-12 中安消技术有限公司 一种监控摄像机测距方法及装置
JP2019121176A (ja) * 2018-01-05 2019-07-22 オムロン株式会社 位置特定装置、位置特定方法、位置特定プログラムおよびカメラ装置
CN111598956A (zh) * 2020-04-30 2020-08-28 商汤集团有限公司 标定方法、装置和系统
CN113052900A (zh) * 2021-04-23 2021-06-29 深圳市商汤科技有限公司 一种位置确定方法及装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN113052900A (zh) 2021-06-29


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022529318

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21937601

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26.01.2024)