CN113902843A - Cross recognition method and device for ray where camera 2D point is located - Google Patents

Cross recognition method and device for ray where camera 2D point is located

Info

Publication number
CN113902843A
CN113902843A (application CN202111118043.6A)
Authority
CN
China
Prior art keywords
ray
point
ith
tubular object
tubular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111118043.6A
Other languages
Chinese (zh)
Inventor
吴昆临
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202111118043.6A
Publication of CN113902843A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/21 Collision detection, intersection

Abstract

The invention relates to a cross recognition method and device, a computer device, and a storage medium for the rays on which camera 2D points lie. The method comprises the following steps: when N 2D points carrying different numbers are formed in the camera, generating a corresponding tubular object in a spatial scene for each 2D point, and then binding each tubular object to its corresponding number; acquiring the ith ray generated by the ith 2D point and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers; if so, recording the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the crossing position of the ith ray with each of those tubular objects; and assigning i+1 to i and returning to perform the collision recognition operation for the ray generated by the next 2D point. The method executes the verification of the crossing relationship and the subsequent calculation of the crossing position in an orderly manner, thereby reducing computational complexity.

Description

Cross recognition method and device for ray where camera 2D point is located
Technical Field
The invention relates to the technical field of ray cross recognition, in particular to a cross recognition method and device for rays where camera 2D points are located, computer equipment and a storage medium.
Background
In the field of camera imaging applications, three-dimensional reconstruction refers to the process of recovering, via multi-view geometry, the 3D point (three-dimensional point) captured by a number of related 2D points (i.e., two-dimensional pixel points). An important step in this process is to identify whether the rays generated by those 2D points cross one another and, if so, to obtain the specific crossing positions. In the prior art this is usually done with a Ray-Triangle intersection detection algorithm, i.e., by determining whether the ray of one 2D point intersects a triangle associated with another 2D point. However, this approach must first calculate the intersection of the ray with the plane of the triangle and then verify whether that intersection falls inside the triangle before the final crossing relationship can be determined, which makes the calculation comparatively complicated.
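For context only, the two-step prior-art check described above (intersect the ray with the triangle's plane, then test whether the hit point lies inside the triangle) can be sketched as follows. This is a generic illustration, not the method claimed in this application; the function name, the edge-based inside test, and the epsilon guard are assumptions.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Generic prior-art style check: ray vs. triangle plane, then an inside-triangle test."""
    normal = np.cross(v1 - v0, v2 - v0)        # normal of the triangle's supporting plane
    denom = np.dot(normal, direction)
    if abs(denom) < eps:                       # ray is (nearly) parallel to the plane
        return None
    t = np.dot(normal, v0 - origin) / denom    # distance along the ray to the plane
    if t < 0:                                  # plane lies behind the ray origin
        return None
    p = origin + t * direction                 # intersection with the plane
    # Second step: verify the hit point falls inside the triangle (same-side edge tests).
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        if np.dot(normal, np.cross(b - a, p - a)) < 0:
            return None
    return p
```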
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a cross recognition method and device, a computer device, and a storage medium for the rays on which camera 2D points lie, which execute the verification of the crossing relationship and the subsequent calculation of the crossing position in an orderly manner, thereby reducing computational complexity.
In order to solve at least one technical problem, an embodiment of the present invention provides a method for cross-identifying a ray where a camera 2D point is located, where the method includes:
when N 2D points carrying different numbers are formed in the camera, generating a corresponding tubular object in a spatial scene for each 2D point, and then binding each tubular object to its corresponding number;
starting from i = 1, acquiring the ith ray generated by the ith 2D point, and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
if yes, recording the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the crossing position of the ith ray with each of those tubular objects;
and assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
Preferably, the generating a corresponding tubular object within the spatial scene for each 2D point comprises:
acquiring the optical center position c of the camera, the shortest distance t_min and the maximum distance t_max at which a ray generated by any 2D point is allowed to be captured in the spatial scene, and the direction information d_i of the ray generated by the ith 2D point;
Defining a distance error value when rays generated by any two 2D points are crossed, and creating a basic circle by taking the distance error value as a radius;
stretching the basic circle to obtain the tubular object corresponding to the ith 2D point, wherein the center position of one end face of the tubular object is defined as c + t_min·d_i and the center position of the other end face as c + t_max·d_i.
Preferably, the identifying whether the ith ray collides with at least one tubular object in the N-1 tubular objects with different numbers comprises:
starting from j = 1, creating a space bounding box for the jth tubular object;
taking the XOY plane of the spatial scene as reference, tiling and unfolding the space bounding box into a cross shape, thereby obtaining a fixed boundary interval that covers the size of every face of the space bounding box in the unfolded state;
and after identifying whether the ith ray intersects the fixed boundary interval, assigning j+1 to j and, when j is judged to be not greater than N-1, returning to perform the collision identification operation between the ith ray and the next tubular object.
Preferably, when the space bounding box is unfolded on the XOY plane, the fixed boundary interval requires the y-axis coordinate of any point on the four faces lying next to each other in the transverse direction to fall within the range y_0 ≤ y ≤ y_1, and the x-axis coordinate of any point on the three faces lying next to each other in the longitudinal direction to fall within the range x_0 ≤ x ≤ x_1.
Preferably, the identifying whether the ith ray intersects with the fixed boundary interval comprises:
based on the four boundary lines of the fixed boundary interval, respectively calculating the distance value t_x0 at which the ith ray intersects the boundary line x = x_0, the distance value t_x1 at which it intersects the boundary line x = x_1, the distance value t_y0 at which it intersects the boundary line y = y_0, and the distance value t_y1 at which it intersects the boundary line y = y_1;
judging whether the four distance values satisfy the condition max(t_x0, t_y0) < min(t_x1, t_y1);
if yes, determining that the ith ray intersects the fixed boundary interval, and marking the jth tubular object for distinction;
if not, determining that the ith ray does not intersect the fixed boundary interval.
Preferably, the calculating of the intersection position of the ith ray with each of the tubular objects comprises:
given that every ray is emitted along the central axis of the tubular object carrying the same number, when the ith ray collides with the jth tubular object, obtaining the jth ray carrying the same number as the jth tubular object;
extracting a point P_1 from the ith ray and a point Q_1 from the jth ray such that the line segment formed between point P_1 and point Q_1 is the shortest distance between the ith ray and the jth ray;
according to the coordinate positions of point P_1 and point Q_1, obtaining the center point of the line segment formed between them, and defining the coordinate position of the center point as the intersection position of the ith ray and the jth tubular object.
Preferably, after identifying whether the ith ray collides with at least one tubular object in the N-1 tubular objects with different numbers, the method further comprises:
and if not, assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
In addition, an embodiment of the present invention further provides a device for cross-identifying rays where a camera 2D point is located, where the device includes:
the generating module is used for generating a corresponding tubular object in the spatial scene for each 2D point when N 2D points carrying different numbers are formed in the camera, and then binding each tubular object to its corresponding number;
the identification module is used for acquiring, starting from i = 1, the ith ray generated by the ith 2D point, and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
the calculation module is used for recording, when the ith ray is recognized to collide with at least one of the N-1 tubular objects carrying different numbers, the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the intersection position of the ith ray with each of those tubular objects;
and the returning module is used for assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
In addition, an embodiment of the present invention further provides a computer device, comprising a memory, a processor, and an application program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any of the above embodiments when executing the application program.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, on which an application program is stored, and when the application program is executed by a processor, the steps of any one of the above-mentioned embodiments of the method are implemented.
In the embodiments of the invention, for the two rays generated by any two 2D points, the identification of a crossing between the two rays is converted into the identification of a collision between one ray and the tubular object created from the other ray, which is in turn converted into an intersection judgment between that ray and the two-dimensional planes of the space bounding box formed around the tubular object in its unfolded state; this allows the crossing relationship between the two rays to be verified reliably. Because the intersection position is only calculated once it is certain that the two rays cross, the computational complexity is reduced.
Drawings
Fig. 1 is a schematic flowchart of a cross recognition method for rays where a camera 2D point is located in an embodiment of the present invention;
FIG. 2 is a schematic illustration of a space bounding box in an embodiment of the invention in a flat, unfolded state;
fig. 3 is a schematic structural diagram of a device for cross recognition of a ray where a 2D point of a camera is located in an embodiment of the present invention;
fig. 4 is a schematic structural composition diagram of a computer device in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a cross identification method for rays where a camera 2D point is located, which comprises the following steps as shown in figure 1:
s101, when N2D points carrying different numbers are formed in a camera, generating a corresponding tubular object in a space scene for each 2D point, and then binding each tubular object by corresponding numbers;
the implementation process of the invention comprises the following steps:
(1) the optical center position of the camera is c, the shortest distance at which a ray generated by any 2D point is allowed to be captured in the spatial scene is t_min and the maximum distance is t_max, and the direction information of the ray generated by the ith 2D point is d_i, where the direction information d_i refers to the direction in which the ith 2D point points toward the optical center position;
(2) defining a distance error value for when the rays generated by any two 2D points cross, and creating a basic circle with the distance error value as its radius, where the distance error value is the precision required of the three-dimensional model after subsequent reconstruction;
(3) stretching the basic circle to obtain the tubular object corresponding to the ith 2D point, wherein the center position of one end face of the tubular object is defined as c + t_min·d_i and the center position of the other end face as c + t_max·d_i.
It should be noted that the tubular object corresponding to any 2D point is obtained by stretching the basic circle, and that its stretched length and its placement in the spatial scene are ultimately determined by the direction information of the ray generated by that 2D point.
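By way of illustration, steps (1) to (3) could be realized roughly as in the sketch below; the `Tube` container, the `make_tube` name, and the normalization of d_i are assumptions introduced for clarity rather than details prescribed by the embodiment.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Tube:
    center_near: np.ndarray   # end-face center at c + t_min * d_i
    center_far: np.ndarray    # end-face center at c + t_max * d_i
    radius: float             # distance error value used as the basic-circle radius
    direction: np.ndarray     # unit direction d_i of the generating ray
    number: int               # number bound to the 2D point and to the tube

def make_tube(c, d_i, t_min, t_max, distance_error, number):
    """Stretch the basic circle of radius `distance_error` along the ith ray."""
    c = np.asarray(c, dtype=float)
    d_i = np.asarray(d_i, dtype=float)
    d_i = d_i / np.linalg.norm(d_i)           # normalization assumed for illustration
    return Tube(center_near=c + t_min * d_i,
                center_far=c + t_max * d_i,
                radius=distance_error,
                direction=d_i,
                number=number)
```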
S102, starting from i = 1, acquiring the ith ray generated by the ith 2D point, and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
the implementation process of the invention comprises the following steps:
(1) starting from j = 1, creating a space bounding box for the jth tubular object; the space bounding box here is preferably an axis-aligned bounding box;
(2) taking the XOY plane of the spatial scene as reference, tiling and unfolding the space bounding box into a cross shape, thereby obtaining a fixed boundary interval that covers the size of every face of the space bounding box in the unfolded state;
More specifically, when the space bounding box is unfolded on the XOY plane, as can be seen from the schematic view of the space bounding box in its tiled, unfolded state shown in fig. 2, the fixed boundary interval requires the y-axis coordinate of any point on the four faces lying next to each other in the transverse direction to fall within the range y_0 ≤ y ≤ y_1, and the x-axis coordinate of any point on the three faces lying next to each other in the longitudinal direction to fall within the range x_0 ≤ x ≤ x_1; this achieves the goal of covering the size of every face of the space bounding box in the unfolded state.
(3) after identifying whether the ith ray intersects the fixed boundary interval, assigning j+1 to j and, when j is judged to be not greater than N-1, returning to perform the collision identification operation between the ith ray and the next tubular object.
More specifically, the identifying whether the ith ray intersects the fixed boundary interval includes: firstly, based on the four boundary lines of the fixed boundary interval, respectively calculating the distance value t_x0 at which the ith ray intersects the boundary line x = x_0, the distance value t_x1 at which it intersects the boundary line x = x_1, the distance value t_y0 at which it intersects the boundary line y = y_0, and the distance value t_y1 at which it intersects the boundary line y = y_1; secondly, judging whether the four distance values satisfy max(t_x0, t_y0) < min(t_x1, t_y1), with the corresponding judgment results: if yes, determining that the ith ray intersects the fixed boundary interval, i.e., the ith ray collides with the jth tubular object, and marking the jth tubular object for distinction; if not, determining that the ith ray does not intersect the fixed boundary interval, i.e., the ith ray does not collide with the jth tubular object.
The four distance values are calculated as follows:

t_x0 = (x_0 - x_i) / d_ix,  t_x1 = (x_1 - x_i) / d_ix,  t_y0 = (y_0 - y_i) / d_iy,  t_y1 = (y_1 - y_i) / d_iy

where x_i is the coordinate value of the 2D point corresponding to the ith ray on the x-axis, y_i is its coordinate value on the y-axis, d_ix (d_ix ≠ 0) is the direction value of the ith ray on the x-axis, and d_iy (d_iy ≠ 0) is the direction value of the ith ray on the y-axis.
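A minimal sketch of this boundary-interval test, using the four distance formulas above and the criterion max(t_x0, t_y0) < min(t_x1, t_y1), might look as follows; the function name, the parameter layout, and the early return when a direction component is zero are illustrative assumptions.

```python
def ray_hits_boundary_interval(x_i, y_i, d_ix, d_iy, x0, x1, y0, y1):
    """2D test of the ith ray against the fixed boundary interval of the unfolded box."""
    if d_ix == 0 or d_iy == 0:          # the formulas assume d_ix != 0 and d_iy != 0
        return False
    t_x0 = (x0 - x_i) / d_ix            # distance to the boundary line x = x_0
    t_x1 = (x1 - x_i) / d_ix            # distance to the boundary line x = x_1
    t_y0 = (y0 - y_i) / d_iy            # distance to the boundary line y = y_0
    t_y1 = (y1 - y_i) / d_iy            # distance to the boundary line y = y_1
    # Criterion stated in the text: the ray crosses the interval (and the jth
    # tubular object is marked) exactly when max(t_x0, t_y0) < min(t_x1, t_y1).
    return max(t_x0, t_y0) < min(t_x1, t_y1)
```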
After the above steps (1) to (3) have been executed cyclically N-1 times, if K (1 ≤ K ≤ N-1) marked tubular objects are identified, execution continues with step S103; conversely, if no marked tubular object is identified, execution jumps to step S104.
S103, recording the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the crossing position of the ith ray with each of those tubular objects;
The implementation process of the invention comprises the following steps: firstly, given that every ray is emitted along the central axis of the tubular object carrying the same number, when the ith ray collides with the jth tubular object, obtaining the jth ray carrying the same number as the jth tubular object; secondly, extracting a point P_1 from the ith ray and a point Q_1 from the jth ray such that the line segment formed between point P_1 and point Q_1 is the shortest distance between the ith ray and the jth ray; finally, according to the coordinate positions of point P_1 and point Q_1, obtaining the center point of the line segment formed between them and defining the coordinate position of the center point as the intersection position of the ith ray and the jth tubular object.
More specifically, point P_1 on the ith ray and point Q_1 on the jth ray are extracted as follows:
(1) the coordinate expressions of point P_1 and point Q_1 are defined as:

P_1 = P_0 + P_C·d_i
Q_1 = Q_0 + Q_C·d_j

where P_0 is the coordinate information of the 2D point corresponding to the ith ray, P_C is the scalar value corresponding to point P_1, d_i is the direction information of the ith ray, Q_0 is the coordinate information of the 2D point corresponding to the jth ray, Q_C is the scalar value corresponding to point Q_1, and d_j is the direction information of the jth ray;
(2) from the above two coordinate expressions, the direction information from point Q_1 to point P_1 is determined as:

w_1 = P_1 - Q_1 = w_0 + P_C·d_i - Q_C·d_j

where w_0 is the direction information from the 2D point corresponding to the jth ray to the 2D point corresponding to the ith ray, i.e., w_0 = P_0 - Q_0;
(3) when the ith ray and the jth ray are not parallel to each other in the spatial scene, the line segment formed between point P_1 and point Q_1 is perpendicular to both the ith ray and the jth ray, so the following relations hold:

d_i · w_1 = 0
d_j · w_1 = 0

To reduce complexity, set a = d_i·d_i, b = d_i·d_j, c = d_j·d_j, d = d_i·w_0 and e = d_j·w_0; the relations can then be simplified to:

a·P_C - b·Q_C + d = 0
b·P_C - c·Q_C + e = 0
Solving the simplified system of equations then gives:

P_C = (be - cd) / (ac - b²)
Q_C = (ae - bd) / (ac - b²)
Finally, substituting these solutions into the two coordinate expressions in step (1) gives the coordinate position of point P_1 as P_1 = P_0 + [(be - cd)/(ac - b²)]·d_i and the coordinate position of point Q_1 as Q_1 = Q_0 + [(ae - bd)/(ac - b²)]·d_j.
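The derivation above is the standard closest-point construction between two non-parallel lines. A compact sketch of how P_1, Q_1, and the resulting intersection position (the midpoint of segment P_1Q_1) could be computed is given below; the function name and the guard for parallel rays are assumptions.

```python
import numpy as np

def ray_intersection_position(P0, d_i, Q0, d_j, eps=1e-9):
    """Midpoint of the shortest segment P_1-Q_1 between the ith and jth rays."""
    P0, d_i = np.asarray(P0, float), np.asarray(d_i, float)
    Q0, d_j = np.asarray(Q0, float), np.asarray(d_j, float)
    w0 = P0 - Q0                      # w_0 = P_0 - Q_0
    a = np.dot(d_i, d_i)
    b = np.dot(d_i, d_j)
    c = np.dot(d_j, d_j)
    d = np.dot(d_i, w0)
    e = np.dot(d_j, w0)
    denom = a * c - b * b             # zero when the two rays are parallel
    if abs(denom) < eps:
        return None
    P_C = (b * e - c * d) / denom     # scalar value along the ith ray
    Q_C = (a * e - b * d) / denom     # scalar value along the jth ray
    P1 = P0 + P_C * d_i
    Q1 = Q0 + Q_C * d_j
    return 0.5 * (P1 + Q1)            # intersection position = midpoint of P_1Q_1
```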
S104, assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
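Putting S101 to S104 together, the overall iteration could be organized as in the following sketch; the injected helper callables and the index-based numbering are assumptions used only to keep the example self-contained.

```python
def cross_identify(rays, tubes, collides, intersection_position):
    """rays[i] and tubes[i] carry the same number i (the binding of S101)."""
    results = {}                                  # number i -> list of (number j, crossing position)
    for i, ray_i in enumerate(rays):              # S102: take the ith ray in turn
        hits = []
        for j, tube_j in enumerate(tubes):        # the N-1 tubes carrying the other numbers
            if j == i:
                continue
            if collides(ray_i, tube_j):           # S102: collision identification
                pos = intersection_position(ray_i, rays[j])   # S103: crossing position
                hits.append((j, pos))             # record the number of the collided tube
        results[i] = hits
    return results                                # S104: then move on to the next 2D point
```

Here each element of `rays` is whatever representation the `collides` and `intersection_position` callables expect, for instance an (origin, direction) pair as in the sketches above.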
In the embodiments of the invention, for the two rays generated by any two 2D points, the identification of a crossing between the two rays is converted into the identification of a collision between one ray and the tubular object created from the other ray, which is in turn converted into an intersection judgment between that ray and the two-dimensional planes of the space bounding box formed around the tubular object in its unfolded state; this allows the crossing relationship between the two rays to be verified reliably. Because the intersection position is only calculated once it is certain that the two rays cross, the computational complexity is reduced.
In an embodiment, the present invention further provides an apparatus for identifying intersection of rays where a 2D point of a camera is located, as shown in fig. 3, the apparatus includes:
a generating module 201, configured to generate a corresponding tubular object in the spatial scene for each 2D point when N 2D points carrying different numbers are formed in the camera, and then bind each tubular object to its corresponding number;
an identification module 202, configured to acquire, starting from i = 1, the ith ray generated by the ith 2D point and identify whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
a calculation module 203, configured to record, when the ith ray is recognized to collide with at least one of the N-1 tubular objects carrying different numbers, the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculate the intersection position of the ith ray with each of those tubular objects;
and a returning module 204, configured to assign i+1 to i and, when i is judged to be less than or equal to N, return to perform the collision recognition operation for the ray generated by the next 2D point.
For the specific limitations of the device for identifying the intersection of the ray on which a camera 2D point lies, reference may be made to the above limitations of the corresponding method, which are not repeated here. All modules of the device may be implemented wholly or partially in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In the computer-readable storage medium provided in the embodiments of the present invention, an application program is stored which, when executed by a processor, implements the method for cross recognition of the ray on which a camera 2D point lies of any of the above embodiments. The computer-readable storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs, and magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memories, magnetic cards, and optical cards. That is, a storage device includes any medium that stores or transmits information in a form readable by a device (e.g., a computer or a cellular phone), and may be a read-only memory, a magnetic or optical disk, or the like.
Fig. 4 is a schematic structural diagram of a computer device in the embodiment of the present invention.
An embodiment of the present invention further provides a computer device, as shown in fig. 4. The computer device includes a processor 302, a memory 303, an input unit 304, a display unit 305, and the like. Those skilled in the art will appreciate that the structure shown in fig. 4 does not constitute a limitation on the device, which may include more or fewer components than shown or combine certain components. The memory 303 may be used to store the application 301 and various functional modules, and the processor 302 executes the application 301 stored in the memory 303 to perform the various functional applications and data processing of the device. The memory may be internal or external memory, or include both. The internal memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, a floppy disk, a ZIP disk, a USB flash drive, a magnetic tape, and the like. The memory disclosed here includes, but is not limited to, these types, and is given by way of example only and not by way of limitation.
The input unit 304 is used to receive signal input and keywords entered by a user, and may include a touch panel and other input devices. The touch panel can collect touch operations performed by the user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as play control keys and switch keys), a trackball, a mouse, a joystick, and the like. The display unit 305 may be used to display information entered by the user or provided to the user, as well as the various menus of the terminal device, and may take the form of a liquid crystal display, an organic light-emitting diode display, or the like. The processor 302 is the control center of the terminal device: it connects the various parts of the entire device through various interfaces and lines, and performs the device's functions and processes data by running or executing the software programs and/or modules stored in the memory 303 and calling the data stored in the memory.
As one embodiment, the computer device includes: one or more processors 302, a memory 303, and one or more applications 301, wherein the one or more applications 301 are stored in the memory 303 and configured to be executed by the one or more processors 302, and the one or more applications 301 are configured to perform a method of cross-recognition of a ray where a camera 2D point is located in any of the above embodiments.
The method, device, computer equipment, and storage medium for cross recognition of the ray on which a camera 2D point lies provided in the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A cross recognition method for a ray where a camera 2D point is located is characterized by comprising the following steps:
when N 2D points carrying different numbers are formed in the camera, generating a corresponding tubular object in a spatial scene for each 2D point, and then binding each tubular object to its corresponding number;
starting from i = 1, acquiring an ith ray generated by an ith 2D point, and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
if yes, recording the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the crossing position of the ith ray with each of those tubular objects;
and assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
2. The method for cross-recognition of the ray where the camera 2D point is located according to claim 1, wherein the generating a corresponding tubular object in the spatial scene for each 2D point includes:
the optical center position of the camera is c, the shortest distance at which a ray generated by any 2D point is allowed to be captured in the spatial scene is t_min and the maximum distance is t_max, and the direction information of the ray generated by the ith 2D point is d_i;
Defining a distance error value when rays generated by any two 2D points are crossed, and creating a basic circle by taking the distance error value as a radius;
the base is putStretching the circle to obtain a tubular object corresponding to the ith 2D point, and defining the central position of one end face of the tubular object as c + tmindiAnd the other end face has a center position of c + tmaxdi
3. The method for cross-identifying the ray where the camera 2D point is located according to claim 1, wherein the identifying whether the ith ray collides with at least one tubular object in N-1 tubular objects with different numbers comprises:
starting from j = 1, creating a space bounding box for the jth tubular object;
taking the XOY plane of the spatial scene as reference, tiling and unfolding the space bounding box into a cross shape, thereby obtaining a fixed boundary interval that covers the size of every face of the space bounding box in the unfolded state;
and after identifying whether the ith ray intersects the fixed boundary interval, assigning j+1 to j and, when j is judged to be not greater than N-1, returning to perform the collision identification operation between the ith ray and the next tubular object.
5. The method of claim 3, wherein, when the space bounding box is unfolded on the XOY plane, the fixed boundary interval requires the y-axis coordinate of any point on the four faces lying next to each other in the transverse direction to fall within the range y_0 ≤ y ≤ y_1, and the x-axis coordinate of any point on the three faces lying next to each other in the longitudinal direction to fall within the range x_0 ≤ x ≤ x_1.
5. The method for cross-recognizing the ray where the camera 2D point is located according to claim 4, wherein the recognizing whether the ith ray intersects with the fixed boundary interval includes:
based on the four boundary lines of the fixed boundary interval, respectively calculating the distance value t_x0 at which the ith ray intersects the boundary line x = x_0, the distance value t_x1 at which it intersects the boundary line x = x_1, the distance value t_y0 at which it intersects the boundary line y = y_0, and the distance value t_y1 at which it intersects the boundary line y = y_1;
judging whether the four distance values satisfy the condition max(t_x0, t_y0) < min(t_x1, t_y1);
if yes, determining that the ith ray intersects the fixed boundary interval, and marking the jth tubular object for distinction;
if not, determining that the ith ray does not intersect the fixed boundary interval.
6. The method for identifying the intersection of the ray where the camera 2D point is located according to claim 3, wherein the calculating the intersection position of the ith ray and each tubular object comprises:
given that every ray is emitted along the central axis of the tubular object carrying the same number, when the ith ray collides with the jth tubular object, obtaining the jth ray carrying the same number as the jth tubular object;
extracting a point P_1 from the ith ray and a point Q_1 from the jth ray such that the line segment formed between point P_1 and point Q_1 is the shortest distance between the ith ray and the jth ray;
according to the coordinate positions of point P_1 and point Q_1, obtaining the center point of the line segment formed between them, and defining the coordinate position of the center point as the intersection position of the ith ray and the jth tubular object.
7. The method for cross-identifying the ray where the camera 2D point is located according to claim 1, after identifying whether the ith ray collides with at least one tubular object in N-1 tubular objects with different numbers, the method further comprises:
and if not, assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
8. A device for cross recognition of rays where a camera 2D point is located is characterized by comprising:
the generating module is used for generating a corresponding tubular object in the spatial scene for each 2D point when N 2D points carrying different numbers are formed in the camera, and then binding each tubular object to its corresponding number;
the identification module is used for acquiring, starting from i = 1, the ith ray generated by the ith 2D point, and identifying whether the ith ray collides with at least one of the N-1 tubular objects carrying different numbers;
the calculation module is used for recording, when the ith ray is recognized to collide with at least one of the N-1 tubular objects carrying different numbers, the numbers of the K (1 ≤ K ≤ N-1) tubular objects that collide with the ith ray, and then calculating the intersection position of the ith ray with each of those tubular objects;
and the returning module is used for assigning i+1 to i and, when i is judged to be less than or equal to N, returning to perform the collision recognition operation for the ray generated by the next 2D point.
9. A computer device comprising a memory, a processor and an application program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are implemented when the application program is executed by the processor.
10. A computer-readable storage medium, on which an application program is stored, which when executed by a processor implements the steps of the method of any one of claims 1 to 7.
CN202111118043.6A · Priority and filing date 2021-09-23 · Cross recognition method and device for ray where camera 2D point is located · Status: Pending · Published as CN113902843A

Publications (1)

Publication Number Publication Date
CN113902843A · Publication date 2022-01-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination