CN110705644B - Method for coding azimuth relation between targets

Method for coding azimuth relation between targets

Info

Publication number
CN110705644B
CN110705644B (application CN201910948259.1A)
Authority
CN
China
Prior art keywords
positioning frame
target
reference target
code
orientation
Prior art date
Legal status
Active
Application number
CN201910948259.1A
Other languages
Chinese (zh)
Other versions
CN110705644A (en)
Inventor
邓少冬
Current Assignee
Xi'an Mix Intelligent Technology Co ltd
Original Assignee
Xi'an Mix Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Mix Intelligent Technology Co ltd
Priority to CN201910948259.1A
Publication of CN110705644A
Application granted
Publication of CN110705644B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection


Abstract

The invention discloses a method for encoding the orientation relation between targets. The orientation code of an observation target relative to a reference target comprises two parts: a direction code and a distance code. Both are relative codes obtained by evaluating the orientation of the observation target with the reference target as the starting point. The coding therefore does not change with scaling of the image or with the observation angle, and can effectively reflect the orientation relation between targets.

Description

Method for coding azimuth relation between targets
Technical Field
The invention relates to the description and judgment of the orientation relation between targets in two-dimensional space. It can be applied to describing, generating and judging the mutual relations between targets after target detection in computer vision, and more generally to describing, generating and judging the mutual relations between objects in two-dimensional space.
Background
Object detection is a computer vision technique that aims to detect objects such as cars, buildings and people in images or videos. It is widely applied in fields such as video surveillance, autonomous driving and human tracking. The output of object detection comprises the type and confidence of each target, together with the target's positioning frame in the image; in fig. 1, for example, the dog, the bicycle and the car are each enclosed in a positioning frame drawn with a distinct line style (dashed, double-dashed and dot-dashed).
Many mature techniques exist in the industry for object detection and positioning-frame determination. They fall roughly into two types. In one, determining the positioning frame and determining the object class are two separate stages; this is called the two-stage approach, with representative algorithms such as R-CNN, Fast R-CNN and Faster R-CNN. In the other, the positioning frame and the target class are determined simultaneously; this is called the one-stage approach, with representative algorithms such as YOLO and SSD.
Computer vision's understanding of a scene is based on the objects detected in the scene and the relationships between them. Object detection answers what objects are in the scene, but pays no attention to how the objects relate to one another. In practical applications, the relationship between objects is closely tied to their orientation relation in the image. For example:
In fig. 2 a person carries a bicycle and the bicycle is above the person; in fig. 3 a person rides a bicycle and the bicycle is below the person. From the standpoint of computer vision, a person and a bicycle are detected in both pictures, but judging whether the person is riding or carrying the bicycle requires identifying the orientation relation between the two targets, the bicycle and the person.
In addition, in applications involving target layout, such as rooms, venues and merchandise displays, a mathematical method is needed to describe the layout, so that the layout formed by the targets detected by a computer can be compared with a standardized layout template to judge whether the layout is correct and reasonable.
Whether for describing a layout template or for judging whether the layout of an actual scene seen after target detection is reasonable, a general method for describing the orientation relation between targets is needed, and such a method is currently lacking.
Currently, the industry determines the orientation relation between detected targets from their positioning frames, using the following methods:
A. IoU, for determining the degree of overlap of two targets
As shown in fig. 4, for two target positioning frames, IoU (Intersection over Union) is the ratio of the intersection area of the positioning frames to their union area. The closer the ratio is to 1, the higher the degree of overlap of the two targets.
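As an illustration only (this sketch is not part of the patent; the function name and the (Xmin, Ymin, Xmax, Ymax) box representation are assumptions), IoU for two axis-aligned positioning frames can be computed as follows:

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    # Width/height of the intersection rectangle; zero when the boxes do not overlap.
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0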
B. Point in frame
As shown in fig. 5, the rectangle is the positioning frame of a target; whether a point lies inside the positioning frame can be determined by comparing absolute coordinates. Assuming the coordinates of the point are (x, y), and the minimum x, minimum y, maximum x and maximum y values of the positioning frame are Xmin, Ymin, Xmax and Ymax respectively, the condition for the point to lie inside the frame is:
Xmin < x < Xmax and Ymin < y < Ymax
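In code (a minimal sketch under the same notation; not part of the patent text):

def point_in_box(x, y, box):
    """True if point (x, y) lies strictly inside box = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    return xmin < x < xmax and ymin < y < ymax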
C. Absolute distance
This approach measures the distance between the center points of the positioning frames; the distance is generally compared with a threshold to decide whether a condition is met.
The above methods for discriminating the orientation relation between targets have the following problems:
1. They cannot fully describe the orientation relation between two targets. IoU reflects only the degree of overlap; the point-in-frame relation only indicates whether a specific point is contained; and comparing an absolute distance with a threshold often suffers from the threshold being set unreasonably.
2. They depend on code for the judgment. The judgment must be made through program logic plus mathematical calculation; different orientation relations require different code, so the approach is highly customized.
Disclosure of Invention
The invention aims to provide a method for encoding the orientation relation between targets, in order to realize a general description and judgment of the orientation relation between two or more detected targets.
The invention specifically adopts the following technical scheme:
a method for coding the orientation relation between targets is characterized in that:
the observation target and the reference target are both represented by rectangular positioning frames, and the orientation coding of the observation target relative to the reference target comprises two parts:
(1) Direction code: a code for the direction of the observation target relative to the reference target.
Using the extension lines of the four sides of the reference target's rectangular positioning frame, the plane space is divided into 9 different regions centered on that positioning frame; the 9 regions are represented by 9 different codes, and the direction code of the observation target relative to the reference target is determined by which of the 9 regions the center point of the observation target's rectangular positioning frame falls in.
(2) Distance code: a code for the distance of the observation target relative to the reference target.
An XY coordinate system is established with the center point of the reference target's rectangular positioning frame as the origin and the width and height directions as the coordinate axes; the distance code of the observation target relative to the reference target is determined from the positional relation between the center point of the observation target's rectangular positioning frame and the reference target's rectangular positioning frame, and from the quantitative relation between the width and height of the observation target's rectangular positioning frame and the unit scale of the corresponding coordinate axes.
The invention has the following beneficial effects:
Target detection yields individual targets; encoding the orientation relations between the targets with the method of the invention then makes it possible to:
A. Describe the orientation relations between targets: the orientation relation between two or more targets can be described in a general way.
B. Judge the orientation relations between targets: the actual expression of the orientation relations between targets, obtained through target detection or manually, is compared with a standardized layout template (i.e., a predefined layout specification of the orientation relations between given objects) to determine whether the layout conforms.
The direction code and the distance code contained in the orientation code obtained by this method are relative codes, obtained by evaluating the orientation of the observation target with the reference target as the starting point; the coding therefore does not change with scaling of the image or with the observation angle, and can effectively reflect the orientation relation between targets.
Drawings
FIG. 1 is a schematic view of target detection;
FIG. 2 is an image of a person carrying a bicycle;
FIG. 3 is an image of a person riding a bicycle;
FIG. 4 is a schematic diagram of IoU;
FIG. 5 is a schematic diagram of the point-frame relation;
FIG. 6 shows the four parameters of a reference target's positioning frame;
FIG. 7 is a schematic diagram of direction-coding region division (numeric codes);
FIG. 8 is a schematic diagram of direction-coding region division (letter codes);
FIG. 9 is a schematic diagram of distance coding;
FIG. 10 is a schematic view of example 1;
FIG. 11 is a schematic view of example 2;
FIG. 12 is an application illustration.
Detailed Description
The orientation relation between targets is relative and involves two targets: a reference target and an observation target. The orientation code is obtained by taking the reference target as the center and evaluating and combining the direction and distance of the observation target relative to it.
The invention provides a method for encoding the orientation relation between targets, in which the orientation code of an arbitrary observation target B relative to an arbitrary reference target A comprises two parts:
1. direction code: different directions of target B relative to target A are represented by different codes;
2. distance code: different codes computed from the distance between target B and target A, using the size of target A as the measurement scale.
All targets are described by rectangular positioning frames. For the direction code, the plane space is divided into 9 regions by the extension lines of the four sides of target A's rectangular positioning frame, and the resulting regions are represented by 9 different codes.
The distance code uses the width and height of the rectangle as the measurement scale to measure the distance of target B relative to target A.
As shown in fig. 6, in target detection the positioning frame of the reference target comprises four parameters: Xmin, Ymin, Xmax and Ymax, which are its minimum x, minimum y, maximum x and maximum y values, respectively.
A. Determination of the direction code
The four side lines of the reference target's positioning frame are extended at both ends, dividing the two-dimensional plane into 9 regions centered on the reference target's positioning frame, as shown in figs. 7 and 8. Each region is represented by a code, which can be a number or another symbol such as a letter. Encoding the direction of the observation target relative to the reference target amounts to determining which of the 9 regions the center of the observation target's positioning frame falls in. In particular, if the center of the observation target's positioning frame lies on one of the extension lines of the four sides of the reference target's positioning frame, one of the adjacent regions is taken as its direction code according to a uniform rule.
The regions can be coded in various ways, for example with different numbers. One implementation is as follows: the region where the reference target's positioning frame lies (including the frame outline) is represented by 0, and the remaining regions by 1 to 8. Regions 1 to 8 may be distributed clockwise, counterclockwise, or in some other arrangement around region 0. When the center of the observation target's positioning frame lies on an extension line of the reference frame's sides, the number of the clockwise or counterclockwise adjacent region can be used uniformly as the direction code, or the larger or smaller of the adjacent regions' numbers can be used uniformly. In the embodiment of the invention, the numbers 1 to 8 are distributed clockwise, and when the center of the observation target's positioning frame lies on an extension line, the number of the clockwise-adjacent region is uniformly taken as the direction code.
The regions can also be coded with letters: for example, the region where the reference target's positioning frame lies (including the outline) is represented by X, and the remaining regions by A to H. Regions A to H may be distributed clockwise, counterclockwise, or otherwise around region X. When the center of the observation target's positioning frame lies on an extension line of the reference frame's sides, the letter of the clockwise or counterclockwise adjacent region can be used uniformly as the direction code, or the alphabetically earlier or later of the adjacent regions' letters can be used uniformly.
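A Python sketch of the numeric variant follows (an illustration, not the patent's reference implementation; the function name, the box tuple format and the y-up coordinate convention are assumptions; the mapping 1 = upper right through 8 = above follows the examples later in the description, and the rule for centers lying exactly on an extension line is only approximated by the strict and non-strict comparisons):

def direction_code(ref_box, cx, cy):
    """Direction code of center (cx, cy) relative to ref_box = (xmin, ymin, xmax, ymax).
    0 = inside the reference frame (outline included); 1..8 run clockwise:
    1 upper right, 2 right, 3 lower right, 4 below, 5 lower left, 6 left,
    7 upper left, 8 above (y axis assumed to point up)."""
    xmin, ymin, xmax, ymax = ref_box
    if xmin <= cx <= xmax and ymin <= cy <= ymax:
        return 0
    col = 0 if cx < xmin else (2 if cx > xmax else 1)  # horizontal band: left/middle/right
    row = 0 if cy > ymax else (2 if cy < ymin else 1)  # vertical band: above/middle/below
    return {(2, 0): 1, (2, 1): 2, (2, 2): 3, (1, 2): 4,
            (0, 2): 5, (0, 1): 6, (0, 0): 7, (1, 0): 8}[(col, row)]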
B. Determination of the distance code
A coordinate system as shown in fig. 9 is established with the center of the reference target's rectangular positioning frame as the origin, the width direction as the X axis and the height direction as the Y axis. The unit scale of the X axis is one half of the reference target's width; the unit scale of the Y axis is one half of the reference target's height.
The range of the distance code is -9 to 9, where:
0: the center point of the observation target's positioning frame falls on the positioning frame of the reference target;
1: the center point of the observation target's positioning frame falls on the rectangle that shares its center with the reference target's positioning frame and whose width and height are 2^1 times the unit scale of the corresponding coordinate axes; this rectangle coincides with the reference target's positioning frame;
2: the center point falls on the rectangle sharing the same center whose width and height are 2^2 times the unit scale of the corresponding coordinate axes;
8: the center point falls on the rectangle sharing the same center whose width and height are 2^8 times the unit scale of the corresponding coordinate axes;
9: the center point falls on the rectangle sharing the same center whose width and height are 2^9 times the unit scale of the corresponding coordinate axes.
When the center point of the observation target lies at any other position, the distance code falls between two integers and would have a fractional part; in that case it is uniformly rounded down or up to an integer.
The distance code is at most 9: anything farther is still coded 9, because from human intuition an object at too great a distance is simply perceived as far away, and a single digit keeps the expression simple.
The inner region of the reference target's positioning frame is a special region whose distance codes are negative; analogously:
-1: the center point of the observation target's positioning frame falls on the rectangle that shares its center with the reference target's positioning frame and whose width and height are 2^-1 times the unit scale of the corresponding coordinate axes;
-2: the center point falls on the rectangle sharing the same center whose width and height are 2^-2 times the unit scale of the corresponding coordinate axes;
-3: the center point falls on the rectangle sharing the same center whose width and height are 2^-3 times the unit scale of the corresponding coordinate axes;
-9: the center point falls on the rectangle sharing the same center whose width and height are 2^-9 times the unit scale of the corresponding coordinate axes.
Likewise, the distance code is not less than -9: anything closer is still coded -9, because from human visual perception an object that is too close is simply regarded as very close to the target, and a single digit keeps the expression simple.
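One plausible reading of this scheme (an interpretation, not the patent's own formula: the center offset from the reference center is measured per axis in unit scales, reduced to a single box distance, and the distance code is the base-2 logarithm of that distance, rounded down uniformly and clamped to [-9, 9]; a center on the reference frame outline then yields code 0) can be sketched as:

import math

def distance_code(ref_box, cx, cy):
    """Distance code of center (cx, cy) relative to ref_box, clamped to [-9, 9]."""
    xmin, ymin, xmax, ymax = ref_box
    sx, sy = (xmax - xmin) / 2.0, (ymax - ymin) / 2.0  # unit scales: half width/height
    ox = abs(cx - (xmin + xmax) / 2.0)                 # offset from the reference center
    oy = abs(cy - (ymin + ymax) / 2.0)
    d = max(ox / sx, oy / sy)                          # normalized box distance
    if d == 0.0:
        return -9                                      # exact center: treated as "very close"
    return max(-9, min(9, math.floor(math.log2(d))))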
C. Orientation coding
The orientation code comprises the direction code and the distance code, separated by a decimal point: the direction code comes before the decimal point and the distance code after it.
The sign of the distance code is placed at the front of the orientation code and serves as the sign of the orientation code.
For example:
the direction code is 1, the distance code is 3, and the orientation code is 1.3
The direction code is 0, the distance code is-2, and the orientation code is-0.2
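A sketch of the combination rule (the string form follows the examples above; the function name is an assumption):

def orientation_code(direction, distance):
    """Direction before the decimal point, |distance| after it; the sign of the
    distance code is moved to the front, e.g. (1, 3) -> "1.3", (0, -2) -> "-0.2"."""
    sign = "-" if distance < 0 else ""
    return f"{sign}{direction}.{abs(distance)}"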
D. Summary of the invention
The direction code and the distance code contained in the orientation code obtained by this coding method are relative codes, obtained by evaluating the orientation of the observation target with the reference target as the starting point; the coding therefore does not change with scaling of the image or with the observation angle.
A general method for describing the orientation relation between targets is currently lacking; the orientation coding method provided by the invention fills this gap.
At present, the orientation relation between targets is judged from their absolute coordinates through custom program logic; such code is highly customized and cannot be reused generally.
Through orientation codes and the matrices formed from them, the orientation relations between targets can be described in a general way. Pattern matrices for various inter-target relationships can also be defined, and whether a specified scene is present can be judged by matching the inter-target relations of the actual scene against the pattern matrix.
The coding method of the invention fits well with the common practice in current computer vision of having target detection output a positioning frame: the definition of direction matches the nine regions naturally divided by the extension lines of the reference target's positioning frame, and the size of the reference target's positioning frame serves as the basis of distance measurement, following an exponential growth principle; the orientation relation between targets is thus effectively reflected in a way consistent with visual perception.
Example 1
As shown in fig. 10, taking a cup on a table as an example, the orientation codes between the table and the cup are as follows:
Seen from the table, the cup is above the table, which is direction 8; the distance from the center of the cup to the center of the table is close to half the height of the table, so the distance code is 0, and the orientation code of the cup relative to the table is 8.0.
Seen from the cup, the table is below the cup, which is direction 4; the distance from the center of the table to the center of the cup is about 3 times the height of the cup, so the distance code lies between 1 and 2 and is rounded down to 1, and the orientation code of the table relative to the cup is 4.1.
The orientation code of any target relative to itself is defined as -1, so the orientation relation matrix of the table and the cup is as follows (rows are observation targets, columns are reference targets):
        table    cup
table    -1      4.1
cup      8.0     -1
If letters are used for the direction code, the cup above the table is in direction H and the table below the cup is in direction D, with the rest unchanged, so the orientation relation matrix of the table and the cup is:
        table    cup
table    -1      D.1
cup      H.0     -1
when the regions are represented by letter codes, the orientation code of any one object relative to itself can also be defined by letter codes accordingly.
Example 2
As shown in fig. 11, the relations between the vehicle body and the wheels are:
Seen from the vehicle, the front wheel is inside the vehicle: the direction code is 0, the distance code is -1, and the orientation code is -0.1.
Seen from the vehicle, the rear wheel is inside the vehicle: the direction code is 0, the distance code is -1, and the orientation code is -0.1.
Seen from the front wheel, the center of the vehicle is to the upper right of the front wheel: the direction code is 1, the distance code is about 1, and the orientation code is 1.1.
Seen from the front wheel, the rear wheel is to the right of the front wheel: the direction code is 2, the distance code is 2, and the orientation code is 2.2.
Seen from the rear wheel, the center of the vehicle is to the upper left of the rear wheel: the direction code is 7, the distance code is about 1, and the orientation code is 7.1.
Seen from the rear wheel, the front wheel is to the left of the rear wheel: the direction code is 6, the distance code is 2, and the orientation code is 6.2.
In summary, the orientation relation matrix between the vehicle body and the front and rear wheels is as follows (rows are observation targets, columns are reference targets):
              vehicle   front wheel   rear wheel
vehicle         -1         1.1           7.1
front wheel    -0.1        -1            6.2
rear wheel     -0.1        2.2           -1
application example of scene matching
Often riders are not allowed in urban traffic, how are they identified if detected by computer vision?
Taking fig. 12 as an example, there are three targets: bicycle, boy head, girl head, interrelationship of objects:
the boy's head is seen from the bicycle, and the center of the boy's head positioning frame is positioned above the bicycle positioning frame, and the distance is about half to 1 bicycle positioning frame. If the height of the half bicycle positioning frame is equal to 1 unit scale, the distance code is 0; if the height of the bicycle positioning frame is 1, the unit dimension is 2, and the distance code is 1. The selectable values of the distance code are 0 and 1. Alternative distance codes are indicated by different numbers in square brackets, here [01]. Thus, the orientation code for the boy's head relative to the bicycle alignment frame is 8.[01] (the numbers inside the square brackets are optional, the same applies hereinafter), and likewise, the center of the girl's head alignment frame is also located above the bicycle alignment frame, with an orientation code of 8.0.
Seen from the boy's head, the center of the bicycle's positioning frame is below, possibly to the lower left or lower right, at a distance of about 2 to 4 head heights; the orientation code is [345].[234]. Seen from the boy's head, the girl's head is to the right or lower right of the boy's head, at a distance of about 0 to 2 head heights; the orientation code is [23].[012].
Seen from the girl's head, the center of the bicycle's positioning frame is to the lower left, at a distance of about 2 to 4 head heights; the orientation code is 5.[234]. Seen from the girl's head, the boy's head is to the left or upper left of the girl's head, at a distance of about 0 to 2 head heights; the orientation code is [67].[012].
The orientation coding matrix template for riding with a passenger is therefore (rows are observation targets, columns are reference targets):
               bicycle        boy's head     girl's head
bicycle          -1           [345].[234]    5.[234]
boy's head      8.[01]           -1          [67].[012]
girl's head     8.0           [23].[012]        -1
In actual street-view monitoring, target detection can detect the bicycle, the boy's head and the girl's head in a street-view picture and give their positioning frames. The orientation coding matrix of the bicycle, the boy's head and the girl's head is generated from the positioning frames and compared with the predefined orientation coding matrix template; if the comparison succeeds, a bicycle carrying a passenger is judged to be present.
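A possible sketch of this comparison (not from the patent; the square-bracket notation for alternatives is taken from the description above, and all names are assumptions):

import re

def code_matches(actual, pattern):
    """True if an orientation code such as "8.0" matches a template entry that may
    use square brackets for alternatives, e.g. "8.[01]" or "[345].[234]"."""
    # Escape the pattern, then restore the brackets as regex character classes.
    regex = re.escape(pattern).replace(r"\[", "[").replace(r"\]", "]")
    return re.fullmatch(regex, actual) is not None

def matrix_matches(actual, template):
    """Element-wise comparison of an orientation-code matrix against a template."""
    return all(code_matches(a, t)
               for row_a, row_t in zip(actual, template)
               for a, t in zip(row_a, row_t))

A successful matrix_matches(actual, template) over the three detected targets would then flag the scene as a bicycle carrying a passenger.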

Claims (10)

1. A method for coding the orientation relation between targets is characterized in that:
the observation target and the reference target are both represented by rectangular positioning frames, and the orientation coding of the observation target relative to the reference target comprises two parts:
(1) direction code: a code for the direction of the observation target relative to the reference target;
using the extension lines of the four sides of the reference target's rectangular positioning frame, the plane space is divided into 9 different regions centered on that positioning frame; the 9 regions are represented by 9 different codes, and the direction code of the observation target relative to the reference target is determined by which of the 9 regions the center point of the observation target's rectangular positioning frame falls in;
(2) distance code: a code for the distance of the observation target relative to the reference target;
an XY coordinate system is established with the center point of the reference target's rectangular positioning frame as the origin and the width and height directions as the coordinate axes; the distance code of the observation target relative to the reference target is determined from the positional relation between the center point of the observation target's rectangular positioning frame and the reference target's rectangular positioning frame, and from the quantitative relation between the width and height of the observation target's rectangular positioning frame and the unit scale of the corresponding coordinate axes.
2. The method for encoding the orientation relation between targets as claimed in claim 1, wherein: the 9 different regions are represented by 9 numbers, respectively.
3. The method for encoding the orientation relation between targets as claimed in claim 1, wherein: the 9 different regions are represented by 9 English letters, respectively.
4. The method for encoding the orientation relation between targets as claimed in any one of claims 1 to 3, wherein: when the center point of the observation target's rectangular positioning frame lies on an extension line of the four sides of the reference target's rectangular positioning frame, the code of one of the adjacent regions is taken as the direction code according to a uniform rule.
5. The method for encoding the orientation relation between targets as claimed in claim 1, wherein: the width direction of the reference target's rectangular positioning frame is taken as the X axis and the height direction as the Y axis, the unit scale of the X axis being one half of the width of the reference target's rectangular positioning frame and the unit scale of the Y axis being one half of its height.
6. The method for encoding the orientation relation between targets as claimed in claim 1 or 5, wherein:
the range of the distance code is -9 to 9, where:
0: the center point of the observation target's positioning frame falls on the positioning frame of the reference target;
i, i = 1, 2, 3, ..., 9: the center point of the observation target's positioning frame falls on the rectangle that shares its center with the reference target's positioning frame and whose width and height are 2^i times the unit scale of the corresponding coordinate axes;
-i, i = 1, 2, 3, ..., 9: the center point of the observation target's positioning frame falls on the rectangle that shares its center with the reference target's positioning frame and whose width and height are 2^-i times the unit scale of the corresponding coordinate axes.
7. The method for encoding the orientation relation between targets as claimed in claim 6, wherein: when the center point of the observation target's positioning frame is neither on the reference target's positioning frame nor on any of said rectangles, the distance code falls between two integers; in that case it is uniformly rounded up or down and the result is used as the distance code.
8. The method for encoding the orientation relation between targets as claimed in claim 1, wherein: the orientation code comprises the direction code and the distance code, separated by a decimal point, with the direction code before the decimal point and the distance code after it.
9. The method for encoding the orientation relation between targets as claimed in claim 8, wherein: the sign of the distance code is placed at the front of the orientation code as the sign of the orientation code.
10. The method for encoding the orientation relation between targets as claimed in claim 1 or 2, wherein: the orientation code of any target relative to itself is -1.
CN201910948259.1A 2019-10-08 2019-10-08 Method for coding azimuth relation between targets Active CN110705644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910948259.1A CN110705644B (en) 2019-10-08 2019-10-08 Method for coding azimuth relation between targets


Publications (2)

Publication Number Publication Date
CN110705644A CN110705644A (en) 2020-01-17
CN110705644B (en) 2022-11-18

Family

ID=69198260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910948259.1A Active CN110705644B (en) 2019-10-08 2019-10-08 Method for coding azimuth relation between targets

Country Status (1)

Country Link
CN (1) CN110705644B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807284A (en) * 2021-09-23 2021-12-17 上海亨临光电科技有限公司 Method for positioning personal object on terahertz image in human body

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254291A (en) * 2011-06-14 2011-11-23 南京师范大学 Urban address coding method based on hierarchical spatial reference model
CN102622767A (en) * 2012-03-05 2012-08-01 广州乐庚信息科技有限公司 Method for positioning binocular non-calibrated space
CN109166136A (en) * 2018-08-27 2019-01-08 中国科学院自动化研究所 Target object follower method of the mobile robot based on monocular vision sensor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8274508B2 (en) * 2011-02-14 2012-09-25 Mitsubishi Electric Research Laboratories, Inc. Method for representing objects with concentric ring signature descriptors for detecting 3D objects in range images
CN104951084B (en) * 2015-07-30 2017-12-29 京东方科技集团股份有限公司 Eye-controlling focus method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254291A (en) * 2011-06-14 2011-11-23 南京师范大学 Urban address coding method based on hierarchical spatial reference model
CN102622767A (en) * 2012-03-05 2012-08-01 广州乐庚信息科技有限公司 Method for positioning binocular non-calibrated space
CN109166136A (en) * 2018-08-27 2019-01-08 中国科学院自动化研究所 Target object follower method of the mobile robot based on monocular vision sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pose measurement of relative objects based on binocular vision; Kang Xiruo et al.; Science and Technology Association Forum (Second Half of the Month); 2013-02-25 (No. 02); full text *
A robust target tracking method with adaptive scale and orientation; Shan Yugang et al.; Computer Engineering and Applications; 2018-03-15 (No. 21); full text *

Also Published As

Publication number Publication date
CN110705644A (en) 2020-01-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant