CN110705644A - Method for coding azimuth relation between targets - Google Patents

Method for coding azimuth relation between targets

Info

Publication number
CN110705644A
Authority
CN
China
Prior art keywords
positioning frame
target
code
reference target
orientation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910948259.1A
Other languages
Chinese (zh)
Other versions
CN110705644B (en)
Inventor
邓少冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Mix Intelligent Technology Co Ltd
Original Assignee
Xi'an Mix Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Mix Intelligent Technology Co Ltd filed Critical Xi'an Mix Intelligent Technology Co Ltd
Priority to CN201910948259.1A priority Critical patent/CN110705644B/en
Publication of CN110705644A publication Critical patent/CN110705644A/en
Application granted granted Critical
Publication of CN110705644B publication Critical patent/CN110705644B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for encoding the orientation relation between targets. The orientation code of an observation target relative to a reference target comprises two parts, a direction code and a distance code. Both are relative codes obtained by evaluating the orientation of the observation target with the reference target as the starting point, so the code does not change with scaling of the image or with the angle of observation, and it can effectively reflect the orientation relation between targets.

Description

Method for coding azimuth relation between targets
Technical Field
The invention relates to the description and discrimination of the orientation relation between targets in two-dimensional space. It can be applied to describing, generating and discriminating the relations between targets after target detection in computer vision, and also to describing, generating and discriminating the relations between two-dimensional objects in the general sense.
Background
Target detection is a computer vision technique that aims at detecting targets such as cars, buildings and people in images or videos. It is widely applied in video surveillance, autonomous driving, human tracking and other fields. The output of target detection includes the category and confidence of each target and its positioning frame in the image; as shown in fig. 1, a dog, a bicycle and a car are each enclosed by a positioning frame drawn in a different line style.
For target detection and the determination of the positioning frame there are many mature techniques in the industry, which can be roughly divided into two types. In one type, the determination of the positioning frame and the determination of the target category are two separate stages; this is called the two-stage approach, with representative algorithms such as R-CNN, Fast R-CNN and Faster R-CNN. In the other type, the positioning frame and the target category are determined simultaneously; this is called the one-stage approach, with representative algorithms such as YOLO and SSD.
Computer vision understands a scene through the targets detected in it and the relationships between those targets. Target detection answers the question of what targets are in the scene, but does not concern itself with the relationships between them. In practical applications, the relationship between targets is closely related to their orientation relationship in the image. For example:
In fig. 2 a person carries a bicycle and the bicycle is above the person; in fig. 3 a person rides a bicycle and the bicycle is below the person. From the point of view of computer vision, a person and a bicycle are detected in both pictures, but judging whether the person is riding the bicycle or carrying it requires identifying the orientation relation between the two targets, the bicycle and the person.
There are also applications concerned with the layout of targets, such as rooms, venues and commodity displays. If the layout can be described mathematically, the layout formed by the targets detected by a computer can be compared with a standardized layout template so as to judge whether the layout is correct and reasonable.
Whether for describing a layout template or for judging whether the layout of an actual scene observed after target detection is reasonable, a general method for describing the orientation relation between targets is needed, and such a method is currently lacking.
At present, the orientation relation between detected targets is also judged on the basis of the target positioning frames, using the following methods:
A. IoU, used to judge the degree of overlap of two targets
As shown in FIG. 4, for two target positioning frames, IoU (Intersection over Union) is the ratio of the area of the intersection of the positioning frames to the area of their union. The closer the ratio is to 1, the higher the degree of overlap of the two targets.
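For reference, a minimal Python sketch of this computation on axis-aligned positioning frames given as (xmin, ymin, xmax, ymax); it is illustrative only and not code from the patent.

    def iou(box_a, box_b):
        """Intersection over Union of two axis-aligned positioning frames."""
        ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0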
B. Point in frame
As shown in fig. 5, the positioning frame of a target is shown in blue; whether a point lies inside the frame can be determined by comparing absolute coordinates. Assuming the coordinates of the point are (x, y), and the minimum value in the x direction, the minimum value in the y direction, the maximum value in the x direction and the maximum value in the y direction of the positioning frame are Xmin, Ymin, Xmax and Ymax respectively, the condition for the point to lie inside the frame is:
Xmin < x < Xmax and Ymin < y < Ymax
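The same condition written as a small Python helper; purely illustrative, with assumed names.

    def point_in_frame(point, frame):
        """True if point (x, y) lies strictly inside the positioning frame
        (Xmin, Ymin, Xmax, Ymax)."""
        x, y = point
        xmin, ymin, xmax, ymax = frame
        return xmin < x < xmax and ymin < y < ymax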
C. Absolute distance
This judges by the distance between the center points of the positioning frames; the distance is generally compared with a threshold value as the basis for deciding whether a condition is met.
The above methods for discriminating the orientation relation between targets have the following problems:
1. They cannot fully describe the orientation relation between two targets: IoU only reflects the degree of overlap, and the point-in-frame relation only indicates whether a specific point is contained; comparison of an absolute distance against a threshold often suffers from the threshold being chosen unreasonably.
2. They rely on code for the judgment. The judgment has to be made with program logic plus mathematical calculation, and different code has to be written for different orientation relations, so the degree of customization is high.
Disclosure of Invention
The invention aims to provide a method for encoding the orientation relation between targets, so as to realize a general description and judgment of the orientation relation between two or more detected targets.
The invention specifically adopts the following technical scheme:
a method for coding the orientation relation between targets is characterized in that:
the observation target and the reference target are both represented by rectangular positioning boxes, and the orientation coding of the observation target relative to the reference target comprises two parts:
(1) direction coding: coding of the observation target in different directions relative to the reference target;
dividing a plane space into 9 different areas by using extension lines of four sides of a rectangular positioning frame of a reference target as a center, respectively representing the 9 different areas by using 9 different codes, and determining a direction code of an observation target relative to the reference target by using a specific area of the center point of the rectangular positioning frame of the observation target falling in the 9 different areas;
(2) distance encoding: codes of different distances of the observation target relative to the reference target;
and establishing an XY coordinate system by taking the central point of the reference target rectangular positioning frame as an origin and the width direction and the height direction as coordinate axes, and determining the distance code of the observation target relative to the reference target by taking the position relation between the central point of the observation target rectangular positioning frame and the reference target rectangular positioning frame and the quantity relation between the width and the height of the observation target rectangular positioning frame and the unit scale of the corresponding coordinate axes.
The invention has the following beneficial effects:
the target detection obtains a single target, and the orientation relation between the targets is coded by the method of the invention, so that the method can be as follows:
A. describing the relationship between the target orientations: the orientation relationship between two or more objects will be described in common.
B. Judging the orientation relation between the targets: the expression of the orientation relation between the actual targets is obtained by target detection or manual work, and compared with a standardized layout template (namely, a preset layout standard of the orientation relation between specified objects) to determine whether the layout is consistent.
The direction code and the distance code contained in the orientation code obtained by the coding method are relative codes obtained by evaluating the orientation of the observation target by taking the reference target as a starting point, so the method has the characteristics of no change along with the scaling of the graph and no change along with the observation angle, and can effectively reflect the orientation relation between the targets.
Drawings
FIG. 1 is a schematic view of target detection;
FIG. 2 is an image of a person carrying a bicycle;
FIG. 3 is an image of a person riding a bicycle;
FIG. 4 is a schematic view of IoU;
FIG. 5 is a schematic diagram of a dot-box relationship;
FIG. 6 shows the four parameters of the reference target positioning frame;
FIG. 7 is a schematic diagram of direction coding region segmentation (digital coding);
FIG. 8 is a schematic diagram of the segmentation of the directional coded regions (letter coding);
FIG. 9 is a schematic diagram of distance encoding;
FIG. 10 is a schematic view of example 1;
FIG. 11 is a schematic view of example 2;
fig. 12 is an application illustration.
Detailed Description
The orientation relation between targets is relative and involves two targets: one is the reference target and the other is the observation target. The orientation code is obtained by taking the reference target as the center and evaluating and combining the direction and the distance of the observation target relative to the reference target.
The invention provides a method for encoding the orientation relation between targets, in which the orientation code of an arbitrary observation target B relative to an arbitrary reference target A comprises two parts:
1. direction code: the different directions of target B relative to target A are represented by different codes;
2. distance code: different codes calculated from the distance of target B relative to target A, using the size of target A as the measurement scale.
The targets are all described by rectangular positioning frames. For the direction code, the plane space is divided into 9 regions by the extension lines of the four sides of the rectangular positioning frame of target A, and the resulting regions are represented by 9 different codes.
The distance code uses the width and height of the rectangle as the measurement scale to measure the distance of target B relative to target A.
As shown in fig. 6, in target detection the positioning frame of the reference target is described by four parameters: Xmin, Ymin, Xmax and Ymax, which are respectively the minimum value in the x direction, the minimum value in the y direction, the maximum value in the x direction and the maximum value in the y direction.
A. Determination of directional coding
The four edge lines of the reference target positioning frame are extended at both ends, dividing the two-dimensional plane space, with the positioning frame of the reference target at the center, into the 9 regions shown in figs. 7 and 8. Each region is represented by a code, which can be a number or another symbol such as a letter. The direction code of the observation target relative to the reference target is determined by which of the 9 regions the center of the observation target's positioning frame falls in. If the center of the observation target's positioning frame lies exactly on an extension line of one of the four sides of the reference target's positioning frame, one of the adjacent regions is taken as its direction code according to a uniform rule.
The regions can be encoded in various ways, for example with different numbers. One implementation is as follows: the region where the positioning frame of the reference target is located (including the frame boundary) is represented by 0, and the remaining regions are represented by 1 to 8. Regions 1 to 8 can be distributed clockwise around region 0, counterclockwise, or in another arrangement. When the center of the observation target's positioning frame lies on an extension line of the four sides of the reference target's positioning frame, either the code of the clockwise (or counterclockwise) adjacent region, or the larger (or smaller) code of the two adjacent regions, is uniformly taken as the direction code. In the embodiments of the invention, the numbers 1 to 8 are distributed clockwise, and when the center of the observation target's positioning frame lies on an extension line of the four sides of the reference target's positioning frame, the code of the clockwise adjacent region is uniformly taken as the direction code.
The regions can also be encoded with different letters; for example, the region where the positioning frame of the reference target is located (including the frame boundary) is represented by X, and the remaining regions are represented by A to H. Regions A to H can be distributed clockwise, counterclockwise, or in another arrangement around region X. When the center of the observation target's positioning frame lies on an extension line of the four sides of the reference target's positioning frame, either the letter of the clockwise (or counterclockwise) adjacent region, or the alphabetically earlier (or later) letter of the two adjacent regions, is uniformly taken as the direction code.
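To make the rule concrete, the following is a minimal Python sketch; it is illustrative only and not code from the patent. The clockwise numbering of regions 1-8 is inferred from the embodiments (8 directly above, 2 directly to the right, 4 directly below), since FIG. 7 is not reproduced here, and the simplified handling of centers lying exactly on an extension line is an assumption.

    def direction_code(ref_box, obs_box):
        """Direction code of obs_box relative to ref_box.

        Each box is (xmin, ymin, xmax, ymax) in image coordinates (y grows
        downward). Returns 0 if the center of obs_box lies inside or on
        ref_box, otherwise one of 1-8 for the surrounding regions.
        Tie-breaking for centers lying exactly on an extension line is
        simplified here; the patent assigns them uniformly to the
        clockwise-adjacent region.
        """
        xmin, ymin, xmax, ymax = ref_box
        cx = (obs_box[0] + obs_box[2]) / 2.0
        cy = (obs_box[1] + obs_box[3]) / 2.0
        col = 0 if cx < xmin else (1 if cx <= xmax else 2)   # left / middle / right
        row = 0 if cy < ymin else (1 if cy <= ymax else 2)   # above / middle / below
        # Assumed clockwise layout around region 0:
        #   7 8 1
        #   6 0 2
        #   5 4 3
        table = {(0, 0): 7, (1, 0): 8, (2, 0): 1,
                 (0, 1): 6, (1, 1): 0, (2, 1): 2,
                 (0, 2): 5, (1, 2): 4, (2, 2): 3}
        return table[(col, row)]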
B. Determination of distance codes
A coordinate system as shown in fig. 9 is established with the center of the reference target's rectangular positioning frame as the origin, the width direction as the X axis and the height direction as the Y axis. The unit scale of the X axis is one half of the width of the reference target; the unit scale of the Y axis is one half of the height of the reference target.
The distance codes range from -9 to 9, where:
0: the center point of the observation target's positioning frame falls on the positioning frame of the reference target;
1: the center point of the observation target's positioning frame falls on a rectangle with the same center as the reference target's positioning frame whose width and height are 2^1 times the unit scales of the corresponding coordinate axes (this rectangle coincides with the reference target's positioning frame);
2: the center point falls on the concentric rectangle whose width and height are 2^2 times the unit scales of the corresponding coordinate axes;
…
8: the center point falls on the concentric rectangle whose width and height are 2^8 times the unit scales of the corresponding coordinate axes;
9: the center point falls on the concentric rectangle whose width and height are 2^9 times the unit scales of the corresponding coordinate axes.
When the center point of the observation target lies at any other position, the distance code falls between two integers and has a fractional part, and it is rounded down or up to an integer.
The distance code is at most 9: anything farther away is still coded as 9, because, to human intuition, an object that is too far away is simply regarded as distant, and this also keeps the code expressible with a single digit.
The interior of the reference target's positioning frame is a special region whose distance codes are negative. Similarly:
-1: the center point of the observation target's positioning frame falls on the concentric rectangle whose width and height are 2^-1 times the unit scales of the corresponding coordinate axes;
-2: the center point falls on the concentric rectangle whose width and height are 2^-2 times the unit scales;
-3: the center point falls on the concentric rectangle whose width and height are 2^-3 times the unit scales;
…
-9: the center point falls on the concentric rectangle whose width and height are 2^-9 times the unit scales.
Likewise, the distance code is not less than -9: anything closer is still coded as -9, because, to human perception, an object that is too close is simply regarded as being next to the target, and this also keeps the code expressible with a single digit.
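The following Python sketch shows one possible reading of this rule; it is illustrative only. It assumes that the distance is measured in unit scales from the reference center, with the larger of the two axes deciding, and that code i corresponds to 2^i unit scales; the rounding direction, the clamping to [-9, 9] and the function names are assumptions.

    import math

    def distance_code(ref_box, obs_box, round_up=False):
        """Distance code of obs_box relative to ref_box (one possible reading).

        Unit scales are half the width / half the height of the reference
        frame; the distance d is the larger per-axis offset of the observation
        center from the reference center, in unit scales; code = log2(d),
        rounded and clamped to [-9, 9].
        """
        rxmin, rymin, rxmax, rymax = ref_box
        ux, uy = (rxmax - rxmin) / 2.0, (rymax - rymin) / 2.0   # unit scales
        rcx, rcy = (rxmin + rxmax) / 2.0, (rymin + rymax) / 2.0
        ocx = (obs_box[0] + obs_box[2]) / 2.0
        ocy = (obs_box[1] + obs_box[3]) / 2.0
        d = max(abs(ocx - rcx) / ux, abs(ocy - rcy) / uy)
        if d == 0.0:
            return -9                      # center coincides with the reference center
        code = math.log2(d)                # d == 1 (center on the frame) gives code 0
        code = math.ceil(code) if round_up else math.floor(code)
        return max(-9, min(9, int(code)))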
C. Orientation coding
The orientation code consists of the direction code and the distance code, separated by a decimal point: the direction code is placed before the decimal point and the distance code after it.
When the distance code is negative, its minus sign is placed at the front of the orientation code as the sign of the whole code.
For example:
if the direction code is 1 and the distance code is 3, the orientation code is 1.3;
if the direction code is 0 and the distance code is -2, the orientation code is -0.2.
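A small helper, again only a sketch, that writes the two codes in the combined form used in these examples; representing the orientation code as a string is a convenience assumed here.

    def orientation_code(direction, distance):
        """Combine a direction code and a distance code, e.g. (1, 3) -> "1.3",
        (0, -2) -> "-0.2"; the sign of a negative distance code is moved to
        the front of the whole orientation code."""
        sign = "-" if distance < 0 else ""
        return f"{sign}{direction}.{abs(distance)}"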
D. Summary
The direction code and the distance code contained in the orientation code obtained by this method are relative codes, obtained by evaluating the orientation of the observation target with the reference target as the starting point, so the code does not change with scaling of the image or with the angle of observation.
At present a general method for describing the orientation relation between targets is lacking, and the orientation coding method provided by the invention fills this gap.
At present the orientation relation between targets is judged through customized code logic operating on absolute coordinates; such code is highly customized and cannot be reused generally.
Through the orientation codes, and the matrices formed from them, the orientation relation between targets can be described in a general way. Pattern matrices for various inter-target relationships can also be defined, and whether a specified scene is present can be judged by matching the inter-target relations of the actual scene against a pattern matrix.
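As an illustration, a sketch of how such a pairwise matrix could be assembled from the helpers sketched above; the convention that row i holds the codes of every other target observed from reference target i is an assumption, since the matrices in the embodiments appear only as images.

    def orientation_matrix(boxes):
        """Pairwise orientation codes for a list of positioning frames.

        Entry [i][j] is the orientation code of target j relative to reference
        target i; each target relative to itself is encoded as -1, written "-1".
        Uses the direction_code / distance_code / orientation_code sketches above.
        """
        n = len(boxes)
        matrix = [["-1"] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    d = direction_code(boxes[i], boxes[j])
                    r = distance_code(boxes[i], boxes[j])
                    matrix[i][j] = orientation_code(d, r)
        return matrix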
The coding method of the invention fits well with the common practice in current computer vision of providing a positioning frame for each detected target: the definition of direction matches the nine regions naturally divided by the extension lines of the reference target's positioning frame, the size of the reference target's positioning frame serves as the basis for measuring distance, and an exponential growth principle is adopted, so the orientation relation between targets is reflected effectively and in a way that agrees with visual intuition.
Example 1
As shown in fig. 10, taking a cup placed on a table as an example, the orientation codes between the table and the cup are as follows:
seen from the table, the cup is above the table, which is direction 8; the distance from the center of the cup to the center of the table is close to half the height of the table, so the distance code is 0; the orientation code of the cup relative to the table is therefore 8.0;
seen from the cup, the table is below the cup, which is direction 4; the distance from the center of the table to the center of the cup is about 3 times the height of the cup, so the distance code lies between 1 and 2 and is rounded down to 1; the orientation code of the table relative to the cup is therefore 4.1.
The orientation code of any target relative to itself is defined as -1, so the orientation relation matrix of the table and the cup is:
[orientation relation matrix, shown as an image in the original publication: Figure BDA0002224892980000071]
If the direction codes are expressed with letters instead, the cup above the table is in direction H and the table below the cup is in direction D, with everything else unchanged, so the orientation relation matrix of the table and the cup is:
[orientation relation matrix with letter direction codes, shown as an image in the original publication: Figure BDA0002224892980000072]
when the regions are represented by letter codes, the orientation code of any one object relative to itself can be correspondingly defined by letter codes.
Example 2
As shown in fig. 11, the relations between the vehicle body and the wheels are as follows:
seen from the vehicle, the front wheel is inside the vehicle; the direction code is 0, the distance code is -1, and the orientation code is -0.1;
seen from the vehicle, the rear wheel is inside the vehicle; the direction code is 0, the distance code is -1, and the orientation code is -0.1;
seen from the front wheel, the center of the vehicle is to the upper right of the front wheel; the direction code is 1, the distance code is about 1, and the orientation code is 1.1;
seen from the front wheel, the rear wheel is to the right of the front wheel; the direction code is 2, the distance code is 2, and the orientation code is 2.2;
seen from the rear wheel, the center of the vehicle is to the upper left of the rear wheel; the direction code is 7, the distance code is about 1, and the orientation code is 7.1;
seen from the rear wheel, the front wheel is to the left of the rear wheel; the direction code is 4, the distance code is 2, and the orientation code is 4.2.
In summary, an orientation relation matrix between the vehicle body and the front and rear wheels can be formed in the same way.
application example of scene matching
Often riders are not allowed in urban traffic, how are they identified if detected by computer vision?
Taking fig. 12 as an example, there are three targets: bicycle, boy head, girl head, interrelationship of objects:
the boy's head is seen from the bicycle, and the center of the boy's head positioning frame is positioned above the bicycle positioning frame, and the distance is about half to 1 bicycle positioning frame. If the height of the half bicycle positioning frame is equal to 1 unit scale, the distance code is 0; if the height of the bicycle positioning frame is 1, the unit dimension is 2, and the distance code is 1. The selectable values of the distance code are 0 and 1. Alternative distance codes are indicated by different numbers in square brackets, here [01 ]. Thus, the orientation code for the boy's head relative to the bicycle alignment frame is 8.[01] (the numbers inside the square brackets are optional, the same applies below), and likewise, the center of the girl's head alignment frame is also located above the bicycle alignment frame, with an orientation code of 8.0.
Looking at the bicycle from the boy's head, the center of the bicycle positioning frame is located below, possibly left-down, right-down, at a distance of about 2-4 head heights, and the orientation code is [345] [234 ]. Looking at the girl's head from the boy's head, the girl's head is at the right or below the boy's head, the distance is about 0-2 heads, and the orientation code is [23] [012 ].
Looking at the bicycle from the girl's head, the center of the bicycle positioning frame is located at the lower left, the distance is about 2-4 head heights, and the orientation code is 5 [234 ]. Looking at the boy's head from the girl's head, the boy's head is on the left or upper left of the girl's head, a distance of about 0-2 heads, and orientation codes [67] [012 ].
The riding and leading position coding matrix template comprises the following steps:
[orientation code matrix template, shown as an image in the original publication: Figure BDA0002224892980000082]
In actual street monitoring, target detection can detect the bicycle, the boy's head and the girl's head in a street-view picture and give their positioning frames. The orientation code matrix of the bicycle, the boy's head and the girl's head is generated from the positioning frames, and this actual matrix is compared with the predefined orientation code matrix template; if the comparison succeeds, the scene of riding a bicycle while carrying a passenger is judged to be present.
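A sketch of how this comparison could be carried out, assuming the square-bracket notation above for alternative codes; the regex-based matching and the function names are illustrative assumptions, and multi-digit codes are not handled.

    import re

    def code_matches(actual, template):
        """True if an actual orientation code such as "8.1" is allowed by a
        template entry such as "8.[01]" or "[345].[234]" (digits inside square
        brackets are alternatives)."""
        pattern = re.escape(template).replace(r"\[", "[").replace(r"\]", "]")
        return re.fullmatch(pattern, actual) is not None

    def matrix_matches(actual_matrix, template_matrix):
        """The scene matches when every entry of the actual orientation matrix
        is allowed by the corresponding entry of the template matrix."""
        return all(code_matches(a, t)
                   for actual_row, template_row in zip(actual_matrix, template_matrix)
                   for a, t in zip(actual_row, template_row))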

Claims (10)

1. A method for encoding the orientation relation between targets, characterized in that:
the observation target and the reference target are both represented by rectangular positioning frames, and the orientation code of the observation target relative to the reference target comprises two parts:
(1) direction code: a code for the different directions of the observation target relative to the reference target;
the plane space is divided into 9 different regions by the extension lines of the four sides of the rectangular positioning frame of the reference target, with the positioning frame at the center, the 9 regions are represented by 9 different codes, and the direction code of the observation target relative to the reference target is determined by which of the 9 regions the center point of the observation target's rectangular positioning frame falls in;
(2) distance code: a code for the different distances of the observation target relative to the reference target;
an XY coordinate system is established with the center point of the reference target's rectangular positioning frame as the origin and the width and height directions as the coordinate axes, and the distance code of the observation target relative to the reference target is determined from the positional relation between the center point of the observation target's rectangular positioning frame and the reference target's rectangular positioning frame and from the quantitative relation between the width and height of the observation target's rectangular positioning frame and the unit scales of the corresponding coordinate axes.
2. A method of encoding an inter-object bearing relationship according to claim 1, characterized in that: the 9 different regions are represented by 9 numbers, respectively.
3. A method of encoding an inter-object bearing relationship according to claim 1, characterized in that: the 9 different regions are represented by 9 english letters, respectively.
4. A method of encoding an inter-object bearing relationship according to any one of claims 1 to 3, characterized by: when the center point of the observation target's rectangular positioning frame lies on an extension line of the four sides of the reference target's rectangular positioning frame, the code of one of the adjacent regions is taken as the direction code according to a uniform rule.
5. A method of encoding an inter-object bearing relationship according to claim 1, characterized in that: and taking the width direction of the reference target rectangular positioning frame as an X axis and the height direction as a Y axis, wherein the unit dimension of the X axis is one half of the width of the reference target rectangular positioning frame, and the unit dimension of the Y axis is one half of the height of the reference target rectangular positioning frame.
6. The method of encoding an orientation relation between objects according to claim 1 or 5, wherein:
the range of the distance codes is-9, wherein:
0: observing that the central point of the target positioning frame falls on the positioning frame of the reference target;
1,2,3, … 9: the central point of the observation target positioning frame is the same as the reference target positioning frame, and the width and the height of the observation target positioning frame are the i-th power times of 2 of the unit scale of the corresponding coordinate axis;
-i, i ═ 1,2,3, … 9: the central point of the observation target positioning frame is the same as the reference target positioning frame, and the width and the height are 2-i times of the unit scale of the corresponding coordinate axis.
7. The method of encoding an orientation relation between objects according to claim 6, wherein: when the center point of the observation target's positioning frame does not fall exactly on the reference target's positioning frame or on one of the said concentric rectangles, so that the distance code lies between two integers, it is uniformly rounded up or rounded down to give the distance code.
8. A method of encoding an inter-object bearing relationship according to claim 1, characterized in that: the orientation code comprises the direction code and the distance code, separated by a decimal point, with the direction code placed before the decimal point and the distance code after it.
9. The method of encoding an orientation relation between objects according to claim 8, wherein: the sign of the distance code is placed at the front of the orientation code as the sign of the orientation code.
10. A method of encoding an inter-object bearing relationship according to claim 1 or 2, characterized by: the orientation of an arbitrary target relative to itself is encoded as-1.
CN201910948259.1A 2019-10-08 2019-10-08 Method for coding azimuth relation between targets Active CN110705644B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910948259.1A CN110705644B (en) 2019-10-08 2019-10-08 Method for coding azimuth relation between targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910948259.1A CN110705644B (en) 2019-10-08 2019-10-08 Method for coding azimuth relation between targets

Publications (2)

Publication Number Publication Date
CN110705644A true CN110705644A (en) 2020-01-17
CN110705644B CN110705644B (en) 2022-11-18

Family

ID=69198260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910948259.1A Active CN110705644B (en) 2019-10-08 2019-10-08 Method for coding azimuth relation between targets

Country Status (1)

Country Link
CN (1) CN110705644B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254291A (en) * 2011-06-14 2011-11-23 南京师范大学 Urban address coding method based on hierarchical spatial reference model
CN102622767A (en) * 2012-03-05 2012-08-01 广州乐庚信息科技有限公司 Method for positioning binocular non-calibrated space
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
US20170031437A1 (en) * 2015-07-30 2017-02-02 Boe Technology Group Co., Ltd. Sight tracking method and device
CN109166136A (en) * 2018-08-27 2019-01-08 中国科学院自动化研究所 Target object follower method of the mobile robot based on monocular vision sensor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206438A1 (en) * 2011-02-14 2012-08-16 Fatih Porikli Method for Representing Objects with Concentric Ring Signature Descriptors for Detecting 3D Objects in Range Images
CN102254291A (en) * 2011-06-14 2011-11-23 南京师范大学 Urban address coding method based on hierarchical spatial reference model
CN102622767A (en) * 2012-03-05 2012-08-01 广州乐庚信息科技有限公司 Method for positioning binocular non-calibrated space
US20170031437A1 (en) * 2015-07-30 2017-02-02 Boe Technology Group Co., Ltd. Sight tracking method and device
CN109166136A (en) * 2018-08-27 2019-01-08 中国科学院自动化研究所 Target object follower method of the mobile robot based on monocular vision sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
单玉刚等: "鲁棒的自适应尺度和方向的目标跟踪方法" [Robust object tracking with adaptive scale and orientation], 《计算机工程与应用》 (Computer Engineering and Applications) *
康喜若等: "基于双目视觉的相对物体的姿态测量" [Pose measurement of relative objects based on binocular vision], 《科协论坛(下半月)》 *

Also Published As

Publication number Publication date
CN110705644B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
Reiher et al. A sim2real deep learning approach for the transformation of images from multiple vehicle-mounted cameras to a semantically segmented image in bird’s eye view
TWI308891B (en)
CN110969074A (en) Sensing device for obstacle detection and tracking and sensing method for obstacle detection and tracking
CN112714913A (en) Structural annotation
Kim et al. Rear obstacle detection system with fisheye stereo camera using HCT
Fei et al. SemanticVoxels: Sequential fusion for 3D pedestrian detection using LiDAR point cloud and semantic segmentation
US11869257B2 (en) AR-based labeling tool for 3D object detection model training
CN111899515B (en) Vehicle detection system based on wisdom road edge calculates gateway
CN112097732A (en) Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium
CN112967283A (en) Target identification method, system, equipment and storage medium based on binocular camera
Li et al. A robust lane detection method based on hyperbolic model
Hu et al. Monocular 3-D vehicle detection using a cascade network for autonomous driving
Arsenali et al. RotInvMTL: Rotation invariant multinet on fisheye images for autonomous driving applications
Bi et al. A new method of target detection based on autonomous radar and camera data fusion
CN115965970A (en) Method and system for realizing bird&#39;s-eye view semantic segmentation based on implicit set prediction
CN117111055A (en) Vehicle state sensing method based on thunder fusion
CN112017239B (en) Method for determining orientation of target object, intelligent driving control method, device and equipment
CN115526990A (en) Target visualization method and device for digital twins and electronic equipment
Yang et al. Lite-fpn for keypoint-based monocular 3d object detection
WO2021175119A1 (en) Method and device for acquiring 3d information of vehicle
CN110705644B (en) Method for coding azimuth relation between targets
Jung et al. Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles.
CN113658240B (en) Main obstacle detection method and device and automatic driving system
Zhang et al. Infrastructure 3D Target detection based on multi-mode fusion for intelligent and connected vehicles
Zhang et al. FS-Net: LiDAR-Camera Fusion With Matched Scale for 3D Object Detection in Autonomous Driving

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant