CN112257485A - Object detection method and device, storage medium and electronic equipment - Google Patents

Object detection method and device, storage medium and electronic equipment

Info

Publication number
CN112257485A
CN112257485A (Application CN201910662162.4A)
Authority
CN
China
Prior art keywords
current
characteristic point
historical
motion
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910662162.4A
Other languages
Chinese (zh)
Inventor
伍宽
赵为
朱继玉
Current Assignee
Beijing Shuangjisha Technology Co ltd
Original Assignee
Beijing Shuangjisha Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shuangjisha Technology Co ltd filed Critical Beijing Shuangjisha Technology Co ltd
Priority to CN201910662162.4A priority Critical patent/CN112257485A/en
Publication of CN112257485A publication Critical patent/CN112257485A/en
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention provides an object detection method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring a current binocular image at a current time node, determining a current feature point in the current binocular image that matches a historical feature point in a historical binocular image, and determining the three-dimensional coordinates corresponding to the current feature point, where both the current feature point and the historical feature point lie within a target area; determining the motion parameters of the current feature point from the three-dimensional coordinates corresponding to the current feature point and those corresponding to the historical feature point; and generating an object early warning message when the motion parameters match preset parameters. Because only the matched feature points are tracked and their motion parameters determined, the amount of image data to process is greatly reduced, so the method has low complexity, high running speed, and good real-time performance.

Description

Object detection method and device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of object detection technologies, and in particular, to a method and an apparatus for object detection, a storage medium, and an electronic device.
Background
With continuing urbanization, the number of motor vehicles keeps growing, and so does the number of traffic accidents involving pedestrians and vehicles. Hazards are especially common in areas with heavy pedestrian flow and open roads, where some pedestrians do not obey traffic signals, or at intersections whose traffic facilities are inadequate.
To improve the safety of zebra crossings, many cities have in recent years been actively deploying intelligent zebra-crossing systems, i.e., addressing the safety hazards of the crossing area by intelligent means. Such a system detects pedestrians and, when a pedestrian steps onto the zebra crossing, makes the road-stud lights flash to remind vehicles to slow down and yield; when no pedestrian is crossing, vehicles can proceed unimpeded.
Existing pedestrian-detection solutions mainly include infrared detection, lidar detection, and visual detection. Infrared detection is strongly disturbed by environmental factors and performs poorly at night; lidar is costly and impractical. Visual detection dominates the current market, but it relies on deep learning, requires heavy computation, has poor real-time performance, and increases hardware cost.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide a method, an apparatus, a storage medium, and an electronic device for object detection.
In a first aspect, an embodiment of the present invention provides an object detection method, including:
acquiring a current binocular image of a current time node, determining a current characteristic point in the current binocular image, which is matched with a historical characteristic point in a historical binocular image, and determining a three-dimensional coordinate corresponding to the current characteristic point; the current characteristic point and the historical characteristic point are both positioned in a target area;
determining the motion parameters of the current characteristic point according to the three-dimensional coordinates corresponding to the current characteristic point and the three-dimensional coordinates corresponding to the historical characteristic point, wherein the motion parameters comprise one or more of a motion direction, a motion distance and a motion speed;
and generating an object early warning message when the motion parameters are matched with preset parameters.
In one possible implementation, the determining the current feature points in the current binocular image that match the historical feature points in the historical binocular image includes:
acquiring a historical binocular image of a previous time node, and determining historical characteristic points in the historical binocular image;
and performing characteristic point matching processing on the current binocular image and the historical binocular image, and taking a point in the current binocular image, which is matched with the historical characteristic point, as a current characteristic point.
In a possible implementation manner, the determining the motion parameter of the current feature point according to the three-dimensional coordinate corresponding to the current feature point and the three-dimensional coordinate corresponding to the historical feature point includes:
when the historical characteristic point is not matched with the characteristic point of other binocular images before the last time node, determining the motion parameter of the current characteristic point according to the three-dimensional coordinate corresponding to the current characteristic point and the three-dimensional coordinate corresponding to the historical characteristic point;
and when the historical characteristic point is matched with the characteristic points of one or more other binocular images before the last time node, taking the characteristic points matched with the historical characteristic points in all the other binocular images as effective characteristic points, and determining the motion parameters of the current characteristic points according to the three-dimensional coordinates corresponding to the current characteristic points, the three-dimensional coordinates corresponding to the historical characteristic points and the three-dimensional coordinates corresponding to all the effective characteristic points.
In one possible implementation, before the acquiring the current binocular image of the current time node, the method further includes:
determining a three-dimensional area to be detected, determining a coordinate range of the three-dimensional area in a camera coordinate system or a world coordinate system, and taking the coordinate range as a target area.
In one possible implementation, after the acquiring the current binocular image of the current time node, the method further includes:
performing segmentation processing on the current binocular image to determine a plurality of binocular sub-images of the current binocular image;
and respectively carrying out corner point detection on each binocular sub-image, and taking the detected corner points as the characteristic points of the current binocular image.
In a possible implementation manner, the generating an object warning message when the motion parameter matches a preset parameter includes:
when the motion direction of the current feature point is matched with a preset direction, generating an object early warning message; or
When the motion direction of the current feature point is matched with a preset direction and the motion distance of the current feature point is greater than a preset distance, generating an object early warning message; or
When the motion direction of the current feature point is matched with a preset direction and the motion speed of the current feature point is greater than a preset speed, generating an object early warning message; or
And generating an object early warning message when the motion direction of the current characteristic point is matched with a preset direction, the motion distance of the current characteristic point is greater than a preset distance, and the motion speed of the current characteristic point is greater than a preset speed.
In a possible implementation manner, the generating an object warning message when the motion parameter matches a preset parameter includes:
presetting one or more preset parameters and a preset number corresponding to the preset parameters; when a plurality of preset parameters exist, the preset parameters and the preset quantity form a negative correlation relationship;
and taking the current characteristic points with the motion parameters matched with the preset parameters as current target characteristic points, and generating an object early warning message when the number of the current target characteristic points is greater than the preset number.
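This counting rule can be sketched as follows. Each preset-parameter set carries its own preset number of matching points needed to trigger a warning, and stricter criteria are paired with smaller counts (the negative correlation described above). The specific criteria, counts, and the speed-only matching below are all made-up illustrations, not values from the patent.

```python
# Illustrative preset-parameter sets: a stricter criterion (higher speed
# threshold) is paired with a smaller preset number of required points.
PRESETS = [
    {"min_speed": 0.5, "preset_number": 20},  # lenient criterion, many points
    {"min_speed": 2.0, "preset_number": 5},   # strict criterion, few points
]

def warn(point_speeds):
    """Return True if, for any preset set, the number of current target
    feature points (points whose speed matches the preset) exceeds that
    set's preset number."""
    for p in PRESETS:
        matched = sum(1 for s in point_speeds if s > p["min_speed"])
        if matched > p["preset_number"]:
            return True
    return False

# Six fast points satisfy the strict preset even though the lenient
# preset's count (20) is not reached.
print(warn([2.5] * 6))  # True
print(warn([0.6] * 6))  # False
```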
In a second aspect, an embodiment of the present invention further provides an apparatus for object detection, including:
the characteristic point determining module is used for acquiring a current binocular image of a current time node, determining a current characteristic point in the current binocular image, which is matched with a historical characteristic point in a historical binocular image, and determining a three-dimensional coordinate corresponding to the current characteristic point; the current characteristic point and the historical characteristic point are both positioned in a target area;
the motion parameter determining module is used for determining the motion parameters of the current characteristic point according to the three-dimensional coordinates corresponding to the current characteristic point and the three-dimensional coordinates corresponding to the historical characteristic point, wherein the motion parameters comprise one or more of a motion direction, a motion distance and a motion speed;
and the early warning module is used for generating an object early warning message when the motion parameters are matched with preset parameters.
In a third aspect, an embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing any of the object detection methods described above.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform any of the object detection methods described above.
In the solution provided by the first aspect of the embodiments of the present invention, a feature-point matching and tracking approach is adopted: the current feature points of the current binocular image are matched and tracked based on the historical feature points of the previous historical binocular image. Using the binocular camera's ability to quickly determine the three-dimensional coordinates of feature points, the three-dimensional coordinates of the historical and current feature points can be determined quickly, the corresponding motion parameters derived, and whether an object warning is required decided from those motion parameters. The method only needs to track the matched feature points and determine their motion parameters, without recognizing objects in the binocular image, which greatly reduces the amount of image data to process; it therefore has low complexity, high running speed, and good real-time performance, and is suitable for scenes with high real-time requirements.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method of object detection provided by an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a target area in the method for object detection according to the embodiment of the present invention;
fig. 3 is a schematic structural diagram of an apparatus for object detection according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device for performing a method for object detection according to an embodiment of the present invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
According to the object detection method provided by the embodiment of the invention, the binocular camera is used for extracting the specific feature points, the detection is carried out based on the motion parameters of the feature points, and the object detection can be completed under the condition that the object is not identified. Referring to fig. 1, the method includes:
step 101: acquiring a current binocular image of a current time node, determining a current characteristic point in the current binocular image, which is matched with a historical characteristic point in a historical binocular image, and determining a three-dimensional coordinate corresponding to the current characteristic point; the current feature point and the historical feature point are both located in the target area.
In the embodiment of the invention, the binocular images of the area to be detected are captured by a pre-installed binocular camera. For example, when the scene of a pedestrian crossing a road needs to be detected, the detected object is the pedestrian and the area to be detected can be the area where the crosswalk is located; the binocular camera can then be mounted on a roadside railing or on a traffic-light pole. The binocular camera captures binocular images in real time, each comprising a left-eye image and a right-eye image; each captured frame corresponds to the binocular image of one time node, and the current binocular image of the current time node obtained in this embodiment is the binocular image corresponding to the selected current frame. Meanwhile, to ensure the real-time performance of the object detection process, the current binocular image may be the most recently captured frame.
In this embodiment, the binocular camera acquires the binocular images in real time, so at other time nodes before the current time node, the binocular camera may acquire other binocular images, that is, historical binocular images of a previous frame, and the feature points of the current binocular images are determined based on the historical binocular images in this embodiment. Specifically, the current binocular image of the current time node is tracked by adopting a feature matching mode for the historical feature points in the historical binocular image, so that the corresponding current feature points can be matched. In general, the historical feature point and the current feature point both correspond to the same object or point in the real world. At the next time node after the current time node, if a new binocular image is acquired, the binocular image of the next time node can be tracked based on the current feature point based on the same feature point tracking mode, so that the feature point in the next binocular image can be continuously tracked. Optionally, in order to ensure accuracy of feature point tracking, the historical binocular image is a previous frame image of the current binocular image.
After the current feature point of the current binocular image is determined, its three-dimensional coordinates can be determined from its position in the current binocular image; these may be coordinates in the camera coordinate system of the binocular camera or in the world coordinate system, chosen according to the actual situation. Let the image coordinates of the current feature point in the left-eye image of the current binocular image be (x_l, y_l), and in the corresponding right-eye image be (x_r, y_r). After binocular rectification, y_l = y_r, and the parallax of the current feature point is D = x_l - x_r. Denoting the three-dimensional coordinates of the current feature point in the camera coordinate system by (x_c, y_c, z_c):

z_c = B · f / D,  x_c = x_l · z_c / f,  y_c = y_l · z_c / f

where B is the baseline distance of the binocular camera and f is the focal length of the camera; both can be determined in advance by intrinsic camera calibration.
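The disparity-to-depth step can be sketched in code. This is an illustrative reconstruction of standard rectified-stereo triangulation, not the patent's implementation; the baseline, focal length, and pixel coordinates are made-up values, and image coordinates are assumed to be measured relative to the principal point.

```python
def triangulate(xl, yl, xr, B, f):
    """Rectified stereo: y_l == y_r, so disparity D = x_l - x_r, and
    z_c = B * f / D,  x_c = x_l * z_c / f,  y_c = y_l * z_c / f.
    Image coordinates are assumed relative to the principal point."""
    D = xl - xr
    if D <= 0:
        raise ValueError("non-positive disparity: not a valid stereo match")
    zc = B * f / D
    return (xl * zc / f, yl * zc / f, zc)

# Baseline 0.12 m, focal length 700 px, disparity 10 px -> depth z_c ≈ 8.4 m.
xc, yc, zc = triangulate(xl=350.0, yl=200.0, xr=340.0, B=0.12, f=700.0)
```

Note how a shorter baseline or a smaller disparity both push the point farther away, which is why the baseline B must be calibrated before depths are trusted.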
Likewise, camera coordinates can be converted into world coordinates based on pre-calibrated camera extrinsic parameters. Specifically, the extrinsic parameters T mainly comprise a rotation matrix R and a translation vector t; if the three-dimensional coordinates of the current feature point in the world coordinate system are (x_w, y_w, z_w), then:

(x_w, y_w, z_w)^T = R · (x_c, y_c, z_c)^T + t
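A minimal sketch of applying the extrinsic transform, written as p_w = R · p_c + t. The particular R and t below are made-up calibration values (and the convention that the extrinsics map camera to world coordinates is an assumption); plain Python is used in place of a matrix library.

```python
def camera_to_world(p_c, R, t):
    """Apply extrinsics component-wise: p_w[i] = sum_j R[i][j]*p_c[j] + t[i]."""
    return tuple(sum(R[i][j] * p_c[j] for j in range(3)) + t[i]
                 for i in range(3))

# Assumed calibration: camera axes aligned with the world axes, camera
# mounted 2.5 m above the ground (world Y points downward, so the camera
# origin sits at y_w = -2.5).
R = ((1.0, 0.0, 0.0),
     (0.0, 1.0, 0.0),
     (0.0, 0.0, 1.0))
t = (0.0, -2.5, 0.0)
p_w = camera_to_world((4.2, 2.4, 8.4), R, t)  # ≈ (4.2, -0.1, 8.4)
```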
In the same manner, the three-dimensional coordinates of the historical feature points can also be determined. It should be noted that, because a binocular image comprises a left-eye image and a right-eye image, the current feature point in this embodiment essentially comprises a feature point in the left-eye image and a matched feature point in the right-eye image; their positions in the two images generally differ, but since the two are matched, this embodiment refers to the pair simply as the current feature point.
Further, the region to be detected, i.e., the target area, may be set in advance; it may be a region in the real world. Since the target area is mapped to a corresponding position on the binocular image when the binocular camera captures it, and a feature point (current or historical) likewise corresponds to a point in the real world, the phrase "the current feature point and the historical feature point are both located in the target area" in this embodiment means either that both feature points lie in the region of the binocular image onto which the target area is mapped, or that the three-dimensional coordinates corresponding to both feature points lie within the target area.
In this embodiment, the specific position and size of the target area are determined by the actual situation; if the entire picture captured by the binocular camera needs to be detected, the whole picture is the target area, which is equivalent to not setting a target area at all. For example, in a pedestrian-detection scene, pedestrians should normally cross at the crosswalk, but in practice some pedestrians cross the road anywhere, so the whole picture can be taken as the target area.
Step 102: determining the motion parameters of the current feature point according to the three-dimensional coordinates corresponding to the current feature point and those corresponding to the historical feature point, where the motion parameters comprise one or more of a motion direction, a motion distance, and a motion speed.
In the embodiment of the present invention, because the current feature point is obtained by tracking and matching the historical feature point, and the current feature point corresponds to the same object or point in the real world as the historical feature point, the change in the three-dimensional coordinate of the current feature point and the three-dimensional coordinate of the historical feature point indicates the movement change of the object or point, so that in this embodiment, the corresponding motion parameter, that is, the motion parameter of the current feature point, may be determined based on the three-dimensional coordinate corresponding to the current feature point and the three-dimensional coordinate corresponding to the historical feature point, and the motion parameter includes a motion direction, a motion distance, a motion speed, and the like. In addition, since the current feature point and the historical feature point are both located in the target area, the motion parameter also contains another physical quantity, namely, a motion position, and the motion position is always located in the target area.
For example, if the three-dimensional coordinates of the historical feature point are (x_w1, y_w1, z_w1) and the three-dimensional coordinates of the current feature point are (x_w2, y_w2, z_w2), then the motion direction of the current feature point can be represented by the vector a = (x_w2 - x_w1, y_w2 - y_w1, z_w2 - z_w1); the distance between the two three-dimensional coordinates, i.e., the motion distance, can also be determined; in addition, since the time difference between the current binocular image and the historical binocular image is known, the motion speed between the two three-dimensional coordinates can be determined.
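This computation can be sketched compactly; the sample coordinates and the frame interval dt below are illustrative assumptions, not values from the patent.

```python
import math

def motion_parameters(p_prev, p_curr, dt):
    """Return (direction vector a, motion distance, motion speed) for a
    feature point tracked from p_prev to p_curr over dt seconds."""
    a = tuple(c - p for c, p in zip(p_curr, p_prev))  # a = p_curr - p_prev
    dist = math.sqrt(sum(d * d for d in a))           # Euclidean distance moved
    return a, dist, dist / dt                         # speed = distance / time

# A point that moved 0.6 m along the crossing (Z) direction between two
# frames 0.2 s apart is moving at 3 m/s.
a, dist, speed = motion_parameters((1.0, 0.0, 4.4), (1.0, 0.0, 5.0), dt=0.2)
```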
Step 103: generating an object early warning message when the motion parameters match the preset parameters.
In the embodiment of the present invention, a standard parameter to be detected, that is, "a preset parameter" in step 103, is preset, and if the currently detected motion parameter matches the preset parameter, it indicates that the motion state of the current feature point is the same as or similar to the motion state to be detected, and at this time, a corresponding object warning message may be generated, that is, an object whose motion parameter matches the preset parameter exists. In this embodiment, similar to the motion parameter, the preset parameter may also include one or more items, that is, the preset parameter may include a preset direction, a preset distance, a preset speed, and the like. For example, if the moving direction of the current feature point matches the preset direction, it is indicated that the moving direction is consistent with the preset direction, and at this time, an object warning message may be generated.
In this embodiment, the object warning message may be generated in a plurality of ways, and may also be generated in different ways in different application scenarios. Taking pedestrian detection as an example, when it is detected that the motion parameters are matched with the preset parameters, it is indicated that a pedestrian is crossing a pedestrian crossing at this time, a pedestrian early warning message can be sent to a driver nearby, for example, a spike light is controlled to flash a red light, so as to warn a vehicle and remind the driver to pay attention to the pedestrian.
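The matching test of step 103 could look roughly like the sketch below. The dot-product direction test and all threshold values are assumptions chosen for illustration; the patent only requires that the motion direction, distance, and speed "match" their respective presets.

```python
def should_warn(direction, distance, speed,
                preset_dir=(0.0, 0.0, 1.0),  # assumed crossing direction
                min_distance=0.2,            # assumed preset distance (m)
                min_speed=0.3):              # assumed preset speed (m/s)
    """One plausible reading of 'matching': the direction has a positive
    component along the preset direction, and distance/speed exceed their
    preset thresholds."""
    along = sum(d * p for d, p in zip(direction, preset_dir))
    return along > 0 and distance > min_distance and speed > min_speed

print(should_warn((0.0, 0.0, 0.6), 0.6, 3.0))   # True: moving into the road
print(should_warn((0.0, 0.0, -0.6), 0.6, 3.0))  # False: moving away
```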
According to the object detection method provided by the embodiment of the invention, a feature-point matching and tracking approach is adopted: the current feature points of the current binocular image are matched and tracked based on the historical feature points of the previous historical binocular image. Using the binocular camera's ability to quickly determine the three-dimensional coordinates of feature points, the three-dimensional coordinates of the historical and current feature points can be determined quickly, the corresponding motion parameters derived, and whether an object warning is needed decided from those motion parameters. The method only needs to track the matched feature points and determine their motion parameters, without recognizing objects in the binocular image, which greatly reduces the amount of image data to process; it therefore has low complexity, high running speed, and good real-time performance, and is suitable for scenes with high real-time requirements.
In the embodiment of the present invention, before "acquiring the current binocular image of the current time node" in step 101, a process of presetting a target area is further included, and the process specifically includes: and determining a three-dimensional area to be detected, determining a coordinate range of the three-dimensional area in a camera coordinate system or a world coordinate system, and taking the coordinate range as a target area.
In the embodiment of the invention, in the actual object detection process, the object and the motion space of the object are three-dimensional, and the motion state of the object can be more truly and accurately determined by setting the three-dimensional target area in the embodiment. Wherein, because the binocular camera is fixed during the shooting process, the corresponding range of the target area can be determined based on the camera coordinate system or the world coordinate system. Specifically, the setting process of the target area is described by taking the case that the object is a pedestrian and the scene is pedestrian detection as an example, wherein the coordinate range of the target area is determined in the world coordinate system.
Referring to fig. 2, when the binocular camera's extrinsic parameters (including the rotation matrix and the translation vector) are calibrated, a three-dimensional crosswalk area is marked; when a feature point meeting the preset parameters is detected in the crosswalk area, it is determined that a pedestrian is crossing the crosswalk, and an early warning can be issued. Specifically, in fig. 2, O_C-X_C Y_C Z_C is the camera coordinate system and O_W-X_W Y_W Z_W is the world coordinate system. In this embodiment, the O_W-X_W axis is parallel to the zebra-stripe direction of the crosswalk, with its positive direction toward the right of fig. 2; the positive direction of the O_W-Y_W axis is perpendicular to the ground plane, i.e., toward the bottom of fig. 2; and the O_W-Z_W axis is parallel to the direction in which a pedestrian traverses the crosswalk, i.e., a pedestrian crossing the crosswalk moves along the O_W-Z_W axis.
In the embodiment of the invention, a three-dimensional target area corresponding to the crosswalk is selected in the world coordinate system O_W-X_W Y_W Z_W. As shown in fig. 2, the target area has height Y, length Z, and width X; if the origin O_W of the world coordinate system lies at the mid-point of the crosswalk in the width direction, the value range of the target area in fig. 2 is:

-X/2 ≤ x_w ≤ X/2,  -Y ≤ y_w ≤ 0,  0 ≤ z_w ≤ Z
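The target-area membership test then reduces to a box check in world coordinates. A minimal sketch, assuming the box dimensions below and that the world Y axis points downward (so points above the ground have negative y_w); all dimensions are illustrative, not from the patent.

```python
X, Y, Z = 4.0, 2.2, 12.0  # assumed width, height, length of the crosswalk box (m)

def in_target_area(p_w):
    """Box test: origin O_W mid-width on the ground, Y axis pointing down."""
    xw, yw, zw = p_w
    return -X / 2 <= xw <= X / 2 and -Y <= yw <= 0 and 0 <= zw <= Z

print(in_target_area((1.0, -1.6, 5.0)))  # True: inside the crosswalk volume
print(in_target_area((3.0, -1.6, 5.0)))  # False: outside the width range
```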
Optionally, the binocular camera used to capture the binocular images may face the crosswalk, i.e., the camera's O_C-Z_C axis is parallel to the world coordinate system's O_W-Z_W axis; this simplifies the calibrated extrinsic parameters and improves the speed and efficiency of computing the feature points' three-dimensional coordinates.
On the basis of the above embodiment, object detection is performed based on the motion parameters of the feature points. Since the binocular camera acquires multiple frames of binocular images, multiple binocular images may exist before the current one; in this embodiment, the binocular image of the frame preceding the current time node is used as the historical binocular image, which guarantees feature point tracking accuracy while enabling continuous tracking of the feature points. Specifically, the step 101 of determining the current feature points in the current binocular image that match the historical feature points in the historical binocular image includes:
step A1: and acquiring a historical binocular image of the previous time node, and determining historical characteristic points in the historical binocular image.
Step A2: and performing characteristic point matching processing on the current binocular image and the historical binocular image, and taking a point matched with the historical characteristic point in the current binocular image as a current characteristic point.
In the embodiment of the invention, the current characteristic point of the current binocular image is obtained based on the tracking and matching of the historical characteristic point of the last time node; similarly, for the historical binocular image of the previous time node, the feature point (for example, the historical feature point) therein may also be obtained based on the feature point tracking matching in other binocular images of earlier time nodes; similarly, in the next time node after the current time node, the binocular image of the next time node may also be tracked based on the current feature point in the same feature point tracking manner, so that the feature point in the next binocular image may be continuously tracked. In the embodiment, the feature points of two adjacent frames of binocular images are continuously tracked, so that the continuously matched feature points can be determined, and the motion parameters or even the motion trail of the feature points can be determined.
Optionally, while tracking and matching the feature points of the binocular image, other feature points in the binocular image may be extracted based on other feature point extraction methods; or extracting all feature points in the current binocular image based on a traditional feature point extraction method, then performing feature matching based on the historical feature points of the previous frame, and taking the successfully matched feature points as the current feature points of the current binocular image.
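The tracking-and-matching of steps A1 and A2 can be sketched as nearest-neighbor matching of feature descriptors between the historical and current frames. The descriptors, distance metric, and threshold below are illustrative assumptions; a concrete implementation might instead use optical flow or binary corner descriptors:

```python
import numpy as np

def match_features(hist_desc, curr_desc, max_dist=0.5):
    """Match each historical descriptor to its nearest current descriptor.

    Returns a list of (hist_idx, curr_idx) pairs. A historical feature
    with no current descriptor within max_dist is left unmatched,
    mirroring feature points whose tracking is interrupted.
    """
    matches = []
    for i, h in enumerate(hist_desc):
        dists = np.linalg.norm(curr_desc - h, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))
    return matches
```

Successfully matched current-frame points become the current feature points; unmatched current-frame corners can still serve as historical feature points for the next frame.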
In this embodiment, after "acquiring the current binocular image of the current time node" in step 101, the method further includes a process of extracting feature points, where the process specifically includes: segmenting the current binocular image to determine a plurality of binocular subimages of the current binocular image; and respectively carrying out corner point detection on each binocular subimage, and taking the detected corner points as the characteristic points of the current binocular image.
In the embodiment of the invention, the feature points in the current binocular image are extracted by corner detection; specifically, the FAST (Features from Accelerated Segment Test) corner detection method may be adopted, with non-maximum suppression used to eliminate unstable corners. In this embodiment, the binocular image is segmented before corner detection, so that more feature points can be extracted and the object detection result is more accurate. Specifically, corner detection requires a preset threshold t: the larger the threshold t, the fewer corners are detected. In general, different images use different thresholds t; the threshold t may be determined based on the average pixel value of the image, the difference between the maximum and minimum pixel values, or in other ways, which this embodiment does not limit. By dividing an image into small image blocks (i.e. binocular sub-images), each block may use its own threshold t for corner detection; the smaller the image block, the smaller the threshold t generally is, so that more feature points can be extracted. The size of the image blocks is set according to actual needs. For example, in a pedestrian detection scene, a distant pedestrian appears small in the binocular image and more feature points are needed for tracking; extracting more feature points by partitioning the image makes distant pedestrians easier to detect.
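The per-block threshold idea can be sketched as follows. The heuristic for deriving t from a block's pixel spread, and the block size, are illustrative assumptions; the embodiment only requires that each block may use its own threshold:

```python
import numpy as np

def block_thresholds(img, block=64, alpha=0.1):
    """Split a grayscale image into blocks and derive a per-block corner threshold t.

    Here t is a fraction of the block's max-min pixel spread, so flat
    blocks get a small t and more corners survive detection there.
    Returns a dict mapping the block's top-left (row, col) to its t.
    """
    h, w = img.shape
    out = {}
    for y in range(0, h, block):
        for x in range(0, w, block):
            sub = img[y:y + block, x:x + block]
            out[(y, x)] = alpha * float(sub.max() - sub.min())
    return out
```

Each block would then be passed to the corner detector with its own threshold, instead of one global t for the whole frame.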
In the execution process of the object detection method provided by the embodiment of the invention, the corner detection processing can be performed on each frame of binocular image. In addition, the feature points obtained by corner detection are not obtained by tracking the feature points of the previous frame, but the feature points obtained by corner detection can be used as historical feature points of the next frame, that is, the feature points obtained by corner detection can be tracked and matched with the binocular image of the next frame.
In the embodiment of the invention, tracing backward in time, the current feature point can be linked to one or more previously matched feature points, so that its motion parameter can be determined based on all the feature points matched with it in the past. Specifically, the step 102 of determining the motion parameter of the current feature point according to the three-dimensional coordinate corresponding to the current feature point and the three-dimensional coordinate corresponding to the historical feature point includes:
step B1: and when the historical characteristic point is not matched with the characteristic point of other binocular images before the previous time node, determining the motion parameter of the current characteristic point according to the three-dimensional coordinate corresponding to the current characteristic point and the three-dimensional coordinate corresponding to the historical characteristic point.
Step B2: and when the historical characteristic point is matched with the characteristic points of one or more other binocular images before the last time node, taking the characteristic points matched with the historical characteristic points in all other binocular images as effective characteristic points, and determining the motion parameters of the current characteristic points according to the three-dimensional coordinates corresponding to the current characteristic points, the three-dimensional coordinates corresponding to the historical characteristic points and the three-dimensional coordinates corresponding to all the effective characteristic points.
In the embodiment of the present invention, the current feature point is obtained by tracking and matching the historical feature point, but the historical feature point is not necessarily obtained by tracking and matching earlier feature points. Therefore, when the historical feature point does not match a feature point of any other binocular image before the previous time node, the motion parameter of the current feature point can be determined based on the current feature point and the historical feature point alone; if the historical feature point was itself obtained by tracking and matching other feature points, that is, when the historical feature point matches feature points of one or more other binocular images before the previous time node, the motion parameter of the current feature point needs to be determined based on all the previous feature points directly or indirectly matched with the current feature point (i.e., the historical feature point and the valid feature points in step B2 above). Optionally, to simplify the calculation, only some of the valid feature points may be selected for calculating the motion parameter of the current feature point.
For example, suppose the binocular camera continuously acquires three binocular images A, B and C, and binocular image A is the initial image, i.e. no historical binocular image exists before it. The feature points in binocular image A may then be determined by a feature point extraction method (e.g. corner detection); suppose binocular image A contains feature points a1, a2 and a3. Next, taking binocular image B as the current binocular image and binocular image A as its historical binocular image, feature point tracking and matching is performed with feature points a1, a2 and a3, determining feature points b1 and b2 of binocular image B: feature point b1 matches feature point a1, feature point b2 matches feature point a2, and no point in binocular image B matches feature point a3. Meanwhile, a feature point b4 is also extracted from binocular image B by the feature point extraction method, so binocular image B contains feature points b1, b2 and b4. Then, taking binocular image C as the current binocular image, the above process is repeated: the feature points in binocular image C are tracked and matched based on feature points b1, b2 and b4, yielding feature points c1 and c4, where feature point c1 matches feature point b1 and feature point c4 matches feature point b4. For binocular image C, the current feature points thus include c1 and c4; feature point c1 can be traced back to feature points b1 and a1, and feature point c4 can be traced back to feature point b4.
In calculating the motion parameters of the feature point c1, the motion parameters of the feature point c1 may be comprehensively determined based on the three-dimensional coordinates of the feature points a1, B1, and c1 as described in step B2 above; when the motion parameters of the feature point c4 are calculated, since the historical feature point B4 does not have other previous matched feature points, the motion parameters of the feature point c4 can be determined based on the three-dimensional coordinates of the feature points B4 and c4 as described in step B1.
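The motion parameter computation over a chain of matched feature points (steps B1/B2) can be sketched as follows. The frame rate and the choice of overall first-to-last displacement are assumptions for illustration:

```python
import numpy as np

def motion_parameters(track, fps=30.0):
    """Compute motion direction, distance, and speed from a feature
    point's chain of 3D coordinates (oldest first, one per frame).

    Direction is the overall displacement vector from the earliest
    matched position to the current one; distance is its norm; speed
    averages that distance over the elapsed time, assuming a fixed
    per-frame interval of 1/fps.
    """
    track = np.asarray(track, dtype=float)
    displacement = track[-1] - track[0]
    distance = float(np.linalg.norm(displacement))
    elapsed = (len(track) - 1) / fps
    speed = distance / elapsed if elapsed > 0 else 0.0
    return displacement, distance, speed
```

For feature point c1 above, `track` would hold the 3D coordinates of a1, b1 and c1; for c4 it would hold only those of b4 and c4.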
In this embodiment, the motion parameter of the current feature point is determined comprehensively by using other feature points directly or indirectly matched with the current feature point, the determined result is more accurate, and the motion parameter can more truly and completely describe the motion state of the current feature point.
On the basis of the above embodiments, the object warning message may be generated in various ways. Specifically, the step 103 "generating the object warning message when the motion parameter matches the preset parameter" includes: and when the motion direction of the current feature point is matched with the preset direction, generating an object early warning message.
In the embodiment of the invention, the motion direction of the current feature point matching the preset direction means that the motion direction is consistent with the preset direction, and at this time an object early warning message can be generated. Both the motion direction of the current feature point and the preset direction can be represented as vectors, so whether they match can be judged from the included angle between the two vectors. Still taking pedestrian detection as an example, let the detected motion direction be a vector A and the preset direction, i.e. the direction across the crosswalk, be a vector B. When the included angle between vector A and vector B is smaller than a preset angle (e.g. 45°), the motion direction is considered to match the preset direction; meanwhile, since the crosswalk is bidirectional, assuming the included angle between the two vectors lies in [0°, 180°], the motion direction is also considered to match the preset direction when the included angle between vector A and vector B is larger than another preset angle (e.g. 135°).
Alternatively, the determination may be made based on the moving direction and the moving distance. Specifically, when the motion direction of the current feature point is matched with the preset direction and the motion distance of the current feature point is greater than the preset distance, an object early warning message is generated.
In this embodiment, whether a moving object exists in the target area may be more accurately determined based on the movement direction and the movement distance. For example, if the moving direction of the current feature point is consistent with the preset direction and the moving distance is greater than 1m, it indicates that there is an object moving, and a corresponding warning message may be sent.
Similarly, the determination may be performed based on the movement direction and the movement speed, or may be performed more comprehensively based on the movement direction, the movement distance, and the movement speed. Specifically, the step 103:
when the motion direction of the current feature point is matched with the preset direction and the motion speed of the current feature point is greater than the preset speed, generating an object early warning message; or
And generating an object early warning message when the motion direction of the current characteristic point is matched with the preset direction, the motion distance of the current characteristic point is greater than the preset distance, and the motion speed of the current characteristic point is greater than the preset speed.
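The combined matching conditions of step 103 can be sketched as follows. The preset direction, angle threshold, and distance/speed presets are illustrative assumptions; the bidirectional-crosswalk case is folded in by taking the absolute cosine, which treats an angle above 135° the same as one below 45°:

```python
import numpy as np

def should_warn(direction, distance, speed,
                preset_dir=np.array([0.0, 0.0, 1.0]),
                max_angle_deg=45.0, min_distance=1.0, min_speed=0.0):
    """Check whether a feature point's motion parameters match the presets.

    The direction matches when its angle to the preset direction (or
    the opposite direction) is within max_angle_deg; the motion
    distance and speed must also exceed their presets.
    """
    n = np.linalg.norm(direction) * np.linalg.norm(preset_dir)
    if n == 0:
        return False
    cos_a = abs(float(np.dot(direction, preset_dir)) / n)  # abs() covers both crossing directions
    angle_ok = cos_a >= np.cos(np.radians(max_angle_deg))
    return angle_ok and distance > min_distance and speed > min_speed
```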
Optionally, since the current binocular image may include a plurality of current feature points, the determination may be performed more accurately based on the number of current feature points. Specifically, the step 103 "generating the object warning message when the motion parameter matches the preset parameter" includes:
step C1: presetting one or more preset parameters and a preset number corresponding to the preset parameters; when a plurality of preset parameters exist, the preset parameters and the preset number are in a negative correlation relationship.
Step C2: and taking the current characteristic points with the motion parameters matched with the preset parameters as current target characteristic points, and generating an object early warning message when the number of the current target characteristic points is larger than the preset number.
In the embodiment of the invention, the movement of the object is described by sequentially tracking matched feature points; in the actual acquisition of binocular images, abnormal pixels in some frame may interrupt the tracking and matching of the corresponding feature points. If the object moves at a constant speed, the larger the movement distance determined from the tracked and matched feature points (corresponding to the preset parameter), the smaller the number of feature points that remain tracked and matched to the end (corresponding to the preset number); hence the preset parameter in this embodiment is negatively correlated with the preset number. Meanwhile, multiple groups of preset parameters and preset numbers can be set, and the object early warning message is generated as soon as the motion parameters and the number of the current feature points are detected to satisfy any one group.
For example, in a pedestrian detection scenario, two groups of preset distances and preset numbers are set. The first group of parameters is: preset distance 0.5 meters, preset number 10; the second group is: preset distance 1 meter, preset number 2. In this embodiment, the motion parameter of each current feature point in the current binocular image is determined, along with the number of current feature points whose motion distance exceeds 0.5 meters and the number whose motion distance exceeds 1 meter. If the number of current feature points with a motion distance greater than 0.5 meters is greater than 10, an object early warning message can be generated; or, if the number of current feature points with a motion distance greater than 1 meter is greater than 2, the object early warning message can also be generated. By setting multiple groups of negatively correlated preset parameters and preset numbers, the problem of abnormal motion parameter calculation caused by interrupted feature point tracking and matching is effectively avoided, and the accuracy of object detection is improved.
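The multi-group check with negatively correlated presets can be sketched as follows; the two example groups mirror the 0.5 m/10 and 1 m/2 values above:

```python
def warn_by_groups(distances, groups=((0.5, 10), (1.0, 2))):
    """Multi-group warning check over per-feature-point motion distances.

    Each group pairs a preset distance with a preset number, with
    longer distances paired to smaller counts (negative correlation).
    A warning fires if, for ANY group, the number of current feature
    points whose motion distance exceeds the preset distance is
    greater than the preset number.
    """
    for preset_dist, preset_count in groups:
        if sum(d > preset_dist for d in distances) > preset_count:
            return True
    return False
```

This way, a few long-tracked points (second group) or many short-tracked points (first group) each suffice to trigger the warning.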
The object detection method provided by the embodiment of the invention adopts feature point matching and tracking: the current feature points of the current binocular image are matched and tracked based on the historical feature points of the preceding historical binocular image. Exploiting the binocular camera's ability to rapidly determine the three-dimensional coordinates of feature points, the three-dimensional coordinates of the historical and current feature points are quickly obtained, the corresponding motion parameters are then determined, and whether an object warning is needed is decided based on the motion parameters of the feature points. The method only needs to track matched feature points and determine the corresponding motion parameters, without recognizing objects in the binocular image, which greatly reduces the image processing data volume; it is low in complexity, fast, and real-time, making it suitable for scenes with high real-time requirements. Using the binocular image of the frame preceding the current time node as the historical binocular image guarantees feature point tracking accuracy while enabling continuous tracking of the feature points. Segmenting the binocular image before corner detection reduces redundant corner detection and further improves processing efficiency. Determining the motion parameter of the current feature point comprehensively from the other feature points directly or indirectly matched with it yields a more accurate result, and the motion parameter describes the motion state of the current feature point more truly and completely.
By setting the preset parameters and the preset number of the multi-component negative correlation relationship, the problem of abnormal motion parameter calculation caused by interruption of tracking and matching of the feature points can be effectively avoided, and the accuracy of object detection can be improved.
The above describes in detail the flow of the method for object detection, which may also be implemented by a corresponding apparatus, and the structure and function of the apparatus are described in detail below.
An object detection apparatus provided in an embodiment of the present invention is shown in fig. 3, and includes:
the feature point determining module 31 is configured to acquire a current binocular image of a current time node, determine a current feature point in the current binocular image, where the current feature point matches a historical feature point in a historical binocular image, and determine a three-dimensional coordinate corresponding to the current feature point; the current characteristic point and the historical characteristic point are both positioned in a target area;
a motion parameter determining module 32, configured to determine a motion parameter of the current feature point according to the three-dimensional coordinate corresponding to the current feature point and the three-dimensional coordinate corresponding to the historical feature point, where the motion parameter includes one or more of a motion direction, a motion distance, and a motion speed;
and the early warning module 33 is configured to generate an object early warning message when the motion parameter matches a preset parameter.
On the basis of the above embodiment, the determining, by the feature point determining module 31, the current feature point in the current binocular image that matches the historical feature point in the historical binocular image includes:
acquiring a historical binocular image of a previous time node, and determining historical characteristic points in the historical binocular image;
and performing characteristic point matching processing on the current binocular image and the historical binocular image, and taking a point in the current binocular image, which is matched with the historical characteristic point, as a current characteristic point.
On the basis of the above embodiment, the determining module 32 determines the motion parameter of the current feature point according to the three-dimensional coordinate corresponding to the current feature point and the three-dimensional coordinate corresponding to the historical feature point, including:
when the historical characteristic point is not matched with the characteristic point of other binocular images before the last time node, determining the motion parameter of the current characteristic point according to the three-dimensional coordinate corresponding to the current characteristic point and the three-dimensional coordinate corresponding to the historical characteristic point;
and when the historical characteristic point is matched with the characteristic points of one or more other binocular images before the last time node, taking the characteristic points matched with the historical characteristic points in all the other binocular images as effective characteristic points, and determining the motion parameters of the current characteristic points according to the three-dimensional coordinates corresponding to the current characteristic points, the three-dimensional coordinates corresponding to the historical characteristic points and the three-dimensional coordinates corresponding to all the effective characteristic points.
On the basis of the above embodiment, the apparatus further comprises a target area setting module;
before the feature point determining module 31 obtains the current binocular image of the current time node, the target area setting module is configured to:
determining a three-dimensional area to be detected, determining a coordinate range of the three-dimensional area in a camera coordinate system or a world coordinate system, and taking the coordinate range as a target area.
On the basis of the embodiment, the device also comprises a characteristic point extraction module;
after the feature point determining module 31 obtains the current binocular image of the current time node, the feature point extracting module is configured to:
performing segmentation processing on the current binocular image to determine a plurality of binocular sub-images of the current binocular image;
and respectively carrying out corner point detection on each binocular sub-image, and taking the detected corner points as the characteristic points of the current binocular image.
On the basis of the above embodiment, the early warning module 33 is configured to:
when the motion direction of the current feature point is matched with a preset direction, generating an object early warning message; or
When the motion direction of the current feature point is matched with a preset direction and the motion distance of the current feature point is greater than a preset distance, generating an object early warning message; or
When the motion direction of the current feature point is matched with a preset direction and the motion speed of the current feature point is greater than a preset speed, generating an object early warning message; or
And generating an object early warning message when the motion direction of the current characteristic point is matched with a preset direction, the motion distance of the current characteristic point is greater than a preset distance, and the motion speed of the current characteristic point is greater than a preset speed.
On the basis of the above embodiment, the early warning module 33 is configured to:
presetting one or more preset parameters and a preset number corresponding to the preset parameters; when a plurality of preset parameters exist, the preset parameters and the preset quantity form a negative correlation relationship;
and taking the current characteristic points with the motion parameters matched with the preset parameters as current target characteristic points, and generating an object early warning message when the number of the current target characteristic points is greater than the preset number.
The object detection device provided by the embodiment of the invention adopts feature point matching and tracking: the current feature points of the current binocular image are matched and tracked based on the historical feature points of the preceding historical binocular image. Exploiting the binocular camera's ability to rapidly determine the three-dimensional coordinates of feature points, the three-dimensional coordinates of the historical and current feature points are quickly obtained, the corresponding motion parameters are then determined, and whether an object warning is needed is decided based on the motion parameters of the feature points. The device only needs to track matched feature points and determine the corresponding motion parameters, without recognizing objects in the binocular image, which greatly reduces the image processing data volume; it is low in complexity, fast, and real-time, making it suitable for scenes with high real-time requirements. Using the binocular image of the frame preceding the current time node as the historical binocular image guarantees feature point tracking accuracy while enabling continuous tracking of the feature points. Segmenting the binocular image before corner detection reduces redundant corner detection and further improves processing efficiency. Determining the motion parameter of the current feature point comprehensively from the other feature points directly or indirectly matched with it yields a more accurate result, and the motion parameter describes the motion state of the current feature point more truly and completely.
By setting the preset parameters and the preset number of the multi-component negative correlation relationship, the problem of abnormal motion parameter calculation caused by interruption of tracking and matching of the feature points can be effectively avoided, and the accuracy of object detection can be improved.
Embodiments of the present invention further provide a computer storage medium, where the computer storage medium stores computer-executable instructions, which include a program for executing the method for object detection, and the computer-executable instructions may execute the method in any of the method embodiments.
The computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
Fig. 4 shows a block diagram of an electronic device according to another embodiment of the present invention. The electronic device 1100 may be a host server with computing capabilities, a personal computer PC, or a portable computer or terminal that is portable, or the like. The specific embodiment of the present invention does not limit the specific implementation of the electronic device.
The electronic device 1100 includes at least one processor 1110, a communication interface 1120, a memory 1130, and a bus 1140. The processor 1110, the communication interface 1120, and the memory 1130 communicate with each other via the bus 1140.
The communication interface 1120 is used for communicating with network elements including, for example, virtual machine management centers, shared storage, etc.
The processor 1110 is configured to execute programs. The processor 1110 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
The memory 1130 is used for storing executable instructions. The memory 1130 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory. The memory 1130 may also be a memory array. The memory 1130 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules. The instructions stored in the memory 1130 are executable by the processor 1110, so that the processor 1110 can perform the object detection method in any of the method embodiments described above.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method of object detection, comprising:
acquiring a current binocular image of a current time node, determining a current characteristic point in the current binocular image, which is matched with a historical characteristic point in a historical binocular image, and determining a three-dimensional coordinate corresponding to the current characteristic point; the current characteristic point and the historical characteristic point are both positioned in a target area;
determining the motion parameters of the current characteristic point according to the three-dimensional coordinates corresponding to the current characteristic point and the three-dimensional coordinates corresponding to the historical characteristic point, wherein the motion parameters comprise one or more of a motion direction, a motion distance and a motion speed;
and generating an object early warning message when the motion parameters are matched with preset parameters.
2. The method of claim 1, wherein determining the current feature point in the current binocular image that matches the historical feature point in the historical binocular image comprises:
acquiring a historical binocular image at a previous time node, and determining the historical feature point in the historical binocular image;
and performing feature point matching between the current binocular image and the historical binocular image, and taking a point in the current binocular image that matches the historical feature point as the current feature point.
3. The method according to claim 2, wherein determining the motion parameter of the current feature point according to the three-dimensional coordinates corresponding to the current feature point and the three-dimensional coordinates corresponding to the historical feature point comprises:
when the historical feature point does not match a feature point of any other binocular image before the previous time node, determining the motion parameter of the current feature point according to the three-dimensional coordinates corresponding to the current feature point and the three-dimensional coordinates corresponding to the historical feature point;
and when the historical feature point matches feature points of one or more other binocular images before the previous time node, taking the feature points in all the other binocular images that match the historical feature point as valid feature points, and determining the motion parameter of the current feature point according to the three-dimensional coordinates corresponding to the current feature point, the three-dimensional coordinates corresponding to the historical feature point, and the three-dimensional coordinates corresponding to all the valid feature points.
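Claim 3 distinguishes the two-frame case from the case where a longer track of valid feature points is available. One plausible reading, sketched below under the assumption that the longer track is used to smooth the motion estimate (the claim does not specify how the valid feature points are combined), computes the net displacement over the whole track:

```python
import math

def track_motion(track_xyz, dt):
    """Average motion over a feature-point track: a list of 3-D coordinates
    ordered from the oldest valid feature point to the current feature point,
    one observation every dt seconds.  With exactly two entries this reduces
    to the two-frame case of claim 1."""
    n = len(track_xyz)
    if n < 2:
        raise ValueError("a track needs at least two observations")
    # Net displacement from the oldest observation to the current one.
    disp = tuple(c - o for c, o in zip(track_xyz[-1], track_xyz[0]))
    distance = math.sqrt(sum(d * d for d in disp))
    # Average speed over the (n - 1) inter-frame intervals.
    speed = distance / (dt * (n - 1))
    direction = tuple(d / distance for d in disp) if distance > 0 else (0.0, 0.0, 0.0)
    return direction, distance, speed
```

Averaging over the track makes the speed estimate less sensitive to the triangulation noise of any single binocular measurement, which is a natural motivation for keeping the valid feature points around.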
4. The method of claim 1, further comprising, before acquiring the current binocular image at the current time node:
determining a three-dimensional region to be detected, determining a coordinate range of the three-dimensional region in a camera coordinate system or a world coordinate system, and taking the coordinate range as the target area.
5. The method of claim 1, further comprising, after acquiring the current binocular image at the current time node:
segmenting the current binocular image to determine a plurality of binocular sub-images of the current binocular image;
and performing corner detection on each binocular sub-image separately, and taking the detected corners as feature points of the current binocular image.
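The per-sub-image corner detection of claim 5 can be sketched as a grid split with a pluggable detector; the grid layout and the `detect` callback are assumptions for illustration (a real system would plug in e.g. a Harris or FAST corner detector here). Detecting per tile, rather than on the whole frame at once, spreads feature points more evenly across the image.

```python
def detect_corners_by_subimage(image, rows, cols, detect):
    """Split `image` (a 2-D list of grey values) into a rows x cols grid of
    sub-images and run `detect(sub_image)` on each tile.  `detect` returns
    corner (r, c) positions local to the tile; they are shifted back into
    full-image coordinates before being collected."""
    h, w = len(image), len(image[0])
    corners = []
    for i in range(rows):
        for j in range(cols):
            r0, r1 = i * h // rows, (i + 1) * h // rows
            c0, c1 = j * w // cols, (j + 1) * w // cols
            tile = [row[c0:c1] for row in image[r0:r1]]
            corners += [(r + r0, c + c0) for r, c in detect(tile)]
    return corners
```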
6. The method of claim 1, wherein generating the object early-warning message when the motion parameter matches the preset parameter comprises:
generating an object early-warning message when the motion direction of the current feature point matches a preset direction; or
generating an object early-warning message when the motion direction of the current feature point matches a preset direction and the motion distance of the current feature point is greater than a preset distance; or
generating an object early-warning message when the motion direction of the current feature point matches a preset direction and the motion speed of the current feature point is greater than a preset speed; or
generating an object early-warning message when the motion direction of the current feature point matches a preset direction, the motion distance of the current feature point is greater than a preset distance, and the motion speed of the current feature point is greater than a preset speed.
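The four alternatives of claim 6 share one pattern: a mandatory direction match, plus optional distance and speed thresholds. The claim does not define what "matches a preset direction" means; the sketch below models it, as an assumption, as the angle between the two unit vectors being within a tolerance.

```python
import math

def should_warn(direction, distance, speed, preset_direction,
                max_angle_deg=30.0, preset_distance=None, preset_speed=None):
    """Direction match is mandatory; the distance and speed checks apply
    only when their preset values are supplied, covering all four
    combinations enumerated in claim 6."""
    cos_angle = sum(a * b for a, b in zip(direction, preset_direction))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    direction_ok = angle <= max_angle_deg
    distance_ok = preset_distance is None or distance > preset_distance
    speed_ok = preset_speed is None or speed > preset_speed
    return direction_ok and distance_ok and speed_ok
```

Both vectors are assumed to be unit vectors, so their dot product is the cosine of the angle between them; the clamp guards against floating-point values just outside [-1, 1].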
7. The method of claim 1, wherein generating the object early-warning message when the motion parameter matches the preset parameter comprises:
presetting one or more preset parameters and a preset number corresponding to each preset parameter, wherein, when there are a plurality of preset parameters, the preset parameters and the preset numbers are negatively correlated;
and taking the current feature points whose motion parameters match the preset parameters as current target feature points, and generating an object early-warning message when the number of current target feature points is greater than the preset number.
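The counting scheme of claim 7 can be sketched as a voting rule over all current feature points. The predicate/threshold pairing is an assumption for the sketch; per the claim's negative correlation, a stricter preset parameter (e.g. a higher speed threshold) would be paired with a smaller required count, since fewer fast-moving points are enough evidence of an intruding object.

```python
def vote_warning(motion_params, presets):
    """motion_params: one motion parameter per current feature point.
    presets: (matches, preset_number) pairs, where `matches` is a predicate
    over a single feature point's motion parameter.  An early-warning
    message is produced if, for any preset, the count of matching feature
    points (the current target feature points) exceeds its preset number."""
    for matches, preset_number in presets:
        if sum(1 for p in motion_params if matches(p)) > preset_number:
            return "object early-warning"
    return None
```

Requiring more than one matching point before warning suppresses spurious alarms from isolated mismatched feature points.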
8. An apparatus for object detection, comprising:
a feature point determining module, configured to acquire a current binocular image at a current time node, determine a current feature point in the current binocular image that matches a historical feature point in a historical binocular image, and determine three-dimensional coordinates corresponding to the current feature point, wherein the current feature point and the historical feature point are both located in a target area;
a motion parameter determining module, configured to determine a motion parameter of the current feature point according to the three-dimensional coordinates corresponding to the current feature point and the three-dimensional coordinates corresponding to the historical feature point, wherein the motion parameter comprises one or more of a motion direction, a motion distance, and a motion speed;
and an early-warning module, configured to generate an object early-warning message when the motion parameter matches a preset parameter.
9. A computer storage medium having stored thereon computer-executable instructions for performing the method of object detection of any of claims 1-7.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of object detection of any one of claims 1-7.
CN201910662162.4A 2019-07-22 2019-07-22 Object detection method and device, storage medium and electronic equipment Pending CN112257485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910662162.4A CN112257485A (en) 2019-07-22 2019-07-22 Object detection method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910662162.4A CN112257485A (en) 2019-07-22 2019-07-22 Object detection method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112257485A true CN112257485A (en) 2021-01-22

Family

ID=74224049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910662162.4A Pending CN112257485A (en) 2019-07-22 2019-07-22 Object detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112257485A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113501398A (en) * 2021-06-29 2021-10-15 江西晶浩光学有限公司 Control method, control device and storage medium
CN113501398B (en) * 2021-06-29 2022-08-23 江西晶浩光学有限公司 Control method, control device and storage medium

Similar Documents

Publication Publication Date Title
CN112639821B (en) Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
KR102266830B1 (en) Lane determination method, device and storage medium
EP3581890A2 (en) Method and device for positioning
US9083856B2 (en) Vehicle speed measurement method and system utilizing a single image capturing unit
JP2023523243A (en) Obstacle detection method and apparatus, computer device, and computer program
US20230245472A1 (en) Dynamic driving metric output generation using computer vision methods
CN105628951A (en) Method and device for measuring object speed
CN112753038B (en) Method and device for identifying lane change trend of vehicle
CN114969221A (en) Method for updating map and related equipment
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
CN111507204A (en) Method and device for detecting countdown signal lamp, electronic equipment and storage medium
CN115249355A (en) Object association method, device and computer-readable storage medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
CN112257485A (en) Object detection method and device, storage medium and electronic equipment
KR20190134303A (en) Apparatus and method for image recognition
KR102316818B1 (en) Method and apparatus of updating road network
Fangfang et al. Real-time lane detection for intelligent vehicles based on monocular vision
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN112818866A (en) Vehicle positioning method and device and electronic equipment
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN114842431A (en) Method, device and equipment for identifying road guardrail and storage medium
Klette et al. Advance in vision-based driver assistance
CN116912517B (en) Method and device for detecting camera view field boundary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination