CN117437386A - Target detection method and camera - Google Patents


Info

Publication number
CN117437386A
Authority
CN
China
Prior art keywords: fish-eye, sensor, target, image
Prior art date
Legal status
Pending
Application number
CN202210834005.9A
Other languages
Chinese (zh)
Inventor
房世光
沈玉姣
申力强
田仁富
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202210834005.9A
Publication of CN117437386A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a target detection method and a camera. The method includes: when a target object enters an overlapping area between a tele sensor and a fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor, determining second target detection information corresponding to the target object under the fish-eye sensor, and determining that the first target detection information and the second target detection information belong to the same target object; if the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor, and associating, by the fish-eye sensor, the first target detection information with the second target detection information; and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor, and associating, by the tele sensor, the second target detection information with the first target detection information. Through this technical solution, full coverage of the target detection blind area is realized, and the real-time position of the target object is obtained.

Description

Target detection method and camera
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a target detection method and a camera.
Background
Target detection, also called target extraction, is an image segmentation technology based on the geometric and statistical features of a target object; it integrates segmentation and recognition of the target object, and its accuracy and real-time performance are important capabilities of a detection system. With the development of computer technology and the wide application of computer vision principles, real-time detection of target objects by means of image processing technology is increasingly common, and has wide application value in intelligent traffic systems, intelligent management systems, and the like.
To achieve target detection, a camera usually needs to be deployed in a target scene (i.e., an application scene in which a target object needs to be detected). The camera acquires images of the target scene, and by using image processing technology, the real-time position of the target object can be analyzed based on the images, after which the target object can be managed.
However, since the field of view of the camera is limited, when the target object exceeds the field of view of the camera, the camera cannot acquire the image of the target object, and thus the real-time position of the target object cannot be analyzed.
Disclosure of Invention
The application provides a target detection method, which is applied to a camera for realizing full coverage of a target detection blind area, wherein the camera comprises a tele sensor and a fish-eye sensor, and the method comprises the following steps:
When a target object enters an overlapping area between the tele sensor and the fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor, determining second target detection information corresponding to the target object under the fish-eye sensor, and determining that the first target detection information and the second target detection information belong to the same target object;
if the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor so that the fish-eye sensor correlates the first target detection information with second target detection information under the fish-eye sensor;
and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.
The application provides a camera, which is used for realizing the full coverage of a target detection blind area and comprises a tele sensor, a fish-eye sensor and a processor; wherein:
The tele sensor is used for acquiring tele images and sending the tele images to the processor;
the fish-eye sensor is used for collecting fish-eye images and sending the fish-eye images to the processor;
based on the tele image and the fisheye image, the processor is configured to perform:
when a target object enters an overlapping area between the tele sensor and the fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor based on the tele image, determining second target detection information corresponding to the target object under the fish-eye sensor based on the fish-eye image, and determining that the first target detection information and the second target detection information belong to the same target object; if the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor so that the fish-eye sensor correlates the first target detection information with second target detection information under the fish-eye sensor; and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.
According to the above technical solution, in the embodiments of the present application, the camera may include a tele sensor and a fish-eye sensor. The tele sensor and the fish-eye sensor enlarge the field of view of the camera, so that the camera can acquire images over a larger field of view, thereby realizing full coverage of the target detection blind area, that is, blind-area-free full-scene target detection, and avoiding the situation where the real-time position of the target object cannot be obtained when the target object is at certain positions. When the target object enters the overlapping area between the tele sensor and the fish-eye sensor, the target detection information corresponding to the target object under the tele sensor can be associated with the target detection information corresponding to the target object under the fish-eye sensor, so that real-time detection is achieved as the target object moves.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments of the present application or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and that a person of ordinary skill in the art may also obtain other drawings from these drawings.
FIG. 1 is a flow chart of a method of target detection in one embodiment of the present application;
FIG. 2 is a schematic diagram of a camera in one embodiment of the present application;
FIGS. 3A-3D are schematic diagrams of preset expansion center point positions in one embodiment of the present application;
FIG. 4 is a schematic view of an effective area in a fisheye image in one embodiment of the present application;
FIG. 5A is a schematic diagram of a fisheye image in one embodiment of the present application;
FIGS. 5B-5D are schematic illustrations of fisheye expanded images in one embodiment of the present application;
FIGS. 6A and 6B are graphs showing the correspondence between half field angle and actual image height in one embodiment of the present application;
FIG. 6C shows the relationship between initial ideal image height and half field angle in one embodiment of the present application;
FIG. 6D shows the relationship between an initial ratio and a target ratio in one embodiment of the present application;
FIG. 6E is a plot of target ideal image height versus half field angle in one embodiment of the present application;
FIGS. 6F-6H are schematic illustrations of fisheye expanded images in one embodiment of the present application;
FIGS. 7A-7B are application scenario diagrams of target detection in one embodiment of the present application;
FIG. 8 is a schematic diagram of the structure of an object detection device in one embodiment of the present application;
Fig. 9 is a hardware configuration diagram of a video camera in one embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining."
The embodiment of the application provides a target detection method, which is applied to a camera for realizing full coverage of the target detection blind area. The camera includes a tele sensor and a fish-eye sensor; the number of tele sensors may be at least one, for example two or three, and the number of fish-eye sensors may be at least one, for example one, and the numbers of tele sensors and fish-eye sensors are not limited. Referring to FIG. 1, which shows a flow chart of the target detection method, the method may include:
Step 101, when a target object enters an overlapping area between a tele sensor and a fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor, determining second target detection information corresponding to the target object under the fish-eye sensor, and determining that the first target detection information and the second target detection information belong to the same target object, that is, that the first target detection information and the second target detection information are detection information of the same target object under different sensors.
Step 102, if the target object enters the overlapping area from the acquisition area of the tele sensor, the first target detection information is transmitted to the fisheye sensor, so that the fisheye sensor correlates the first target detection information under the tele sensor with the second target detection information under the fisheye sensor.
Step 103, if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, the second target detection information is transmitted to the tele sensor, so that the tele sensor correlates the second target detection information under the fish-eye sensor with the first target detection information under the tele sensor.
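To make the flow of steps 101-103 concrete, the following Python sketch shows one possible way to organize the handover; the Detection structure, the entered_from flag, and the matching callback are illustrative assumptions, not elements defined by the patent.

```python
# Illustrative sketch of the handover logic in steps 101-103.
# All names and fields are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Detection:
    sensor: str                 # "tele" or "fisheye"
    track_id: int               # unique identifier assigned by the sensor
    position: tuple             # (x, y) in that sensor's image coordinates
    timestamp: float            # acquisition time of the frame
    attributes: dict = field(default_factory=dict)

def handover(first: Detection, second: Detection, entered_from: str,
             same_physical_position) -> None:
    """Associate detections of the same object observed by both sensors.

    `same_physical_position` is a callable that checks whether the two
    positions map to the same physical location at the same time (step 101).
    """
    if not same_physical_position(first, second):
        return  # different objects: nothing to associate
    if entered_from == "tele":
        # Step 102: pass the tele-side info to the fisheye side.
        second.track_id = first.track_id
        second.attributes.update(first.attributes)
    elif entered_from == "fisheye":
        # Step 103: pass the fisheye-side info to the tele side.
        first.track_id = second.track_id
        first.attributes.update(second.attributes)
```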
In one possible implementation, the first target detection information includes a first position corresponding to the target object under the tele sensor, the second target detection information includes a second position corresponding to the target object under the fish-eye sensor, and determining that the first target detection information and the second target detection information belong to the same target object may include, but is not limited to: if the first position and the second position correspond to the same physical position at the same time, the first target detection information and the second target detection information are determined to belong to the same target object.
For example, a tele image may be acquired by the tele sensor, and a first position corresponding to the target object under the tele sensor may be determined based on the tele image; and a fisheye image may be acquired by the fish-eye sensor, and a second position corresponding to the target object under the fish-eye sensor may be determined based on the fisheye image. On this basis, the first position can be converted into a mapping position corresponding to the target object under the fish-eye sensor based on a calibration relationship between the tele sensor and the fish-eye sensor, and if the mapping position matches the second position, it is determined that the first position and the second position correspond to the same physical position. Alternatively, the second position may be converted into a mapping position corresponding to the target object under the tele sensor based on the calibration relationship between the tele sensor and the fish-eye sensor, and if the mapping position matches the first position, it is determined that the first position and the second position correspond to the same physical position.
Illustratively, determining a corresponding second position of the target object under the fish-eye sensor based on the fish-eye image may include, but is not limited to: dividing a first fish-eye region of interest from the fish-eye image based on a preset unfolding center point position, a preset unfolding width and a preset unfolding height, and unfolding the first fish-eye region of interest to obtain a first fish-eye unfolded image; the center pixel point of the first fisheye expansion image corresponds to a preset expansion center point position, and the preset expansion center point position is located in an acquisition area of the tele sensor. Then, a second position corresponding to the target object may be determined based on the first fisheye-expanded image.
Illustratively, expanding the first fisheye region of interest to obtain a first fisheye expanded image includes: for each initial pixel point in the first fisheye expanded image, determining projection point coordinates corresponding to the initial pixel point on the hemispherical surface of the fisheye image based on perspective projection; determining an azimuth angle and an incidence angle corresponding to the initial pixel point based on the projection point coordinates; determining a target pixel point corresponding to the initial pixel point in the first fisheye region of interest based on the equivalent focal length, the azimuth angle and the incidence angle; determining a pixel value of the initial pixel point based on the pixel value of the target pixel point; and generating the first fisheye expanded image based on the pixel values of all initial pixel points.
Exemplary ways of determining the equivalent focal length may include, but are not limited to: the equivalent focal length is determined based on the effective width of the first fisheye region of interest and the maximum field angle of the fisheye sensor.
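As a worked example of this choice of focal length, the following is a minimal sketch assuming equidistant projection (r = f·φ), so that the maximum radius W/2 is reached at the maximum half field angle FOV/2; the numeric values are illustrative only.

```python
import math

def equivalent_focal_length(effective_width_px: float, max_fov_deg: float) -> float:
    """Equivalent focal length of an equidistant-projection fisheye, in pixels.

    Assumes r = f * phi, with r reaching effective_width_px / 2 at the
    maximum half field angle max_fov_deg / 2, which gives f = W / FOV
    (FOV expressed in radians).
    """
    return effective_width_px / math.radians(max_fov_deg)

# Example: a 180-degree fisheye whose valid circle is 1080 pixels wide.
f = equivalent_focal_length(1080.0, 180.0)   # about 343.8 pixels
```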
In one possible implementation manner, if the target object enters the acquisition area of the fish-eye sensor, a fish-eye image can be acquired by the fish-eye sensor, and a corresponding third position of the target object under the fish-eye sensor is determined based on the fish-eye image; on this basis, determining a corresponding third position of the target object under the fish-eye sensor based on the fish-eye image may include, but is not limited to: a second fish-eye region of interest is divided from the fish-eye image, and the second fish-eye region of interest is unfolded to obtain a second fish-eye unfolded image; the center pixel point of the second fisheye expansion image corresponds to the center pixel point of the second fisheye interest region. Then, a third position corresponding to the target object may be determined based on the second fisheye-expanded image.
Illustratively, expanding the second fisheye region of interest to obtain the second fisheye expanded image may include, but is not limited to: querying a configured mapping table based on the half field angle corresponding to each initial pixel point in the second fisheye region of interest to obtain the actual image height corresponding to the initial pixel point; determining the target ideal image height corresponding to the initial pixel point based on the half field angle; and determining the target pixel point corresponding to the initial pixel point in the second fisheye expanded image based on the actual image height and the target ideal image height; wherein the actual image height may represent the distance between the initial pixel point and the center pixel point of the second fisheye region of interest, and the target ideal image height may represent the distance between the target pixel point and the center pixel point of the second fisheye expanded image; then determining the pixel value of the target pixel point based on the pixel value of the initial pixel point; and generating the second fisheye expanded image based on the pixel values of all the target pixel points.
Illustratively, determining the target ideal image height corresponding to the initial pixel point based on the half field angle may include, but is not limited to: determining an initial ideal image height corresponding to the initial pixel point based on the half field angle and the equivalent focal length; determining an initial ratio based on the initial ideal image height and the actual image height; modulating the initial ratio with a preset modulation ratio to obtain a target ratio; and determining the target ideal image height based on the target ratio and the actual image height.
According to the above technical solution, in the embodiments of the present application, the camera may include a tele sensor and a fish-eye sensor. The tele sensor and the fish-eye sensor enlarge the field of view of the camera, so that the camera can acquire images over a larger field of view, thereby realizing full coverage of the target detection blind area, that is, blind-area-free full-scene target detection, and avoiding the situation where the real-time position of the target object cannot be obtained when the target object is at certain positions. When the target object enters the overlapping area between the tele sensor and the fish-eye sensor, the target detection information corresponding to the target object under the tele sensor can be associated with the target detection information corresponding to the target object under the fish-eye sensor, so that real-time detection is achieved as the target object moves.
The above technical solutions of the embodiments of the present application are described below with reference to specific application scenarios.
The embodiment of the application provides a camera with full coverage of the target detection blind area, which combines a fisheye unfolding technology and a space matching technology, realizes blind-area-free full-scene target detection, and can realize cross-lens detection of a target track. The camera may include tele sensors and a fish-eye sensor, and the number of tele sensors may be one or more; the fish-eye sensor faces the area directly below the camera, and each tele sensor faces a remote area. For convenience of description, this embodiment takes two tele sensors as an example, which are referred to as tele sensor 1 and tele sensor 2. Of course, in practical applications, the number of tele sensors may be larger, such as 3, 4 tele sensors, etc.
Referring to fig. 2, which is a schematic view of a camera, the camera may include a fisheye sensor, a tele sensor 1, and a tele sensor 2. In this embodiment, the area in the target scene may be divided into an area a1, an area a2, an area a3, an area a4, and an area a5, and the target object (such as a target face, a target body, a target vehicle, etc., and the type of the target object is not limited) may be located in these areas.
The area a1 is an acquisition area of the tele sensor 1, and when the target object is in the area a1, the tele sensor 1 can acquire a tele image for the target object (for convenience of distinction, an image acquired by the tele sensor is referred to as a tele image). The area a2 is an overlapping area between the tele sensor 1 and the fisheye sensor, and when the target object is in the area a2, the tele sensor 1 can acquire a tele image for the target object and the fisheye sensor can acquire a fisheye image for the target object (for convenience of distinction, an image acquired by the fisheye sensor is referred to as a fisheye image). The region a3 is an acquisition region of the fisheye sensor, and the fisheye sensor can acquire a fisheye image for the target object when the target object is in the region a 3. The region a4 is an overlapping region between the tele sensor 2 and the fisheye sensor, and when the target object is in the region a4, the tele sensor 2 can acquire a tele image for the target object and the fisheye sensor can acquire a fisheye image for the target object. The region a5 is an acquisition region of the tele sensor 2, and when the target object is in the region a5, the tele sensor 2 can acquire a tele image for the target object.
In the above application scenario, the target detection method of the present embodiment may involve the following procedures:
First, the tele sensor 1 and the tele sensor 2 are installed.
In this embodiment, there is a need to make an overlapping region (i.e., region a 2) between the tele sensor 1 and the fish-eye sensor, that is, when the target object is in the overlapping region, the tele image and the fish-eye image can both detect the target object, and in order to make an overlapping region between the tele sensor 1 and the fish-eye sensor, there is a need to restrict the angular relationship between the tele sensor 1 and the fish-eye sensor, and the angular relationship is not limited as long as there is an overlapping region between the tele sensor 1 and the fish-eye sensor. Similarly, it is necessary to make an overlapping region (i.e., region a 4) between the tele sensor 2 and the fisheye sensor, that is, when the target object is in the overlapping region, the tele image and the fisheye image can both detect the target object, and in order to make an overlapping region between the tele sensor 2 and the fisheye sensor, it is necessary to restrict the angular relationship between the tele sensor 2 and the fisheye sensor, and the angular relationship is not limited as long as the overlapping region between the tele sensor 2 and the fisheye sensor is present.
Second, the processing mode when the target object is in the acquisition area (i.e., area a1) of the tele sensor 1.
When the target object is in the area a1, the tele sensor 1 may acquire tele images (possibly multiple frames of tele images) for the target object, and based on these tele images, the target detection information b1 corresponding to the target object under the tele sensor 1 may be analyzed. The target detection information b1 may include, but is not limited to, at least one of the following: a unique identifier assigned to the target object by the tele sensor 1 (denoted as identifier w1), attribute information of the target object (for example, when the target object is a human face, the attribute information may be a face feature; when the target object is a vehicle, the attribute information may be a license plate identifier; the attribute information is not limited), the position of the target object, and the time when the target object is at that position. Of course, the above are just a few examples, and the target detection information b1 is not limited thereto.
The unique identifier of the target object may be assigned to the target object by the tele sensor 1 when the target object is detected for the first time, and the unique identifier is not limited as long as the unique identifier is different from the unique identifier of other target objects. The unique identification of the target object does not change during the movement of the target object.
Regarding the attribute information of the target object, based on the tele image acquired by the tele sensor 1, the attribute information of the target object may be determined, for example, the tele image may be input to a neural network model, the attribute information of the target object may be output by the neural network model, or the attribute information of the target object may be obtained in other manners, which is not limited in the process of acquiring the attribute information in this embodiment.
For each frame of tele image acquired by the tele sensor 1, the position of the target object and the time when the target object is at that position can be analyzed based on the tele image; this process is not limited, and the acquisition time of the tele image indicates the time when the target object is at that position. Obviously, multiple tele images correspond to multiple positions, and these positions can form a track.
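A minimal sketch of how the per-frame positions and times might be accumulated into the target detection information b1; the dictionary keys and the example values are illustrative assumptions.

```python
# Sketch of accumulating the target detection information b1 for the tele
# sensor 1; the layout of b1 is an assumption made for illustration.
def update_detection_info(b1: dict, frame_position: tuple, frame_time: float) -> dict:
    """Append one per-frame observation to the detection information.

    b1 is assumed to look like:
      {"track_id": w1, "attributes": {...}, "trajectory": [(t, (x, y)), ...]}
    The acquisition time of the tele image is used as the time at which the
    target object was at that position; the accumulated positions form the track.
    """
    b1.setdefault("trajectory", []).append((frame_time, frame_position))
    return b1

b1 = {"track_id": 1, "attributes": {"plate": "ABC123"}}
update_detection_info(b1, (812, 440), 1692000000.04)
update_detection_info(b1, (820, 452), 1692000000.08)
```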
Third, the processing mode when the target object is in the overlapping area (i.e., area a2) between the tele sensor 1 and the fish-eye sensor, that is, when the target object moves from the area a1 to the area a2.
When the target object moves from the area a1 to the area a2, since the target object is always in the field of view of the tele sensor 1, the tele sensor 1 continuously collects tele images of the target object, and analyzes the target detection information b1 corresponding to the target object under the tele sensor 1 based on the tele images, wherein the target detection information b1 may include a unique identifier w1 of the target object, attribute information of the target object, a position and time when the target object is in the area a1, and a position and time when the target object is in the area a2.
When the target object moves from the area a1 to the area a2, since the target object enters the field of view of the fish-eye sensor, the fish-eye sensor may collect a fish-eye image for the target object and analyze the target detection information b2 corresponding to the target object under the fish-eye sensor based on the fish-eye image, and the target detection information b2 may include, but is not limited to, at least one of the following: the fish-eye sensor assigns a unique identifier (denoted as identifier w2, which is optional) to the target object, attribute information of the target object (which is optional), a position of the target object, and a time when the target object is at the position. Of course, the foregoing is merely a few examples and is not limiting in this regard.
Based on the position and time in the target detection information b1 and the position and time in the target detection information b2, if the position in the target detection information b1 and the position in the target detection information b2 correspond to the same physical position at the same time, it is determined that the target detection information b1 and the target detection information b2 belong to the same target object, that is, the target detection information b1 and the target detection information b2 are target detection information of the same target object.
For example, if the target detection information b1 includes the position P1 and the time point T1, it indicates that the target object is at the position P1 at the time point T1, and the target detection information b2 includes the position P2 and the time point T1, it indicates that the target object is at the position P2 at the time point T1. On this basis, if the position P1 and the position P2 correspond to the same physical position, it is determined that the target detection information b1 and the target detection information b2 belong to the same target object.
Illustratively, regarding how to determine whether the position P1 and the position P2 correspond to the same physical position, the following manner may be adopted: the calibration relation between the tele sensor 1 and the fish-eye sensor is calibrated in advance, the calibration relation represents the mapping relation between the coordinate system of the tele sensor 1 and the coordinate system of the fish-eye sensor, and also represents the pixel mapping relation between the tele image (i.e. the image under the coordinate system of the tele sensor 1) and the fish-eye image (i.e. the image under the coordinate system of the fish-eye sensor), and each pixel point in the tele image can be mapped to the fish-eye image or each pixel point in the fish-eye image can be mapped to the tele image.
When determining whether the position P1 and the position P2 correspond to the same physical position, the position P1 (i.e., the pixel point in the tele image) may be converted into a mapping position P3 (the mapping position P3 is the pixel point in the fisheye image) corresponding to the target object under the fisheye sensor based on the calibration relationship between the tele sensor 1 and the fisheye sensor, if the mapping position P3 and the position P2 (i.e., the pixel point in the fisheye image) are matched (if they are the same), the position P1 and the position P2 are determined to correspond to the same physical position, and if the mapping position P3 and the position P2 are not matched (if they are different), the position P1 and the position P2 are determined to not correspond to the same physical position. Alternatively, the position P2 may be converted into a mapped position P4 corresponding to the target object under the tele sensor 1 based on a calibration relationship between the tele sensor 1 and the fisheye sensor, if the mapped position P4 matches (e.g., is the same as) the position P1, it is determined that the position P1 and the position P2 correspond to the same physical position, and if the mapped position P4 does not match (e.g., is different from) the position P1, it is determined that the position P1 and the position P2 do not correspond to the same physical position.
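The position comparison described above can be sketched as follows, assuming for illustration that the calibration relationship between the two sensors is expressed as a 3x3 planar homography and that "matched" means the mapped position lies within a pixel tolerance; both assumptions go beyond what the patent specifies.

```python
import numpy as np

def map_tele_to_fisheye(p_tele: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map a tele-image point into fisheye-image coordinates.

    Assumes, purely for illustration, that the pre-calibrated relationship
    between the two sensors can be represented by a 3x3 planar homography H;
    the patent only requires some pixel-level mapping between the images.
    """
    x = np.array([p_tele[0], p_tele[1], 1.0])
    y = H @ x
    return y[:2] / y[2]

def same_physical_position(p1_tele, p2_fisheye, H, tol_px: float = 20.0) -> bool:
    """Check whether P1 (tele) and P2 (fisheye), observed at the same time,
    correspond to the same physical position by comparing P2 with the mapped
    position P3 = H(P1); the tolerance is an assumed threshold."""
    p3 = map_tele_to_fisheye(np.asarray(p1_tele, dtype=float), H)
    return float(np.linalg.norm(p3 - np.asarray(p2_fisheye, dtype=float))) <= tol_px
```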
For example, after determining that the target detection information b1 and the target detection information b2 belong to the same target object, the target detection information b1 may be transferred to the fish-eye sensor, so that the target detection information b1 and the target detection information b2 are associated, that is, the target detection information b1 and the target detection information b2 are fused.
When the target detection information b1 and the target detection information b2 are fused, the fish-eye sensor can update the unique identifier of the target object to the identifier w1 (i.e. the unique identifier allocated to the target object by the tele sensor 1), and since the unique identifiers of the target object under different sensors are the same identifier, the target detection information under different sensors can be fused together, so that the target object has the unique identifier on the whole path.
When the target detection information b1 and the target detection information b2 are fused, the fish-eye sensor may update the attribute information of the target object in the target detection information b1 into the target detection information b 2.
When the target detection information b1 and the target detection information b2 are fused, the fish-eye sensor can update the position and time in the target detection information b1 into the target detection information b2, so that the target detection information b2 can comprise the track when the target object is in the area a1 and the track when the target object is in the area a2, and the track fusion of the target object in the areas a1 and a2 is realized.
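A minimal sketch of the fusion of target detection information b1 into b2 described above; the dictionary layout is an illustrative assumption.

```python
# Sketch of fusing the target detection information b1 (from the tele sensor 1)
# into b2 (on the fisheye side); dictionary keys are illustrative assumptions.
def fuse_detection_info(b1: dict, b2: dict) -> dict:
    # Keep the identifier assigned by the tele sensor 1, so the target object
    # carries one unique identifier along the whole path.
    b2["track_id"] = b1["track_id"]
    # Carry over the attribute information (face feature, license plate, ...).
    b2.setdefault("attributes", {}).update(b1.get("attributes", {}))
    # Merge the trajectories so b2 contains the track in area a1 and area a2.
    merged = b1.get("trajectory", []) + b2.get("trajectory", [])
    b2["trajectory"] = sorted(merged, key=lambda item: item[0])  # sort by time
    return b2
```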
For example, although the imaging field angle of the fish-eye sensor (such as a fisheye lens) is wide (up to 180 degrees), its distortion is large, so the distortion of the fisheye image is large and the position of the target object cannot be accurately analyzed from it. Therefore, in order to realize target detection, position mapping and a good visual effect, the fisheye image acquired by the fish-eye sensor needs to be converted into a fisheye expanded image, and the position corresponding to the target object is determined on the basis of the fisheye expanded image; that is, when the target detection information b2 corresponding to the target object under the fish-eye sensor is determined, the target detection information b2 may be determined based on the fisheye expanded image. In summary, the mapping relationship between the fisheye image and the fisheye expanded image needs to be given, so as to convert the fisheye image into the fisheye expanded image, and then the position of the target object under the fish-eye sensor is determined based on the fisheye expanded image.
In one possible embodiment, in order to give the mapping relationship between the fisheye image and the fisheye-expanded image, the following steps may be adopted, which are, of course, merely examples and are not limited thereto.
Step S11, a fisheye region of interest is partitioned from the fisheye image based on a preset unfolding center point position (corresponding to the tele sensor 1), a preset unfolding width and a preset unfolding height.
For example, the preset deployment width and the preset deployment height may be empirically configured, which is not limited. In addition, the preset deployment center point position may be empirically configured as long as the preset deployment center point position is located within the acquisition area of the tele sensor 1. For example, the installation direction of the fish-eye sensor may be divided into a road longitudinal installation and a road transverse installation, as shown in fig. 3A, which is a schematic view of the road longitudinal installation, and the upper region is an overlapping region of the fish-eye sensor and the tele sensor 1, and thus, a certain position of the upper region may be configured as a preset deployment center point position corresponding to the tele sensor 1, as shown in fig. 3B, which is an example of a preset deployment center point position corresponding to the tele sensor 1, and a certain point of the lower region may be configured as a preset deployment center point position corresponding to the tele sensor 2. Referring to fig. 3C, which is a schematic view of road lateral installation, the left area is an overlapping area of the fisheye sensor and the tele sensor 1, and thus, a certain position of the left area may be configured as a preset expansion center point position corresponding to the tele sensor 1, referring to fig. 3D, which is an example of a preset expansion center point position corresponding to the tele sensor 1, and a certain point of the right area may be configured as a preset expansion center point position corresponding to the tele sensor 2.
For example, based on the known preset unfolding center point position, preset unfolding width and preset unfolding height, a fisheye region of interest (i.e., a partial region of the fisheye image) can be divided from the fisheye image: the center point of the fisheye region of interest is the preset unfolding center point position, its width is the preset unfolding width, and its height is the preset unfolding height.
In practice, the effective area is extracted from the fisheye image first; as shown in FIG. 4, the circular area in the middle is the effective area and the black area at the edge is the ineffective area, and the extraction mode is not limited. After the effective area is obtained, the fisheye region of interest is divided from the effective area based on the preset unfolding center point position, the preset unfolding width and the preset unfolding height.
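A minimal sketch of dividing the fisheye region of interest around the preset expansion center point, assuming the effective area has already been extracted and simply clipping the region at the image border:

```python
import numpy as np

def crop_fisheye_roi(fisheye: np.ndarray, center_xy: tuple,
                     roi_width: int, roi_height: int) -> np.ndarray:
    """Divide a fisheye region of interest around the preset expansion center
    point with the preset expansion width and height; the clipping behaviour
    at the image border is an assumption of this sketch."""
    h, w = fisheye.shape[:2]
    cx, cy = center_xy
    x0 = int(np.clip(cx - roi_width // 2, 0, w - 1))
    y0 = int(np.clip(cy - roi_height // 2, 0, h - 1))
    x1 = int(np.clip(x0 + roi_width, 0, w))
    y1 = int(np.clip(y0 + roi_height, 0, h))
    return fisheye[y0:y1, x0:x1]
```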
Step S12, for each initial pixel point in the fisheye expansion image, determining projection point coordinates corresponding to the initial pixel point on the hemispherical surface of the fisheye image based on perspective projection. And determining the azimuth angle and the incident angle corresponding to the initial pixel point based on the projection point coordinates. And determining a target pixel point corresponding to the initial pixel point in the fish-eye interest area based on the equivalent focal length, the azimuth angle and the incident angle. The determination mode of the equivalent focal length is as follows: the equivalent focal length is determined based on the effective width of the fisheye region of interest and the maximum field angle of the fisheye sensor.
The mapping relationship between the fisheye image and the fisheye expanded image is, in fact, the mapping relationship between pixel points in the fisheye expanded image and pixel points in the fisheye region of interest (i.e., the fisheye image). A pixel point in the fisheye expanded image is denoted as an initial pixel point, and a pixel point in the fisheye region of interest is denoted as a target pixel point. The mapping relationship between the initial pixel points in the fisheye expanded image and the target pixel points in the fisheye region of interest is described below in connection with a specific application scenario.
Referring to FIG. 5A, a schematic diagram of a fisheye image (e.g., a fisheye region of interest in the fisheye image) is shown; the expansion may be, for example, a PTZ-style expansion. Based on the equidistant-projection fisheye imaging model, x_f O y_f is the plane of the fisheye image, and p(x_f, y_f) is a point on the fisheye image, obtained by equidistant projection from the point P(x, y, z) on the hemispherical surface of the fisheye projection, whose radius is the equivalent focal length f. Taking the point O2 as the projection center, the point P is perspectively projected onto the plane tangent to the sphere at O1(0, 0, f), intersecting it at P1(x1, y1, z1); the point P1 is the point on the perspective projection plane corresponding to p(x_f, y_f). On this basis, under equidistant projection, the distance r of the point p from the center of the fisheye image and the incidence angle φ satisfy formula (4): r = f·φ. Since the distance r of the projection point from the center reaches its maximum when the incidence angle is maximum, the equivalent focal length f is calculated by formula (5): f = W / FOV, where W is the maximum effective width of the fisheye image, i.e., twice the fisheye radius (W = 2R), and FOV is the maximum field angle of the fisheye sensor, which can be measured in advance.
On this basis, the reverse mapping establishes the position correspondence from the fisheye expanded image to the fisheye image as follows:
(1) Any point (m, n) on the fisheye expanded image corresponds to a point P1(x1, y1, z1) on the projection plane tangent to the sphere at the point O1, and the correspondence between them is shown in formula (6).
(2) P(x, y, z) lies on the projection sphere and satisfies x² + y² + z² = f². Due to the similar-triangle relationship ΔO2PQ ~ ΔO2P1Q1 under perspective projection, the relationship x/x1 = y/y1 = z/z1 is satisfied; and since the projection point P1 lies on the plane tangent to the sphere, z1 = f. In summary, the correspondence of formula (7) can be obtained.
The coordinates (x, y, z) of the point P on the sphere can then be obtained by the conversion of formula (7), as shown in formula (8).
(3) Azimuth angle θ and incidence angle φ of fisheye imaging: according to the trigonometric relationship of the coordinates of the point P, the azimuth angle θ and the incidence angle φ of fisheye imaging can be obtained through formula (9), e.g., θ = atan2(y, x) and φ = arccos(z / f).
(4) When P is imaged onto the fisheye image as the point p(x_f, y_f) under equidistant projection, the distance r of the point p from the center of the fisheye image is given by formula (10), i.e., r = f·φ, and the coordinates of the projection point on the fisheye image are given by formula (11), i.e., (x_f, y_f) = (r·cos θ, r·sin θ) relative to the image center.
(5) By default, the center O1 of the fisheye expanded image corresponds to the center O of the fisheye image. If instead the point p(x_f, y_f) is to correspond to the center of the fisheye expanded image, the three-dimensional point P(x, y, z) on the sphere corresponding to p(x_f, y_f) is first obtained by forward mapping according to formulas (11), (10) and (9); this point is then rotated counterclockwise by ang_x about the Xc axis and counterclockwise by ang_y about the Yc axis to O1(0, 0, f). The rotation relationship is calculated according to formula (12), so that the reverse rotation can be applied in step (2).
Thereby, the correspondence between an arbitrary point (m, n) on the fisheye expanded image and a point (x_f, y_f) on the fisheye image is established, completing the mapping between the fisheye expanded image and the fisheye image (the fisheye region of interest).
In summary, for each initial pixel point in the fisheye expanded image, the coordinates of the projection point corresponding to the initial pixel point on the hemispherical surface of the fisheye image can be determined based on perspective projection. Referring to formulas (6) and (7), (m, n) represents any initial pixel point on the fisheye expanded image, (x1, y1, z1) represents the projection point coordinates corresponding to the initial pixel point on the hemispherical surface of the fisheye image, and f represents the equivalent focal length.
Further, the azimuth angle and the incidence angle corresponding to the initial pixel point (m, n) may be determined based on the projection point coordinates. Referring to formulas (8) and (9), the azimuth angle θ and the incidence angle φ corresponding to the initial pixel point (m, n) can be obtained based on the projection point coordinates (x1, y1, z1) and the equivalent focal length f.
Then, the target pixel point (x_f, y_f) corresponding to the initial pixel point (m, n) in the fisheye region of interest can be determined based on the equivalent focal length f, the azimuth angle θ and the incidence angle φ. Referring to formulas (10) and (11), the distance r of the point p from the center of the fisheye image is determined by the equivalent focal length f and the incidence angle φ, and then the target pixel point (x_f, y_f) corresponding to the initial pixel point (m, n) can be determined based on the distance r and the azimuth angle θ.
In addition, the equivalent focal length f may be determined based on the effective width of the fisheye region of interest and the maximum field angle of the fisheye sensor. As shown in formula (5), R is the fisheye radius, 2R is the effective width of the fisheye region of interest, and FOV is the maximum field angle of the fisheye sensor.
In summary, referring to formulas (4) to (10), the mapping relationship between the initial pixel points in the fisheye expanded image and the target pixel points in the fisheye region of interest can be obtained. Based on this mapping relationship, for each initial pixel point in the fisheye expanded image, the target pixel point corresponding to that initial pixel point can be found in the fisheye region of interest; conversely, for each target pixel point in the fisheye region of interest, the corresponding initial pixel point can also be found in the fisheye expanded image.
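As a concrete illustration of the reverse mapping described above, the following Python sketch builds, for every initial pixel point (m, n), the coordinates of the corresponding target pixel point on the fisheye image. It assumes NumPy, expresses the re-centering as a single axis-angle rotation taking the optical axis to the preset expansion center point (rather than the patent's explicit ang_x/ang_y rotation order), and maps into full fisheye image coordinates instead of a cropped region of interest; all function and parameter names are illustrative.

```python
import math
import numpy as np

def perspective_expand_maps(fisheye_shape, roi_center, out_size, f):
    """Build reverse-mapping tables from the fisheye expanded image back to the
    fisheye image: perspective projection onto a tangent plane plus equidistant
    fisheye projection, in the spirit of formulas (4)-(12).
    The single-rotation re-centering below is a simplifying assumption of this
    sketch, not necessarily the patent's exact rotation order.
    """
    fh, fw = fisheye_shape[:2]
    cx, cy = fw / 2.0, fh / 2.0            # center O of the fisheye image
    out_h, out_w = out_size

    # Direction (on the projection sphere) of the preset expansion center point.
    dx, dy = roi_center[0] - cx, roi_center[1] - cy
    phi_c = math.hypot(dx, dy) / f          # equidistant projection: r = f * phi
    theta_c = math.atan2(dy, dx)
    d = np.array([math.sin(phi_c) * math.cos(theta_c),
                  math.sin(phi_c) * math.sin(theta_c),
                  math.cos(phi_c)])

    # Rotation taking the optical axis (0, 0, 1) to the direction d (Rodrigues).
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(z, d)
    s, c = np.linalg.norm(v), float(np.dot(z, d))
    if s < 1e-12:
        R = np.eye(3)
    else:
        vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * ((1 - c) / (s * s))

    # Tangent-plane coordinates of every output pixel (formula (6)-style):
    # column and row index grids, centered, at distance f along the axis.
    n, m = np.meshgrid(np.arange(out_w), np.arange(out_h))
    rays = np.stack([n - out_w / 2.0, m - out_h / 2.0,
                     np.full_like(m, f, dtype=float)], axis=-1)
    rays = rays @ R.T                                        # re-center the view
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)     # onto the unit sphere

    # Incidence angle and azimuth (formula (9)-style), then back to the fisheye
    # image under equidistant projection (formulas (10) and (11)).
    phi = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    theta = np.arctan2(rays[..., 1], rays[..., 0])
    r = f * phi
    map_x = (cx + r * np.cos(theta)).astype(np.float32)
    map_y = (cy + r * np.sin(theta)).astype(np.float32)
    return map_x, map_y
```

The resulting map_x/map_y tables play the role of the mapping illustrated by Table 1 below and depend only on the configuration, so they can be computed once and reused for every frame.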
Step S13, for each initial pixel point in the fisheye expansion image, determining a target pixel point corresponding to the initial pixel point in the fisheye interest region, determining a pixel value of the initial pixel point based on the pixel value of the target pixel point (for example, taking the pixel value of the target pixel point as the pixel value of the initial pixel point), and generating the fisheye expansion image based on the pixel values of all the initial pixel points.
In one possible implementation, for each initial pixel point in the fisheye expanded image, step S12 may be used to determine the target pixel point in the fisheye region of interest corresponding to that initial pixel point. In another possible implementation, a mapping relationship between the initial pixel points in the fisheye expanded image and the target pixel points in the fisheye region of interest may be established in advance, in the manner shown in step S12, as illustrated in Table 1; then, for each initial pixel point in the fisheye expanded image, the target pixel point corresponding to that initial pixel point may be obtained by looking up the mapping relationship shown in Table 1.
TABLE 1
Initial pixel point in fisheye expanded image | Target pixel point in fisheye region of interest
P11 | P21
P12 | P22
For each initial pixel point in the fisheye-expanded image, after finding a target pixel point corresponding to the initial pixel point, the pixel value of the target pixel point can be used as the pixel value of the initial pixel point, and the pixel values of all the initial pixel points form the fisheye-expanded image, so that the fisheye-expanded image is obtained.
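Assuming OpenCV is available, applying a precomputed mapping of this kind (the role Table 1 plays) to each incoming fisheye frame can be done in a single remapping call; the file names below are placeholders, and using bilinear interpolation rather than directly copying the target pixel value is an assumption of this sketch.

```python
import cv2
import numpy as np

# map_x, map_y play the role of Table 1: for each initial pixel point of the
# fisheye expanded image they store the coordinates of its target pixel point
# in the fisheye image. They are assumed to have been precomputed, e.g. by the
# sketch following step S12 above.
fisheye = cv2.imread("fisheye.jpg")                          # hypothetical input path
map_x = np.load("map_x.npy").astype(np.float32)              # hypothetical precomputed tables
map_y = np.load("map_y.npy").astype(np.float32)
expanded = cv2.remap(fisheye, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("fisheye_expanded.jpg", expanded)
```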
In summary, the fisheye region of interest in the fisheye image may be expanded to obtain the fisheye expanded image. Referring to the description of the above-described process, since the center point of the fisheye region of interest is the preset expansion center point position, after the fisheye region of interest is expanded into the fisheye expansion image, the center pixel point of the fisheye expansion image also corresponds to the preset expansion center point position, and the preset expansion center point position is located in the acquisition region of the tele sensor 1, that is, the preset expansion center point position is located in the overlapping region between the tele sensor 1 and the fisheye sensor, and therefore, the center pixel point of the fisheye expansion image also corresponds to the overlapping region.
For example, taking road longitudinal installation as an example, referring to FIG. 5B, two preset expansion center point positions are shown. For the preset expansion center point position on the upper side, after the fisheye region of interest is divided based on that preset expansion center point position, the fisheye expanded image corresponding to the fisheye region of interest may be as shown in FIG. 5C; obviously, in FIG. 5C, the center pixel point of the fisheye expanded image corresponds to the preset expansion center point position.
For the preset unfolding center point position of the lower side, after the fish-eye region of interest is divided based on the preset unfolding center point position, a fish-eye unfolding image corresponding to the fish-eye region of interest may be shown in fig. 5D, and it is obvious that, in fig. 5D, a center pixel point of the fish-eye unfolding image corresponds to the preset unfolding center point position.
Referring to FIGS. 5C and 5D, compared with the fisheye image, the fisheye expanded image largely removes the distortion of the fisheye image, and a position mapping relationship is established between each initial pixel point in the fisheye expanded image and a target pixel point in the fisheye image, so target detection can be performed on the fisheye expanded image.
Fourth, the processing mode when the target object is in the acquisition area (i.e., area a3) of the fish-eye sensor.
When the target object moves from the area a2 to the area a3, since the target object is always in the field of view of the fish-eye sensor, the fish-eye sensor continuously collects the fish-eye images of the target object, and based on the fish-eye images, the target detection information b2 corresponding to the target object under the fish-eye sensor is analyzed, and the target detection information b2 may include the unique identifier w1 of the target object, attribute information of the target object, the position and time when the target object is in the area a2, and the position and time when the target object is in the area a 3. The target detection information b1 is already fused with the target detection information b2 when the target object is in the area a2, and thus, the target detection information b2 may further include a position and a time when the target object is in the area a 1.
In summary, the target detection information b2 may include a track when the target object is in the area a1, a track when the target object is in the area a2, and a track when the target object is in the area a3, so as to realize track fusion of the target object in the areas a1, a2, and a3, where the tracks correspond to the same unique identifier w1.
For example, after the target object moves to the area a3, a fisheye image may be acquired by the fish-eye sensor, and the corresponding position of the target object under the fish-eye sensor, that is, the position of the target object in the area a3, is determined based on the fisheye image. Further, in order to realize target detection, the fisheye image acquired by the fish-eye sensor is converted into a fisheye expanded image, and the corresponding position of the target object under the fish-eye sensor is determined based on the fisheye expanded image. Thus, the mapping relationship between the fisheye image and the fisheye expanded image can be given, the fisheye image is converted into the fisheye expanded image, and then the position of the target object under the fish-eye sensor is determined based on the fisheye expanded image. In order to give the mapping relationship between the fisheye image and the fisheye expanded image, the following steps may be adopted, which are, of course, merely examples and are not limited thereto.
Step S21, a fisheye region of interest (namely a main picture correction region) is divided from the fisheye image.
For example, the fisheye region of interest may be partitioned from the fisheye image based on the preset expansion center point position, the preset expansion width, and the preset expansion height. The preset unfolding width and the preset unfolding height can be configured according to experience, and are not limited. The preset unfolding center point position may be empirically configured as long as the preset unfolding center point position is located in the collection area of the fish-eye sensor, for example, the preset unfolding center point position may be located in the center area of the collection area of the fish-eye sensor.
On the basis of knowing the preset unfolding center point position, the preset unfolding width and the preset unfolding height, a fish-eye region of interest can be divided from the fish-eye image, the center point of the fish-eye region of interest is the preset unfolding center point position, the width of the fish-eye region of interest is the preset unfolding width, and the height of the fish-eye region of interest is the preset unfolding height. Obviously, if the preset unfolding center point is located in the center area of the acquisition area of the fish-eye sensor, the center point of the fish-eye interest area may be the center point of the fish-eye image.
Step S22, inquiring a configured mapping table according to a half field angle corresponding to each initial pixel point in the fish-eye interest area to obtain an actual image height corresponding to the initial pixel point; determining the target ideal height corresponding to the initial pixel point based on the half field angle; and determining a target pixel point corresponding to the initial pixel point in the fish-eye expansion image based on the actual image height and the target ideal image height. Wherein the actual image height may represent a distance between the initial pixel point and a center pixel point of the fish-eye region of interest, and the target ideal height may represent a distance between the target pixel point and a center pixel point of the fish-eye expanded image.
Illustratively, determining the target ideal image height corresponding to the initial pixel point based on the half field angle may include, but is not limited to: determining an initial ideal image height corresponding to the initial pixel point based on the half field angle and the equivalent focal length; determining an initial ratio based on the initial ideal image height and the actual image height; modulating the initial ratio with a preset modulation ratio to obtain a target ratio; and determining the target ideal image height based on the target ratio and the actual image height.
For example, most imaging systems satisfy the object-image relationship h = f·tan(y_angle), so that a straight line remains straight after imaging; however, when the field angle approaches or exceeds 90 degrees, h = f·tan(y_angle) is no longer meaningful. A fisheye lens based on equidistant projection is designed according to the object-image relationship h = f·y_angle, so its distortion is very large; in particular, the closer to the circumference of the image, the more compressed the content is. The goal is to perform distortion correction on the whole road picture as much as possible, straightening the road while reducing field-of-view loss. Based on this principle, this embodiment adopts a method of image-height modulation correction, where the image height refers to the distance from an imaging point to the center of the imaging plane. The process of performing distortion correction on the fisheye image to obtain the fisheye expanded image of the fisheye main picture may be as follows:
(1) An initial correspondence between the half field angle (unit: degrees) and the actual image height Real_height (unit: mm) is obtained from the optical measurement parameters of the fish-eye sensor; an example of this initial correspondence is shown in fig. 6A. Further, in order to improve accuracy, bilinear interpolation may be performed on the half field angle at a precision of 0.2 degrees to obtain the corresponding actual image height, thereby obtaining the target correspondence shown in fig. 6B.
For example, the optical measurement parameters of the fisheye sensor may include the initial correspondence, the initial correspondence may be obtained from the optical measurement parameters, and bilinear interpolation is then performed on the half field angle to obtain the target correspondence. In the subsequent embodiments, either the initial correspondence or the target correspondence may be used for processing; for convenience of description, processing with the target correspondence is taken as an example.
For example, for each initial pixel point in the fish-eye interest area, the half-field angle corresponding to the initial pixel point may be determined, and the half-field angles corresponding to different initial pixel points may be the same or different, which is not limited. After the half field angle corresponding to the initial pixel point is obtained, the target corresponding relation (shown in fig. 6B) can be queried through the half field angle, and the actual image height corresponding to the initial pixel point is obtained, wherein the actual image height represents the distance between the initial pixel point and the central pixel point of the fish-eye interest area.
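As a concrete illustration of building and querying the configured mapping table, the following Python sketch refines an initial correspondence to 0.2-degree precision by interpolation (for a one-dimensional table this amounts to linear interpolation); the numeric values are placeholders, not optical measurement data of an actual fish-eye sensor:

```python
import numpy as np

# Illustrative initial correspondence (fig. 6A style): half field angle (degrees)
# versus actual image height Real_height (mm). Placeholder values only.
init_half_angle  = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
init_real_height = np.array([0.00, 0.32, 0.63, 0.94, 1.24, 1.53, 1.80, 2.05, 2.28, 2.49])

# Refine to 0.2-degree precision to obtain the target correspondence (fig. 6B style).
target_half_angle  = np.arange(0.0, 90.2, 0.2)
target_real_height = np.interp(target_half_angle, init_half_angle, init_real_height)

def query_real_height(half_angle_deg: float) -> float:
    # Query the configured mapping table for the half field angle of one initial pixel point.
    return float(np.interp(half_angle_deg, target_half_angle, target_real_height))
```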
(2) In the fisheye equidistant projection model, the relationship between the image height h and the half field angle y_angle can be expressed as h = f × y_angle, where f is the equivalent focal length. Under equidistant projection, the closer to the circumference of the fisheye image, the more compression and distortion are introduced, giving the effect that the middle part of the fisheye image bulges outward. For each initial pixel point in the fish-eye region of interest, an initial ideal image height h corresponding to the initial pixel point may be determined based on the half field angle y_angle corresponding to the initial pixel point and the equivalent focal length f.
In one possible embodiment, referring to fig. 6C, which shows the relationship between the de-distorted initial ideal image height and the half field angle, when the half field angle y_angle is less than 90 degrees, the initial ideal image height h may be determined based on the half field angle y_angle and the equivalent focal length f corresponding to the initial pixel point, using the following formula: h = f × tan(y_angle). Further, when the half field angle y_angle is not smaller than 90 degrees, the initial ideal image height h may be determined by the following formula: h = f × tan(y_angle1) + k × (y_angle1), where y_angle1 is the value closest to 90 degrees in the half field angle table, e.g. 89.8 degrees in this example, and k is a preset increasing proportion, such as 0.2. That is, for y_angle exceeding 90 degrees, the initial ideal image height h is obtained by increasing the image height by a certain proportion.
(3) For each initial pixel point in the fish-eye region of interest, an initial ratio is determined based on the initial ideal image height corresponding to the initial pixel point and the actual image height corresponding to the initial pixel point. Equation (13) shows an example of determining the initial ratio; the initial ratio represents the ratio of the actual image height to the initial ideal image height.
scale1 = Real_height / Ideal_height    equation (13)
Wherein scale1 represents the initial ratio corresponding to the initial pixel point, Real_height represents the actual image height corresponding to the initial pixel point, and Ideal_height represents the initial ideal image height corresponding to the initial pixel point.
(4) And modulating the initial proportion by adopting a preset modulation proportion to obtain a target proportion.
For example, since the initial ideal image height tends to infinity as the half field angle y_angle approaches 90 degrees, the initial ratio scale1 drops toward 0 as y_angle approaches 90 degrees; therefore, the target ratio scale2 can be obtained by modulating the initial ratio scale1 with the preset modulation ratio k. Equation (14) shows an example of modulating the initial ratio scale1 to obtain the target ratio scale2.
scale2 = scale1 × k + (1 - k), k ∈ [0, 1]    equation (14)
Referring to fig. 6D, there is shown a relationship between the initial scale1 and the target scale2, the upper curve corresponds to the initial scale1, and the lower curve corresponds to the target scale2. In fig. 6D, the relationship between the initial scale1 and the target scale2 is exemplified by the preset modulation scale k being 0.5.
Wherein the preset modulation ratio k can be configured empirically: the closer the preset modulation ratio k is to 1, the closer scale2 is to scale1, the stronger the correction, and the larger the corrected (stretched) ideal image height.
(5) For each initial pixel point in the fish-eye interest area, determining a target ideal image height corresponding to the initial pixel point based on the target proportion and the actual image height corresponding to the initial pixel point, wherein the target ideal image height can represent the distance between the target pixel point and the central pixel point of the fish-eye expansion image.
The target ideal image height corresponding to the initial pixel point can then be determined from the target ratio and the actual image height, for example Ideal_height = Real_height / scale2, which follows from the definition of the ratios above; this is not limited. In this relation, Ideal_height represents the target ideal image height corresponding to the initial pixel point, Real_height represents the actual image height corresponding to the initial pixel point, and scale2 represents the target ratio.
Obviously, the target ideal image height is recalculated based on the modulated target ratio. When the preset modulation ratio k is 0.5, the result is shown in fig. 6E, which plots the relationship among the actual image height, the corrected target ideal image height and the half field angle y_angle: f × y_angle represents the actual image height and Ideal_height represents the modulated target ideal image height.
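Putting steps (2) to (5) together, the following Python sketch computes the target ideal image height for one initial pixel point. The handling of half field angles of 90 degrees or more follows one reading of the formula of this embodiment as literally written, and the constants (89.8 degrees, k = 0.2, k = 0.5) are merely the example values given above:

```python
import math

def target_ideal_height(half_angle_deg: float, real_height: float, f: float,
                        k_mod: float = 0.5, angle1_deg: float = 89.8,
                        k_inc: float = 0.2) -> float:
    # (2) initial ideal image height under the rectilinear relation h = f * tan(angle)
    if half_angle_deg < 90.0:
        ideal0 = f * math.tan(math.radians(half_angle_deg))
    else:
        # assumed reading of the formula for angles >= 90 degrees: increase the
        # image height at the last table entry (y_angle1) by a preset proportion k
        ideal0 = f * math.tan(math.radians(angle1_deg)) + k_inc * angle1_deg
    if ideal0 <= 0.0:
        return real_height                     # at the center there is nothing to stretch
    # (3) initial ratio, equation (13)
    scale1 = real_height / ideal0
    # (4) modulated ratio, equation (14)
    scale2 = scale1 * k_mod + (1.0 - k_mod)
    # (5) target ideal image height: stretch the actual image height by 1 / scale2
    return real_height / scale2
```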
(6) And determining a target pixel point corresponding to the initial pixel point in the fish-eye expanded image based on the actual image height and the target ideal image height corresponding to the initial pixel point for each initial pixel point in the fish-eye interest area.
For example, the actual image height represents the distance (e.g., the pixel distance) between the initial pixel point and the center pixel point of the fish-eye region of interest, and the target ideal image height represents the distance (e.g., the pixel distance) between the target pixel point and the center pixel point of the fish-eye expanded image. Therefore, the initial pixel point can be located in the fish-eye region of interest based on its distance from the center pixel point of the fish-eye region of interest, and the target pixel point can be located in the fish-eye expanded image based on its distance from the center pixel point of the fish-eye expanded image, so that a mapping relationship between target pixel points in the fish-eye expanded image and initial pixel points in the fish-eye region of interest can be obtained. Based on this mapping relationship, for each target pixel point in the fish-eye expanded image, the initial pixel point corresponding to that target pixel point can be found in the fish-eye region of interest; conversely, for each initial pixel point in the fish-eye region of interest, the corresponding target pixel point can be found in the fish-eye expanded image.
Step S23, for each target pixel point in the fisheye-expanded image, determining an initial pixel point corresponding to the target pixel point in the fisheye interest region, and determining a pixel value of the target pixel point based on the pixel value of the initial pixel point (for example, using the pixel value of the initial pixel point as the pixel value of the target pixel point), where the fisheye-expanded image can be generated based on the pixel values of all the target pixel points.
In one possible implementation, for each target pixel point in the fisheye deployment image, step S22 may be used to determine an initial pixel point in the fisheye region of interest corresponding to the target pixel point. In another possible implementation manner, a mapping relationship between a target pixel point in the fisheye-expanded image and an initial pixel point in the fisheye interest region may be established, and for each target pixel point in the fisheye-expanded image, the initial pixel point corresponding to the target pixel point may be obtained by querying the mapping relationship.
For each target pixel point in the fisheye-expanded image, after finding the initial pixel point corresponding to the target pixel point, the pixel value of the initial pixel point can be used as the pixel value of the target pixel point, and the pixel values of all the target pixel points form the fisheye-expanded image, so that the fisheye-expanded image is obtained.
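The following Python sketch strings steps S22 and S23 together: it builds a radius mapping (actual image height to target ideal image height) and then fills every target pixel of the fisheye expanded image from its corresponding initial pixel by nearest-neighbour lookup. It assumes the mapping table has been converted to pixel units, keeps the output the same size as the region of interest for brevity (in practice the expanded image can be allocated larger to retain the full field of view), and all names are illustrative rather than part of this application:

```python
import math
import numpy as np

def expand_fisheye_roi(roi: np.ndarray,
                       half_angle_tab: np.ndarray,   # half field angles in degrees, ascending
                       real_height_tab: np.ndarray,  # actual image heights in pixels, same length
                       f: float, k_mod: float = 0.5) -> np.ndarray:
    h, w = roi.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = math.hypot(cx, cy)

    # Step S22: forward radius mapping, actual image height -> target ideal image height.
    real_r = np.linspace(0.0, max_r, 2048)
    angles = np.interp(real_r, real_height_tab, half_angle_tab)        # half field angles (deg)
    ideal0 = f * np.tan(np.radians(np.minimum(angles, 89.8)))          # initial ideal image height
    safe_ideal0 = np.where(ideal0 > 0, ideal0, 1.0)
    scale1 = np.where(ideal0 > 0, real_r / safe_ideal0, 1.0)           # equation (13)
    scale2 = scale1 * k_mod + (1.0 - k_mod)                            # equation (14)
    ideal_r = real_r / scale2                                          # target ideal image height

    # Step S23: for each target pixel, invert the mapping (ideal -> actual) and
    # copy the pixel value of the corresponding initial pixel.
    out = np.zeros_like(roi)
    for y in range(h):
        for x in range(w):
            r_t = math.hypot(x - cx, y - cy)                 # distance to expanded-image center
            r_s = float(np.interp(r_t, ideal_r, real_r))     # corresponding actual image height
            ratio = (r_s / r_t) if r_t > 0 else 0.0
            si = int(round(cy + (y - cy) * ratio))
            sj = int(round(cx + (x - cx) * ratio))
            if 0 <= si < h and 0 <= sj < w:
                out[y, x] = roi[si, sj]                      # pixel value of the initial pixel
    return out
```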
In summary, the fisheye region of interest in the fisheye image may be expanded to obtain the fisheye expanded image. As described above, the center point of the fish-eye region of interest is the preset expansion center point position, and the preset expansion center point is located in the center region of the acquisition area of the fish-eye sensor. Therefore, after the fish-eye region of interest is expanded into the fish-eye expanded image, the center pixel point of the fish-eye expanded image also corresponds to the preset expansion center point position, that is, it is located in the center region of the acquisition area of the fish-eye sensor, and the center pixel point of the fish-eye expanded image corresponds to the center pixel point of the fish-eye region of interest.
For example, fig. 6F shows a schematic view of a fisheye image; after the fisheye image is expanded, a fisheye expanded image can be obtained, and fig. 6G shows a schematic view of the fisheye expanded image after distortion correction of the main picture. It is apparent that in fig. 6G the road in the main picture is corrected and straightened, while the fields of view at both ends are well preserved. In practical applications, only the road portion may be displayed, as shown in fig. 6H.
Fifth, the processing mode when the target object is in the overlapping area (i.e., area a4) between the tele sensor 2 and the fish-eye sensor, that is, when the target object moves from area a3 to area a4.
When the target object moves from the area a3 to the area a4, since the target object is always in the field of view of the fish-eye sensor, the fish-eye sensor continuously collects the fish-eye images of the target object, and based on the fish-eye images, the target detection information b2 corresponding to the target object under the fish-eye sensor is analyzed, and the target detection information b2 may include a unique identifier w1 of the target object, attribute information of the target object, a position and time when the target object is in the area a1, a position and time when the target object is in the area a2, a position and time when the target object is in the area a3, and a position and time when the target object is in the area a 4.
When the target object moves from the area a3 to the area a4, since the target object enters the field of view of the tele sensor 2, the tele sensor 2 may collect a tele image for the target object and analyze, based on the tele image, target detection information b3 corresponding to the target object under the tele sensor 2. The target detection information b3 may include, but is not limited to, at least one of the following: a unique identifier assigned to the target object by the tele sensor 2 (denoted as identifier w3, which is optional), attribute information of the target object (optional), the position of the target object, and the time when the target object is at the position; this is not limited.
Based on the position and time in the target detection information b2 and the position and time in the target detection information b3, if the position in the target detection information b2 and the position in the target detection information b3 correspond to the same physical position at the same time, it is determined that the target detection information b2 and the target detection information b3 belong to the same target object.
For example, if the target detection information b2 includes the position P5 and the time point T2, it indicates that the target object is at the position P5 at the time point T2, and the target detection information b3 includes the position P6 and the time point T2, it indicates that the target object is at the position P6 at the time point T2. On this basis, if the position P5 and the position P6 correspond to the same physical position, it is determined that the target detection information b2 and the target detection information b3 belong to the same target object.
Illustratively, regarding how to determine whether position P5 and position P6 correspond to the same physical location, the following may be employed: the calibration relation between the tele sensor 2 and the fish-eye sensor is calibrated in advance, the position P5 can be converted into a mapping position corresponding to the target object under the fish-eye sensor based on the calibration relation between the tele sensor 2 and the fish-eye sensor, if the mapping position is matched with the position P6, the position P5 and the position P6 are determined to correspond to the same physical position, and if the mapping position is not matched with the position P6, the position P5 and the position P6 are determined to not correspond to the same physical position. Alternatively, the position P6 may be converted into a mapped position corresponding to the target object under the tele sensor 2 based on a calibration relationship between the tele sensor 2 and the fish-eye sensor, and if the mapped position matches the position P5, it is determined that the position P5 and the position P6 correspond to the same physical position, and if the mapped position does not match the position P5, it is determined that the position P5 and the position P6 do not correspond to the same physical position.
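As an illustration of this matching, the following Python sketch maps a position under the tele sensor 2 into the coordinate system of the fish-eye sensor and compares it with the fish-eye position. It assumes the pre-calibrated relation can be expressed as a 3x3 homography matrix and uses a pixel tolerance for the match; both are assumptions for illustration and not limitations of this application:

```python
import numpy as np

def same_physical_position(pos_tele, pos_fisheye, H_tele_to_fisheye, tol_px=20.0):
    # Map the tele-sensor position into the fish-eye coordinate frame
    # (homogeneous coordinates), then compare with the fish-eye position.
    x, y = pos_tele
    v = H_tele_to_fisheye @ np.array([x, y, 1.0])
    mapped = v[:2] / v[2]                                  # mapping position under the fish-eye sensor
    return float(np.linalg.norm(mapped - np.asarray(pos_fisheye, dtype=float))) <= tol_px
```

The check in the other direction (mapping the fish-eye position into the coordinate system of the tele sensor 2) is symmetric, using the inverse of the calibration relation.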
For example, after determining that the target detection information b2 and the target detection information b3 belong to the same target object, the target detection information b2 may be transferred to the tele sensor 2, so that the target detection information b2 and the target detection information b3 are associated, that is, the target detection information b2 and the target detection information b3 are fused.
When the target detection information b2 and the target detection information b3 are fused, the tele sensor 2 may update the unique identifier of the target object to the identifier w1, may update the attribute information of the target object in the target detection information b2 to the target detection information b3, and may update the position and time in the target detection information b2 to the target detection information b3, so that the target detection information b3 may include a track when the target object is in the area a1, a track when the target object is in the area a2, a track when the target object is in the area a3, and a track when the target object is in the area a4, thereby realizing track fusion of the target object in each area.
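A minimal sketch of this fusion step is shown below; the dictionary keys ("id", "attributes", "track") are illustrative and not part of this application. The point is that the unique identifier w1 is kept, the attribute information is carried over, and the per-area tracks are concatenated:

```python
def fuse_detection_info(info_b2: dict, info_b3: dict) -> dict:
    # info_b2: detection info under the fish-eye sensor; info_b3: under the tele sensor 2.
    fused = dict(info_b3)
    fused["id"] = info_b2["id"]                                    # keep the unique identifier w1
    fused["attributes"] = {**info_b3.get("attributes", {}),
                           **info_b2.get("attributes", {})}        # carry attribute information over
    # each track entry is assumed to look like {"area": "a1", "position": (x, y), "time": t}
    fused["track"] = list(info_b2.get("track", [])) + list(info_b3.get("track", []))
    return fused
```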
For example, when the target object moves from area a3 to area a4, it is also necessary to convert the fisheye image acquired by the fisheye sensor into a fisheye expanded image and determine the position corresponding to the target object based on the fisheye expanded image; this can be done with reference to the third point, except that when the fisheye region of interest is divided from the fisheye image, the preset expansion center point position is the one corresponding to the tele sensor 2, not the one corresponding to the tele sensor 1. For example, as shown in fig. 3A, a certain position of the upper area is configured as the preset expansion center point position corresponding to the tele sensor 1 and a certain position of the lower area is configured as the preset expansion center point position corresponding to the tele sensor 2; as shown in fig. 3C, a certain position of the left area is configured as the preset expansion center point position corresponding to the tele sensor 1 and a certain position of the right area is configured as the preset expansion center point position corresponding to the tele sensor 2.
Sixth, the processing mode when the target object is in the acquisition area (i.e., area a5) of the tele sensor 2.
When the target object moves from the area a4 to the area a5, since the target object is always in the field of view of the tele sensor 2, the tele sensor 2 continuously collects tele images for the target object, and analyzes the target detection information b3 corresponding to the target object under the tele sensor 2 based on the tele images, wherein the target detection information b3 may include a unique identifier w1 of the target object, attribute information of the target object, a position and time of the target object in the area a1, a position and time of the target object in the area a2, a position and time of the target object in the area a3, a position and time of the target object in the area a4, and a position and time of the target object in the area a 5. In summary, the target detection information b3 may include the tracks when the target object is in each region, so as to achieve track fusion of each region, where the tracks correspond to the same unique identifier w1.
According to the above technical solution, in the embodiments of the present application, the field of view of the camera can be enlarged by the tele sensor and the fish-eye sensor, so that the camera can acquire images over a larger field of view, full coverage of the target detection blind area is achieved, that is, blind-area-free target detection of the full scene and of the whole road is realized, and the situation that the real-time position of the target object cannot be obtained at certain positions is avoided. When the target object enters the overlapping area between the tele sensor and the fish-eye sensor, the target detection information corresponding to the target object under the tele sensor and the target detection information corresponding to the target object under the fish-eye sensor can be associated, so that real-time detection is maintained while the target object moves. Real-time mapping of positions can be achieved simply by establishing the mapping relationship between the fisheye image and the fisheye expanded image, and cascading at T-shaped intersections can be considered to realize blind-area-free target detection in all directions.
The above object detection method of the embodiments of the present application may be applied, for example, to the following scenarios.
1. Electric vehicle face and license plate capture. When the license plate of the electric vehicle is at the rear and the face is on the other side of the vehicle, a single camera cannot capture both the face and the license plate. With the camera of the present application, the tele sensor 1 can capture the face of the target object, the tele sensor 2 can capture the license plate of the electric vehicle, and the tele sensor 1, the fish-eye sensor and the tele sensor 2 can associate the target object throughout, so that the license plate of the electric vehicle can be associated with the face; fig. 7A shows an example of this application scenario.
2. Parking space management. The tele sensor 1 and/or the tele sensor 2 are used for detecting the far parking area and recognizing vehicle license plates, and the fish-eye sensor is used for detecting the parking space area below and recognizing vehicle license plates; by associating the target objects of the fish-eye sensor and each tele sensor, the license plate information of a vehicle parked below can be obtained. Fig. 7B shows an example of this application scenario.
3. Target video relay. The tele sensor 1 and/or the tele sensor 2 are used for detecting the target object and recognizing its features. When the preset target information is recognized, the information can be uploaded to the platform; after the target object is recognized, relay detection of the target object is carried out among the tele sensors and the fish-eye sensor, while information continues to be uploaded to the platform. The uploaded information may include, but is not limited to, one of the following: the identifier of the preset target, the ID of the detected target, the channel of the detected target, the position in the picture where the detected target is located, and the like. Furthermore, the platform can display and dynamically switch the corresponding channel pictures according to the target information.
Based on the same application concept as the above method, an embodiment of the present application provides a target detection device, which is applied to a camera for implementing full coverage of a target detection blind area, where the camera includes a tele sensor and a fish-eye sensor, as shown in fig. 8, and is a schematic structural diagram of the device, and the device may include:
a determining module 81, configured to determine first target detection information corresponding to a target object under the tele sensor when the target object enters an overlapping region between the tele sensor and the fish-eye sensor, determine second target detection information corresponding to the target object under the fish-eye sensor, and determine that the first target detection information and the second target detection information belong to the same target object;
a processing module 82, configured to, if the target object enters the overlapping area from the acquisition area of the tele sensor, transmit the first target detection information to the fisheye sensor, so that the fisheye sensor correlates the first target detection information with second target detection information under the fisheye sensor; and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.
In a possible implementation manner, the first target detection information includes a first position corresponding to the target object under the tele sensor, the second target detection information includes a second position corresponding to the target object under the fish-eye sensor, and the determining module 81 is specifically configured to, when determining that the first target detection information and the second target detection information belong to the same target object:
and if the first position and the second position correspond to the same physical position at the same time, determining that the first target detection information and the second target detection information belong to the same target object.
In a possible implementation manner, the determining module 81 is further configured to:
collecting a tele image through the tele sensor, and determining a first position corresponding to the target object under the tele sensor based on the tele image; acquiring a fisheye image through the fisheye sensor, and determining a second position corresponding to the target object under the fisheye sensor based on the fisheye image;
based on the calibration relation between the tele sensor and the fish-eye sensor, converting the first position into a mapping position corresponding to the target object under the fish-eye sensor, and if the mapping position is matched with the second position, determining that the first position and the second position correspond to the same physical position.
In a possible implementation manner, the determining module 81 is specifically configured to, when determining, based on the fisheye image, the second position of the target object under the fisheye sensor:
dividing a first fish-eye region of interest from the fish-eye image based on a preset unfolding center point position, a preset unfolding width and a preset unfolding height, and unfolding the first fish-eye region of interest to obtain a first fish-eye unfolded image; the center pixel point of the first fisheye expansion image corresponds to the preset expansion center point position, and the preset expansion center point position is located in the acquisition area of the tele sensor;
and determining a second position corresponding to the target object based on the first fisheye-expanded image.
In a possible implementation manner, the determining module 81 expands the first fisheye interest region, and is specifically configured to:
for each initial pixel point in the first fisheye expanded image, determining projection point coordinates corresponding to the initial pixel point on the hemispherical surface of the fisheye image based on the equivalent focal length; determining an azimuth angle and an incident angle corresponding to the initial pixel point based on the projection point coordinates; and determining a target pixel point corresponding to the initial pixel point in the first fisheye region of interest based on the equivalent focal length, the azimuth angle and the incident angle;
Determining a pixel value of the initial pixel point based on the pixel value of the target pixel point;
and generating the first fish-eye expansion image based on pixel values of all initial pixel points.
Wherein, the determining module 81 is specifically configured to: the equivalent focal length is determined based on an effective width of the first fisheye region of interest and a maximum field angle of the fisheye sensor.
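One plausible reading of this determination, assuming the equidistant relation h = f × angle over the region of interest, is that half the effective width corresponds to half the maximum field angle. The following Python sketch reflects that assumption only; the application itself does not spell out the formula:

```python
import math

def equivalent_focal_length(effective_width_px: float, max_fov_deg: float) -> float:
    # Assumed reading: the radius of the effective area (W_eff / 2) is reached at
    # half the maximum field angle, so f = (W_eff / 2) / (max_fov / 2 in radians).
    return (effective_width_px / 2.0) / math.radians(max_fov_deg / 2.0)
```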
In a possible implementation manner, the determining module 81 is further configured to: if the target object enters the acquisition area of the fish-eye sensor, acquire a fish-eye image through the fish-eye sensor, and determine a corresponding third position of the target object under the fish-eye sensor based on the fish-eye image; when determining the corresponding third position of the target object under the fish-eye sensor based on the fish-eye image, the determining module 81 is specifically configured to:
a second fish-eye region of interest is divided from the fish-eye image, and the second fish-eye region of interest is unfolded to obtain a second fish-eye unfolded image; the center pixel point of the second fish-eye expansion image corresponds to the center pixel point of the second fish-eye region of interest;
and determining a third position corresponding to the target object based on the second fisheye-expanded image.
In a possible implementation manner, the determining module 81 expands the second fisheye interest region, and is specifically configured to:
inquiring a configured mapping table based on a half field angle corresponding to each initial pixel point in the second fish-eye interest area to obtain an actual image height corresponding to the initial pixel point;
determining the target ideal height corresponding to the initial pixel point based on the half field angle;
determining a target pixel point corresponding to the initial pixel point in the second fish-eye expansion image based on the actual image height and the target ideal image height; wherein the actual image height represents a distance between the initial pixel point and a center pixel point of the second fisheye interest region, and the target ideal height represents a distance between the target pixel point and a center pixel point of the second fisheye-expanded image;
determining a pixel value of the target pixel point based on the pixel value of the initial pixel point;
and generating the second fish-eye expansion image based on the pixel values of all the target pixel points.
In one possible implementation, the determining module 81 is specifically configured to, when determining, based on the half field angle, that the target corresponding to the initial pixel point is ideally high: determining an initial ideal height corresponding to the initial pixel point based on the half field angle and the equivalent focal length; determining an initial ratio based on the initial ideal image height and the actual image height; modulating the initial proportion by adopting a preset modulation proportion to obtain a target proportion; the target ideal height is determined based on the target proportion and the actual image height.
Based on the same application conception as the method, a camera is provided in the embodiment of the application, wherein the camera is used for realizing the full coverage of a target detection blind area and comprises a tele sensor, a fish-eye sensor and a processor; wherein: the long-focus sensor is used for acquiring long-focus images and sending the long-focus images to the processor; the fish-eye sensor is used for collecting fish-eye images and sending the fish-eye images to the processor; based on the tele image and the fisheye image, the processor is configured to perform:
when a target object enters an overlapping area between the tele sensor and the fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor based on the tele image, determining second target detection information corresponding to the target object under the fish-eye sensor based on the fish-eye image, and determining that the first target detection information and the second target detection information belong to the same target object; if the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor so that the fish-eye sensor correlates the first target detection information with second target detection information under the fish-eye sensor; and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.
Based on the same application concept as the above method, a camera is proposed in an embodiment of the present application, and as shown in fig. 9, the camera includes: a processor 91 and a machine-readable storage medium 92, the machine-readable storage medium 92 storing machine-executable instructions executable by the processor 91; the processor 91 is configured to execute machine executable instructions to implement the object detection method disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiments of the present application further provide a machine-readable storage medium, where a number of computer instructions are stored, where the computer instructions can implement the object detection method disclosed in the above examples of the present application when executed by a processor.
Wherein the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, or the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A target detection method, characterized by being applied to a camera for realizing full coverage of a target detection blind area, the camera including a tele sensor and a fish-eye sensor, the method comprising:
when a target object enters an overlapping area between the tele sensor and the fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor, determining second target detection information corresponding to the target object under the fish-eye sensor, and determining that the first target detection information and the second target detection information belong to the same target object;
If the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor so that the fish-eye sensor correlates the first target detection information with second target detection information under the fish-eye sensor;
and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.
2. The method of claim 1, wherein the first target detection information includes a first location of the target object corresponding under the tele sensor, the second target detection information includes a second location of the target object corresponding under the fish-eye sensor, and the determining that the first target detection information and the second target detection information belong to the same target object includes:
and if the first position and the second position correspond to the same physical position at the same time, determining that the first target detection information and the second target detection information belong to the same target object.
3. The method according to claim 2, wherein the method further comprises:
collecting a tele image through the tele sensor, and determining a first position corresponding to the target object under the tele sensor based on the tele image; acquiring a fisheye image through the fisheye sensor, and determining a second position corresponding to the target object under the fisheye sensor based on the fisheye image;
based on the calibration relation between the tele sensor and the fish-eye sensor, converting the first position into a mapping position corresponding to the target object under the fish-eye sensor, and if the mapping position is matched with the second position, determining that the first position and the second position correspond to the same physical position.
4. A method according to claim 3, wherein said determining a corresponding second position of the target object under the fish-eye sensor based on the fish-eye image comprises:
dividing a first fish-eye region of interest from the fish-eye image based on a preset unfolding center point position, a preset unfolding width and a preset unfolding height, and unfolding the first fish-eye region of interest to obtain a first fish-eye unfolded image; the center pixel point of the first fisheye expansion image corresponds to the preset expansion center point position, and the preset expansion center point position is located in the acquisition area of the tele sensor;
And determining a second position corresponding to the target object based on the first fisheye-expanded image.
5. The method of claim 4, wherein the step of determining the position of the first electrode is performed,
the expanding the first fish-eye interested area to obtain a first fish-eye expanded image comprises the following steps:
determining projection point coordinates corresponding to the initial pixel points on the hemispherical surface of the fisheye image based on perspective projection aiming at each initial pixel point in the first fisheye expansion image; determining an azimuth angle and an incident angle corresponding to the initial pixel point based on the projection point coordinates; determining a target pixel point corresponding to the initial pixel point in the first fisheye interest region based on the equivalent focal length, the azimuth angle and the incident angle;
determining a pixel value of the initial pixel point based on the pixel value of the target pixel point;
and generating the first fish-eye expansion image based on pixel values of all initial pixel points.
6. The method of claim 5, wherein the step of determining the position of the probe is performed,
the determining mode of the equivalent focal length comprises the following steps: the equivalent focal length is determined based on an effective width of the first fisheye region of interest and a maximum field angle of the fisheye sensor.
7. The method of claim 1, wherein if the target object enters the acquisition region of the fisheye sensor, acquiring a fisheye image by the fisheye sensor, and determining a corresponding third position of the target object under the fisheye sensor based on the fisheye image; the determining, based on the fisheye image, a corresponding third position of the target object under the fisheye sensor includes:
A second fish-eye region of interest is divided from the fish-eye image, and the second fish-eye region of interest is unfolded to obtain a second fish-eye unfolded image; the center pixel point of the second fish-eye expansion image corresponds to the center pixel point of the second fish-eye region of interest;
and determining a third position corresponding to the target object based on the second fisheye-expanded image.
8. The method of claim 7, wherein the step of determining the position of the probe is performed,
the expanding the second fish-eye region of interest to obtain a second fish-eye expanded image comprises:
inquiring a configured mapping table based on a half field angle corresponding to each initial pixel point in the second fish-eye interest area to obtain an actual image height corresponding to the initial pixel point;
determining the target ideal height corresponding to the initial pixel point based on the half field angle;
determining a target pixel point corresponding to the initial pixel point in the second fish-eye expansion image based on the actual image height and the target ideal image height; wherein the actual image height represents a distance between the initial pixel point and a center pixel point of the second fisheye interest region, and the target ideal height represents a distance between the target pixel point and a center pixel point of the second fisheye-expanded image;
Determining a pixel value of the target pixel point based on the pixel value of the initial pixel point;
and generating the second fish-eye expansion image based on the pixel values of all the target pixel points.
9. The method of claim 8, wherein the step of determining the position of the first electrode is performed,
the determining, based on the half field angle, the target ideal height corresponding to the initial pixel point includes:
determining an initial ideal height corresponding to the initial pixel point based on the half field angle and the equivalent focal length;
determining an initial ratio based on the initial ideal image height and the actual image height;
modulating the initial proportion by adopting a preset modulation proportion to obtain a target proportion;
the target ideal height is determined based on the target proportion and the actual image height.
10. The camera is used for realizing full coverage of a target detection blind area, and comprises a tele sensor, a fish-eye sensor and a processor; wherein:
the long-focus sensor is used for acquiring long-focus images and sending the long-focus images to the processor;
the fish-eye sensor is used for collecting fish-eye images and sending the fish-eye images to the processor;
based on the tele image and the fisheye image, the processor is configured to perform:
When a target object enters an overlapping area between the tele sensor and the fish-eye sensor, determining first target detection information corresponding to the target object under the tele sensor based on the tele image, determining second target detection information corresponding to the target object under the fish-eye sensor based on the fish-eye image, and determining that the first target detection information and the second target detection information belong to the same target object; if the target object enters the overlapping area from the acquisition area of the tele sensor, transmitting the first target detection information to the fish-eye sensor so that the fish-eye sensor correlates the first target detection information with second target detection information under the fish-eye sensor; and if the target object enters the overlapping area from the acquisition area of the fish-eye sensor, transmitting the second target detection information to the tele sensor so that the tele sensor correlates the second target detection information with the first target detection information under the tele sensor.