CN112600994B - Object detection device, method, storage medium, and electronic apparatus - Google Patents


Info

Publication number
CN112600994B
CN112600994B CN202011401996.9A
Authority
CN
China
Prior art keywords
pixel
microlenses
detection region
lens
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011401996.9A
Other languages
Chinese (zh)
Other versions
CN112600994A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd
Priority to CN202011401996.9A
Publication of CN112600994A
Priority to PCT/CN2021/122452 (WO2022116676A1)
Application granted
Publication of CN112600994B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]

Abstract

The present disclosure relates to an object detection apparatus, method, storage medium, and electronic device. The apparatus includes a lens, an image sensor, and a processor. The lens includes either a plurality of microlenses of the same area and shape, or multiple types of microlenses, where microlenses of different types differ in at least one of area and shape. The image sensor is configured to acquire first pixel information corresponding to a current detection region among at least one preset detection region and to send the first pixel information to the processor, and the processor is configured to determine a first object characteristic of an object to be detected according to the first pixel information. Because the first object characteristic is determined from the first pixel information corresponding to the current detection region, the object to be detected can be detected accurately at low power consumption; at the same time, the object detection apparatus has a simple structure, which reduces its cost, weight, and size.

Description

Object detection device, method, storage medium, and electronic apparatus
Technical Field
The present disclosure relates to the field of object detection technologies, and in particular, to an object detection device, an object detection method, a storage medium, and an electronic device.
Background
With the continuous progress of science and technology and the continuous development of robotics, various types of robots are widely used in many fields. In some application scenarios, a robot is generally required to provide functions such as navigation, obstacle avoidance, and capture (for example, an intelligent sweeping robot), which requires the robot to be able to detect objects in the surrounding environment. In the related art, sensing devices such as ultrasonic, infrared, and lidar sensors mainly serve as the robot's navigation and obstacle-avoidance devices, through which object detection is realized. However, each of these sensing devices has drawbacks. Ultrasonic and infrared sensing devices are relatively inexpensive, but they can only detect objects at close range and fail frequently in complex situations. Lidar offers high accuracy and a long detection distance, but it is expensive, consumes considerable power because of its active scanning, and is relatively large and heavy, making it unsuitable for most robots.
Disclosure of Invention
To solve the problems in the related art, the present disclosure provides an object detection apparatus, a method, a storage medium, and an electronic device.
In order to achieve the above object, according to a first aspect of embodiments of the present disclosure, there is provided an object detection apparatus, the apparatus including a lens, an image sensor, and a processor, the lens including a plurality of microlenses of the same area and shape, or the lens including multiple types of microlenses, where microlenses of different types differ in at least one of area and shape;
the image sensor is used for acquiring first pixel information corresponding to a current detection area in at least one preset detection area and sending the first pixel information to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection region of the microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection region;
the processor is used for determining a first object characteristic of the object to be detected according to the first pixel information.
Optionally, in a case that the lens includes multiple types of microlenses, and the multiple types of microlenses are a first microlens and a second microlens, the target pixel point includes a first pixel point and/or a second pixel point, where the first pixel point is a pixel point included in a projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in a projection area of the second microlens on the image sensor;
the processor is further configured to determine a target detection area other than the current detection area from at least one preset detection area when the first object characteristic satisfies an area switching condition corresponding to the current detection area;
the processor is further configured to determine a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is pixel point information acquired by the target pixel point corresponding to the target detection area.
Optionally, the preset detection region includes a large lens detection region, a small lens detection region and a full lens detection region, the target pixel point corresponding to the large lens detection region includes the first pixel point, the target pixel point corresponding to the small lens detection region includes the second pixel point, and the full lens detection region includes the large lens detection region and the small lens detection region.
Optionally, the first and second microlenses are polygonal in shape.
Optionally, under the condition that the lens includes a plurality of microlenses with the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in a plurality of projection regions of the microlenses on the image sensor, and the first pixel information is pixel point information acquired by the third pixel point.
According to a second aspect of the embodiments of the present disclosure, there is provided an object detection method applied to an object detection apparatus, the apparatus including a lens, an image sensor, and a processor, the lens including a plurality of microlenses of the same area and shape, or the lens including multiple types of microlenses, where microlenses of different types differ in at least one of area and shape; the method includes the following steps:
acquiring first pixel information corresponding to a current detection area in at least one preset detection area through the image sensor, and sending the first pixel information to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection region of the microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection region;
and determining a first object characteristic of the object to be detected according to the first pixel information through the processor.
Optionally, in a case that the lens includes multiple types of microlenses, and the multiple types of microlenses are a first microlens and a second microlens, the target pixel point includes a first pixel point and/or a second pixel point, where the first pixel point is a pixel point included in a projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in a projection area of the second microlens on the image sensor, the method further includes:
determining, by the processor, a target detection area other than the current detection area from at least one preset detection area under the condition that the first object characteristic satisfies an area switching condition corresponding to the current detection area;
determining, by the processor, a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection region, where the second pixel information is pixel point information acquired by the target pixel point corresponding to the target detection region.
Optionally, the preset detection region includes a large lens detection region, a small lens detection region and a full lens detection region, the target pixel point corresponding to the large lens detection region includes the first pixel point, the target pixel point corresponding to the small lens detection region includes the second pixel point, and the full lens detection region includes the large lens detection region and the small lens detection region.
Optionally, the first and second microlenses are polygonal in shape.
Optionally, under the condition that the lens includes a plurality of microlenses with the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in a plurality of projection regions of the microlenses on the image sensor, and the first pixel information is pixel point information acquired by the third pixel point.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the object detection method provided by the second aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the object detection method provided by the second aspect.
According to the technical scheme, the object detection apparatus includes a lens, an image sensor, and a processor. The lens includes either a plurality of microlenses of the same area and shape, or multiple types of microlenses, where microlenses of different types differ in at least one of area and shape. The image sensor is configured to obtain first pixel information corresponding to a current detection region among at least one preset detection region and to send the first pixel information to the processor. Each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are the pixel points included in the projection areas of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection region. The processor is configured to determine the first object characteristic of the object to be detected according to the first pixel information. Because the first object characteristic is determined from the first pixel information corresponding to the current detection region, the object to be detected can be detected accurately at low power consumption; at the same time, the object detection apparatus has a simple structure, which reduces its cost, weight, and size.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, but do not constitute a limitation of the disclosure. In the drawings:
FIG. 1 is a block diagram illustrating an object detection device in accordance with an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a distribution of first and second microlenses, according to an exemplary embodiment;
FIG. 3 is a schematic illustration of a distribution of microlenses shown in accordance with an exemplary embodiment;
FIG. 4 is a schematic illustration of a distribution of microlenses shown in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating a method of object detection according to an exemplary embodiment;
FIG. 6 is a flow chart illustrating another method of object detection according to an exemplary embodiment;
FIG. 7 is a block diagram of an electronic device provided in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Before introducing the object detection apparatus, method, storage medium, and electronic device provided by the present disclosure, an application scenario related to the various embodiments of the present disclosure is first introduced. In this scenario, an object detection apparatus is used to detect an object to be detected in the surrounding environment. The object detection apparatus may be disposed on a terminal device and may include a lens, an image sensor, and a processor. The image sensor is located on the side corresponding to the image-side surface of the lens and has an imaging surface facing that surface; the imaging surface consists of a plurality of pixel points, and the lens and the image sensor may be fully attached together or separated by a certain distance. The lens may be a planar lens or a curved lens, and the image sensor may be a CMOS (Complementary Metal-Oxide-Semiconductor) sensor, a CCD (Charge-Coupled Device) sensor, or any other photosensitive sensor; the present disclosure does not specifically limit this. The terminal device may be, for example, a smart robot, a smart phone, a tablet computer, a smart watch, or a smart bracelet.
FIG. 1 is a block diagram illustrating an object detection apparatus according to an exemplary embodiment. As shown in fig. 1, the apparatus 100 includes a lens 101, an image sensor 102, and a processor 103. The lens 101 includes a plurality of microlenses of the same area and shape, or the lens includes multiple types of microlenses, where microlenses of different types differ in at least one of area and shape.
The image sensor 102 is configured to acquire first pixel information corresponding to a current detection area in at least one preset detection area, and send the first pixel information to the processor 103.
Each preset detection area is an area formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection area of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection area.
For example, to avoid the drawbacks of object detection with sensing devices such as ultrasonic, infrared, and lidar sensors, an object detection apparatus 100 may be constructed from a lens 101, an image sensor 102, and a processor 103 to perform object detection. Specifically, the lens 101 may first be divided into a plurality of microlenses of the same area and shape arranged in a grid pattern, or the lens 101 may be divided into multiple types of microlenses, each type including a plurality of microlenses. Microlenses of different types may share the same area but differ in shape, or share the same shape but differ in area; that is, they differ in at least one of area and shape. The lens 101 may be divided, for example, by forming the grid of microlenses (or of multiple types of microlenses) on the lens by photolithography (or another technique), or by forming the grid of microlenses on the lens with a layer of nano-film.
When the lens 101 includes a plurality of microlenses of the same area and shape, each microlens corresponds to one or more pixel points on the image sensor 102, and the pixel points corresponding to each microlens are those included in the projection area of that microlens on the image sensor 102. When the lens 101 includes multiple types of microlenses, each type of microlens corresponds to one type of pixel point on the image sensor 102, namely the pixel points included in the projection areas of that type of microlens on the image sensor 102. When an object to be detected is captured by a certain microlens, an image is formed on the pixel points corresponding to that microlens.
Then, when the lens 101 includes a plurality of microlenses of the same area and shape, the imaging surface of the entire image sensor can be used as one preset detection region; that is, the target pixel points corresponding to the preset detection region include only one type of pixel point. When the lens 101 includes multiple types of microlenses, the imaging surface may be divided into multiple preset detection regions according to the types of pixel points, each preset detection region being a region formed by target pixel points, with each preset detection region corresponding to different target pixel points and a different number of them. The processor 103 then selects one preset detection region from the at least one preset detection region as the current detection region; the image sensor 102 acquires the first pixel information collected by the target pixel points corresponding to the current detection region and sends it to the processor 103. The first pixel information may include the pixel value of each target pixel point corresponding to the current detection region.
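The acquisition flow just described (select a current detection region, read only its target pixel points, send the result to the processor) can be sketched as follows. This sketch is not part of the patent disclosure: the sensor size, the region names, and the `acquire_first_pixel_info` helper are illustrative assumptions, with each preset detection region modeled as a set of pixel coordinates.

```python
# Illustrative sketch (not part of the patent disclosure): each preset
# detection region is modeled as the set of (row, col) target pixel points
# on the image sensor. The region names below are assumptions.
SENSOR_ROWS, SENSOR_COLS = 12, 12

def region_pixels(row_range, col_range):
    """Target pixel points of a preset detection region as a coordinate set."""
    return {(r, c) for r in row_range for c in col_range}

preset_regions = {
    "full": region_pixels(range(SENSOR_ROWS), range(SENSOR_COLS)),
    "left_half": region_pixels(range(SENSOR_ROWS), range(SENSOR_COLS // 2)),
}

def acquire_first_pixel_info(frame, current_region):
    """Collect pixel values only from the target pixel points of the
    current detection region, i.e. the 'first pixel information'."""
    return {p: frame[p[0]][p[1]] for p in preset_regions[current_region]}

# Stand-in sensor readout: pixel value = row * cols + col.
frame = [[r * SENSOR_COLS + c for c in range(SENSOR_COLS)]
         for r in range(SENSOR_ROWS)]
info = acquire_first_pixel_info(frame, "left_half")
print(len(info))  # 72 target pixel points (12 rows x 6 columns)
```

Reading only the current region's target pixel points is what keeps the data volume, and hence the power consumption, proportional to the region selected.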
And the processor 103 is used for determining a first object characteristic of the object to be detected according to the first pixel information.
For example, after acquiring the first pixel information, the processor 103 may calculate a first object feature of the object to be detected according to the first pixel information. The first object characteristic may include a first distance of the object to be detected from the object detection device, and a first size and a first moving speed of the object to be detected. The specific implementation manner of calculating the first object feature according to the first pixel information may refer to the manner described in the related art, and details are not repeated here.
In summary, the object detection apparatus in the present disclosure includes a lens, an image sensor, and a processor. The lens includes either a plurality of microlenses of the same area and shape, or multiple types of microlenses, where microlenses of different types differ in at least one of area and shape. The image sensor obtains first pixel information corresponding to a current detection region among at least one preset detection region and sends it to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are the pixel points included in the projection areas of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection region. The processor determines the first object characteristic of the object to be detected according to the first pixel information. Because the first object characteristic is determined from the first pixel information corresponding to the current detection region, the object to be detected can be detected accurately at low power consumption; at the same time, the apparatus has a simple structure, which reduces its cost, weight, and size.
Optionally, when the lens 101 includes multiple types of microlenses, and the multiple types are first microlenses and second microlenses, the target pixel points include first pixel points and/or second pixel points, where a first pixel point is a pixel point included in the projection area of a first microlens on the image sensor, and a second pixel point is a pixel point included in the projection area of a second microlens on the image sensor.
The processor 103 is further configured to determine a target detection area other than the current detection area from at least one preset detection area when the first object characteristic satisfies an area switching condition corresponding to the current detection area.
In one scenario, when the lens 101 includes two types of microlenses (first microlenses and second microlenses), the imaging surface of the image sensor 102 may be divided into three preset detection regions. For example, the preset detection regions may include a large lens detection region, a small lens detection region, and a full lens detection region: the target pixel points corresponding to the large lens detection region include the first pixel points, the target pixel points corresponding to the small lens detection region include the second pixel points, and the full lens detection region contains both the large lens detection region and the small lens detection region; that is, the target pixel points corresponding to the full lens detection region include both the first and the second pixel points. The first and second microlenses may be polygonal in shape. For example, when the first and second microlenses are both square, the lens 101 may be divided into a plurality of first microlenses and a plurality of second microlenses as shown in fig. 2, where the smallest square grid cell in fig. 2 corresponds to one pixel point, a 6 × 6 square outlined by a dashed line is a first microlens, and a 3 × 3 square outlined by a dashed line is a second microlens.
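For a Fig. 2 style layout, the three preset detection regions can be sketched as sets of pixel points. The tiling below (one 6 × 6 first microlens beside four 3 × 3 second microlenses) is an illustrative assumption, not the patent's actual layout:

```python
# Illustrative sketch (not part of the patent disclosure): a first (large)
# microlens projects onto a 6x6 pixel block, a second (small) microlens
# onto a 3x3 pixel block. The lens positions below are assumptions.

def lens_pixels(top, left, size):
    """Pixel points covered by one microlens projection on the sensor."""
    return {(r, c) for r in range(top, top + size)
                   for c in range(left, left + size)}

# Hypothetical layout: one large lens at (0, 0), and four small lenses
# tiling the 6x6 block immediately to its right.
large_lens_region = lens_pixels(0, 0, 6)
small_lens_region = set()
for top in (0, 3):
    for left in (6, 9):
        small_lens_region |= lens_pixels(top, left, 3)

# The full lens detection region contains both kinds of target pixel points.
full_lens_region = large_lens_region | small_lens_region

print(len(large_lens_region), len(small_lens_region), len(full_lens_region))
# 36 36 72
```

The set union mirrors the statement above that the full lens detection region contains both the large and the small lens detection regions.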
When the target pixel points corresponding to different preset detection regions are used to detect the object to be detected, the number of target pixel points used differs, the data volume of the pixel point information they collect differs, and therefore the resolving power and the power consumption of detection also differ. The more target pixel points a preset detection region corresponds to, the higher the resolving power for detecting the object to be detected, and the higher the power consumption. The resolving power reflects the detection performance: the higher the resolving power, the better the detection performance, that is, the higher the detection accuracy. Taking the case where the preset detection regions include a large lens detection region, a small lens detection region, and a full lens detection region, and where the lens contains more first microlenses than second microlenses, the resolving powers are ordered as: full lens detection region > small lens detection region > large lens detection region; and, since power consumption grows with the number of target pixel points, the power consumptions are ordered in the same way: full lens detection region > small lens detection region > large lens detection region.
Therefore, the area switching condition corresponding to each preset detection region may be preset in the processor 103, so that when the processor 103 detects the object to be detected through the target pixel points corresponding to a preset detection region, it can determine whether the resolving power of that region is insufficient or excessive. The area switching condition corresponding to each preset detection region may be set according to the range of distances between the object to be detected and the object detection apparatus that the region can detect, as well as the detectable size range and moving speed range of the object to be detected. When the first object characteristic satisfies the area switching condition corresponding to the current detection region, the resolving power of the target pixel points corresponding to the current detection region for detecting the object to be detected is insufficient or excessive, and the processor 103 needs to switch the current detection region. When it determines that this resolving power is insufficient, the processor 103 may select a preset detection region with higher resolving power (i.e., one with more target pixel points) from the plurality of preset detection regions as the target detection region for detecting the object to be detected.
When it determines that the resolving power of the target pixel points corresponding to the current detection region for detecting the object to be detected is excessive, the processor 103 may select a preset detection region with lower resolving power from the plurality of preset detection regions as the target detection region, that is, one with fewer target pixel points, thereby reducing the power consumption of detecting the object to be detected. By switching the current detection region to the target detection region, the power consumption of object detection can be reduced while the object detection performance (detection accuracy) is ensured, so as to meet the requirements of different scenarios.
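The switching policy described in the preceding two paragraphs can be sketched as follows. This is an assumption-laden illustration, not the patent's implementation: the region names and per-region pixel counts are hypothetical.

```python
# Sketch of the region-switching policy (illustrative, not the patent's
# implementation): when resolving power is insufficient, switch to a preset
# region with more target pixel points; when it is excessive, switch to one
# with fewer target pixel points to save power.

# Hypothetical number of target pixel points per preset detection region.
REGION_PIXELS = {"large_lens": 360, "small_lens": 720, "full_lens": 1080}

def select_target_region(current, resolving_power_insufficient):
    """Pick a target detection region other than the current one."""
    candidates = {name: n for name, n in REGION_PIXELS.items()
                  if name != current}
    if resolving_power_insufficient:
        # More target pixel points -> higher resolving power.
        return max(candidates, key=candidates.get)
    # Fewer target pixel points -> lower power consumption.
    return min(candidates, key=candidates.get)

print(select_target_region("small_lens", True))   # full_lens
print(select_target_region("small_lens", False))  # large_lens
```

Excluding the current region from the candidates matches the requirement that the target detection region be a preset detection region other than the current one.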
The processor 103 is further configured to determine a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection region, where the second pixel information is pixel information acquired by a target pixel corresponding to the target detection region.
In this step, the processor 103 may calculate a second object characteristic of the object to be detected according to the second pixel information acquired by the target pixel point corresponding to the target detection region. The second object characteristic may include a second distance of the object to be detected from the object detection device, and a second size and a second moving speed of the object to be detected. The specific implementation manner of calculating the second object feature according to the second pixel information may refer to the manner described in the related art, and is not described in detail here.
Further, when the first object characteristic does not satisfy the area switching condition corresponding to the current detection region, the resolving power of detection through the target pixel points corresponding to the current detection region meets the requirement without being excessive, and the current detection region does not need to be switched. In that case the processor 103 may directly take the first object characteristic as the second object characteristic.
Optionally, the first object characteristic may include a first distance between the object to be detected and the object detection apparatus, as well as a first size and a first moving speed of the object to be detected, and the area switching condition may include any one of the following conditions:
1) The first size is smaller than a first size threshold corresponding to the current detection area, or the first size is larger than or equal to a second size threshold corresponding to the current detection area, and the second size threshold is larger than the first size threshold.
2) The first distance is greater than a first distance threshold corresponding to the current detection area, or the first distance is less than or equal to a second distance threshold corresponding to the current detection area, and the second distance threshold is less than the first distance threshold.
3) The first moving speed is less than a first speed threshold corresponding to the current detection area, or the first moving speed is greater than or equal to a second speed threshold corresponding to the current detection area, and the second speed threshold is greater than the first speed threshold.
For example, the first size threshold is the minimum size of an object to be detected that the current detection region can detect, the first distance threshold is the maximum distance between the object detection device and an object to be detected that the current detection region can detect, and the first speed threshold is the minimum speed of an object to be detected that the current detection region can detect. When any one of the following holds: the first size is smaller than the first size threshold corresponding to the current detection region, the first distance is greater than the first distance threshold corresponding to the current detection region, or the first moving speed is smaller than the first speed threshold corresponding to the current detection region, the resolving power of the target pixel points corresponding to the current detection region is insufficient for detecting the object to be detected, and the current detection region needs to be switched.
The second size threshold is the maximum size of an object to be detected that the current detection region can detect, the second distance threshold is the minimum distance between the object detection device and an object to be detected that the current detection region can detect, and the second speed threshold is the maximum speed of an object to be detected that the current detection region can detect. When any one of the following holds: the first size is greater than or equal to the second size threshold corresponding to the current detection region, the first distance is less than or equal to the second distance threshold corresponding to the current detection region, or the first moving speed is greater than or equal to the second speed threshold corresponding to the current detection region, the resolving power of the target pixel points corresponding to the current detection region is excessive for detecting the object to be detected, and the current detection region needs to be switched.
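The paired thresholds and the three switching conditions above can be sketched as a single predicate. The threshold container, field names, and all numeric values below are illustrative assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class RegionThresholds:
    # Detection-capability limits of one preset detection region
    # (names and values are illustrative assumptions).
    min_size: float      # first size threshold
    max_size: float      # second size threshold (> min_size)
    max_distance: float  # first distance threshold
    min_distance: float  # second distance threshold (< max_distance)
    min_speed: float     # first speed threshold
    max_speed: float     # second speed threshold (> min_speed)

def needs_region_switch(size: float, distance: float, speed: float,
                        t: RegionThresholds) -> bool:
    """True if any of conditions 1)-3) holds, i.e. the current region's
    resolving power is either insufficient or excessive for the object."""
    insufficient = (size < t.min_size
                    or distance > t.max_distance
                    or speed < t.min_speed)
    excessive = (size >= t.max_size
                 or distance <= t.min_distance
                 or speed >= t.max_speed)
    return insufficient or excessive
```

When the predicate returns `False`, the first object characteristic can simply be reused as the second object characteristic, as described above.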
Take as an example the case where the preset detection regions include a large lens detection region, a small lens detection region and a full lens detection region, and the lens carries more first microlenses than second microlenses. After the object detection apparatus 100 is started, a user may manually select one of the three preset detection regions as the current detection region for detecting the object to be detected. An automatic selection mode may also be adopted: for example, the large lens detection region is selected by default, a first object characteristic of the object to be detected is then calculated from the first pixel information acquired by the target pixel points corresponding to the large lens detection region, and it is judged whether that first object characteristic satisfies the region switching condition corresponding to the large lens detection region. If any one of the following holds: the first size is smaller than the first size threshold corresponding to the large lens detection region, the first distance is greater than the first distance threshold corresponding to the large lens detection region, or the first moving speed is smaller than the first speed threshold corresponding to the large lens detection region (at this point the resolving power of the target pixel points corresponding to the large lens detection region is insufficient for the object to be detected, and the pixel values of adjacent target pixel points differ greatly or change too slowly), the large lens detection region is switched to the small lens detection region.
The first object characteristic of the object to be detected is then recalculated from the first pixel information acquired by the target pixel points corresponding to the small lens detection region, and it is judged whether the recalculated first object characteristic satisfies the region switching condition corresponding to the small lens detection region. If any one of the following holds: the recalculated first size is smaller than the first size threshold corresponding to the small lens detection region, the recalculated first distance is greater than the first distance threshold corresponding to the small lens detection region, or the recalculated first moving speed is smaller than the first speed threshold corresponding to the small lens detection region (at this point the resolving power of the target pixel points corresponding to the small lens detection region is insufficient for the object to be detected, and the pixel values of adjacent target pixel points differ greatly), the small lens detection region is switched to the full lens detection region.
The first object characteristic of the object to be detected is then recalculated from the first pixel information acquired by the target pixel points corresponding to the full lens detection region, and it is judged whether the recalculated first object characteristic satisfies the region switching condition corresponding to the full lens detection region. If any one of the following holds: the recalculated first size is greater than or equal to the second size threshold corresponding to the current detection region, the recalculated first distance is less than or equal to the second distance threshold corresponding to the current detection region, or the recalculated first moving speed is greater than or equal to the second speed threshold corresponding to the current detection region (at this point the resolving power of the target pixel points corresponding to the full lens detection region is excessive for the object to be detected, meaning that the pixel values of adjacent target pixel points differ little), the full lens detection region is switched to the small lens detection region.
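The walkthrough above can be summarized as a small transition table: insufficient resolving power moves from the large lens region to the small lens region and from the small lens region to the full lens region, while excessive resolving power moves from the full lens region back to the small lens region. The region names are shorthand, and the tables below encode only the transitions stated in the text; this is an illustrative sketch, not the patented control logic:

```python
# Transitions stated in the walkthrough; any transition not listed here
# (e.g. small-lens region with excessive resolving power) is left as-is.
SWITCH_ON_INSUFFICIENT = {"large": "small", "small": "full"}
SWITCH_ON_EXCESSIVE = {"full": "small"}

def next_region(current: str, insufficient: bool, excessive: bool) -> str:
    """Pick the next detection region: insufficient resolving power moves
    toward a finer region, excessive resolving power toward a coarser one."""
    if insufficient:
        return SWITCH_ON_INSUFFICIENT.get(current, current)
    if excessive:
        return SWITCH_ON_EXCESSIVE.get(current, current)
    return current  # switching condition not met: keep the current region
```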
Optionally, under the condition that the lens 101 includes a plurality of microlenses with the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in a projection area of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information acquired by the third pixel point.
In one scenario, when the lens 101 includes a plurality of microlenses having the same area and shape, the current detection region may be a region composed of all the third pixel points. The image sensor 102 may acquire the first pixel information collected by the third pixel point, and send the first pixel information to the processor 103. A first object characteristic of the object to be detected is calculated by the processor 103 on the basis of the first pixel information.
Further, in the case where the lens 101 includes a plurality of microlenses having the same area and shape, the microlenses included in the lens 101 may be polygonal. For example, the microlenses may be rectangles of equal area: as shown in Fig. 3, the smallest square corresponds to one pixel point, and the 6 × 3 rectangular area enclosed by the dotted line is one microlens. As another example, the microlenses may be hexagons of equal area: as shown in Fig. 4, the smallest square corresponds to one pixel point, and the hexagonal area enclosed by the dotted line is one microlens.
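As a hypothetical sketch, the third pixel points covered by one rectangular microlens projection (the 6 × 3 rectangle of Fig. 3) can be enumerated from the grid origin of that microlens; the coordinate convention and function name are assumptions for illustration:

```python
def pixels_under_microlens(origin_row: int, origin_col: int,
                           lens_rows: int = 3, lens_cols: int = 6):
    """Enumerate the (row, col) pixel points inside the projection of one
    rectangular microlens on the image sensor. Defaults match the 6 x 3
    rectangle of Fig. 3; sizes are illustrative."""
    return [(origin_row + r, origin_col + c)
            for r in range(lens_rows)
            for c in range(lens_cols)]
```

Collecting these coordinates over every microlens of the lens yields the full set of target (third) pixel points whose pixel information forms the first pixel information.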
In summary, the object detection device in the present disclosure includes a lens, an image sensor and a processor. The lens includes a plurality of microlenses having the same area and shape, or a plurality of types of microlenses, where different types denote microlenses that differ in at least one of area and shape. The image sensor acquires first pixel information corresponding to a current detection region among at least one preset detection region and sends it to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are the pixel points included in the projection regions of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection region. The processor determines a first object characteristic of the object to be detected from the first pixel information. By determining the first object characteristic from the first pixel information corresponding to the current detection region, the device can detect the object to be detected accurately at low power consumption; at the same time, its structure is simple, which reduces the cost, weight and size of the object detection device.
FIG. 5 is a flow chart illustrating an object detection method according to an exemplary embodiment. As shown in FIG. 5, the method is applied to an object detection device including a lens, an image sensor and a processor, where the lens includes a plurality of microlenses having the same area and shape, or the lens includes a plurality of types of microlenses, different types of microlenses denoting microlenses that differ in at least one of area and shape. The method may include the following steps:
Step 201, acquiring, by the image sensor, first pixel information corresponding to a current detection region among at least one preset detection region, and sending the first pixel information to the processor.
Each preset detection area is an area formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection area of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection area.
Step 202, determining, by the processor, a first object characteristic of the object to be detected according to the first pixel information.
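As an illustrative sketch (not the patented implementation), steps 201 and 202 can be modeled as two functions. The dictionary-based frame representation and the mean-intensity stand-in for the first object characteristic are assumptions, since the disclosure leaves the feature computation to the related art:

```python
def acquire_first_pixel_info(sensor_frame: dict, region_pixels: list) -> list:
    # Step 201: gather the pixel values collected by the target pixel
    # points of the current detection region.
    return [sensor_frame[p] for p in region_pixels]

def first_object_feature(pixel_info: list) -> float:
    # Step 202: derive a first object characteristic from the first pixel
    # information; mean intensity is used here purely as a placeholder.
    return sum(pixel_info) / len(pixel_info)
```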
FIG. 6 is a flow chart illustrating another method of object detection according to an exemplary embodiment. As shown in fig. 6, in a case where the lens includes multiple types of microlenses, and the multiple types of microlenses are the first microlens and the second microlens, the target pixel includes a first pixel and/or a second pixel, the first pixel is a pixel included in a projection area of the first microlens on the image sensor, and the second pixel is a pixel included in a projection area of the second microlens on the image sensor, the method may further include the following steps:
step 203, determining, by the processor, a target detection area other than the current detection area from at least one preset detection area when the first object characteristic satisfies an area switching condition corresponding to the current detection area.
Step 204, determining, by the processor, a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection region, where the second pixel information is pixel point information acquired by the target pixel points corresponding to the target detection region.
Optionally, the preset detection region includes a large lens detection region, a small lens detection region and a full lens detection region, the target pixel point corresponding to the large lens detection region includes a first pixel point, the target pixel point corresponding to the small lens detection region includes a second pixel point, and the full lens detection region includes the large lens detection region and the small lens detection region.
Optionally, the first and second microlenses are polygonal in shape.
Optionally, under the condition that the lens includes a plurality of microlenses with the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in a projection area of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information collected by the third pixel point.
With regard to the method in the above-described embodiment, the specific manner in which each step performs the operation has been described in detail in the embodiment related to the apparatus, and will not be described in detail here.
In summary, the object detection device in the present disclosure includes a lens, an image sensor and a processor. The lens includes a plurality of microlenses having the same area and shape, or a plurality of types of microlenses, where different types denote microlenses that differ in at least one of area and shape. The image sensor acquires first pixel information corresponding to a current detection region among at least one preset detection region and sends it to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are the pixel points included in the projection regions of the microlenses on the image sensor, and the first pixel information is the pixel point information collected by the target pixel points corresponding to the current detection region. The processor determines a first object characteristic of the object to be detected from the first pixel information. By determining the first object characteristic from the first pixel information corresponding to the current detection region, the device can detect the object to be detected accurately at low power consumption; at the same time, its structure is simple, which reduces the cost, weight and size of the object detection device.
Fig. 7 is a block diagram illustrating an electronic device 700 in accordance with an example embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 so as to complete all or part of the steps of the object detection method. The memory 702 is used to store various types of data to support operation at the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, 5G, NB-IoT, eMTC, or a combination thereof, which is not limited herein.
The corresponding communication component 705 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described object detection method.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the object detection method described above is also provided. For example, the computer readable storage medium may be the above-described memory 702 comprising program instructions executable by the processor 701 of the electronic device 700 to perform the above-described object detection method.
The preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical idea, and all such simple modifications fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not further described in the present disclosure.
In addition, any combination of the various embodiments of the present disclosure may be made, and such combinations should likewise be considered part of the present disclosure, as long as they do not depart from its spirit.

Claims (8)

1. An object detection device, comprising a lens, an image sensor and a processor, wherein the lens comprises a plurality of microlenses of the same area and shape, or the lens comprises a plurality of types of microlenses, wherein different types of microlenses denote microlenses that differ in at least one of area and shape;
the image sensor is used for acquiring first pixel information corresponding to a current detection area in at least one preset detection area and sending the first pixel information to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection region of the microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection region;
the processor is used for determining a first object characteristic of the object to be detected according to the first pixel information;
under the condition that the lens comprises a plurality of types of microlenses, and the plurality of types of microlenses are a first microlens and a second microlens, the target pixel point comprises a first pixel point and/or a second pixel point, the first pixel point is a pixel point included in a projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in a projection area of the second microlens on the image sensor;
the processor is further configured to determine a target detection region except the current detection region from at least one preset detection region under the condition that the first object characteristic meets a region switching condition corresponding to the current detection region, where the region switching condition is preset according to a distance range between an object to be detected and an object detection device, which can be detected by the preset detection region, and a size range and a moving speed range of the object to be detected, and the number of target pixel points corresponding to different preset detection regions is different;
the processor is further configured to determine a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection region, where the second pixel information is pixel information acquired by the target pixel corresponding to the target detection region.
2. The apparatus of claim 1, wherein the predetermined detection region comprises a large lens detection region, a small lens detection region and a full lens detection region, the target pixel corresponding to the large lens detection region comprises the first pixel, the target pixel corresponding to the small lens detection region comprises the second pixel, and the full lens detection region comprises the large lens detection region and the small lens detection region.
3. The apparatus of claim 1, wherein the first and second microlenses are polygonal in shape.
4. The apparatus according to claim 1, wherein, in a case where the lens includes a plurality of microlenses having the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in the projection regions of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information acquired by the third pixel point.
5. An object detection method, applied to an object detection device, the device including a lens, an image sensor and a processor, wherein the lens includes a plurality of microlenses having the same area and shape, or the lens includes a plurality of types of microlenses, wherein different types of microlenses denote microlenses that differ in at least one of area and shape; the method comprises the following steps:
acquiring first pixel information corresponding to a current detection area in at least one preset detection area through the image sensor, and sending the first pixel information to the processor; each preset detection region is a region formed by target pixel points on the image sensor, the target pixel points are pixel points included in a projection region of the microlenses on the image sensor, and the first pixel information is pixel point information collected by the target pixel points corresponding to the current detection region;
determining, by the processor, a first object characteristic of an object to be detected according to the first pixel information;
in a case where the lens includes a plurality of types of microlenses, and the plurality of types of microlenses are a first microlens and a second microlens, the target pixel point includes a first pixel point and/or a second pixel point, the first pixel point is a pixel point included in a projection area of the first microlens on the image sensor, and the second pixel point is a pixel point included in a projection area of the second microlens on the image sensor, the method further includes:
determining, by the processor, a target detection region excluding the current detection region from at least one preset detection region when the first object characteristic satisfies a region switching condition corresponding to the current detection region, where the region switching condition is preset according to a distance range between an object to be detected, which can be detected by the preset detection region, and an object detection device, and a size range and a movement speed range of the object to be detected, and numbers of target pixel points corresponding to different preset detection regions are different;
determining, by the processor, a second object characteristic of the object to be detected according to second pixel information corresponding to the target detection area, where the second pixel information is pixel point information acquired by the target pixel point corresponding to the target detection area.
6. The method according to claim 5, wherein, in a case where the lens includes a plurality of microlenses having the same area and shape, the target pixel point includes a third pixel point, the third pixel point is a pixel point included in the projection regions of the plurality of microlenses on the image sensor, and the first pixel information is pixel point information acquired by the third pixel point.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 5 to 6.
8. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 5 to 6.
CN202011401996.9A 2020-12-02 2020-12-02 Object detection device, method, storage medium, and electronic apparatus Active CN112600994B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011401996.9A CN112600994B (en) 2020-12-02 2020-12-02 Object detection device, method, storage medium, and electronic apparatus
PCT/CN2021/122452 WO2022116676A1 (en) 2020-12-02 2021-09-30 Object detection apparatus and method, storage medium, and electronic device


Publications (2)

Publication Number Publication Date
CN112600994A CN112600994A (en) 2021-04-02
CN112600994B true CN112600994B (en) 2023-04-07


Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011401996.9A Active CN112600994B (en) 2020-12-02 2020-12-02 Object detection device, method, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN112600994B (en)
WO (1) WO2022116676A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112584015B (en) * 2020-12-02 2022-05-17 达闼机器人股份有限公司 Object detection method, device, storage medium and electronic equipment
CN112600994B (en) * 2020-12-02 2023-04-07 达闼机器人股份有限公司 Object detection device, method, storage medium, and electronic apparatus
CN117553910A (en) * 2022-08-05 2024-02-13 上海禾赛科技有限公司 Detection module, detector and laser radar

Citations (1)

Publication number Priority date Publication date Assignee Title
JP2013145980A (en) * 2012-01-13 2013-07-25 Canon Inc Imaging device, control method thereof, image processing apparatus, image generation method, program

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US6638786B2 (en) * 2002-10-25 2003-10-28 Hua Wei Semiconductor (Shanghai ) Co., Ltd. Image sensor having large micro-lenses at the peripheral regions
FR2884338B1 (en) * 2005-04-11 2007-10-19 Valeo Vision Sa METHOD, DEVICE AND CAMERA FOR DETECTING OBJECTS FROM DIGITAL IMAGES
EP1764835B1 (en) * 2005-09-19 2008-01-23 CRF Societa'Consortile per Azioni Multifunctional optical sensor comprising a matrix of photodetectors coupled microlenses
CN101840435A (en) * 2010-05-14 2010-09-22 中兴通讯股份有限公司 Method and mobile terminal for realizing video preview and retrieval
JP5963448B2 (en) * 2012-01-13 2016-08-03 キヤノン株式会社 Imaging device
JP5836821B2 (en) * 2012-01-30 2015-12-24 オリンパス株式会社 Imaging device
JP6019947B2 (en) * 2012-08-31 2016-11-02 オムロン株式会社 Gesture recognition device, control method thereof, display device, and control program
CN103049760B (en) * 2012-12-27 2016-05-18 北京师范大学 Based on the rarefaction representation target identification method of image block and position weighting
EP3029931A1 (en) * 2014-12-04 2016-06-08 Thomson Licensing Image sensor unit and imaging apparatus
WO2020024079A1 (en) * 2018-07-28 2020-02-06 合刃科技(深圳)有限公司 Image recognition system
CN112600994B (en) * 2020-12-02 2023-04-07 达闼机器人股份有限公司 Object detection device, method, storage medium, and electronic apparatus
CN112584015B (en) * 2020-12-02 2022-05-17 达闼机器人股份有限公司 Object detection method, device, storage medium and electronic equipment


Also Published As

Publication number Publication date
WO2022116676A1 (en) 2022-06-09
CN112600994A (en) 2021-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant