WO2022172719A1 - Object information generation system, object information generation method, and object information generation program - Google Patents
- Publication number: WO2022172719A1 (international application PCT/JP2022/001976)
- Authority
- WO
- WIPO (PCT)
Classifications
- G01S17/18 — Systems determining position data of a target for measuring distance only, using transmission of interrupted, pulse-modulated waves wherein range gates are used
- G01S17/894 — 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G06T7/55 — Depth or shape recovery from multiple images
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/155 — Segmentation; edge detection involving morphological operators
- G06T7/174 — Segmentation; edge detection involving the use of two or more images
- G06T7/194 — Segmentation involving foreground-background segmentation
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T11/00 — 2D image generation
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/20132 — Image cropping
- G06T2207/30196 — Human being; person
Definitions
- The present disclosure relates to an object information generation system that generates distance information of an object existing in a target space from a plurality of segmented images obtained by dividing the target space by distance.
- Patent Document 1 discloses a distance measuring device capable of improving the resolution of the measured distance.
- The distance measuring section calculates the distance to the object based on the time from when the wave transmitting section transmits the measurement wave until when the wave receiving section receives it.
- Specifically, the distance measurement unit measures the amount of received waves in the period corresponding to the preceding distance section and the amount of received waves in the period corresponding to the following distance section, and calculates the distance to the object based on these amounts.
- In Patent Document 1, the calculation accuracy of the distance value to the object is improved by using, for each pixel, the ratio of the received-light signal amounts corresponding to adjacent distance sections.
- However, when the technique of Patent Document 1 is actually used, the same pixel must be measured a plurality of times in order to remove noise, so there is a problem that calculating the distance value takes time.
- the present disclosure has been made in view of this point, and aims to enable an object information generation system to generate object distance information in a short period of time.
- An object information generation system according to one aspect of the present disclosure includes an imaging unit that captures a plurality of segmented images respectively corresponding to a plurality of distance segments into which a target space is divided, and a signal processing unit that processes the plurality of segmented images to generate information about an object existing in the target space. The signal processing unit includes an object region extraction unit that extracts, from the plurality of segmented images, an object region that is a pixel region containing an image of the object, and an object information generation unit that determines a window for extracting the pixels to be used in calculation for the object region and performs calculation using information of a plurality of pixels within the window in two or more of the segmented images to generate distance information of the object.
- According to this object information generation system, it is possible to generate highly accurate distance information of an object in a short time.
- Configuration of the object information generation system.
- (a) A scene in which a person exists in the target space and the arrival pattern of the reflected light reflected by the person; (b) an example of captured segmented images.
- Example of the signal level in each distance segment for pixels containing a human image.
- (a) A scene with a person in the target space; (b) an example of window settings.
- Example of the object information generation method: (a) the object region extraction step; (b) the object information generation step.
- (a) A scene with a plurality of objects in the target space; (b) an example of window settings.
- An object information generation system according to one aspect of the present disclosure includes an imaging unit that captures a plurality of segmented images respectively corresponding to a plurality of distance segments into which a target space is divided, and a signal processing unit that processes the plurality of segmented images to generate information about an object existing in the target space. The signal processing unit includes an object region extraction unit that extracts, from the plurality of segmented images, an object region that is a pixel region containing an image of the object, and an object information generation unit that determines a window for extracting the pixels to be used in calculation for the object region and performs calculation using information of a plurality of pixels within the window in two or more of the segmented images to generate distance information of the object.
- the imaging unit captures a plurality of segmented images respectively corresponding to a plurality of distance segments into which the target space is divided.
- the object area extraction section extracts an object area, which is a pixel area including an image of the object, from the plurality of segmented images.
- The object information generation unit determines a window for extracting the pixels to be used in calculation for the object region, performs calculation using information of a plurality of pixels within the window in the two or more segmented images, and generates distance information of the object. Because information of a plurality of pixels is used, noise can be removed without measuring the same pixel a plurality of times.
- Since only the pixels within the window determined for the object region are used in the calculation, the amount of calculation is significantly reduced compared with a method in which all pixels are used. Therefore, the distance information can be generated in a short time.
- the object information generation unit may set a plurality of different windows for the object region, and generate distance information of the object using each of the windows.
- the object information generation unit may generate information about the shape of the object using the distance information of the object generated using each of the windows.
- information about the shape of the object can be generated from the distance information of multiple locations on the object.
- the object information generation unit may determine whether or not the object region can be separated using the object distance information generated using each window.
- the object information generation unit may determine the number of pixels used for calculation according to the distance of the distance segment corresponding to the segmented image to be calculated.
- The signal processing unit may include an image synthesizing unit that generates a distance image in which a distance value is assigned to each pixel using the plurality of segmented images, and the object region extraction unit may extract the object region from the distance image.
- the object area can be extracted using the range image.
- The signal processing unit may include an image synthesizing unit that generates a distance image in which a distance value is assigned to each pixel using the plurality of segmented images, and the object information generation unit may generate the distance information of the object using the distance image.
- the distance information of the object can be generated using the distance image.
- An object information generation method according to one aspect of the present disclosure processes a plurality of segmented images respectively corresponding to a plurality of distance segments into which a target space is divided, to generate information about an object existing in the target space. The method includes a step of extracting, from the plurality of segmented images, an object region that is a pixel region containing an image of the object, a step of determining a window for extracting the pixels to be used in calculation for the object region, and a step of performing calculation using information of a plurality of pixels within the window in two or more of the segmented images to generate distance information of the object.
- According to this method, an object region, which is a pixel region containing an image of the object, is extracted from the plurality of segmented images.
- A window for extracting the pixels to be used in calculation is determined for the object region, calculation is performed using information of a plurality of pixels within the window in two or more segmented images, and distance information of the object is generated.
- noise can be removed by using information of a plurality of pixels, and it becomes unnecessary to measure the same pixel a plurality of times for noise removal.
- Since only the pixels within the window determined for the object region are used in the calculation, the amount of calculation is significantly reduced compared with a method in which all pixels are used, so the distance information can be generated in a short time.
- a program according to one aspect of the present disclosure causes a computer system including one or more processors to execute the object information generating method of the aspect.
- FIG. 1 is a block diagram showing the configuration of an object information generation system according to an embodiment.
- the object information generation system 1 includes an imaging section 10 , a signal processing section 20 and an output section 30 .
- The object information generation system 1 is a system that acquires distance information of an object using the TOF (Time of Flight) method and processes the generated images.
- The object information generation system 1 can be used, for example, in a surveillance camera that detects objects (people), in a system that tracks flow lines of people in factories and commercial facilities to improve labor productivity or to analyze consumer purchasing behavior, and in a vehicle-mounted system that detects obstacles.
- the imaging unit 10 includes a light emitting unit 11, a light receiving unit 12, and a control unit 13.
- The imaging unit 10 emits measurement light from the light emitting unit 11 into the target space, captures with the light receiving unit 12 the reflected light reflected by an object existing in the target space, and is configured to measure the distance to the object by the TOF method and to generate images.
- the light emitting unit 11 is configured to project measurement light onto the target space.
- the light receiving unit 12 is configured to receive reflected light reflected by an object existing in the target space and generate an image.
- the control unit 13 is configured to perform light emission control of the light emitting unit 11 and light reception control of the light receiving unit 12 .
- the control unit 13 controls the light emitting unit 11 and the light receiving unit 12 so that the light receiving unit 12 generates a plurality of segmented images respectively corresponding to a plurality of distance segments obtained by dividing the target space.
- the signal processing unit 20 is configured to process a plurality of segmented images generated by the imaging unit 10 and generate information about objects existing in the target space.
- the signal processing unit 20 includes an object area extracting unit 21 and an object information generating unit 22 .
- the object region extraction unit 21 is configured to extract an object region, which is a pixel region containing an image of an object, from the segmented images generated by the imaging unit 10 .
- the object information generating section 22 is configured to generate information regarding an object included in the object area extracted by the object area extracting section 21 .
- the output unit 30 is configured to output the object information generated by the signal processing unit 20 to the external device 2 .
- the object information generation system 1 of this embodiment uses a plurality of pixels of a plurality of segmented images to generate information including a distance value or a shape for each object existing in the target space. Therefore, in the object information generation system 1 of this embodiment, each object existing in the target space can be separated in the depth direction. Furthermore, it is possible to perform processing for separating objects in the depth direction at high speed.
- the imaging unit 10 includes the light emitting unit 11, the light receiving unit 12, and the control unit 13.
- The imaging unit 10 emits measurement light from the light emitting unit 11 into the target space, and the light receiving unit 12 captures an image of the reflected light reflected by an object existing in the target space; the imaging unit 10 is thus configured to measure the distance to the object by the TOF method and to generate images.
- the light emitting unit 11 is configured to project measurement light onto the target space.
- the light emitting unit 11 includes a light source 111 for projecting measurement light onto the target space.
- the measurement light is pulsed light.
- the measurement light preferably has a single wavelength, a relatively short pulse width, and a relatively high peak intensity.
- The wavelength of the measurement light is preferably in the near-infrared band, which has low visibility to humans and is less susceptible to disturbance light from sunlight.
- the light source 111 is composed of, for example, a laser diode and outputs a pulsed laser.
- the intensity of the pulsed laser output by the light source meets the class 1 or class 2 standards of the safety standards for laser products (JIS C 6802).
- the light source 111 is not limited to the above configuration, and may be a light emitting diode (LED), a vertical cavity surface emitting laser (VCSEL), a halogen lamp, or the like.
- the measurement light may be in a wavelength range different from the near-infrared band.
- the light emitting unit 11 may further include a projection optical system 112 such as a lens that projects the measurement light onto the target space.
- the light receiving unit 12 is configured to receive reflected light reflected by an object existing in the target space and generate an image.
- the light receiving unit 12 includes an imaging device 121 including a plurality of pixels, and is configured to receive reflected light reflected by an object existing in the target space and generate segmented images.
- Each pixel is provided with an avalanche photodiode; another type of photodetector may be arranged in each pixel instead.
- Each pixel is configured to be switchable between an exposed state in which it receives reflected light and a non-exposed state in which it does not receive reflected light.
- the light receiving unit 12 outputs a pixel signal based on the reflected light received by each pixel in an exposed state.
- the signal level of the pixel signal corresponds to the number of pulses of light received by the pixel.
- the signal level of the pixel signal may be correlated to other properties of light, such as reflected light intensity.
- the light-receiving unit 12 may further include a light-receiving optical system 122, such as a lens, that collects the reflected light on the light-receiving surface of the imaging device.
- the light receiving section 12 may further include a filter that blocks or transmits light of a specific frequency. In this case, it is possible to obtain information about the frequency of light.
- FIG. 2 is a diagram showing an outline of image generation by the light receiving unit 12.
- For each of the distance segments into which the target space is divided, the light receiving unit 12 generates in each pixel a detection signal based on the light reflected from an object existing in that distance segment, and generates and outputs segmented images Im1 to Im5.
- the distance to the innermost part of the target space is determined according to the time from when the light emitting unit 11 emits the measurement light until when the imaging element 121 finally performs the exposure operation.
- the distance to the innermost part of the target space is not particularly limited, but is, for example, several tens of centimeters to several tens of meters.
- the distance to the innermost part of the target space may be fixed or may be set variably. Here, it is assumed that it can be set variably.
- the control unit 13 is configured to perform light emission control of the light emitting unit 11 and light reception control of the light receiving unit 12 .
- the control unit 13 is composed of, for example, a microcomputer having a processor and memory.
- the processor functions as the control unit 13 by executing appropriate programs.
- The program may be prerecorded in the memory, or may be provided via a telecommunications line such as the Internet, or from a non-transitory recording medium such as a memory card.
- a setting reception unit such as a keyboard may be provided so that the control method can be changed by receiving settings from the operator.
- control unit 13 controls the timing of outputting light from the light source 111, the pulse width of the light output from the light source 111, and the like.
- The control unit 13 also controls the exposure timing, the exposure width (exposure time), and the like by controlling the operation timing of the transistors in each of the plurality of pixels.
- the exposure timing and exposure time may be the same for all pixels, or may be different for each pixel.
- control unit 13 causes the light source 111 to output the measurement light multiple times during a period corresponding to one distance measurement.
- the number of times the measurement light is output in one measurement cycle is the same as the number of distance sections into which the object information generation system 1 divides the target space.
- One measurement cycle includes a plurality of measurement periods.
- the number of measurement periods in one measurement cycle is the same as the number of distance segments.
- The plurality of measurement periods correspond one-to-one to the plurality of distance segments.
- the time length of each divided period is, for example, 10 ns.
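As a worked example of the figure above: the measurement light travels out to the object and back, so a gate (divided period) of duration T covers a one-way depth of c·T/2, and a 10 ns divided period corresponds to a distance segment about 1.5 m deep. A minimal sketch (the function name is ours):

```python
C = 299_792_458.0  # speed of light in m/s

def segment_depth(gate_seconds: float) -> float:
    """Depth of the distance segment covered by one divided period.

    The measurement light travels to the object and back, so a gate of
    duration t covers a one-way depth of c * t / 2.
    """
    return C * gate_seconds / 2.0

print(segment_depth(10e-9))  # about 1.5 m
```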
- the control unit 13 causes the light source 111 to emit light in the first divided period in each measurement period.
- the light emission period for one light emission may have the same length of time as the divided period, or may have different lengths of time.
- control unit 13 causes the light receiving unit 12 to be exposed during one of a plurality of divided periods in each measurement period. Specifically, the control unit 13 sequentially shifts the timing of exposing the light receiving unit 12 by one from the first divided period to the n-th divided period for each measurement period. That is, in one measurement cycle, the light receiving section 12 is exposed during all of a plurality of divided periods.
- the exposure period in one exposure may have the same time length as the division period, or may have different time lengths.
- the timing of exposing the light receiving section 12 may be performed in a different order instead of sequentially shifting from the first divided period to the n-th divided period.
- the light receiving unit 12 can receive the reflected light reflected by the object only during the exposure period.
- The control unit 13 controls the light receiving unit 12 so as to repeat the measurement cycle p times and, in each pixel, integrate the pixel signals generated by exposure in the same divided period across the measurement cycles, outputting each integrated value as the pixel value of the segmented image corresponding to that divided period.
- p is an integer of 1 or more.
- the number of times of light emission and the number of times of exposure in each divided period is 1.
- each pixel signal has a maximum value of 1. That is, when the measurement cycle is repeated p times and the pixel signals are integrated, the signal level corresponding to each pixel is equal to p at maximum.
- the maximum signal level of pixel signals of all pixels in an image is p.
- An image may be generated including measurement cycles in which no exposure is performed in one or more divided periods so that the maximum value of the signal level changes for each divided period.
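The integration over p measurement cycles described above can be sketched as follows; the binary per-cycle detection model (`hit_prob`) is our assumption, used only to illustrate why the integrated signal level of each segmented-image pixel is at most p:

```python
import random

def capture_segment_levels(n_segments: int, p: int, hit_prob: list) -> list:
    """Integrate per-cycle pixel signals over p measurement cycles.

    hit_prob[k] is the probability that this pixel detects reflected light
    while exposed during divided period k (a hypothetical detector model).
    Each cycle contributes at most 1 count per divided period, so every
    integrated signal level is at most p.
    """
    levels = [0] * n_segments
    for _ in range(p):                # one measurement cycle
        for k in range(n_segments):   # exposure shifted over divided periods
            if random.random() < hit_prob[k]:
                levels[k] += 1
    return levels

# Deterministic case: reflections always arrive in segments 0 and 2.
print(capture_segment_levels(3, 5, [1.0, 0.0, 1.0]))  # [5, 0, 5]
```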
- the signal processing unit 20 is configured to generate information about an object from the multiple segmented images generated by the imaging unit 10 .
- the signal processing unit 20 includes an object area extracting unit 21 and an object information generating unit 22 .
- the object region extraction unit 21 is configured to extract an object region, which is a pixel region containing an image of an object, from the segmented images generated by the imaging unit 10 .
- the object region extracting unit 21 is realized, for example, by a computer system having a processor and memory.
- the object region extracting section 21 functions when the processor executes an appropriate program.
- The program may be prerecorded in the memory, or may be provided through a telecommunications line such as the Internet, or from a non-transitory recording medium such as a memory card.
- the processor processes the image output from the light receiving unit 12, and the memory holds the image and the result of processing by the processor (information on the extracted object region).
- Information held in the memory is output to the object information generator 22 at a predetermined timing.
- the object region information held in the memory may be in the form of an image, may be converted into a form such as a run length code or a chain code, or may be in another form.
- the object region extraction unit 21 extracts, as an object region, regions in which pixels with high signal levels exist at high density in each of the plurality of segmented images.
- Note that the method of the object region extraction processing is not limited to this; a region in which high-level pixels exist at high density may be extracted as an object region by other methods.
- Fig. 3(a) shows a scene in which a person exists in the target space and the arrival pattern of the reflected light reflected by the person
- Fig. 3(b) shows an example of an imaged segmented image.
- a person exists across the k-th distance segment and the k+1-th distance segment.
- A window WD for cutting out the pixels to be used in the calculation is set around the pixel of interest (u, v).
- The size of the window WD is 2Δu in width and 2Δv in height.
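The window WD can be sketched as a simple crop around the pixel of interest; the function name and the clipping at the image border are our choices:

```python
def window_pixels(img, u, v, du, dv):
    """Signal levels inside the window WD of width 2*du and height 2*dv
    centered on pixel (u, v), clipped at the image border.

    img is a segmented image given as a list of rows of signal levels.
    """
    h, w = len(img), len(img[0])
    return [img[y][x]
            for y in range(max(0, v - dv), min(h, v + dv))
            for x in range(max(0, u - du), min(w, u + du))]

img = [[4 * r + c for c in range(4)] for r in range(4)]
print(window_pixels(img, 2, 2, 1, 1))  # [5, 6, 9, 10]
```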
- Let Sk(u, v) be the signal level of the k-th distance segment at pixel (u, v).
- The function den(u, v, t) is then defined as shown in Equation (1) below.
- A threshold Th satisfying 0 < Th < 1 is prepared, and pixels (u, v) satisfying den(u, v, t) > Th are determined to be candidate regions in which an object image appears. Candidate regions that are connected to one another in any of the eight adjacent directions are then extracted as one object region.
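Grouping candidate pixels that touch in any of the eight adjacent directions into object regions can be sketched with a breadth-first flood fill; the den/threshold step that produces the boolean candidate mask is assumed already done:

```python
from collections import deque

def connected_regions(mask):
    """Group candidate pixels (True entries) that touch in any of the
    eight adjacent directions into object regions (lists of (row, col))."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                queue, region = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy in (-1, 0, 1):       # visit the 8 neighbors
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                regions.append(region)
    return regions
```

Diagonally touching candidates form one region, so a region's pixel count can then be compared with a minimum-size threshold to discard noise.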
- processing may be inserted before and after the calculation of den(u, v, t) for the purpose of limiting the object region or increasing the sensitivity of the object region extraction processing.
- For example, segmented images showing only the background may be generated in advance and stored as reference segmented images; before den(u, v, t) is calculated, a process may be inserted that selects the reference segmented image corresponding to the generated segmented image and subtracts, for each pixel, the signal level of the reference segmented image from the signal level of the segmented image.
- the process of subtracting the signal level of the reference segmented image from the signal level of the segmented image for each pixel corresponds to the process of removing the background image from the segmented image targeted for object region extraction.
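The background-removal step just described amounts to a per-pixel subtraction; clamping at zero is our choice, so that noise in the reference cannot produce negative signal levels:

```python
def subtract_background(segment, reference):
    """Per-pixel subtraction of a reference (background-only) segmented
    image from a segmented image, clamped at zero."""
    return [[max(s - r, 0) for s, r in zip(seg_row, ref_row)]
            for seg_row, ref_row in zip(segment, reference)]

print(subtract_background([[5, 2]], [[3, 4]]))  # [[2, 0]]
```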
- Alternatively, a filtering method such as a morphological operation may be used to exclude candidate regions derived from noise, or to make it easier for candidate regions to be connected to one another.
- the contents of pre-processing and post-processing for object region extraction are not limited to those described above, and other methods may be used.
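As one concrete form of such pre-/post-processing, a 3x3 morphological opening (erosion then dilation) drops isolated noise-derived candidates, while a closing (dilation then erosion) helps nearby candidate regions connect; the 3x3 structuring element and the border handling here are our assumptions:

```python
def _apply(mask, keep):
    """Apply a 3x3 neighborhood rule to a boolean mask."""
    h, w = len(mask), len(mask[0])
    return [[keep([mask[r + dy][c + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= r + dy < h and 0 <= c + dx < w])
             for c in range(w)]
            for r in range(h)]

def dilate(mask):   # grows regions, connecting nearby candidates
    return _apply(mask, any)

def erode(mask):    # shrinks regions, dropping isolated noise pixels
    return _apply(mask, all)

def opening(mask):  # erosion then dilation: removes noise-derived candidates
    return dilate(erode(mask))

def closing(mask):  # dilation then erosion: bridges small gaps
    return erode(dilate(mask))
```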
- the method for extracting candidate regions in the segmented image is not necessarily limited to formula (1), and other calculation methods may be used.
- the object information generating section 22 is configured to generate information regarding an object included in the object area extracted by the object area extracting section 21 .
- the object information generation unit 22 is realized, for example, by a computer system having a processor and memory.
- the object information generator 22 functions when the processor executes an appropriate program.
- The program may be prerecorded in the memory, or may be provided through a telecommunications line such as the Internet, or from a non-transitory recording medium such as a memory card.
- the processor executes object information generation processing, and the memory holds the generated object information.
- the object information held in the memory is output to the output unit 30 at a predetermined timing.
- the object information held in the memory may be an image, or may be in other formats such as vectors and character strings.
- The object information generation unit 22 generates, for example, for each object included in the object region, information such as the three-dimensional position coordinates of the object's center, its area, and its three-dimensional shape, and outputs the information as an object information vector whose dimension equals the number of pieces of information. In the present embodiment, for simplicity of explanation, one object region is determined to correspond to the image of one object. However, it is not always necessary to make this determination: a plurality of objects may be determined to exist in one object region, as described in a modification below, or a plurality of object regions may be determined to correspond to one object.
- the object information generation unit 22 may generate other features instead of the above object information. For example, the direction and velocity of an object in three-dimensional space may be generated by processing across multiple measurement cycles. Alternatively, two-dimensional features such as the moments of the object image or the aspect ratio of its bounding rectangle may be generated. Further, when it is determined that a plurality of objects exist in the target space, relationships with other objects, such as relative positions, may be generated. Furthermore, the object information generation unit 22 may output the generated object information in a format other than one vector per object, and it need not output the same type of object information in every measurement cycle.
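The two-dimensional features mentioned above can be sketched concretely. The following is a minimal illustration, not the patent's implementation; in particular, representing an object region as a list of (v, u) pixel coordinates is an assumption of this sketch:

```python
def bbox_aspect_ratio(region):
    # Aspect ratio (width / height) of the bounding rectangle of an
    # object region given as a list of (v, u) pixel coordinates.
    vs = [v for v, u in region]
    us = [u for v, u in region]
    return (max(us) - min(us) + 1) / (max(vs) - min(vs) + 1)

def image_moment(region, p, q):
    # Raw moment m_pq of the binary object image.
    return sum((u ** p) * (v ** q) for v, u in region)
```

For a 2-row-by-3-column region, `bbox_aspect_ratio` returns 1.5 and the zeroth moment `image_moment(region, 0, 0)` equals the pixel count.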
- the object information generation unit 22 generates the three-dimensional position coordinates of the center of the object using multiple pixels of multiple segmented images.
- FIG. 4 is a diagram showing an example of signal levels in the first to n-th distance divisions in the pixels showing the human image shown in FIG. 3(a).
- a person exists across the k-th and k+1-th distance segments. Therefore, as shown in FIG. 4, the signal level at the pixels corresponding to the image of the person is maximal in the k-th distance segment and also takes a large value in the k+1-th distance segment.
- the distance to a person can be obtained in units finer than the size of the distance segment.
- the magnitude relationship of signal levels between distance segments may be reversed, or the ratio of signal levels may change, due to local features such as noise and uneven patterns on the object's surface. Therefore, using the signal-level ratio of a single pixel may not provide information representative of the object.
- the object information generation unit 22 generates a distance using the signal levels of three consecutive distance segments centered on the most plausible distance segment in which the center of the object exists.
- the plausible distance segment k in which the center of the object exists is calculated, for example, by the following equation (2).
- the method of calculating the distance segment k is not limited to the formula (2), and other calculation methods may be used.
- the center distance of the object is generated by the following equation (3).
- the distance to the center of the k-th distance segment is denoted dk.
- the above calculations allow the generation of distance values in units smaller than the size of the distance segment, allowing objects existing in the same segmented image to be separated in the depth direction.
- formulas (2) and (3) use the rectangular window WD shown in FIG. 3B to cut out the pixels to be calculated. Compared with the technique described in Patent Document 1, calculation of formula (3) enables measurement in distance units about 1/(4ΔuΔv) times as large in the same number of measurement cycles.
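The exact forms of equations (2) and (3) are not reproduced in this text, but the description implies a window-summed argmax for the plausible segment and a signal-level-weighted average over the segments k−1, k, k+1 for the sub-segment distance. The following sketch is an assumed analogue, not the patent's formulas:

```python
import numpy as np

def plausible_segment(images, window):
    # Assumed analogue of formula (2): take the segment whose summed
    # signal level inside the window WD is largest as the segment
    # containing the object's center.
    u0, u1, v0, v1 = window
    return int(np.argmax(images[:, v0:v1, u0:u1].sum(axis=(1, 2))))

def center_distance(images, window, d):
    # Assumed analogue of formula (3): average the segment center
    # distances d[k-1], d[k], d[k+1], weighted by their summed signal
    # levels inside the window, giving sub-segment resolution.
    u0, u1, v0, v1 = window
    k = plausible_segment(images, window)
    js = [j for j in (k - 1, k, k + 1) if 0 <= j < len(d)]
    w = np.array([images[j, v0:v1, u0:u1].sum() for j in js])
    return float(np.dot(w, [d[j] for j in js]) / w.sum())
```

Here `images` is an (n_segments, H, W) array of signal levels, `window` is pixel bounds (u0, u1, v0, v1), and `d` holds the center distances of the segments.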
- Equation (3) does not necessarily have to be used to calculate the center distance of the object.
- more segmented images may be used instead of segmented images corresponding to three distance segments before and after the k-th distance segment.
- the calculation may be performed using two segmented images corresponding to only the k-th distance segment and the k+1-th distance segment over which the person straddles.
- a rectangular window WD is used to cut out pixels to be calculated.
- the window for extracting pixels to be calculated may have other shapes such as a shape close to a circle, a shape like a picture frame, or the like.
- the pixels in the window may be thinned out and used as the calculation target. By decimating the pixels in the window and using them as calculation targets, it is possible to reduce the amount of calculation without losing the information of the entire object region.
- the number of pixels used for calculation may be determined according to the distance dk of the distance segment corresponding to the segmented image to be calculated. For example, the pixels in the window may be thinned out so that the number of pixels to be calculated is proportional to dk². As a result, when calculating distances for objects of similar size, variation in calculation amount and calculation accuracy depending on the object's distance can be suppressed.
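One hedged reading of the "proportional to dk²" rule: since an object of fixed physical size covers a number of pixels roughly proportional to 1/dk², keeping a fraction proportional to dk² of the window pixels makes the sampled count roughly constant across distances. The reference distance `d_ref` and the uniform decimation are assumptions of this sketch:

```python
def thinned_count(n_window_pixels, dk, d_ref):
    # Keep a fraction (dk / d_ref)**2 of the window pixels, capped at
    # 1.0, where d_ref is a reference distance at which every pixel is
    # used.  This keeps the retained count roughly constant for an
    # object whose raw pixel count scales as 1/dk**2.
    frac = min(1.0, (dk / d_ref) ** 2)
    return max(1, round(n_window_pixels * frac))

def thin_pixels(coords, dk, d_ref):
    # Uniformly decimate the window's pixel coordinates to the budget,
    # preserving coverage of the whole window.
    n = thinned_count(len(coords), dk, d_ref)
    step = max(1, len(coords) // n)
    return coords[::step][:n]
```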
- the window need not apply a fixed shape such as the circumscribed rectangle to all object regions; it may instead be formed in the same shape as the image of the object, or in a shape that cuts out part of the image of the object.
- the object information generation unit 22 can calculate the distance for parts other than the center of the object using the above formula (3) or the like.
- to do so, the distance is calculated by cutting out, with a window, the coordinates near the center of the part to be calculated.
- the object information generation unit 22 generates information about the three-dimensional shape of the object.
- as the information about the three-dimensional shape of the object, it is determined whether the object is concave, convex, or flat in the depth direction.
- the object information generation unit 22 sets a plurality of windows for the object region, and generates information about the shape of the object based on the distance information of the object obtained using each window.
- FIG. 5 is a diagram showing an example of a scene in which a person exists in the target space and an imaged segmented image.
- a person is standing sideways with respect to the object information generating system 1 and has both hands outstretched.
- the object information generator 22 uses two different windows WD1 and WD2 to calculate the distance using Equation (3).
- the first window WD1 is set to be the same as the circumscribed rectangle of the object image (width 2Δu1, height 2Δv1), while the second window WD2 is smaller than the object image and cuts out the vicinity of its center (width 2Δu2, height 2Δv2).
- the first window WD1 is an example of a window including the entire object area
- the second window WD2 is an example of a window including a part of the object area.
- the first window WD1 contains more pixels with larger distance values than the second window WD2. Therefore, the distance d1 calculated using the first window WD1 is longer than the distance d2 calculated using the second window WD2.
- the object information generator 22 sets a predetermined positive threshold Th_cv, and determines that an object satisfying Th_cv < d1 − d2 has a convex shape.
- similarly, a predetermined positive threshold Th_cc is set, and an object satisfying Th_cc < d2 − d1 is determined to have a concave shape. An object that is neither convex nor concave is determined to be flat.
- the value of (d1-d2) or the values of d1 and d2 may be generated as object information.
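The convex/concave/flat decision above reduces to two threshold comparisons on the windowed distances. A minimal sketch (the threshold values 0.1 are illustrative; the text only requires Th_cv and Th_cc to be positive):

```python
def classify_depth_shape(d1, d2, th_cv=0.1, th_cc=0.1):
    # d1: distance from window WD1 covering the whole object region
    # d2: distance from the smaller window WD2 near the object center
    if d1 - d2 > th_cv:
        return "convex"    # center protrudes toward the camera
    if d2 - d1 > th_cc:
        return "concave"   # center recedes from the camera
    return "flat"
```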
- the output unit 30 is configured to output the object information generated by the object information generation unit 22 to the external device 2 .
- the external device 2 is, for example, a display device such as a liquid crystal display or an organic EL display (EL: Electro Luminescence).
- the output unit 30 causes the external device 2 to display the object information generated by the object information generation unit 22 .
- when the external device 2 is a computer system having a processor and a memory, the output unit 30 outputs the object information generated by the object information generation unit 22 to the external device 2, and the external device 2 may further use the object information to analyze the shape, position, or movement of an object appearing in the target space.
- the external device 2 is not limited to the display device or computer system as described above, and may be another device.
- the imaging unit 10 captures a plurality of segmented images respectively corresponding to a plurality of distance segments obtained by dividing the target space.
- the object area extracting unit 21 extracts an object area, which is a pixel area including an image of the object, from the plurality of segmented images.
- the object information generator 22 determines a window for cutting out pixels to be calculated from the object region, and generates object distance information by performing calculation using the information of a plurality of pixels within the window in two or more segmented images. Because information from a plurality of pixels is used, noise can be removed without measuring the same pixel multiple times.
- since only the pixels within the window determined for the object region are calculation targets, the amount of calculation is significantly reduced compared with a method that takes all pixels as calculation targets. Therefore, distance information can be generated in a short time.
- a plurality of different windows may be set for the object region, and each window may be used to generate object distance information. This makes it possible to easily obtain distance information for a plurality of locations on the object. Furthermore, information about the shape of the object may be generated using the distance information generated with each window.
- the distance calculation may be performed multiple times for the object area by changing the conditions. For example, the distance calculation may be performed multiple times by changing the number of pixels used for calculation within the window. Alternatively, the distance calculation may be performed multiple times by changing the shape of the window.
- the object information generation method is a method of acquiring distance information to an object using the TOF method (TOF: Time Of Flight) and processing the generated image.
- FIGS. 6 and 7 are flowcharts showing an example of an object information generating method.
- in the imaging step S10, a plurality of segmented images are generated, one for each distance segment obtained by dividing the target space in the depth direction.
- in the object region extraction step S21, as shown in FIG. 7A, candidate regions that may contain the image of an object are extracted in each segmented image (S211), and connected candidate regions are extracted as object regions (S212).
- in the object information generating step S22, as shown in FIG. 7B, for each object region extracted in step S21, a target point is determined (S221), a window including its surroundings is determined (S222), and the pixels in the window are used to calculate a distance value (S223).
- the calculated distance value itself or information about the feature of the object using the distance value is generated as object information (S224).
- the above-described processing is executed for each object region, and the object information generating step S22 ends when the processing for all object regions is completed (S225).
- the generated object information is output to the external device (S30).
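Steps S221–S224 above can be sketched as a single function. The centroid target point, bounding-rectangle window, and signal-level-weighted distance here are assumed concrete choices, since the patent leaves them open:

```python
import numpy as np

def object_info_step(images, region, d):
    # images: (n_segments, H, W) signal levels; region: list of (v, u)
    # pixel coordinates; d: center distances of the segments.
    # S221: take the centroid of the region's pixels as the target point.
    vs = [v for v, u in region]
    us = [u for v, u in region]
    target = (sum(vs) / len(vs), sum(us) / len(us))
    # S222: take the region's bounding rectangle as the window.
    v0, v1, u0, u1 = min(vs), max(vs) + 1, min(us), max(us) + 1
    # S223: distance value as a signal-level-weighted average of the
    # segment center distances, computed over the window.
    w = images[:, v0:v1, u0:u1].sum(axis=(1, 2))
    dist = float(np.dot(w, d) / w.sum())
    # S224: package the result as object information.
    return {"center": target, "distance": dist}
```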
- the object information generation method may be embodied by a computer program or a non-temporary recording medium on which the program is recorded.
- the program causes the computer system to execute the object information generation method.
- the object information generation system includes a computer system in the object information generation unit.
- a computer system is mainly composed of a processor and a memory as hardware. Functions such as an object information generation section are realized by the processor executing a program recorded in the memory of the computer system.
- the program may be recorded in advance in the memory of the computer system, may be provided through a telecommunications line, or may be provided recorded in a non-transitory recording medium readable by the computer system, such as a memory card, optical disc, or hard disk drive.
- a processor in a computer system consists of one or more electronic circuits, including semiconductor integrated circuits (ICs) or large scale integrated circuits (LSIs). A plurality of electronic circuits may be integrated into one chip, or may be distributed over a plurality of chips. A plurality of chips may be integrated in one device, or may be distributed in a plurality of devices. Also, the function as the object information generation system may be realized by the cloud (cloud computing).
- in the embodiment described above, object information is generated on the assumption that one object region corresponds to the image of one object; however, in consideration of the possibility that one object region contains images of a plurality of objects, it may be determined whether or not the object region can be separated.
- Fig. 8(a) shows a scene in which a plurality of objects exist in the target space
- Fig. 8(b) shows a segmented image in which the objects are captured.
- two objects OB1 and OB2 are arranged horizontally with respect to the object information generating system 1, and captured as the same segmented image.
- the pixels in the range in which the images of the two objects OB1 and OB2 appear are extracted as candidate areas, and are extracted as one object area based on their connectivity with each other.
- the object information generation system 1 determines whether the object regions can be separated, that is, whether the object regions of the objects OB1 and OB2 can be separated.
- the object information generation unit 22 executes the separability determination process twice for each object area to determine whether the object area can be separated. That is, the object area separability determination is performed once in the horizontal direction and once in the vertical direction.
- the separability determination process first, the separability determination is performed for the object area in the horizontal direction.
- a window WD5 (width Δu3, height 2Δv3) is set, centered at (u″ − Δu3/2, v″) and including the left half of the area within the circumscribed rectangle.
- the object OB2 on the right side of the screen is far away and the object OB1 on the left side is near, so the relationship among the calculated distances dl, dc, and dr is dl < dc < dr.
- the object information generator 22 generates and outputs object information for the left and right objects OB1 and OB2, respectively.
- instead of generating object information for each of the sub-regions determined to be separable in the separability determination process, the object information generation unit 22 may generate one piece of object information for the object region and include in it information on whether or not the object region can be separated. Alternatively, the coordinates at which it can be separated may be output.
- the number of times to determine whether separation is possible is not limited to two, and may be one or three or more.
- further, a distance drl may be calculated with a window that cuts out the left half of the right half of the circumscribed rectangle, and a distance dlr with a window that cuts out the right half of the left half. The distances drl, dc, and dlr may then be used to determine again whether the object can be separated. By repeating this process, it can be determined, for example, whether an object that slopes gently in the horizontal direction has a step in the depth direction near its center (i.e., whether multiple objects overlap).
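The horizontal separability test above compares the distances from the left-half, whole, and right-half windows. A minimal sketch (the threshold `th` is an illustrative assumption; the patent does not specify one):

```python
def separable(dl, dc, dr, th=0.1):
    # dl, dc, dr: distances computed with windows over the left half,
    # the whole circumscribed rectangle, and the right half of the
    # object region.  A monotonic step (e.g. dl < dc < dr) larger than
    # th between the halves suggests that two objects at different
    # depths share one object region.
    return abs(dr - dl) > th and min(dl, dr) <= dc <= max(dl, dr)
```

The same test applied with the window axis rotated 90° gives the vertical separability determination.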
- the shape and size of the window are not limited to those described above.
- the light receiving unit 12 is configured to output a plurality of segmented images
- the object region extraction unit 21 and the object information generation unit 22 use the segmented images to extract the object region and generate the object information.
- instead, the object information generating system 1 may include an image synthesizing unit that uses the plurality of segmented images output by the light receiving unit 12 to generate a distance image storing distance information for each pixel, and the object region extraction unit 21 and the object information generation unit 22 may extract the object region and generate the object information using the distance image.
- the distance corresponding to the distance section with the highest signal level is calculated for each pixel and stored as the pixel value of the pixel in the distance image.
- a threshold Th_b may be set; the distance corresponding to the highest-signal-level distance segment is stored as the pixel value in the distance image, and if the highest signal level among the plurality of distance segments is less than or equal to Th_b, a value indicating that the measurement is invalid is stored instead.
- the method of generating the pixel values of the distance image is not limited to the above.
- the distance may be calculated in smaller units and stored as pixel values of the distance image.
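The per-pixel synthesis described above (highest-level segment wins, with a Th_b validity check) can be sketched directly. The choice of −1.0 as the invalid marker is an assumption of this sketch:

```python
import numpy as np

INVALID = -1.0  # marker for invalid measurements (arbitrary choice)

def synthesize_distance_image(images, d, th_b=0.0):
    # images: (n_segments, H, W) signal levels; d: center distances of
    # the segments.  For each pixel, store the distance of the segment
    # with the highest signal level; if that level is <= Th_b, store
    # the invalid marker instead.
    k = images.argmax(axis=0)        # index of best segment per pixel
    peak = images.max(axis=0)
    dist = np.asarray(d, dtype=float)[k]
    dist[peak <= th_b] = INVALID
    return dist
```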
- the image synthesizing unit in this modified example applies the above-described processing to all pixels, and outputs the distance image to the object region extracting unit 21.
- the object region extraction unit 21 extracts the connected pixel group as one object region.
- the determination of connectivity is not limited to this, and for example, if a pixel of interest is connected in any one of the four directions of up, down, left, and right, it may be considered to be connected.
- alternatively, a binary image may be created in which 1 is stored at the coordinates of pixels having the same distance value and 0 elsewhere, and connectivity may be determined on this binary image after preprocessing it with morphological operations, a median filter, or the like.
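The connectivity grouping described above can be sketched as a breadth-first search over the distance image. The 8-neighborhood and the −1.0 invalid marker are assumptions of this sketch (the text also allows a 4-neighborhood):

```python
from collections import deque

def extract_regions(dist_img, invalid=-1.0):
    # Group 8-connected pixels holding the same (valid) distance value
    # into object regions, each returned as a list of (v, u) coordinates.
    h, w = len(dist_img), len(dist_img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for v in range(h):
        for u in range(w):
            if seen[v][u] or dist_img[v][u] == invalid:
                continue
            val, q, region = dist_img[v][u], deque([(v, u)]), []
            seen[v][u] = True
            while q:
                cv, cu = q.popleft()
                region.append((cv, cu))
                for dv in (-1, 0, 1):
                    for du in (-1, 0, 1):
                        nv, nu = cv + dv, cu + du
                        if (0 <= nv < h and 0 <= nu < w
                                and not seen[nv][nu]
                                and dist_img[nv][nu] == val):
                            seen[nv][nu] = True
                            q.append((nv, nu))
            regions.append(region)
    return regions
```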
- the object information generating unit 22 uses the pixel values of the distance image generated by the image synthesizing unit to generate information related to the three-dimensional position coordinates and three-dimensional shape of the object, and to determine whether the object region can be separated; the distance value within a region is likewise calculated from the distance image.
- specifically, the object information generation unit 22 sets the center coordinates of the window to (u′, v′), denotes by Ck the number of pixels in the window whose distance equals dk, and calculates the distance by the following equation (4).
- the object information generation unit 22 uses d(u', v') to generate information related to the three-dimensional position coordinates and three-dimensional shape of the object, or to determine whether or not the object can be separated.
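The exact form of equation (4) is not reproduced in this text, so the following sketch assumes a Ck-weighted average over the distance values observed in the window, which matches the description of counting pixels equal to each dk:

```python
from collections import Counter

def window_distance(dist_img, center, half_w, half_h, invalid=-1.0):
    # Assumed formula-(4)-style calculation on the distance image:
    # count, for each distance value dk, the number Ck of window
    # pixels equal to dk, then take the Ck-weighted average.
    counts = Counter()
    for v in range(center[0] - half_h, center[0] + half_h + 1):
        for u in range(center[1] - half_w, center[1] + half_w + 1):
            val = dist_img[v][u]
            if val != invalid:
                counts[val] += 1       # Ck for this distance value
    total = sum(counts.values())
    return sum(dk * ck for dk, ck in counts.items()) / total
```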
- the object information generation system according to the present disclosure can generate object distance information in a short time, and is therefore useful.
- 10 Imaging unit
- 20 Signal processing unit
- 21 Object region extraction unit
- 22 Object information generation unit
Abstract
Description
An object information generation system according to one aspect of the present disclosure includes: an imaging unit that captures a plurality of segmented images respectively corresponding to a plurality of distance segments obtained by dividing a target space; and a signal processing unit that processes the plurality of segmented images to generate information about an object existing in the target space. The signal processing unit includes: an object region extraction unit that extracts, from the plurality of segmented images, an object region that is a pixel region including an image of the object; and an object information generation unit that determines a window for cutting out pixels to be calculated for the object region, performs calculation using information of a plurality of pixels within the window in two or more of the plurality of segmented images, and generates distance information of the object.
FIG. 1 is a block diagram showing the configuration of the object information generation system according to the embodiment. As shown in FIG. 1, the object information generation system 1 includes an imaging unit 10, a signal processing unit 20, and an output unit 30. The object information generation system 1 acquires information on the distance to an object using the TOF method (TOF: Time Of Flight) and processes the generated images. The object information generation system 1 can be used, for example, in a surveillance camera that detects objects (people), a system that tracks the flow of people in a factory or commercial facility to improve labor productivity or analyze consumer purchasing behavior, or a vehicle-mounted obstacle detection system.
[2-1. Imaging unit]
As described above, the imaging unit 10 includes a light emitting unit 11, a light receiving unit 12, and a control unit 13. The imaging unit 10 is configured to emit measurement light from the light emitting unit 11 into the target space, capture with the light receiving unit 12 the reflected light from objects existing in the target space, measure the distance to the object using the TOF method, and generate images.
The light emitting unit 11 is configured to project the measurement light into the target space, and includes a light source 111 for this purpose. The measurement light is pulsed light, and preferably has a single wavelength, a relatively short pulse width, and a relatively high peak intensity.
The light receiving unit 12 is configured to receive the reflected light from objects existing in the target space and generate images. The light receiving unit 12 includes an image sensor 121 having a plurality of pixels, and generates segmented images from the received reflected light. An avalanche photodiode is arranged in each pixel; other photodetecting elements may be used instead. Each pixel can be switched between an exposed state in which it receives reflected light and a non-exposed state in which it does not. In the exposed state, the light receiving unit 12 outputs a pixel signal based on the reflected light received at each pixel. The signal level of the pixel signal corresponds to the number of light pulses received by the pixel; it may instead correlate with another characteristic of the light, such as the reflected light intensity.
The control unit 13 is configured to control light emission by the light emitting unit 11 and light reception by the light receiving unit 12. The control unit 13 is composed of, for example, a microcomputer having a processor and memory, and the processor functions as the control unit 13 by executing an appropriate program. The program may be prerecorded in the memory, or may be provided through a telecommunications line such as the Internet or from a non-transitory recording medium such as a memory card. The control unit may further have a setting reception unit such as a keyboard so that the control method can be changed in response to settings from an operator.
The signal processing unit 20 is configured to generate information about an object from the plurality of segmented images generated by the imaging unit 10. The signal processing unit 20 includes an object region extraction unit 21 and an object information generation unit 22.
The object region extraction unit 21 is configured to extract, from the segmented images generated by the imaging unit 10, an object region that is a pixel region including an image of an object.
The object information generation unit 22 is configured to generate information about an object included in the object region extracted by the object region extraction unit 21.
The output unit 30 is configured to output the object information generated by the object information generation unit 22 to the external device 2. The external device 2 is, for example, a display device such as a liquid crystal display or an organic EL (Electro Luminescence) display. The output unit 30 causes the external device 2 to display the object information generated by the object information generation unit 22.
Functions similar to those of the object information generation system may be embodied as an object information generation method. The object information generation method acquires information on the distance to an object using the TOF method (TOF: Time Of Flight) and processes the generated images.
[5-1. First modification]
In the embodiment described above, object information is generated on the assumption that one object region corresponds to the image of one object. However, in consideration of the possibility that one object region contains images of a plurality of objects, it may be determined whether or not the object region can be separated.
In this case, the relationship dl < dc < dr holds.
In the example described above, the light receiving unit 12 outputs a plurality of segmented images, and the object region extraction unit 21 and the object information generation unit 22 extract object regions and generate object information using the segmented images. The configuration is not limited to this: the object information generation system 1 may include an image synthesizing unit that uses the plurality of segmented images output by the light receiving unit 12 to generate a distance image storing distance information for each pixel, and the object region extraction unit 21 and the object information generation unit 22 may extract object regions and generate object information using the distance image.
10 Imaging unit
20 Signal processing unit
21 Object region extraction unit
22 Object information generation unit
Claims (9)
- 1. An object information generation system comprising: an imaging unit that captures a plurality of segmented images respectively corresponding to a plurality of distance segments obtained by dividing a target space; and a signal processing unit that processes the plurality of segmented images to generate information about an object existing in the target space, wherein the signal processing unit comprises: an object region extraction unit that extracts, from the plurality of segmented images, an object region that is a pixel region including an image of the object; and an object information generation unit that determines a window for cutting out pixels to be calculated for the object region, performs calculation using information of a plurality of pixels within the window in two or more of the plurality of segmented images, and generates distance information of the object.
- 2. The object information generation system according to claim 1, wherein the object information generation unit sets a plurality of different windows for the object region, and generates distance information of the object using each of the windows.
- 3. The object information generation system according to claim 2, wherein the object information generation unit generates information about the shape of the object using the distance information of the object generated using each of the windows.
- 4. The object information generation system according to claim 2, wherein the object information generation unit determines, using the distance information of the object generated using each of the windows, whether the object region can be separated.
- 5. The object information generation system according to claim 1, wherein the object information generation unit determines the number of pixels used for calculation according to the distance of the distance segment corresponding to the segmented image to be calculated.
- 6. The object information generation system according to claim 1, wherein the signal processing unit comprises an image synthesizing unit that generates, using the plurality of segmented images, a distance image in which a distance value is assigned to each pixel, and the object region extraction unit extracts the object region in the distance image.
- 7. The object information generation system according to claim 1, wherein the signal processing unit comprises an image synthesizing unit that generates, using the plurality of segmented images, a distance image in which a distance value is assigned to each pixel, and the object information generation unit generates distance information of the object using the distance image.
- 8. An object information generation method for processing a plurality of segmented images respectively corresponding to a plurality of distance segments obtained by dividing a target space to generate information about an object existing in the target space, the method comprising: extracting, from the plurality of segmented images, an object region that is a pixel region including an image of the object; determining a window for cutting out pixels to be calculated for the object region; and performing calculation using information of a plurality of pixels within the window in two or more of the plurality of segmented images to generate distance information of the object.
- 9. A program for causing a computer system including one or more processors to execute the object information generation method according to claim 8.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280014674.XA CN116848436A (zh) | 2021-02-15 | 2022-01-20 | 物体信息生成系统、物体信息生成方法以及物体信息生成程序 |
JP2022581288A JPWO2022172719A1 (ja) | 2021-02-15 | 2022-01-20 | |
US18/448,712 US20230386058A1 (en) | 2021-02-15 | 2023-08-11 | Object information generation system, object information generation method, and object information generation program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-021596 | 2021-02-15 | ||
JP2021021596 | 2021-02-15 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/448,712 Continuation US20230386058A1 (en) | 2021-02-15 | 2023-08-11 | Object information generation system, object information generation method, and object information generation program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022172719A1 true WO2022172719A1 (ja) | 2022-08-18 |
Family
ID=82838751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/001976 WO2022172719A1 (ja) | 2021-02-15 | 2022-01-20 | 物体情報生成システム、物体情報生成方法、および、物体情報生成プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230386058A1 (ja) |
JP (1) | JPWO2022172719A1 (ja) |
CN (1) | CN116848436A (ja) |
WO (1) | WO2022172719A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009281895A (ja) * | 2008-05-23 | 2009-12-03 | Calsonic Kansei Corp | 車両用距離画像データ生成装置および車両用距離画像データの生成方法 |
JP2010071704A (ja) * | 2008-09-17 | 2010-04-02 | Calsonic Kansei Corp | 車両用距離画像データ生成装置及び方法 |
JP2016136321A (ja) * | 2015-01-23 | 2016-07-28 | トヨタ自動車株式会社 | 物体検出装置及び物体検出方法 |
JP2018160049A (ja) * | 2017-03-22 | 2018-10-11 | パナソニックIpマネジメント株式会社 | 画像認識装置 |
WO2019181518A1 (ja) * | 2018-03-20 | 2019-09-26 | パナソニックIpマネジメント株式会社 | 距離測定装置、距離測定システム、距離測定方法、及びプログラム |
US10445896B1 (en) * | 2016-09-23 | 2019-10-15 | Apple Inc. | Systems and methods for determining object range |
WO2020121973A1 (ja) * | 2018-12-10 | 2020-06-18 | 株式会社小糸製作所 | 物体識別システム、演算処理装置、自動車、車両用灯具、分類器の学習方法 |
Also Published As
Publication number | Publication date |
---|---|
US20230386058A1 (en) | 2023-11-30 |
JPWO2022172719A1 (ja) | 2022-08-18 |
CN116848436A (zh) | 2023-10-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22752550 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022581288 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280014674.X Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22752550 Country of ref document: EP Kind code of ref document: A1 |