WO2023145698A1 - Camera device and image processing method - Google Patents

Camera device and image processing method

Info

Publication number
WO2023145698A1
WO2023145698A1 (PCT/JP2023/001984)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera device
captured image
memory
data
Prior art date
Application number
PCT/JP2023/001984
Other languages
English (en)
Japanese (ja)
Inventor
雄一 畑瀬
隆宏 池
利章 篠原
Original Assignee
i-PRO株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by i-PRO株式会社
Publication of WO2023145698A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/42 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/44 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/76 Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/78 Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters

Definitions

  • The present disclosure relates to a camera device and an image processing method.
  • An image processing apparatus that performs pattern matching processing on a captured image of an object is disclosed (see Patent Document 1).
  • In Patent Document 1, a temporary local score is calculated between a point of a feature image in a template image and a point at the corresponding position in a search image. Subsequently, in Patent Document 1, when it is determined that the temporary local score is 0 or more, the temporary local score is set as the local score, and when it is determined that the temporary local score is less than 0, the temporary local score is multiplied by a coefficient to obtain the local score.
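  • As a concrete illustration of that scoring rule (not part of the disclosure itself), the sketch below applies it elementwise with NumPy; the way the temporary score is computed and the coefficient value are assumptions for illustration only.

```python
import numpy as np

def local_scores(template_feats: np.ndarray, search_feats: np.ndarray,
                 coeff: float = 0.5) -> np.ndarray:
    """Apply the rule described for Patent Document 1: keep non-negative
    temporary local scores as-is, multiply negative ones by a coefficient."""
    temporary = template_feats * search_feats  # illustrative temporary local score
    return np.where(temporary >= 0, temporary, temporary * coeff)

template_feats = np.array([[0.9, -0.2], [0.1, -0.8]])
search_feats = np.array([[0.8, 0.3], [-0.4, -0.7]])
print(local_scores(template_feats, search_feats))  # negative products are damped
```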
  • According to Patent Document 1, the time taken for the pattern matching processing performed by the image processing apparatus can be shortened even if there are differences in the surface state of the object in the captured image.
  • However, Patent Document 1 does not consider reducing the delay of the entire processing performed by the image processing apparatus (for example, a series of processing including input of image data transmitted from a camera, transfer to memory, pattern matching processing, and output of the pattern matching processing results).
  • the present disclosure has been devised in view of the above-described conventional circumstances, and aims to make it possible to reduce the overall delay of processing using a camera device.
  • The present disclosure provides a camera device including: a memory for inputting and outputting a signal; an image sensor that images an object and outputs to the memory, as captured images of the object, a first captured image having a first resolution and a second captured image having a second resolution smaller than the first resolution; an image processing unit that performs detection processing for detecting whether or not a detection target is included in the captured image output from the memory; and an interface that outputs a result of the detection processing.
  • The present disclosure also provides an image processing method including: imaging an object and outputting, to a memory for inputting and outputting data, a first captured image having a first resolution and a second captured image having a second resolution smaller than the first resolution as the captured images of the object; performing detection processing for detecting whether or not a detection target is included in a captured image output from the memory; and outputting a result of the detection processing.
  • FIG. 1 is a diagram showing a use case example of the robot control system according to the first embodiment.
  • FIG. 2 is a diagram showing a system configuration example of the robot control system according to the first embodiment.
  • FIG. 3 is a diagram showing an example of operation modes of the camera device according to Embodiment 1.
  • FIG. 4 is a diagram showing the data flow in time series in the first operation example of the camera device according to the first embodiment.
  • FIG. 5 is a flowchart showing an operation procedure example according to the first operation example according to the first embodiment.
  • FIG. 6 is a diagram showing the data flow in time series in the second operation example of the camera device according to the first embodiment.
  • FIG. 7 is a flowchart illustrating an example of an operation procedure according to the second operation example according to Embodiment 1.
  • FIG. 8 is a diagram showing a data flow in time series in the third operation example of the camera device according to Embodiment 1.
  • FIG. 9 is a flowchart illustrating an example of an operation procedure according to a third example of operation according to Embodiment 1.
  • FIG. 10 is a block diagram illustrating a configuration example of an image processing unit of a camera device according to a modification of Embodiment 1.
  • FIG. 11 is a block diagram showing a configuration example of a camera device according to a comparative example.
  • FIG. 12 is a flow chart showing an example of the operation procedure of the camera device according to the comparative example.
  • FIG. 11 is a block diagram showing a configuration example of a camera device according to a comparative example.
  • The subject of the camera device 10Z according to the comparative example of FIG. 11 is the same as the subject of the camera device 10 (see FIG. 2) according to Embodiment 1, which will be described later, and is an object such as a workpiece that moves along a belt conveyor arranged in a factory.
  • the camera device 10Z of FIG. 11 also includes an image processing unit 16Z capable of executing processing (for example, object detection processing) using AI (Artificial Intelligence).
  • Except for the image processing unit 16Z, the camera device 10Z of FIG. 11 has a general camera configuration (for example, a lens 11Z, an image sensor 12Z, a CPU (Central Processing Unit) 13Z, an ISP 14Z, a memory 15Z, and a data output I/F (Interface) 17Z).
  • FIG. 12 is a flow chart showing an example of the operation procedure of the camera device according to the comparative example.
  • The image sensor 12Z of the camera device 10Z in FIG. 11 uses an object (for example, a work moving along a belt conveyor arranged in a factory) as a subject, and performs photoelectric conversion for converting an optical image of the subject incident through the lens 11Z into an electrical signal (step StZ1).
  • the image sensor 12Z of the camera device 10Z in FIG. 11 generates captured data (for example, a captured image of a subject) having a resolution of, for example, FullHD (that is, 1920 ⁇ 1080 dots, which is 2 MP (megapixels)).
  • the image sensor 12Z of the camera device 10Z of FIG. 11 may generate imaging data in either monochrome format or color format.
  • The camera device 10Z of FIG. 11 takes in the imaging data of the subject imaged in step StZ1 from the image sensor 12Z to the CPU 13Z (step StZ2). Subsequently, the camera device 10Z of FIG. 11 transfers the imaging data taken in at step StZ2 from the CPU 13Z to the memory 15Z and saves (stores) it (step StZ3).
  • the ISP 14Z of the camera device 10Z in FIG. 11 reads out the imaging data stored in step StZ3 from the memory 15Z, and performs resizing processing on the read imaging data (step StZ4).
  • the resizing process is a process for converting the size of imaged data into a size suitable for object search processing (see below) executed by the image processing unit 16Z.
  • When imaging data in the color format instead of the monochrome format is obtained from the image sensor 12Z, the ISP 14Z of the camera device 10Z of FIG. 11 may, for example, perform a process of converting from the RGB format to the YUV format together with the resizing process. Further, when monochrome imaging data is obtained from the image sensor 12Z, the ISP 14Z of the camera device 10Z of FIG. 11 does not convert from the RGB format to the YUV format.
  • the ISP 14Z of the camera device 10Z in FIG. 11 transfers and saves (stores) the imaging data after the resizing process in step StZ4 to the memory 15Z (step StZ5). That is, in the camera device 10Z of FIG. 11, transfer of image data to the memory 15Z is performed twice.
  • the image processing unit 16Z of the camera device 10Z reads out the image data after the resizing process stored in step StZ5 from the memory 15Z, and executes object detection processing (for example, pattern matching) on the image data after the resizing process (step StZ6).
  • the image processing unit 16Z of the camera device 10Z calculates the detection result of the object appearing in the imaging data after the resizing process (for example, coordinate data indicating the position of the object) (step StZ7).
  • the data output I/F 17Z of the camera device 10Z outputs the calculation result of step StZ7 (that is, the coordinate data of the object) to a subsequent device (for example, the robot controller connected to the camera device 10Z) (step StZ8).
  • In the camera device 10Z of FIG. 11, the imaging data is thus transferred to the memory 15Z twice, in steps StZ3 and StZ5.
  • In the camera device 10Z of FIG. 11, there is a possibility that a data transmission delay occurs every time image data is transferred to the memory 15Z. Moreover, as the resolution of the image sensor 12Z increases (for example, to FullHD, that is, 1920×1080 dots, which is 2 MP (megapixels)), the processing load of resizing to a size suitable for input to the image processing unit 16Z increases. Therefore, in the camera device 10Z of FIG. 11, the delay time of the entire processing of the camera device 10Z becomes long.
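  • To make the cost of the extra memory transfer concrete, the toy latency budget below compares the comparative pipeline (two transfers to the memory 15Z plus ISP resizing) with a pipeline that transfers once; every per-stage time is a hypothetical placeholder, not a measured value.

```python
# Hypothetical per-stage latencies in milliseconds (illustrative only).
comparative_ms = {
    "sensor -> CPU capture (StZ2)": 4.0,
    "CPU -> memory transfer (StZ3)": 3.0,
    "ISP resize of FullHD frame (StZ4)": 6.0,
    "resized data -> memory transfer (StZ5)": 3.0,
    "object detection (StZ6, StZ7)": 4.0,
    "result output (StZ8)": 1.0,
}
single_transfer_ms = {
    "sensor crop/bin + direct transfer to memory": 2.0,
    "object detection": 4.0,
    "result output": 1.0,
}

for name, stages in (("comparative", comparative_ms),
                     ("single-transfer", single_transfer_ms)):
    print(f"{name}: {sum(stages.values()):.1f} ms per frame")
```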
  • the present disclosure describes an example of a camera device and an image processing method that can achieve low delay in the entire processing using a camera device that repeatedly images an object.
  • FIG. 1 is a diagram showing a use case example of the robot control system 100 according to the first embodiment.
  • the robot control system 100 includes at least a camera device 10, a robot controller 30, and a robot 50.
  • The camera device 10 and the robot controller 30, and the robot controller 30 and the robot 50, are connected so as to enable input/output of data or signals.
  • the robot control system 100 is placed in a production facility such as a factory.
  • The robot control system 100 controls a robot 50 that mounts parts at specified positions of an object (for example, a work) that is transferred along the transfer direction DR1 on the belt conveyor CB deployed in the production facility.
  • The use cases of the robot control system 100 according to the first embodiment are not limited to mounting the above-described parts, and may include labeling of workpieces, screwing of workpieces, assembly of parts, machining of parts, welding, painting, bonding, and the like.
  • The camera device 10 (specifically, the camera devices 10A, 10B, ...) is fixed to the tip of a robot arm (specifically, the robot arms AR1, AR2, ...). The tip of the robot arm is, for example, the robot hand or the vicinity of the end effector.
  • The camera device 10 takes an image of an object (for example, works WK1, WK2, ...) being transferred on the belt conveyor CB (step T1).
  • The camera device 10 performs pattern matching processing using the captured image captured in step T1 and AI (Artificial Intelligence) provided in advance so as to be executable, and detects whether or not the captured image includes the target object.
  • The camera device 10 sends the object detection result (for example, the coordinates indicating the position of the object included in the captured image) to the robot controller 30. Details of the internal configuration of the camera device 10 will be described later with reference to FIG. 2.
  • The robot controller 30 inputs the object detection result from the camera device 10 and recognizes the object (step T2). Subsequently, the robot controller 30 generates a movement instruction for the robot 50 (for example, an instruction for driving or controlling an actuator (not shown) provided in the robot 50) according to the position where the object was detected, and outputs it to the robot 50 (step T2).
  • the robot 50 (specifically, the robots 50A, 50B, . . . ) moves based on instructions from the robot controller 30 (step T3).
  • The movement performed by the robot 50 based on the instructions of the robot controller 30 is, for example, a series of operations for mounting the part picked up by the robot hand at a specified position on the object being transferred on the belt conveyor CB. It should be noted that, as described above, the content of the motion performed by the robot 50 is adaptively determined according to the use case of the robot control system 100, and is obviously not limited to the above-described component mounting. A sketch of the controller-side flow follows below.
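  • The controller-side loop implied by steps T2 and T3 can be sketched as follows; the wire format (two little-endian floats per detection) and the move_arm_to() helper are hypothetical stand-ins, since the disclosure does not specify them.

```python
import socket
import struct

def move_arm_to(x: float, y: float) -> None:
    """Hypothetical actuator command (step T3)."""
    print(f"move robot hand toward ({x:.1f}, {y:.1f})")

def robot_controller_loop(host: str = "0.0.0.0", port: int = 5000) -> None:
    """Receive object coordinates from the camera device and issue moves."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            payload, _ = sock.recvfrom(8)         # one detection result
            x, y = struct.unpack("<ff", payload)  # step T2: recognize the object
            move_arm_to(x, y)                     # step T3: drive the actuator
```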
  • FIG. 2 is a diagram showing a system configuration example of the robot control system 100 according to the first embodiment.
  • the robot control system 100 includes at least a camera device 10, a robot controller 30, and robots 50A and 50B.
  • Although the robots 50A and 50B have the same configuration here, they may have different configurations. For convenience of explanation, only two robots are shown, but the number of robots may be one or three or more.
  • The camera device 10 includes a substrate on which a lens 11, an image sensor 12, a CPU (Central Processing Unit) 13, an ISP (Image Signal Processor) 14, a memory 15, an image processing unit 16, and a data output I/F 17 are mounted.
  • This substrate is arranged in a housing (not shown) of the camera device 10.
  • the lens 11 (an example of an imaging unit) includes, for example, a focus lens and a zoom lens.
  • Incident light, which is light reflected by a subject (for example, an object such as works WK1, WK2, ...), enters the lens 11.
  • When a visible light cut filter and an IR cut filter are arranged between the lens 11 and the image sensor 12, the incident light that has entered the lens 11 passes through either one of the filters and reaches the light receiving surface (imaging surface) of the image sensor 12, where an optical image of the subject is formed.
  • For the lens 11, lenses with various focal lengths or shooting ranges can be used depending on the installation location of the camera device 10, the shooting application, or the like.
  • The camera device 10 may include a lens driving unit (not shown) that controls driving of the lens 11.
  • In that case, the CPU 13 or the ISP 14 adjusts (changes) internal parameters related to the driving of the lens 11 (for example, the position of the focus lens, or the position of the zoom lens corresponding to the zoom magnification), and controls the lens 11 via the lens driving unit (not shown).
  • Alternatively, the lens 11 may be fixedly arranged.
  • A visible light cut filter (an example of an imaging unit) has a characteristic of blocking visible light (for example, light having a wavelength of 400 to 760 [nm]) among the incident light transmitted through the lens 11 (that is, the light reflected by an object).
  • The visible light cut filter blocks visible light out of the incident light that has passed through the lens 11.
  • The camera device 10 may include a filter driving unit (not shown) that controls driving of the visible light cut filter.
  • When a filter driving unit is provided, the visible light cut filter is driven by the filter driving unit (not shown) so that it is positioned between the lens 11 and the image sensor 12 for a predetermined period (for example, at night) based on a control signal from the CPU 13 or the ISP 14.
  • An IR cut filter (an example of an imaging unit) has a characteristic of passing visible light (for example, light having a wavelength of 400 to 760 [nm]) and blocking near-infrared light (for example, light having a wavelength of 780 [nm] or more).
  • The IR cut filter blocks near-infrared light and allows visible light to pass through, out of the incident light that has passed through the lens 11.
  • The camera device 10 may include a filter driving unit (not shown) that controls driving of the IR cut filter. When a filter driving unit is provided, the IR cut filter is driven by the filter driving unit (not shown) so that it is positioned between the lens 11 and the image sensor 12 for a predetermined period (for example, daytime) based on a control signal from the CPU 13 or the ISP 14.
  • The image sensor 12 (an example of an imaging unit) includes a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor in which a plurality of pixels suitable for imaging visible light or near-infrared light are arranged, an exposure control circuit (not shown), and a signal processing circuit (not shown).
  • the image sensor 12 performs photoelectric conversion for converting light received by a light-receiving surface (imaging surface) composed of a plurality of pixels into an electric signal at predetermined intervals.
  • the predetermined interval of photoelectric conversion is determined according to the so-called frame rate (fps: frame per second). For example, when the frame rate is 120 [fps], the predetermined interval is 1/120 [second].
  • The image sensor 12 obtains a red component signal (R signal), a green component signal (G signal), and a blue component signal (B signal) according to the light reflected by the subject (objects such as works WK1, WK2, ...) as electric signals, continuously in time for each pixel.
  • a signal processing circuit (not shown) of the image sensor 12 converts the electrical signal (analog signal) into digital imaging data.
  • a data transmission bus for direct transfer is provided between the image sensor 12 and the memory 15 .
  • the image sensor 12 transfers the digital format imaging data to the memory 15 via the data transmission bus at predetermined intervals (see above) according to the frame rate.
  • the memory 15 stores the digital format imaging data received from the image sensor 12 .
  • the image sensor 12 may send the digital format imaging data to the CPU 13 at predetermined intervals (see above) according to the frame rate.
  • FIG. 3 is a diagram showing an example of operation modes of the camera device 10 according to the first embodiment.
  • The operation mode table TBL1 shown in FIG. 3 indicates the operation modes of the camera device 10, each defined as a record combining the items of sensor frame rate, sensor output resolution, and processing delay time.
  • the sensor frame rate of the operation mode table TBL1 corresponds to the frame rate of the image sensor 12.
  • the sensor output resolution of the operation mode table TBL1 corresponds to the output resolution of the image sensor 12 (predetermined resolution described later).
  • The processing delay time in the operation mode table TBL1 corresponds to a processing delay time allowed for the processing of the camera device 10 as a whole.
  • The operation mode of the camera device 10 can be set by the user by selecting the frame rate and output resolution (predetermined resolution described later) of the image sensor 12 according to the processing delay time. Further, the CPU 13 or the ISP 14 may generate a signal for changing the operation mode of the camera device 10 and send it to the image sensor 12 based on detection of an event or input of an external signal while the camera device 10 is operating. In this case, the image sensor 12 changes the operation mode (specifically, frame rate and output resolution) of the camera device 10 based on the signal from the CPU 13 or the ISP 14. Thereby, the camera device 10 can dynamically change the operation mode at any necessary timing.
  • the processing delay time shown in FIG. 3 is an example, and may vary slightly depending on the performance, frame rate, resolution, etc. of the image processing unit 16 used.
  • Operation mode 1 is a combination of a frame rate of 480 [fps], an output resolution of VGA (that is, 640×480 dots), and a processing delay time of 10 [msec]. When operation mode 1 is set, each value corresponding to operation mode 1 is used. Thereby, the camera device 10 can suppress the entire processing executed in the camera device 10 in order to send the detection result of the target object to the robot controller 30 (specifically, data transfer of the captured image captured by the image sensor 12 to the memory 15, resizing processing as necessary, object detection processing by the image processing unit 16, and transmission processing of the object detection result to the robot controller 30) within the processing delay time of operation mode 1. Therefore, in operation mode 1, processing delay can be reduced to the extent that processing congestion does not occur in the camera device 10, and an instruction based on the detection result of the object can be promptly given to the robot controller 30.
  • Operation mode 2 is a combination of a frame rate of 240 [fps], an output resolution of 1.3 MP (that is, 1280×960 dots), and a processing delay time of 20 [msec]. When operation mode 2 is set, the frame rate and the output resolution are set to the values corresponding to operation mode 2.
  • Thereby, the camera device 10 can suppress the entire processing executed in the camera device 10 in order to send the detection result of the target object to the robot controller 30 (specifically, data transfer of the captured image captured by the image sensor 12 to the memory 15, resizing processing if necessary, object detection processing by the image processing unit 16, and transmission processing of the object detection result to the robot controller 30) within the processing delay time of operation mode 2. Therefore, in operation mode 2, processing delay can be reduced to the extent that processing congestion does not occur in the camera device 10, and an instruction based on the detection result of the object can be quickly given to the robot controller 30.
  • Operation mode 3 is a combination of a frame rate of 120 [fps], an output resolution of FullHD (that is, 1920×1080 dots), and a processing delay time of 40 [msec]. When operation mode 3 is set, each value corresponding to operation mode 3 is used (see FIG. 6 or FIG. 8).
  • Thereby, the camera device 10 can suppress the entire processing executed in the camera device 10 in order to send the detection result of the target object to the robot controller 30 (specifically, data transfer of the captured image captured by the image sensor 12 to the memory 15, resizing processing if necessary, object detection processing by the image processing unit 16, and transmission processing of the object detection result to the robot controller 30) within the allowable processing delay time of operation mode 3. Therefore, in operation mode 3, processing delay can be reduced to the extent that processing congestion does not occur in the camera device 10, and an instruction based on the detection result of the object can be quickly given to the robot controller 30.
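  • One way to hold the records of the operation mode table TBL1 in software is sketched below; the field names and the derived frame interval are illustrative assumptions, while the delay budgets are the values from FIG. 3.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationMode:
    frame_rate_fps: int
    output_resolution: tuple      # (width, height) in dots
    delay_budget_ms: float        # allowed processing delay per FIG. 3

    @property
    def frame_interval_ms(self) -> float:
        return 1000.0 / self.frame_rate_fps

MODES = {
    1: OperationMode(480, (640, 480), 10.0),    # VGA, high speed
    2: OperationMode(240, (1280, 960), 20.0),   # 1.3 MP, medium speed
    3: OperationMode(120, (1920, 1080), 40.0),  # FullHD, low speed
}

for n, m in MODES.items():
    print(f"mode {n}: one frame every {m.frame_interval_ms:.1f} ms, "
          f"budget {m.delay_budget_ms:.0f} ms")
```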
  • The image sensor 12 adjusts (changes) internal parameters related to the exposure conditions of the camera device 10 (for example, exposure time, gain, and frame rate) by means of an exposure control circuit based on an exposure control signal from the CPU 13 or the ISP 14.
  • The image sensor 12 can capture imaging data of an object at a predetermined resolution (for example, 1920×1080 dots, corresponding to 2.0 MP (megapixels) of FullHD), and can perform cropping or binning processing on it.
  • Cropping is a process in which the image sensor 12 cuts out an image area of a specific range (for example, a bright central portion) that is part of the entire image area of the captured data. Therefore, the cropped image is an image with a reduced size (in other words, resolution) compared to the captured data before cropping.
  • Binning is a process in which the pixel components (for example, pixel values) of a plurality of adjacent pixels (for example, 2×2 or 4×4 pixels) are combined and handled in a pseudo manner as one pixel component.
  • Therefore, the binned image is an image with a reduced number of pixels (in other words, resolution) compared to the imaging data before binning.
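  • Both reductions are easy to see numerically; the NumPy sketch below crops a central VGA-sized window and performs 2×2 binning on a stand-in FullHD luminance frame (the averaging used for binning is one common choice, assumed here for illustration).

```python
import numpy as np

def crop_center(frame: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Cut out a central image area of the frame (cropping)."""
    h, w = frame.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return frame[top:top + out_h, left:left + out_w]

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Aggregate each 2x2 block of pixel values into one pixel (binning)."""
    h, w = frame.shape[:2]
    h, w = h - h % 2, w - w % 2
    return frame[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full_hd = np.random.rand(1080, 1920)         # stand-in FullHD luminance frame
print(crop_center(full_hd, 480, 640).shape)  # (480, 640): VGA-sized crop
print(bin_2x2(full_hd).shape)                # (540, 960): quarter the pixels
```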
  • By cropping or binning, the image sensor 12 can generate and output imaging data having a resolution smaller than the predetermined resolution (for example, VGA (Video Graphics Array), that is, 640×480 dots, corresponding to operation mode 1, or 1.3 MP, that is, 1280×960 dots, corresponding to operation mode 2).
  • Note that the predetermined resolution of the image sensor 12 is not limited to FullHD (1920×1080 dots), and may be 1.3 MP (1280×960 dots) or VGA (640×480 dots) (see FIG. 3).
  • the CPU 13 is a processor that functions as a controller that controls the overall operation of the camera device 10 .
  • the CPU 13 performs control processing for supervising the operation of each unit of the camera device 10, data input/output processing with each unit of the camera device 10, data arithmetic processing, and data storage processing.
  • the CPU 13 operates according to programs and control data stored in the memory 15 .
  • the CPU 13 uses the memory 15 during operation, and transfers data or information generated or acquired by the CPU 13 to the memory 15 for temporary storage. Also, the CPU 13 transfers the imaging data from the image sensor 12 to the memory 15 for temporary storage and/or sends to the ISP 14 .
  • the CPU 13 also has a timer (not shown) or an illuminance sensor (not shown).
  • The CPU 13 generates a control signal for instructing whether the filter to be placed between the lens 11 and the image sensor 12 is the visible light cut filter or the IR cut filter based on the output of the timer or the illuminance sensor, and sends it to the filter driving unit (not shown).
  • the ISP 14 is a processor that controls various image processing performed within the camera device 10 .
  • The ISP 14 reads the imaging data output from the image sensor 12 out of the memory 15 and performs image processing using the read imaging data. Further, the ISP 14 performs resizing processing for converting the size of the imaging data read from the memory 15 into a size suitable for object detection processing (for example, pattern matching) by the image processing unit 16.
  • ISP 14 uses memory 15 during operation and transfers data or information generated or obtained by ISP 14 to memory 15 for temporary storage.
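  • As a rough software analogue of those two ISP steps (assuming OpenCV and a mock RGB frame; the target size and the interpolation choice are illustrative, not prescribed by the disclosure):

```python
import cv2
import numpy as np

def isp_prepare(rgb_frame: np.ndarray, target=(640, 480)) -> np.ndarray:
    """Convert an RGB frame to YUV, then resize it for the detector input."""
    yuv = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2YUV)
    return cv2.resize(yuv, target, interpolation=cv2.INTER_AREA)

mock_frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
print(isp_prepare(mock_frame).shape)  # (480, 640, 3)
```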
  • the memory 15 includes at least RAM (Random Access Memory) and ROM (Read Only Memory).
  • the memory 15 temporarily holds a program necessary for executing the operation of the camera device 10 and data or information generated during the operation of each unit of the camera device 10 .
  • the RAM is, for example, a work memory that is used when each part of the camera device 10 operates.
  • the ROM pre-stores and retains, for example, a program and control data for controlling each unit of the camera device 10 .
  • the memory 15 may store the data of the operation mode table TBL1 shown in FIG.
  • the CPU 13 may set one of the operation modes in the operation mode table TBL1 stored in the memory 15 by a user's setting operation via an external terminal (not shown), for example. Thereby, the camera device 10 can flexibly set one of the operation modes defined in the operation mode table TBL1 according to the use case of the camera device 10.
  • The image processing unit 16 (an example of the AI processing unit) is configured using, for example, a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor) and a memory.
  • The image processing unit 16 uses, for example, AI (artificial intelligence) to perform detection processing (for example, pattern matching) for detecting the position of the object from the imaging data of the object (for example, objects such as works WK1, WK2, ...) imaged by the image sensor 12.
  • the image processing unit 16 may perform the above-described detection processing using pattern matching processing, which is known as existing image processing, without using AI.
  • A data transmission bus P1 for direct transfer is provided between the memory 15 and the image processing unit 16.
  • The image processing unit 16 acquires the imaging data used for object detection processing from the memory 15 via the data transmission bus P1, and executes detection processing of the object included in the imaging data. If the GPU that constitutes the image processing unit 16 has a processing performance of 4 TOPS, for example, the image processing unit 16 can execute object detection processing on the order of 1 to 4 milliseconds.
  • the image processing unit 16 can execute a learned model (not shown) that has already been generated by machine learning, for example.
  • the trained model corresponds to a data set in which a detection target object to be detected by the image processing unit 16 is determined by machine learning based on a plurality of image data.
  • Objects to be detected by the image processing unit 16 are, for example, various parts used in a production factory where the robot control system 100 is installed.
  • the image processing unit 16 uses the input imaging data having a prescribed size and the above-described learned model to execute detection processing of the detection target included in the imaging data.
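  • For the non-AI variant mentioned above, classical template matching is enough to return object coordinates; the sketch below uses OpenCV's matchTemplate as one such existing image processing method (the threshold and the planted template are illustrative; an AI-based unit would instead run the learned model on the same input).

```python
import cv2
import numpy as np

def detect(frame: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Return (x, y) of the best template match in the frame, or None."""
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return best_xy if best_score >= threshold else None

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
template = frame[200:240, 300:340].copy()  # plant a known patch as the "work"
print(detect(frame, template))             # -> (300, 200)
```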
  • A data transmission bus for direct transfer is provided between the image processing unit 16 and the data output I/F 17.
  • The image processing unit 16 sends the data of the detection result obtained by the object detection processing to the data output I/F 17 via this data transmission bus.
  • the data output I/F 17 (an example of an output interface) is configured using a circuit capable of inputting/outputting data or signals to/from the robot controller 30 connected to the rear stage of the camera device 10 .
  • The data output I/F 17 outputs the object detection result data from the image processing unit 16 to the robot controller 30 according to a predetermined data transmission method (for example, GigE: Gigabit Ethernet (registered trademark)).
  • Note that the data transmission method by wired connection between the data output I/F 17 and the robot controller 30 need not be limited to GigE.
  • the robot controller 30 inputs the data of the object detection result output from the data output I/F 17 by a predetermined data transmission method (for example, GigE), and performs processing (object recognition processing) for recognizing the object. Execute.
  • the robot controller 30 generates an appropriate movement instruction (for example, an instruction to drive or control an actuator (not shown) provided in the robot 50) according to the detected position of the object based on the processing result of the object recognition process. and output to the robot 50 (specifically, the robots 50A, 50B, . . . ).
  • The robot 50 (more specifically, the robots 50A, 50B, ...) includes at least robot drivers (specifically, robot drivers 51A, 51B, ...) and robot motors (specifically, robot motors 52A, 52B, ...). Although only two robots 50 included in the robot control system 100 are shown in FIG. 2 to simplify the explanation, one robot or three or more robots may be used as described above.
  • the robot driver 51A controls the robot motor 52A to generate a driving force for causing the robot 50A to perform an operation based on instructions from the robot controller 30. Further, the robot driver 51B controls the robot motor 52B to generate a driving force for causing the robot 50B to perform an operation based on an instruction from the robot controller 30.
  • The robot motor 52A generates a driving force for moving the tip of the robot 50A (for example, a robot hand or an end effector) toward the object being transferred on the belt conveyor CB (for example, for moving the part sucked by the robot hand to the object) in accordance with the instructions from the robot controller 30.
  • Similarly, the robot motor 52B generates a driving force for moving the tip of the robot 50B (for example, a robot hand or an end effector) toward the object being transferred on the belt conveyor CB in accordance with the instructions from the robot controller 30.
  • The robot 50 (specifically, the robots 50A, 50B, ...) includes a robot base (not shown) and the robot arms AR1 and AR2 in addition to the robot drivers and the robot motors.
  • Each of the robot arms AR1 and AR2 is sequentially connected by, for example, six joints (specifically, three bending joints and three torsion joints).
  • a robot hand or an end effector is arranged at the tip of each of the robot arms AR1 and AR2 to attract and hold a component to be mounted at a specified position on each of the works WK1 and WK2 and to mount the component.
  • each of the camera devices 10A and 10B is fixed to each tip of the robot arms AR1 and AR2 (for example, a robot hand or an end effector).
  • FIG. 4 is a diagram showing the data flow in time series in the first operation example of the camera device 10 according to the first embodiment.
  • FIG. 5 is a flow chart showing an operation procedure example according to the first operation example of FIG. 4.
  • the operation mode of the camera device 10 corresponds to operation mode 1 (see FIG. 3) of the operation mode table TBL1.
  • The image sensor 12 of the camera device 10 captures imaging data having a predetermined resolution (for example, the maximum resolution is FullHD) (see step St1 in FIG. 5). The imaging data may be either monochrome image data or color image data (for example, RGB format).
  • the image sensor 12 performs cropping or binning on the imaged data imaged in step St1 to a size suitable for input to the image processing unit 16 .
  • The image sensor 12 outputs the cropped or binned imaging data to the image processing board B2 at 480 [fps] (in other words, every 2.1 [milliseconds]) (see step St2 in FIG. 5).
  • The image processing board B2 is a mounting board on which the CPU 13, the ISP 14, the memory 15, the image processing unit 16, and the data output I/F 17 are arranged. That is, image data of VGA resolution (for example, data of a luminance level equivalent to VGA, or data configured in RGB format) is input from the image sensor 12 to the memory 15 of the image processing board B2 every 2.1 [milliseconds] via the route RT1a (that is, the data transmission bus) (step X15; see step St3 in FIG. 5).
  • the memory 15 temporarily stores the imaging data input to the memory 15 in step X15.
  • the memory 15 inputs the imaging data received at step X15 to the image processing section 16 via the route RT1b (that is, the data transmission bus P1) (step X16).
  • The processing of this step X16 is completed within one frame period (2.1 [milliseconds] or less at 480 fps).
  • the image processing unit 16 uses the imaging data input in step X16 to execute detection processing of an object included in the imaging data (see step St4 in FIG. 5).
  • the image processing unit 16 extracts the calculation result of the coordinates of the object included in the imaging data through the object detection process (see step St5 in FIG. 5).
  • the image processing unit 16 sends the extracted detection result data to the data output I/F 17 via the route RT1c (that is, the data transmission bus) (step X17, see step St6 in FIG. 5).
  • The detection processing of these objects and the transmission of the detection result data are also completed within one frame period (2.1 [milliseconds] or less at 480 fps).
  • the data output I/F 17 sends the detection result data from the image processing unit 16 to the robot controller 30 according to the route RT1d (in other words, a predetermined data transmission method (eg, GigE)).
  • This processing is also completed within one frame period. Therefore, the entire processing of the camera device 10 described with reference to FIG. 4 can be suppressed to about 10 [milliseconds]. Even if it is assumed that image data is input from the image sensor 12 to the image processing board B2 every 2.1 [milliseconds], the possibility of processing congestion occurring in the image processing board B2 is low. Therefore, in the camera device 10 according to the present embodiment, it is possible to reduce the delay in the entire processing of the camera device 10, and it can be expected that the control of the robot controller 30 will not be hindered.
  • Note that the ISP 14 may read the imaging data from the memory 15, convert it from the RGB format to the YUV format, resize it to a size suitable for input to the image processing unit 16, and store the resized imaging data in the memory 15 again. In that case, the camera device 10 inputs the resized imaging data stored in the memory 15 to the image processing unit 16.
  • Further, the camera device 10 generates an exposure control signal for adjusting internal parameters that determine the exposure conditions for imaging (that is, exposure) by the image sensor 12 based on the imaging data converted from the RGB format to the YUV format in the ISP 14, and sends it to the image sensor 12 (AE control).
  • Further, the camera device 10 may stream the imaging data stored in the memory 15 as it is to an external terminal (not shown) according to a predetermined data transmission method (for example, GigE).
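  • The per-frame budget of this first operation example can be expressed as a simple loop invariant: every frame must be fully handled within one 2.1-millisecond frame period. The sketch below only illustrates that check; the frame object and the detection step are placeholders, not the actual implementation.

```python
import time

FRAME_INTERVAL_MS = 1000.0 / 480   # operation mode 1: a new frame every ~2.1 ms

def detect_objects(frame) -> tuple:
    """Placeholder for detection on one VGA frame (steps X16, X17)."""
    return (0, 0)

for i in range(5):
    t0 = time.perf_counter()
    frame = object()                      # stand-in for a cropped/binned frame
    coords = detect_objects(frame)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    # Congestion check: processing must finish within one frame period.
    assert elapsed_ms <= FRAME_INTERVAL_MS, "processing congestion"
    print(f"frame {i}: {coords} in {elapsed_ms:.4f} ms")
```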
  • FIG. 6 is a diagram showing the data flow in time series in the second operation example of the camera device 10 according to the first embodiment.
  • FIG. 7 is a flowchart showing an operation procedure example according to the second operation example of the camera device 10 according to the first embodiment.
  • the same reference numerals or step numbers are given to the elements or processes that overlap with those of FIGS. 4 and 5 to simplify the description, and different contents will be described.
  • the operation mode of the camera device 10 corresponds to operation mode 1 (see FIG. 3) of the operation mode table TBL1.
  • the image sensor 12 of the camera device 10 captures imaging data having a predetermined resolution (for example, the maximum resolution is FullHD) (see step St1 in FIG. 7).
  • the image sensor 12 performs cropping or binning on the imaged data imaged in step St1 to a size suitable for input to the image processing unit 16 .
  • The image sensor 12 outputs the cropped or binned imaging data to the image processing board B2 at 480 [fps] (in other words, every 2.1 [milliseconds]) (see step St2 in FIG. 7).
  • Image data of VGA resolution (for example, data of a luminance level equivalent to VGA, or data configured in RGB format) is input from the image sensor 12 to the memory 15 of the image processing board B2 every 2.1 [milliseconds] via the route RT1a (that is, the data transmission bus) (step X15; see step St3 in FIG. 7).
  • the memory 15 temporarily stores the imaging data input to the memory 15 in step X15.
  • The ISP 14 reads the imaging data from the memory 15 via the route RT2b (that is, the data transmission bus between the ISP 14 and the memory 15), converts it from the RGB format to the YUV format (step X14(1)), and resizes it to a size suitable for input to the image processing unit 16 (step X14(2); see step St11 in FIG. 7).
  • the ISP 14 stores the resized imaging data in the memory 15 via the route RT3b (that is, the data transmission bus between the ISP 14 and the memory 15) (step X16, see step St12 in FIG. 7).
  • the processing up to step X15, step X14(1), step X14(2), and step X16 is processed in approximately 2.1 milliseconds.
  • the image processing unit 16 uses the imaging data input in step X16 to execute detection processing of the object included in the imaging data (see step St4 in FIG. 7).
  • the image processing unit 16 extracts the calculation result of the coordinates of the object included in the imaging data through the object detection process (see step St5 in FIG. 7).
  • the image processing unit 16 sends the extracted detection result data to the data output I/F 17 via the route RT1c (that is, the data transmission bus P2) (step X17, see step St6 in FIG. 7).
  • The detection processing of these objects and the transmission of the detection result data are also completed within one frame period (2.1 [milliseconds] or less at 480 fps).
  • the data output I/F 17 sends the detection result data from the image processing unit 16 to the robot controller 30 according to the route RT1d (in other words, a predetermined data transmission method (eg, GigE)).
  • This processing is also completed within one frame period. Therefore, the entire processing of the camera device 10 described with reference to FIG. 6 can be suppressed to about 10 [milliseconds], substantially similar to the first operation example of FIG. 4. Even if it is assumed that image data is input from the image sensor 12 to the image processing board B2 every 2.1 [milliseconds], the possibility of processing congestion occurring in the image processing board B2 is low. Therefore, in the camera device 10 according to the present embodiment, it is possible to reduce the delay in the entire processing of the camera device 10, and it can be expected that the control of the robot controller 30 will not be hindered.
  • FIG. 8 is a diagram showing the data flow in time series in the third operation example of the camera device 10 according to the first embodiment.
  • FIG. 9 is a flow chart showing an operation procedure example according to the third operation example of the camera device 10 according to the first embodiment.
  • the same reference numerals or step numbers are given to the elements or processes that overlap with those of FIGS. 6 and 7 to simplify the description, and different contents will be described.
  • the operation mode of the camera device 10 corresponds to operation mode 3 (see FIG. 3) of the operation mode table TBL1.
  • the image sensor 12 of the camera device 10 captures imaging data having a predetermined resolution (for example, the maximum resolution is FullHD) (see step St1A in FIG. 9).
  • the image sensor 12 performs cropping or binning to a size suitable for input to the image processing unit 16 on the imaged data imaged in step St1A.
  • The image sensor 12 outputs the cropped or binned imaging data to the image processing board B2 at 120 [fps] (in other words, every 8.4 [milliseconds]) (see step St2 in FIG. 9).
  • Image data of FullHD resolution (for example, data of a luminance level equivalent to FullHD, or data configured in RGB format) is input from the image sensor 12 to the memory 15 of the image processing board B2 every 8.4 [milliseconds] via the route RT1a (that is, the data transmission bus) (step X15; see step St3 in FIG. 9).
  • the memory 15 temporarily stores the imaging data input to the memory 15 in step X15.
  • The ISP 14 reads the imaging data from the memory 15 via the route RT2b (that is, the data transmission bus between the ISP 14 and the memory 15), converts it from the RGB format to the YUV format (step X14(1)), and resizes it to a size suitable for input to the image processing unit 16 (step X14(2); see step St11A in FIG. 9). As a result, for example, image data corresponding to VGA can be obtained from image data corresponding to FullHD.
  • the ISP 14 stores the resized imaging data in the memory 15 via the route RT3b (that is, the data transmission bus between the ISP 14 and the memory 15) (step X16, see step St12 in FIG. 9).
  • the processing up to step X15, step X14(1), step X14(2), and step X16 is processed in approximately 8.4 [milliseconds].
  • the image processing unit 16 uses the imaging data input in step X16 to execute detection processing of the object included in the imaging data (see step St4 in FIG. 9).
  • The image processing unit 16 extracts the calculation result of the coordinates of the object included in the imaging data through the object detection processing (see step St5 in FIG. 9). Subsequently, the image processing unit 16 sends the extracted detection result data to the data output I/F 17 via the route RT1c (that is, the data transmission bus P2) (step X17; see step St6 in FIG. 9). It takes about 2 to 4 milliseconds to detect these objects and transmit the detection result data.
  • Then, the data output I/F 17 sends the detection result data from the image processing unit 16 to the robot controller 30 via the route RT1d (in other words, according to a predetermined data transmission method (for example, GigE)).
  • This process takes about 1 [millisecond]. Therefore, the entire processing of the camera device 10 described with reference to FIG. 8 can be suppressed to 40 [milliseconds] or less. Even if it is assumed that image data is input from the image sensor 12 to the image processing board B2 every 8.4 [milliseconds], the possibility of processing congestion occurring in the image processing board B2 is low. Therefore, in the camera device 10 according to the present embodiment, it is possible to reduce the delay in the entire processing of the camera device 10, and it can be expected that the control of the robot controller 30 will not be hindered.
  • Note that FIG. 5 can also be expressed as a flowchart showing an example of an operation procedure combining the first operation example and the second operation example of the camera device 10 according to Embodiment 1.
  • In the third operation example, the image sensor 12 may omit cropping or binning to a size suitable for input to the image processing unit 16, whereas in the example of FIG. 5 (that is, the first operation example), the image sensor 12 performs cropping or binning to a size suitable for input to the image processing unit 16. Since the other processing contents are common to FIGS. 5 and 8, detailed description thereof will be omitted.
  • FIG. 10 is a block diagram showing an internal configuration example of the image processing section 16A of the camera device 10 according to the modification of the first embodiment.
  • In the camera device 10 according to the modification of Embodiment 1, the image processing unit 16A can take in the imaging data by DMA (Direct Memory Access) transfer to the memory (image processing memory 162) in the image processing unit 16A.
  • the image processing unit 16A shown in FIG. 10 includes an image processing engine 161, an image processing memory 162, a peripheral function processing unit 163, and a DMA processing unit 164.
  • the image processing engine 161, the image processing memory 162, the peripheral function processing section 163, and the DMA processing section 164 are connected to each other via a data transmission bus so that data can be input/output.
  • the image processing engine 161 is, for example, a GPU, and executes object detection processing by the image processing unit 16A. This detection result data is stored in the image processing memory 162 from the image processing engine 161 by the DMA processing unit 164 .
  • the image processing memory 162 is, for example, a RAM, and temporarily stores the detection result data based on the object detection processing by the image processing engine 161 , which is transferred by the DMA processing unit 164 .
  • the peripheral function processing unit 163 is a circuit capable of inputting/outputting data or signals to/from the data output I/F 17, for example.
  • the peripheral function processing unit 163 acquires the detection result data DMA-transferred from the image processing memory 162 by the DMA processing unit 164 via the data transmission bus, and sends the data to the data output I/F 17 .
  • The DMA processing unit 164 DMA-transfers the detection result data stored in the image processing memory 162 from the image processing memory 162 to the peripheral function processing unit 163 via the data transmission bus. As a result, the input/output of the detection result data is executed by the DMA processing unit 164 instead of the image processing engine 161, and the processing load of the image processing engine 161, which dominates the processing load in the image processing unit 16A, can be reduced.
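  • In software terms, the effect of handing the transfer to the DMA processing unit 164 resembles moving result I/O onto a separate worker so the detection engine never blocks; the queue-and-thread sketch below is only an analogy under that assumption, not the hardware mechanism itself.

```python
import queue
import threading

result_queue: "queue.Queue" = queue.Queue()

def output_worker() -> None:
    """Drain detection results toward the data output I/F (here: print)."""
    while True:
        coords = result_queue.get()
        if coords is None:            # sentinel: stop the worker
            break
        print("output I/F <-", coords)

worker = threading.Thread(target=output_worker, daemon=True)
worker.start()

for coords in [(120, 80), (130, 82), (140, 85)]:  # mock detection results
    result_queue.put(coords)   # the engine hands off and keeps processing

result_queue.put(None)
worker.join()
```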
  • As described above, the camera device 10 according to Embodiment 1 includes: the memory 15 that inputs and outputs signals; the image sensor 12 that images an object and outputs to the memory 15, as captured images of the object, a first captured image having a first resolution (for example, FHD) and a second captured image having a second resolution (for example, VGA) smaller than the first resolution; the image processing unit 16 that performs detection processing for detecting whether or not a detection target is included in the captured image output from the memory 15; and an interface (for example, the data output I/F 17) that outputs the result of the detection processing.
  • Thereby, the camera device 10 transmits the imaging data stored once in the memory 15 directly to the image processing unit 16. Therefore, it is possible to reduce the delay of the entire processing using the camera device 10.
  • the image sensor 12 can execute cropping processing for cutting out the image area of the captured image, and generates the second captured image by performing the cropping processing on the first captured image. Accordingly, the camera device 10 can output image data having a second resolution smaller than the first resolution to the image processing board B2 by cropping processing in the image sensor 12, and can reduce the processing load on the image processing board B2.
  • Further, the image sensor 12 can execute binning processing that aggregates a plurality of pixel components included in a captured image into one pixel component, and generates the second captured image by performing the binning processing on the first captured image and outputs it to the memory 15.
  • the camera device 10 can output image data having a second resolution smaller than the first resolution to the image processing board B2 by binning processing in the image sensor 12, and can reduce the processing load on the image processing board B2.
  • the pixel components of the first captured image and the second captured image are luminance components.
  • Thereby, the camera device 10 can reduce the overall delay of processing using monochrome (black-and-white) imaging data.
  • the pixel components of the first captured image and the second captured image are RGB components.
  • Thereby, the camera device 10 can reduce the overall delay of processing using color (for example, RGB) format imaging data.
  • the camera device 10 according to Embodiment 1 also includes a bus (for example, a data transmission bus P1) that connects the memory 15 and the image processing unit 16.
  • the memory 15 inputs the first captured image and the second captured image to the image processing section 16 via the bus.
  • a path is formed through which the first captured image and the second captured image are directly transmitted from the memory 15 to the image processing unit 16, so that the processing load on the image processing board B2 can be reduced.
  • Further, the image processing unit 16 is realized by AI (Artificial Intelligence) that executes the detection processing using a learning model (learned model) for determining a detection target.
  • Thereby, the camera device 10 can use, in the image processing unit 16, a learned model generated through machine learning or the like for the detection processing for detecting whether or not a detection target is included in a captured image, so that the detection accuracy of the object can be improved.
  • the camera device 10 further includes a control section (for example, the CPU 13 or the ISP 14) that outputs a control signal for controlling the image sensor 12.
  • The control unit switches between and outputs a first control signal that causes the image sensor 12 to execute a first mode (for example, operation mode 3) in which the first captured image is output at a first frame rate per unit time (for example, the sensor frame rate of operation mode 3: 120 fps), and a second control signal that causes the image sensor 12 to execute a second mode (for example, operation mode 1) in which the second captured image is output at a second frame rate larger than the first frame rate (for example, the sensor frame rate of operation mode 1: 480 fps).
  • the camera device 10 can switch the operation mode of the camera device 10 between the operation mode 1 that enables high-speed imaging of about 480 fps and the operation mode 3 that enables low-speed imaging of about 120 fps.
  • The image sensor 12 can also output, as a captured image of the object, a third captured image having a third resolution (for example, 1.3 MP) smaller than the first resolution (for example, FHD) and larger than the second resolution (for example, VGA). In this case, the control unit (for example, the CPU 13 or the ISP 14) switches among and outputs a third control signal that causes the image sensor 12 to execute a third mode (for example, operation mode 2) in which the third captured image is output, the first control signal, and the second control signal.
  • Thereby, the camera device 10 can switch among three operation modes: operation mode 1, which enables high-speed imaging at about 480 fps; operation mode 2, which enables medium-speed imaging at about 240 fps; and operation mode 3, which enables low-speed imaging at about 120 fps. One possible selection policy is sketched below.
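  • A control unit choosing among the three control signals could, for example, pick the highest-resolution mode whose delay budget still meets the application's requirement; this selection policy is purely an assumption for illustration, since the disclosure only states that the control signals are switched.

```python
DELAY_BUDGET_MS = {1: 10.0, 2: 20.0, 3: 40.0}  # per the operation mode table TBL1

def select_mode(required_delay_ms: float) -> int:
    """Pick the slowest (highest-resolution) mode whose budget still fits."""
    for mode in sorted(DELAY_BUDGET_MS, key=DELAY_BUDGET_MS.get, reverse=True):
        if DELAY_BUDGET_MS[mode] <= required_delay_ms:
            return mode
    raise ValueError("no operation mode meets the required delay")

print(select_mode(25.0))  # -> 2 (a 20 ms budget fits; mode 3's 40 ms does not)
```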
  • the use case of the camera device 10 according to Embodiment 1 described above is not limited to the robot control system 100 described above, and can be applied to various use cases described below, for example.
  • High-speed line inspection by a fixedly installed camera device 10: The camera device 10 captures, as an object, an image of a case containing tablets flowing (moving) on the line.
  • The robot control system is placed, for example, in a production facility such as a factory, and removes abnormal tablets (for example, defective tablets) based on the captured image of the tablet-containing case. Since the entire processing of the camera device 10 can be performed with low delay, high-speed tablet inspection can be realized.
  • Accurate tracking of a moving object by the camera device 10: The camera device 10 may be mounted on a multicopter-type unmanned aerial vehicle such as a drone.
  • A drone equipped with the camera device 10 can track an object (for example, a suspect in an escape incident) with the camera device 10 and accurately shoot the object with an electron gun or the like from a distance. Even if both the object and the drone on which the camera device 10 is mounted are moving bodies, the entire processing of the camera device 10 can be performed with low delay, so real-time feedback is possible, and the object can be threatened or shot accurately.
  • Installation of the camera device 10 in the safety device of an autonomous vehicle, etc.: By fixedly installing the camera device 10 at the rear of the vehicle body of an autonomous vehicle, the overall processing of the camera device 10 has low delay, so it is possible to avoid collisions with other vehicles. In addition, when the camera device 10 is installed as a surveillance camera in the city, the entire processing of the camera device 10 has low delay, so feedback (for example, recognizing a dangerous situation and stopping) can be performed with low delay, making it possible to avoid collisions of automobiles running on the roadway.
  • Quick response of the camera device 10 to changes in environmental brightness: Since the overall processing of the camera device 10 has low delay, the exposure time or the lighting output can be set appropriately in response to changes in the brightness of the external environment, appropriate control can be performed according to the environment, and an image of the desired image quality can be obtained.
  • the present disclosure is useful as a camera device and an image processing method that make it possible to reduce the overall delay of processing using a camera device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Camera device comprising: a memory for inputting and outputting a signal; an image sensor that images an object and outputs to the memory, as captured images of the object, a first captured image having a first resolution and a second captured image having a second resolution lower than the first resolution; an image processing unit that performs detection processing for detecting whether a detection target is included in the captured images output from the memory; and an interface that outputs the result of the detection processing.
PCT/JP2023/001984 2022-01-28 2023-01-23 Camera device and image processing method WO2023145698A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-012213 2022-01-28
JP2022012213 2022-01-28

Publications (1)

Publication Number Publication Date
WO2023145698A1 true WO2023145698A1 (fr) 2023-08-03

Family

ID=87471980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/001984 WO2023145698A1 (fr) 2022-01-28 2023-01-23 Camera device and image processing method

Country Status (1)

Country Link
WO (1) WO2023145698A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017162141A (ja) * 2016-03-09 2017-09-14 株式会社デンソーアイティーラボラトリ Image identification system, control device therefor, and image identification method
JP2017192123A (ja) * 2016-04-12 2017-10-19 キヤノン株式会社 Image recording apparatus and control method therefor
JP2020188310A (ja) * 2019-05-10 2020-11-19 ソニーセミコンダクタソリューションズ株式会社 Image recognition device and image recognition method


Similar Documents

Publication Publication Date Title
CN109691079B (zh) Imaging device and electronic apparatus
US20190158747A1 (en) Monitoring camera and swing correction method
JP2008060873A (ja) Multiple angle-of-view camera
JP4781243B2 (ja) Drive recorder and image acquisition timing control method therefor
CN108496056B (zh) Imaging device
US10914960B2 (en) Imaging apparatus and automatic control system
EP1742463B1 (fr) Image shake correction apparatus with output for video image synthesis
JP2009089158A (ja) Imaging apparatus
US7872671B2 (en) Image pickup apparatus and image pickup method
WO2023145698A1 (fr) Camera device and image processing method
JP2006243373A (ja) Video signal processing device, imaging device equipped with the video signal processing device, video signal processing method, and video signal processing program
JP2006224291A (ja) Robot system
JP5256060B2 (ja) Imaging device
US7917021B2 (en) Portable apparatus
WO2023189077A1 (fr) Control device and imaging device
CN103379273A (zh) Imaging device
WO2021152877A1 (fr) Solid-state imaging device, electronic device, and imaging system
JP6238629B2 (ja) Image processing method and image processing device
JP2009101464A (ja) Automatic jig control method and automatic jig control device including a camera and an image processing device
US11818470B2 (en) Image generation device, image generation method, and vehicle control system
JP2004286699A (ja) Image position detection method and image position detection device
JPH09322047A (ja) Video camera control method, video camera control system, video camera control device, and video camera
JP2003018450A (ja) Still image capturing device
JP2019022028A (ja) Imaging apparatus, control method therefor, and program
JPH05306907A (ja) Lens measurement control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23746909

Country of ref document: EP

Kind code of ref document: A1