CN112956183A - Image capturing apparatus, control method, and program - Google Patents

Image capturing apparatus, control method, and program

Info

Publication number
CN112956183A
Authority
CN
China
Prior art keywords
image
vehicle
information
reading
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980070029.8A
Other languages
Chinese (zh)
Other versions
CN112956183B (en)
Inventor
山中刚
汤川泰宏
山口卓也
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Semiconductor Solutions Corp
Sony Group Corp
Original Assignee
Sony Semiconductor Solutions Corp
Sony Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corp, Sony Group Corp filed Critical Sony Semiconductor Solutions Corp
Publication of CN112956183A
Application granted
Publication of CN112956183B
Legal status: Active
Anticipated expiration

Classifications

    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N25/441 Extracting pixel data from image sensors by partially reading an SSIS array, reading contiguous pixels from selected rows or columns, e.g. interlaced scanning
    • H04N25/46 Extracting pixel data from image sensors by combining or binning pixels
    • H04N25/75 Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N25/79 Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
    • B60R1/26 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area to the rear of the vehicle with a predetermined field of view
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R2300/302 Combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • G03B37/00 Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • G06T3/60 Rotation of whole images or parts thereof
    • G06T7/60 Analysis of geometric attributes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Traffic Control Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)

Abstract

The present technology relates to an image capturing apparatus, a control method, and a program that make it possible to easily provide an image suitable for operating a vehicle. A control unit controls reading of an image from an image sensor, which captures an image to be displayed on a display unit of the vehicle, based on vehicle information acquired by the vehicle. The present technology is applicable to, for example, an observation system that displays an image of the area behind a vehicle.

Description

Image capturing apparatus, control method, and program
Technical Field
The present technology relates to an image capturing apparatus, a control method, and a program. In particular, the present technology relates to, for example, an image capturing apparatus, a control method, and a program that make it possible to easily provide an image suitable for the operation of a vehicle.
Background
For example, an observation system has been proposed in which a camera serving as an image capturing apparatus is mounted on the rear of a vehicle such as an automobile, and an image of the area behind the vehicle captured by the camera is displayed.
Examples of the image of the area behind the vehicle provided by the observation system include an image of an area located further rearward than an area immediately behind the rear of the vehicle, and an image of an area immediately behind the rear of the vehicle.
Here, the image of the area located further rearward than the area immediately behind the rear of the vehicle corresponds, for example, to the image seen in a so-called interior rear-view mirror, i.e., a Class I mirror, and is hereinafter also referred to as a back-mirror (BM) image. Further, the image of the area immediately behind the rear of the vehicle shows the rear of the vehicle and the area immediately behind it, and is hereinafter also referred to as a rear-view (RV) image.
When a vehicle starts to ascend or descend a slope or travels on an uneven road surface, the vehicle may be inclined. Further, the vehicle may also be inclined depending on, for example, the load carried on the vehicle or the occupants of the vehicle.
When the vehicle is inclined, the camera mounted on the vehicle is inclined as well, which changes the proportion of sky, road, and the like in the BM image compared with when the vehicle is level.
If the observation system has been adjusted so that a BM image suitable for vehicle operation is displayed while the vehicle is level, the change in the proportion of sky, road, and the like caused by the vehicle's inclination may result in a BM image that is no longer suitable for vehicle operation.
For example, when the vehicle travels on a flat road surface, the displayed BM image enables the driver to sufficiently recognize, for example, another vehicle approaching from behind or an obstacle behind the vehicle. When the vehicle is inclined, however, the BM image may contain less of the information the driver needs to recognize such vehicles or obstacles.
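The effect of inclination on what the BM image shows can be quantified with a simple pinhole-camera approximation: a camera pitch of θ shifts the scene vertically by roughly f·tan(θ) on the sensor. The sketch below uses illustrative lens and pixel values that are assumptions, not figures from the patent.

```python
import math

def vertical_pixel_shift(pitch_deg, focal_length_mm, pixel_pitch_um):
    # Approximate vertical scene shift (in pixels) on the sensor when the
    # camera pitches by pitch_deg, under a pinhole-camera model.
    shift_mm = focal_length_mm * math.tan(math.radians(pitch_deg))
    return shift_mm / (pixel_pitch_um / 1000.0)

# Example (illustrative values): a 6 mm lens with 3 um pixels pitched by
# 5 degrees shifts the horizon by roughly 175 pixels, noticeably changing
# the proportion of sky and road in the BM image.
shift = vertical_pixel_shift(5.0, 6.0, 3.0)
```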
Therefore, for example, a technique has been proposed in which images captured by a camera are stored in a storage device downstream of the camera, and a necessary image is cut out from the stored captured image when the vehicle travels uphill or downhill (see, for example, Patent Document 1).
CITATION LIST
Patent document
Patent Document 1: Japanese Patent No. 6245274
Disclosure of Invention
Technical problem
When a necessary image is cut out from the captured images stored in the storage device, a captured image whose size (number of pixels) is significantly larger than that of the necessary image must be read from the camera. Reading such a large captured image from the camera may cause the frame rate to drop.
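The bandwidth cost of reading a full frame only to crop it afterwards can be illustrated with back-of-the-envelope arithmetic; the resolutions and bit depth below are illustrative assumptions, not figures from the patent.

```python
def frame_bytes(width, height, bits_per_pixel=12):
    # Raw data volume of one frame read out from the sensor.
    return width * height * bits_per_pixel // 8

# A full 4K frame versus a 1920x540 region containing only the needed image
# (illustrative sizes).
full = frame_bytes(3840, 2160)   # 12,441,600 bytes
roi = frame_bytes(1920, 540)     #  1,555,200 bytes
ratio = full / roi               # full-frame readout moves 8x more data
```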
Further, beyond the technique of cutting out a necessary image from a captured image stored in a storage device downstream of the camera, there is a need for a technique that makes it possible to easily provide an image suitable for vehicle operation.
The present technology has been made in view of the above circumstances, and aims to make it possible to easily provide an image suitable for vehicle operation.
Solution to the problem
An image capturing apparatus according to the present technology includes: an image sensor that captures an image displayed on a display unit of a vehicle; and a controller that controls reading of the image from the image sensor based on vehicle information acquired by the vehicle.
A control method according to the present technology includes controlling reading of an image from an image sensor that captures an image to be displayed on a display portion of a vehicle, based on vehicle information acquired by the vehicle. A program according to the present technology causes a computer to operate as a controller that performs such control.
In the image capturing apparatus, the control method, and the program according to the present technology, reading of an image from an image sensor that captures an image, which is displayed on a display portion of a vehicle, is controlled based on vehicle information acquired by the vehicle.
Note that the image capturing apparatus may be a stand-alone apparatus or an internal block included in a single apparatus.
Further, the program may be provided by being transmitted via a transmission medium or by being stored in a recording medium.
Drawings
Fig. 1 is a perspective view showing an external configuration example of a vehicle 10 including an observation system according to the present technology.
Fig. 2 is a perspective view showing an example of the configuration of the interior of the vehicle 10.
Fig. 3 is a block diagram showing a first example of the configuration of the observation system included in the vehicle 10.
Fig. 4 is a diagram describing an example of control of reading of the BM image and the RV image (i.e., an image read from the captured image) by the controller 43.
Fig. 5 is a diagram describing a first example of the relationship between the state of the vehicle 10 and the BM image.
Fig. 6 is a diagram describing an example of control of reading of a read image by the controller 43 according to the inclination information.
Fig. 7 shows an example of the configuration of the image sensor 32.
Fig. 8 is a flowchart describing an example of display processing of displaying a BM image performed in the first example of the configuration of the observation system.
Fig. 9 is a block diagram showing a second example of the configuration of the observation system included in the vehicle 10.
Fig. 10 is a diagram describing a second example of the relationship between the state of the vehicle 10 and the BM image.
Fig. 11 is a diagram describing an example of control of reading of a read image by the controller 71 according to the inclination information.
Fig. 12 is a diagram further describing an example of control of reading of a read image by the controller 71 according to the inclination information.
Fig. 13 is a flowchart describing an example of display processing of displaying a BM image performed in the second example of the configuration of the observation system.
Fig. 14 is a block diagram showing a third example of the configuration of the observation system included in the vehicle 10.
Fig. 15 is a diagram describing an example of control of reading of a read image by the controller 71 according to the inclination information.
Fig. 16 is a flowchart describing an example of display processing of displaying a BM image performed in the third example of the configuration of the observation system.
Fig. 17 is a block diagram showing a fourth example of the configuration of the observation system included in the vehicle 10.
Fig. 18 shows an example of an image that can be output by the image sensor 32.
Fig. 19 is a diagram describing an example of a vehicle transmission bandwidth that can be used for data transmission in the vehicle 10.
Fig. 20 is a diagram describing a first example of control performed by the controller 136 to adjust the data amount of the BM image and the RV image.
Fig. 21 is a diagram describing a second example of control performed by the controller 136 to adjust the data amounts of the BM image and the RV image.
Fig. 22 is a flowchart describing an example of display processing performed in the observation system to display the BM image and the RV image.
Fig. 23 is a block diagram showing a fifth example of the configuration of the observation system included in the vehicle 10.
Fig. 24 is a diagram describing an example of control performed by the controller 181 to extract a BM image and an RV image from a captured image.
Fig. 25 is a flowchart describing an example of display processing performed in the observation system to display the BM image and the RV image.
Fig. 26 is a block diagram showing an example of the configuration of an embodiment of a computer to which the present technology is applied.
Detailed Description
< example of configuration of vehicle including Observation System >
Fig. 1 is a perspective view showing an example of the configuration of the appearance of a vehicle 10 including an observation system to which the present technology is applied.
For example, a camera unit 11 serving as an image capturing apparatus that captures an image of the area behind the (four-wheeled) vehicle 10 is mounted on the rear of the vehicle 10. In Fig. 1, the camera unit 11 is mounted above the rear window of the vehicle 10.
The camera unit 11 is a wide-angle camera unit (for example, with an angle of view of 120 degrees or more), so that both the range corresponding to the BM image and the range corresponding to the RV image can appear in the captured image. Further, the camera unit 11 is a high-resolution camera unit (for example, 4K or higher), so that a distant subject in the BM image remains clearly visible. The camera unit 11 can therefore capture a high-resolution image at a wide angle.
Note that, in the camera unit 11, the BM image and the RV image are extracted from the image captured by the camera unit 11 (hereinafter also referred to as the captured image), as will be described later.
The camera unit 11 is mounted with the orientation of its optical axis adjusted such that the BM image shows the state of the area further rearward than the area immediately behind the rear of the vehicle 10, that is, the state that could be observed using an interior rear-view mirror (a Class I mirror under Regulation No. 46 of the United Nations Economic Commission for Europe (UNECE)) if such a mirror were mounted in the vehicle 10, and such that the RV image shows the state of the rear of the vehicle 10 and the area immediately behind it.
Therefore, the BM image shows the state of the area located further rearward than the area immediately behind the rear of the vehicle 10, i.e., the state that could be observed using an interior rear-view mirror mounted in the vehicle 10, while the RV image shows the state of the rear of the vehicle 10 and the area immediately behind it. The RV image is particularly useful when the vehicle 10 travels in reverse, because the area immediately behind the rear of the vehicle 10, which is a blind spot of the interior rear-view mirror, can appear in the RV image. Further, the RV image can be used to generate an overhead image of the vehicle 10 as viewed from above.
Note that the camera unit 11 is not limited to being mounted above the rear window of the vehicle 10, as long as a captured image from which the above-described BM image and RV image can be extracted can be captured. For example, in addition to being mounted above the rear window of the vehicle 10, the camera unit 11 may also be mounted at a position P11, for example, above a license plate located at the rear of the vehicle 10.
Fig. 2 is a perspective view showing an example of the internal configuration of the vehicle 10 of fig. 1.
A BM display portion 21 on which a BM image is displayed is provided in the vehicle 10 at a position where the interior rearview mirror is mounted. The BM display unit 21 is a display unit that replaces an interior mirror.
An RV display portion 22 on which an RV image is displayed is provided at a center position of an instrument panel in the vehicle 10.
Note that an in-vehicle camera 23 that captures an image of the driver is provided on the driver's-seat side of the instrument panel in the vehicle 10. In the vehicle 10, the driver's line of sight and head position are detected from the image captured and output by the in-vehicle camera 23.
Here, the in-vehicle camera 23 that captures the image of the driver may be provided at a position other than on the instrument panel, for example, at a position P21 above the BM display section 21.
< first example of configuration of Observation System >
Fig. 3 is a block diagram showing a first example of the configuration of the observation system included in the vehicle 10.
The observation system includes the camera unit 11, the BM display section 21, and the RV display section 22 described with reference to fig. 1 and 2.
The camera unit 11 includes an optical system 31, an image sensor 32, an output section 33, an acquisition section 41, a detector 42, and a controller 43.
The optical system 31 includes optical components such as a condenser and an aperture, and collects light entering the optical system 31 onto the image sensor 32.
The image sensor 32 receives light from the optical system 31 and performs photoelectric conversion to produce a captured image that contains both the BM image and the RV image. Then, under control of the controller 43, the image sensor 32 reads the BM image and the RV image from the captured image and outputs them. Here, an image read from the captured image and output by the image sensor 32 is also referred to as a read image. The read image output by the image sensor 32 is supplied to the output section 33.
The output section 33 is an output Interface (IF) that transmits the BM image and the RV image, which are read images from the image sensor 32, to the outside of the camera unit 11. The output section 33 transmits the BM image to the BM display section 21 and transmits the RV image to the RV display section 22. The BM image from the output section 33 is displayed on the BM display section 21 in accordance with the specification of the BM display section 21, and the RV image from the output section 33 is displayed on the RV display section 22 in accordance with the specification of the RV display section 22. The output section 33 can perform format conversion and other image processing on the BM image and the RV image, if necessary.
The acquisition portion 41 acquires (receives) vehicle information acquired by the vehicle 10 from the vehicle 10 through a network (vehicle information network) established in the vehicle 10, and supplies the acquired vehicle information to the detector 42.
Here, the vehicle information may be any information that the vehicle 10 can acquire, such as gyroscope information obtained from a gyroscope of the vehicle 10; suspension information related to the suspension of the vehicle 10; a front camera image obtained from a front camera that captures an image of the area in front of the vehicle 10; GPS information obtained from the Global Positioning System (GPS); traveling information, such as the vehicle speed and the traveling direction (forward or reverse), representing the traveling state of the vehicle 10; the driver's line of sight and head position obtained from an image captured by the in-vehicle camera 23; a three-dimensional (3D) map for a navigation system of the vehicle 10; and a high-definition map for an Advanced Driver Assistance System (ADAS) or autonomous driving system. For example, when the vehicle 10 includes a speed sensor, speed information output by the speed sensor may be used as the vehicle speed, and gear information indicating the state of the transmission may be used as the traveling direction.
The acquisition section 41 acquires at least one piece of vehicle information as needed, and supplies the at least one piece of vehicle information to the detector 42.
In the first example of the configuration of the observation system, the acquisition section 41 acquires, for example, gyroscope information, or GPS information and a 3D map (high-definition map) as vehicle information, and supplies the acquired vehicle information to the detector 42.
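As a rough sketch of how the acquisition section 41 and the detector 42 might be organized, the hypothetical structure below groups a few of the vehicle-information items listed above; all field names and the fallback logic are illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VehicleInfo:
    # Hypothetical container for vehicle information; fields are illustrative.
    gyro_pitch_deg: Optional[float] = None        # gyroscope information
    suspension_stroke_mm: Optional[float] = None  # suspension information
    vehicle_speed_kmh: Optional[float] = None     # traveling information
    gear_reverse: bool = False                    # traveling direction
    gps_position: Optional[Tuple[float, float]] = None

def detect_inclination(info: VehicleInfo) -> float:
    # Sketch of detector 42: use the gyroscope reading when available,
    # otherwise assume the vehicle is level.
    return info.gyro_pitch_deg if info.gyro_pitch_deg is not None else 0.0
```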
The detector 42 detects (calculates) inclination information indicating the inclination of the vehicle 10 using the vehicle information from the acquisition section 41, and supplies the detected inclination information to the controller 43.
The controller 43 controls reading of the read image from the image sensor 32 based on the vehicle information.
For example, the controller 43 controls reading of the read image from the image sensor 32 according to the inclination information detected by the detector 42 using the vehicle information.
In other words, based on the inclination information supplied from the detector 42, the controller 43 calculates the reading start position at which reading of the read image from the captured image captured by the image sensor 32 begins, sets the size (number of pixels) of the read image, and supplies the reading position, specified by the reading start position and the size, to the image sensor 32. The controller 43 thereby performs reading control, i.e., controls reading of the read image from the image sensor 32.
The image sensor 32 reads a pixel signal of a pixel at a reading position supplied from the controller 43, and outputs a read image exhibiting a pixel value corresponding to the pixel signal.
Note that the reading position may also be specified by, for example, the reading start position and a reading end position at which reading of the read image ends, instead of by the reading start position and the size.
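A minimal sketch of the two equivalent ways of specifying the reading position, together with a tilt-dependent shift of the start position; all function names and numeric values are illustrative assumptions, not taken from the patent.

```python
def read_window(start_x, start_y, width, height):
    # A reading position given as start position plus size, returned as the
    # equivalent (start, end) specification.
    return (start_x, start_y), (start_x + width - 1, start_y + height - 1)

def shifted_start_row(base_y, shift_px, sensor_height, window_height):
    # Shift the vertical reading start position by a tilt-induced pixel
    # shift, clamped so the read window stays on the sensor.
    y = int(round(base_y + shift_px))
    return max(0, min(y, sensor_height - window_height))
```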
Here, in the first example of the configuration of the observation system, the read image output by the image sensor 32 is identical to the BM image displayed on the BM display section 21 or to the RV image displayed on the RV display section 22. Therefore, in the first example, the size of the BM image displayed on the BM display section 21 or the size of the RV image displayed on the RV display section 22 is set as the size of the read image.
< control of reading of read image from captured image >
Fig. 4 is a diagram describing an example of how the controller 43 controls reading of the BM image and the RV image, i.e., the images read from the captured image.
Regarding the positional relationship between the optical system 31 and the captured image captured by the image sensor 32, the captured image (the light receiving surface of the image sensor 32) includes an image circle of the optical system 31 (the lens included therein), for example, as shown in fig. 4.
In the reading control, the controller 43 controls the reading of pixel signals from the image sensor 32 so that the specified region R11 is extracted from the captured image as the BM image, the region R11 being the region in which the area further rearward than the area immediately behind the rear of the vehicle 10 appears (the area that would be observed using an interior rear-view mirror if one were mounted in the vehicle 10). In other words, the controller 43 controls reading of pixel signals from the image sensor 32 according to the type or application of the display device or the image display function.
In addition, in the reading control, the controller 43 controls the reading of pixel signals from the image sensor 32 by supplying the reading position of the read image to the image sensor 32, thereby extracting the specified region R12 from the captured image (the image within the image circle) as the RV image, the region R12 being the region in which the rear of the vehicle 10 and the area immediately behind it appear.
According to control by the controller 43, the image sensor 32 reads the pixel signal of the region R11 corresponding to (the pixel value of) the BM image from the captured image obtained by performing image capturing, and outputs a read image exhibiting the pixel value corresponding to the pixel signal. Further, the image sensor 32 reads the pixel signal of the region R12 corresponding to the RV image from the captured image, and outputs a read image exhibiting the pixel value corresponding to the pixel signal.
The regions R11 and R12 are respectively specified by read positions of read images supplied from the controller 43 to the image sensor 32.
Note that, in the control of the reading of the BM image as the read image, the controller 43 may also calculate a read position for specifying the region R11 extracted as the BM image, from, for example, the line of sight of the driver and the position of the head included in the vehicle information.
For example, if a rear-view mirror were installed in the vehicle 10, the range appearing in the image that the driver sees using the mirror would change as the driver moves his/her line of sight or head. In the control of the reading of the BM image as the read image, the controller 43 may therefore calculate, from the driver's line of sight and head position, a reading position for specifying the region R11 extracted as the BM image, so that the driver sees a BM image covering the same range as the image that would be observed if the rear-view mirror were mounted in the vehicle 10.
To simplify the description, the description of the RV image is omitted below.
< relationship between state of vehicle 10 and BM image >
Fig. 5 is a diagram describing a first example of the relationship between the state of the vehicle 10 and the BM image.
Fig. 5(a1) shows the state of the vehicle 10 when the vehicle 10 is traveling on a flat road surface. Further, fig. 5(a2) shows an example of a BM image obtained when the vehicle 10 is running on a flat road surface.
Fig. 5(B1) shows the state of the vehicle 10 when the vehicle 10 starts driving downhill. Further, fig. 5(B2) shows an example of a BM image obtained when the vehicle 10 starts driving downhill.
Fig. 5(C1) shows the state of the vehicle 10 when the vehicle 10 starts traveling uphill. Further, fig. 5(C2) shows an example of a BM image obtained when the vehicle 10 starts traveling uphill.
Here, the range of the three-dimensional space appearing in the BM image is referred to as the BM range. Further, in the following description, it is assumed for simplicity that the driver's line of sight and head position are fixed. Accordingly, (the position of) the region R11 extracted from the captured image as the BM image does not change due to the driver moving his/her line of sight or head, and neither does the BM range.
Further, it is assumed that when the vehicle 10 is on a flat road surface (when the vehicle 10 is not inclined), the controller 43 performs the reading control, that is, calculates a reading position (a reading start position and a size) so as to extract a BM image in which a rear vehicle located a specified distance behind the vehicle 10 appears substantially in the center of the field of view of the BM image, as shown in fig. 5.
In this case, when the vehicle 10 starts traveling downhill, the vehicle 10 is inclined forward, i.e., in the pitch direction, and so is the optical axis of the camera unit 11 mounted at the rear of the vehicle 10. The BM range is therefore inclined upward (toward the sky) compared to when the vehicle 10 is on a flat road surface (fig. 5(B1) and 5(B2)).
Therefore, when the camera is set so that a rear vehicle located a specified distance behind the vehicle 10 appears substantially in the center of the field of view of the BM image while the vehicle 10 is on a flat road surface, that rear vehicle appears in the lower part of the field of view in the BM image obtained when the vehicle 10 starts traveling downhill. As shown in fig. 5, the proportion of the sky (the area in which the sky appears) is larger and the proportion of the road is smaller in this BM image than in the BM image obtained when the vehicle 10 is on a flat road surface.
Therefore, the amount of information in the BM image obtained when the vehicle 10 starts driving downhill changes compared to the BM image obtained when the vehicle 10 is on a flat road surface. In other words, for example, the amount of information relating to the road is reduced in the BM image obtained when the vehicle 10 starts driving downhill, compared to the BM image obtained when the vehicle 10 is on a flat road surface. Further, depending on the relative position of the vehicle 10 with respect to the rear vehicle or the inclination state of the downhill, there is a possibility that a part of the rear vehicle or the rear vehicle itself does not appear in the BM image.
On the other hand, when the vehicle 10 starts traveling uphill, the vehicle 10 is tilted backward, i.e., in the pitch direction, and so is the optical axis of the camera unit 11 mounted at the rear of the vehicle 10. The BM range is therefore tilted downward (toward the road) compared to when the vehicle 10 is on a flat road surface (fig. 5(C1) and 5(C2)).
Therefore, when the camera is set so that a rear vehicle located a specified distance behind the vehicle 10 appears substantially in the center of the field of view of the BM image while the vehicle 10 is on a flat road surface, that rear vehicle appears in the upper part of the field of view in the BM image obtained when the vehicle 10 starts traveling uphill. As shown in fig. 5, the proportion of the sky (the area in which the sky appears) is smaller and the proportion of the road is larger in this BM image than in the BM image obtained when the vehicle 10 is on a flat road surface.
Therefore, the amount of information in the BM image obtained when the vehicle 10 starts traveling uphill changes compared to the BM image obtained when the vehicle 10 is on a flat road surface. In other words, for example, the amount of information relating to the sky is reduced in the BM image obtained when the vehicle 10 starts traveling uphill, compared to the BM image obtained when the vehicle 10 is on a flat road surface. Further, depending on the relative position of the vehicle 10 with respect to the rear vehicle or the inclination state of the uphill, there is a possibility that a part of the rear vehicle or the rear vehicle itself does not appear in the BM image.
As described above, from the viewpoint of providing the driver of the vehicle 10 with the BM image suitable for the operation of the vehicle 10, it is disadvantageous that the amount of information in the BM image obtained when the vehicle 10 starts driving downhill or uphill is changed as compared with the amount of information in the BM image obtained when the vehicle 10 is on a flat road surface.
Therefore, in the control of reading the BM image, the controller 43 controls the reading of the BM image corresponding to the read image from the image sensor 32 in accordance with the inclination information from the detector 42 so that the BM image obtained has a similar amount of information to that obtained when the vehicle 10 is on a flat road surface regardless of the state of the vehicle 10.
< control of reading of read image by controller 43 based on Tilt information >
Fig. 6 is a diagram describing an example of control of reading of a read image by the controller 43 according to the inclination information.
Here, it is assumed that when the vehicle 10 is on a flat road surface, reading control is performed by the controller 43 so that (pixel signals of pixels of) the rectangular region R101 is read from a captured image captured by the image sensor 32 (from the light receiving surface of the image sensor 32). The region R101 has the same size as the BM image as the read image.
For example, when the vehicle 10 starts traveling uphill and inclines rearward, the controller 43 performs reading control such that a reading position for specifying the region R102 having the same size as the region R101 is calculated from the inclination information, and a pixel signal of the calculated reading position is read. The region R102 is shifted upward from the region R101 by the number of pixels corresponding to the angle at which the vehicle 10 is tilted rearward.
Further, for example, when the vehicle 10 starts traveling downhill and inclines forward, the controller 43 performs reading control such that a reading position for specifying the region R103 having the same size as the region R101 is calculated from the inclination information, and a pixel signal of the calculated reading position is read. The region R103 is shifted downward from the region R101 by the number of pixels corresponding to the angle at which the vehicle 10 is tilted forward.
In the calculation of the reading position according to the inclination information, the reading position is calculated so that the proportions of the road and the sky appearing in the read image are the same as (or close to) the proportions in the BM image obtained when the vehicle 10 is on a flat road surface.
Through the above-described reading control by the controller 43, an image suitable for the operation of the vehicle 10 can be easily obtained. In other words, it is possible to easily obtain a BM image having an information amount similar to that of the BM image obtained when the vehicle 10 is on a flat road surface, regardless of the inclination of the vehicle 10 in the pitch direction.
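The shift of the reading position by "the number of pixels corresponding to the angle" can be sketched with a simple pinhole-camera model: a pitch angle θ maps to a vertical image shift of roughly f·tan θ pixels, where f is the focal length expressed in pixels. The function name and the linear model below are illustrative assumptions and are not specified in this description.

```python
import math

def vertical_read_offset(pitch_deg: float, focal_length_px: float) -> int:
    """Approximate vertical shift (in pixels) of the read region needed to
    compensate a pitch tilt, using a simple pinhole-camera model.

    Positive pitch (vehicle tilted backward, uphill) shifts the read
    region upward on the sensor; negative pitch shifts it downward.
    """
    return round(focal_length_px * math.tan(math.radians(pitch_deg)))

# A 3-degree backward tilt with a 1000-pixel focal length shifts the
# read region by about 52 pixels.
offset = vertical_read_offset(3.0, 1000.0)
```

In practice the mapping would also account for lens distortion of the optical system 31, but the small-angle linear behavior is the same.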
In the first example of the configuration of the observation system, the detector 42 detects, for example, a (steep) slope from the gyro information, or from the GPS information and the 3D map, which are vehicle information supplied from the acquisition section 41, and detects (calculates) inclination information relating to the vehicle 10 when the vehicle 10 starts ascending or descending the slope, that is, inclination information mainly indicating (the level of) the forward and backward inclination of the vehicle 10 (i.e., in the pitch direction). The detector 42 then supplies the inclination information to the controller 43.
For example, the detector 42 may use the gyro information to detect that the vehicle 10 starts ascending or descending a slope, and may further detect inclination information indicating the inclination of the vehicle 10 forward and backward (i.e., in the pitch direction) at that time.
Further, for example, the detector 42 detects (estimates) that the vehicle 10 starts ascending or descending from the current position obtained using the GPS information and from the 3D map, and may further detect (estimate) inclination information indicating inclination of the vehicle 10 forward and backward (i.e., in the pitch direction) using inclination of the slope obtained by the 3D map when the vehicle 10 starts ascending or descending.
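As a rough sketch of how a slope gradient taken from a 3D map could be converted into pitch inclination information, the grade angle can be derived from the elevation difference between the current position and a point a short distance ahead along the road. The function name and the two-sample scheme are hypothetical, not part of this description.

```python
import math

def slope_pitch_deg(elev_ahead_m: float, elev_here_m: float,
                    horizontal_dist_m: float) -> float:
    """Estimate the pitch angle (degrees) of a slope from two elevation
    samples taken from a 3D map: positive for uphill, negative for downhill."""
    return math.degrees(math.atan2(elev_ahead_m - elev_here_m,
                                   horizontal_dist_m))

# A 5 m rise over 100 m of road corresponds to a grade of about 2.9 degrees.
grade = slope_pitch_deg(105.0, 100.0, 100.0)
```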
Based on the inclination information from the detector 42, the controller 43 calculates a read position of an image read from the image sensor 32 as a read image, and supplies the calculated read position to the image sensor 32. Thus, the controller 43 controls reading of the read image from the image sensor 32.
The image sensor 32 reads a pixel signal of a pixel at a reading position supplied from the controller 43, and outputs a read image exhibiting a pixel value corresponding to the pixel signal.
< example of configuration of image sensor 32 >
Fig. 7 shows an example of the configuration of the image sensor 32 of fig. 3.
The image sensor 32 includes a pixel array 51, an input circuit 52, a row selection circuit 53, a column selection circuit 54, an analog-to-digital (AD) converter 55, a line buffer 56, and an output circuit 57.
The pixel array 51 includes a plurality of pixels 61 arranged in a two-dimensional plane. The region of the pixel array 51 where the pixels 61 are arranged is a light receiving surface of the image sensor 32.
The pixel 61 converts light entering the pixel 61 into a pixel signal, which is an electric signal corresponding to the amount of light. The pixel signals of the pixels 61 located in the row selected by the row selection circuit 53 and in the column selected by the column selection circuit 54 are read from the pixel array 51 by the column selection circuit 54, and the read pixel signals are supplied to the AD converter 55.
The controller 43 supplies to the input circuit 52 a read position, which is specified by the read start position of the read image read from the captured image and the size of the read image (hereinafter also referred to as the read size).
Using the read start position and the read size supplied as the read position from the controller 43, the input circuit 52 calculates, from among the pixels 61 of the pixel array 51, the coordinates of the pixel 61 serving as the read start coordinates (X_STA, Y_STA) and the coordinates of the pixel 61 serving as the read end coordinates (X_END, Y_END). The pixel 61 at the read start coordinates (X_STA, Y_STA) is the pixel 61 at which the reading of pixel signals starts, for example, in raster scanning order, and the pixel 61 at the read end coordinates (X_END, Y_END) is the pixel 61 at which the reading of pixel signals ends.
The input circuit 52 supplies the Y coordinate Y_STA of the read start coordinates (X_STA, Y_STA) and the Y coordinate Y_END of the read end coordinates (X_END, Y_END) to the row selection circuit 53, and supplies the X coordinate X_STA of the read start coordinates and the X coordinate X_END of the read end coordinates to the column selection circuit 54 (process PR1).
The row selection circuit 53 sequentially selects rows from the row of the pixels 61 indicated by the Y coordinate Y_STA received from the input circuit 52 to the row of the pixels 61 indicated by the Y coordinate Y_END received from the input circuit 52 (process PR2).
In the pixel array 51, pixel signals are read from the pixels 61 in the row selected by the row selection circuit 53, and the read pixel signals are supplied to the column selection circuit 54.
The column selection circuit 54 selects, from among the pixel signals read from the pixels 61, the pixel signals of the pixels 61 in the columns from the column indicated by the X coordinate X_STA received from the input circuit 52 to the column indicated by the X coordinate X_END received from the input circuit 52, and supplies the selected pixel signals to the AD converter 55 (process PR3).
The AD converter 55 AD-converts the pixel signals from the column selection circuit 54, for example, row by row, and supplies the AD-converted pixel signals to the line buffer 56 (process PR4).
The line buffer 56 temporarily stores the pixel signal from the AD converter 55.
The output circuit 57 reads the pixel signal stored in the line buffer 56 for each pixel (process PR5), and outputs the read pixel signal to the outside of the image sensor 32 as a pixel value of a read image.
As described above, in the image sensor 32, the pixel signals of the pixels 61 included in the reading area are read, and an image exhibiting pixel values corresponding to those pixel signals is output as the read image. The reading area is a rectangular area whose upper-left vertex is the pixel 61 at the read start coordinates (X_STA, Y_STA) and whose lower-right vertex is the pixel 61 at the read end coordinates (X_END, Y_END).
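The row and column selection described above amounts to extracting the rectangle spanned by the read start and read end coordinates. A minimal Python sketch, modeling a frame as a list of pixel rows (the function name and data model are illustrative, not the sensor's actual interface):

```python
def read_region(frame, read_start, read_size):
    """Extract the rectangular read region from a captured frame.

    frame:      2D list of pixel values (rows of the pixel array)
    read_start: (x_sta, y_sta), the upper-left pixel of the region
    read_size:  (width, height) of the read image
    """
    (x_sta, y_sta), (w, h) = read_start, read_size
    x_end, y_end = x_sta + w - 1, y_sta + h - 1   # read end coordinates
    # Row selection, then column selection, mirroring the row selection
    # circuit 53 and the column selection circuit 54.
    return [row[x_sta:x_end + 1] for row in frame[y_sta:y_end + 1]]

# 10 x 10 test frame whose pixel value encodes its own coordinates.
frame = [[10 * y + x for x in range(10)] for y in range(10)]
region = read_region(frame, (2, 3), (4, 2))
```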
< display processing >
Fig. 8 is a flowchart describing an example of display processing of displaying a BM image performed in the first example of the configuration of the observation system of fig. 3.
In step S11, the acquisition section 41 acquires the gyro information, or the GPS information and the 3D map as the vehicle information, and supplies the vehicle information to the detector 42. Then, the process advances to step S12.
In step S12, the detector 42 detects a slope using the vehicle information from the acquisition section 41. Further, the detector 42 uses the vehicle information from the acquisition section 41 to detect (calculate) inclination information related to the vehicle 10 when the vehicle 10 starts ascending or descending (mainly, inclination information indicating inclination in the forward and backward directions (i.e., inclination in the pitch direction)), and supplies the detected inclination information to the controller 43. Then, the process advances from step S12 to step S13.
In step S13, the controller 43 calculates a reading start position of the read image from the image sensor 32 based on the tilt information from the detector 42, and supplies the reading start position and the size of the BM image as the reading position to the image sensor 32. Then, the process advances to step S14.
In step S14, the image sensor 32 reads the pixel signal of the pixel of the reading position supplied from the controller 43, and acquires a read image exhibiting a pixel value corresponding to the pixel signal to output the acquired read image. The read image output by the image sensor 32 is supplied to the output section 33, and the process proceeds from step S14 to step S15.
In step S15, the output section 33 transfers the read image from the image sensor 32 to the BM display section 21 as a BM image, and causes the BM display section 21 to display the read image. As a result, the BM image is displayed on the BM display section 21, and the display process is terminated.
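Steps S11 through S15 can be summarized as a simple pipeline. The callables below are hypothetical stand-ins for the acquisition section 41, the detector 42, the controller 43, the image sensor 32, and the BM display section 21; they are illustrative only.

```python
def display_bm_image(acquire, detect, compute_read_start, sensor_read,
                     display, bm_size):
    """One iteration of the display processing of fig. 8 (steps S11-S15)."""
    vehicle_info = acquire()                    # S11: gyro / GPS + 3D map
    tilt = detect(vehicle_info)                 # S12: detect inclination
    read_start = compute_read_start(tilt)       # S13: reading start position
    image = sensor_read(read_start, bm_size)    # S14: read pixel signals
    display(image)                              # S15: show as the BM image
    return image

# Example with trivial stand-in components.
shown = []
img = display_bm_image(
    acquire=lambda: {"pitch_deg": 2.0},
    detect=lambda info: info["pitch_deg"],
    compute_read_start=lambda tilt: (0, int(tilt * 10)),
    sensor_read=lambda start, size: {"start": start, "size": size},
    display=shown.append,
    bm_size=(1280, 400),
)
```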
As described above, in the first example of the configuration of the observation system, the inclination information mainly indicating the inclination forward and backward (i.e., in the pitch direction) is detected, and the reading of the read image from the image sensor 32 is controlled in accordance with the inclination information. Therefore, an image suitable for the operation of the vehicle 10 can be easily provided. In other words, when the vehicle 10 starts ascending or descending a slope, it is possible to easily provide the BM image having the similar amount of information to that of the BM image obtained when the vehicle 10 is on a flat road surface.
Further, in the first example of the configuration of the observation system, a read image having the same size as the BM image is read from the image sensor 32 in the reading control. Therefore, the possibility of a frame rate reduction of the BM image can be reduced compared to reading an image having a size larger than the BM image from the image sensor 32.
In the first configuration example, the gyro information, as well as the GPS information and the 3D map, have been described as examples of the vehicle information. However, as in the second configuration example or the third configuration example described later, the suspension information, the front camera image, and the like may also be used as the vehicle information in the first configuration example. The inclination of the vehicle can be detected using suspension information, a front camera image, and the like.
< second example of configuration of Observation System >
Fig. 9 is a block diagram showing a second example of the configuration of the observation system included in the vehicle 10.
Note that in this figure, portions corresponding to those in fig. 3 are denoted by the same reference numerals as in fig. 3, and description thereof is omitted hereinafter.
In fig. 9, the observation system includes a camera unit 11, a BM display section 21, and an RV display section 22. Further, the camera unit 11 in fig. 9 includes an optical system 31, an image sensor 32, an output section 33, an acquisition section 41, a detector 42, a controller 71, and a processing section 72.
Therefore, the second example of the configuration of the observation system of fig. 9 is similar to the configuration of the observation system of fig. 3 in the case of including the camera unit 11, the BM display section 21, and the RV display section 22.
However, the second example of the configuration of the observation system of fig. 9 is different from the configuration of the observation system of fig. 3 in that the camera unit 11 includes the controller 71 instead of the controller 43, and newly includes the processing section 72.
As in the case of the controller 43, the controller 71 controls reading of the read image from the image sensor 32 in accordance with the tilt information supplied from the detector 42.
However, based on the inclination information supplied from the detector 42, the controller 71 calculates a reading start position and a reading size of a read image from a captured image captured by the image sensor 32, and the controller 71 supplies the read position specified by the reading start position and the reading size to the image sensor 32.
In other words, in the first example of the configuration of the observation system, since the size of the read image is the same as the size of the BM image, the read size is set to the size of the BM image. On the other hand, in the second example of the configuration of the observation system, the reading size is calculated from the inclination information.
Here, in the second example of the configuration of the observation system, the acquisition portion 41 acquires, for example, suspension information as vehicle information, and supplies the acquired vehicle information to the detector 42.
In this case, the detector 42 detects the inclinations to the left and right (i.e., in the rolling direction) using the suspension information from the acquisition section 41, and detects (calculates) inclination information indicating (the level of) the inclination. The detector 42 then provides the tilt information to the controller 71.
The controller 71 calculates a rotation angle for rotating the read image output by the image sensor 32 based on the tilt information from the detector 42, and supplies the calculated rotation angle to the processing section 72. The controller 71 supplies the above-described rotation angle to the processing portion 72 to control the rotation of the read image by the processing portion 72 so that the read image is rotated by the rotation angle.
The read image output by the image sensor 32 is supplied to the processing section 72.
The processing section 72 rotates the read image in accordance with the rotation angle from the controller 71. Further, the processing section 72 cuts out an image having the size of the BM image (hereinafter also referred to as BM size) from the rotated read image, and supplies the obtained image to the output section 33 as the BM image.
As described above, in the processing section 72, the read image is rotated and an image having the BM size is cut out from the rotated read image. Therefore, the read image must be large enough that an image having the BM size can be cut out from it after the rotation.
Therefore, for example, the controller 71 calculates the minimum size of the read image as the read size from the tilt information, so that an image having the BM size can be cut out from the read image after the rotation.
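The "minimum size" mentioned above is the smallest axis-aligned rectangle containing a BM-sized rectangle rotated by the roll angle θ: its width is w·cos θ + h·sin θ and its height is w·sin θ + h·cos θ. A sketch of this calculation (the function name is an assumption):

```python
import math

def min_read_size(bm_w: int, bm_h: int, roll_deg: float):
    """Smallest axis-aligned read size from which a bm_w x bm_h BM image
    can still be cut out after rotating the read image by roll_deg."""
    t = math.radians(abs(roll_deg))
    w = math.ceil(bm_w * math.cos(t) + bm_h * math.sin(t))
    h = math.ceil(bm_w * math.sin(t) + bm_h * math.cos(t))
    return w, h

# With no roll, the read size equals the BM size; any roll enlarges it.
size_flat = min_read_size(1280, 400, 0.0)
size_tilted = min_read_size(1280, 400, 5.0)
```

Reading only this minimum size, rather than the whole image circle, keeps the data volume close to that of the first configuration example.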
< relationship between state of vehicle 10 and BM image >
Fig. 10 is a diagram describing a second example of the relationship between the state of the vehicle 10 and the BM image.
Fig. 10(a1) shows the state of the vehicle 10 when the vehicle 10 is traveling on a flat road surface. Further, fig. 10(a2) shows an example of a BM image obtained when the vehicle 10 is running on a flat road surface.
Fig. 10(B1) shows a state of the vehicle 10 when the vehicle 10 is on a road surface having a difference in height and the right side of the road surface is higher than the left side thereof in the direction in which the front side of the vehicle 10 is oriented. Further, fig. 10(B2) shows an example of a BM image obtained when the vehicle 10 is on a road surface having a difference in height and the right side of the road surface is higher than the left side thereof in the direction in which the front side of the vehicle 10 is oriented.
Fig. 10(C1) shows a state of the vehicle 10 when the vehicle 10 is on a road surface having a difference in height and the left side of the road surface is higher than the right side thereof in the direction in which the front side of the vehicle 10 is oriented. Further, fig. 10(C2) shows an example of a BM image obtained when the vehicle 10 is on a road surface having a height difference and the left side of the road surface is higher than the right side thereof in the direction in which the front side of the vehicle 10 is oriented.
As in the case of fig. 5, it is assumed in fig. 10 that the region R11 extracted from the captured image as the BM image does not change due to the driver moving his/her line of sight or the position of his/her head.
In this case, when the vehicle 10 is on a road surface having a difference in height and the right side of the road surface is higher than the left side thereof in the direction in which the front side of the vehicle 10 is oriented, the vehicle 10 is tilted to the left, i.e., in the roll direction, and the camera unit 11 mounted at the rear of the vehicle 10 is also tilted in the roll direction.
Therefore, whereas the horizontal line (horizon) appears to extend horizontally in the BM image obtained when the vehicle 10 is on a flat road surface (fig. 10(A)), the horizontal line in the BM image obtained when the vehicle 10 is on a road surface having a height difference with the right side higher is inclined diagonally downward to the right (diagonally upward to the left) (fig. 10(B)). As shown in fig. 10, the ranges of the sky and the road appearing in the BM image change compared to the BM image obtained when the vehicle 10 is on a flat road surface. In other words, in the BM image obtained when the vehicle 10 is on a road surface having a height difference with the right side higher (fig. 10(B)), compared to the BM image obtained when the vehicle 10 is on a flat road surface, the proportion of the road is smaller and the proportion of the sky is larger on the right side, while the proportion of the road is larger and the proportion of the sky is smaller on the left side.
Therefore, the content of the BM image obtained when the vehicle 10 is on a road surface having a height difference and the right side is higher is changed compared to the BM image obtained when the vehicle 10 is on a flat road surface.
On the other hand, when the vehicle 10 is on a road surface having a difference in height and the left side of the road surface is higher than the right side thereof in the direction in which the front side of the vehicle 10 is oriented, the vehicle 10 is tilted rightward, i.e., in the roll direction, and the camera unit 11 mounted at the rear of the vehicle 10 is also tilted in the roll direction.
Therefore, when the vehicle 10 is on a road surface having a height difference with the left side higher, the horizontal line in the obtained BM image is inclined diagonally upward to the right (diagonally downward to the left) (fig. 10(C)). As shown in fig. 10, the ranges of the sky and the road appearing in the BM image change compared to the BM image obtained when the vehicle 10 is on a flat road surface. In other words, in the BM image obtained when the vehicle 10 is on a road surface having a height difference with the left side higher (fig. 10(C)), compared to the BM image obtained when the vehicle 10 is on a flat road surface, the proportion of the road is larger and the proportion of the sky is smaller on the right side, while the proportion of the road is smaller and the proportion of the sky is larger on the left side.
Therefore, the content of the BM image obtained when the vehicle 10 is on a road surface having a height difference and the left side is higher is changed compared to the BM image obtained when the vehicle 10 is on a flat road surface.
As described above, from the viewpoint of providing the BM image suitable for the operation of the vehicle 10 to the driver of the vehicle 10, the change in the BM image obtained when the vehicle 10 is on a road surface having a height difference and being high on the left or right side is disadvantageous compared to the BM image obtained when the vehicle 10 is on a flat road surface.
Therefore, in the control of the reading of the BM image, the controller 71 controls the reading of the BM image corresponding to the read image from the image sensor 32 in accordance with the inclination information from the detector 42 so that the BM image having a similar amount of information to that obtained when the vehicle 10 is on a flat road surface can be obtained regardless of the state of the vehicle 10.
< control of reading of read image by controller 71 based on inclination information >
Fig. 11 is a diagram describing an example of control of reading of a read image by the controller 71 according to the inclination information.
Fig. 11(A) on the left side shows the reading area on the image sensor 32 in each respective case. Fig. 11(B) on the right side shows the region cut out by the processing section 72.
As described with reference to fig. 6, it is assumed that when the vehicle 10 is on a flat road surface, reading control is performed by the controller 71 so that (pixel signals of pixels of) a rectangular region R101 having the same size as the BM image is read from a captured image captured by the image sensor 32 (from the light receiving surface of the image sensor 32).
For example, when the vehicle 10 is on a road surface having a height difference and inclined in the roll direction, the controller 71 performs reading control such that the reading position for specifying the region R111 is calculated from the inclination information and the pixel signal of the calculated reading position is read. The size of the region R111 is larger than the region R101 by the number of pixels corresponding to the inclination angle of the vehicle 10 in the rolling direction.
Here, as shown in fig. 10, a horizontal line appearing to extend horizontally in the BM image obtained when the vehicle 10 is on a flat road surface is inclined in the BM image obtained when the vehicle 10 is inclined in the rolling direction.
Therefore, the read image output by the image sensor 32 according to the reading control is rotated by the processing section 72 such that the horizontal line appearing in the read image extends horizontally, and the BM image (corresponding image) is cut out from the rotated read image.
For example, in calculating the reading position from the inclination information, the controller 71 calculates a reading start position and a reading size as the reading position, and supplies the calculated reading start position and reading size to the image sensor 32. The reading position is used to specify a region R111 centered on the center (center of gravity) of the region R101, the region R111 having a minimum size that makes it possible to cut out a BM image in the read image after rotation.
Further, for example, the controller 71 calculates a rotation angle for rotating the region R111 corresponding to the read image output by the image sensor 32 so that a horizontal line appearing in the region R111 corresponding to the read image appears to extend horizontally, from the inclination information, and the controller 71 supplies the calculated rotation angle to the processing section 72.
The rectangular region R111 specified by a reading start position and a reading size as a reading position from the controller 71 is read as a read image by the image sensor 32, and is supplied to the processing section 72 by the image sensor 32.
The processing section 72 rotates the region R111, which is the read image from the image sensor 32, by the rotation angle from the controller 71. Then, the processing section 72 cuts out, as the BM image, a region R113 from the region R112, the region R113 having the same size as the BM image and a rotation angle of 0 degrees. The region R112 is obtained by rotating the region R111 (fig. 11(B)).
In the read image of fig. 11(A), the vehicle 10 is tilted rightward, i.e., in the roll direction, and therefore the horizontal line appearing in the read image is inclined diagonally upward to the right. Therefore, the processing section 72 rotates the region R111, which is the read image, clockwise so that the horizontal line inclined diagonally upward to the right extends horizontally, and cuts out the region R113 as the BM image from the region R112, which is obtained by the rotation.
Therefore, an image suitable for the operation of the vehicle 10 can be easily obtained as the BM image. In other words, regardless of the inclination of the vehicle 10 in the roll direction, it is possible to easily obtain a BM image having an amount of information similar to that of the BM image obtained when the vehicle 10 is on a flat road surface.
Fig. 12 is a diagram further describing an example of reading control of a read image by the controller 71 according to the inclination information.
Fig. 12(A) is a diagram describing a case where the reading size of the read image obtained when the vehicle 10 is tilted in the roll direction is larger than the size of the BM image. Fig. 12(B) is a diagram describing the rotation by the processing section 72. Fig. 12(C) is a diagram describing the cutting out of the BM image by the processing section 72.
In fig. 12(A), the vehicle 10 is tilted to the right, i.e., in the roll direction, and therefore the horizontal line appearing in the read image is inclined diagonally upward to the right.
In the second example of the configuration of the observation system, the detector 42 detects, from the suspension information provided as the vehicle information by the acquisition section 41, that the vehicle 10 is inclined in the roll direction, and detects (calculates) inclination information indicating the inclination of the vehicle 10, that is, information mainly indicating (the degree of) the leftward and rightward inclination of the vehicle 10 (that is, the inclination in the roll direction). The detector 42 then provides the inclination information to the controller 71.
The controller 71 calculates, based on the inclination information from the detector 42, a reading start position and a reading size as the reading position indicating the region R111 to be read from the image sensor 32, and supplies the calculated reading start position and reading size to the image sensor 32. When the vehicle 10 is inclined in the roll direction (when the inclination angle in the roll direction is not 0 degrees), the reading size is larger than the size of the region R101, which has the same size as the BM image.
Further, the controller 71 calculates a rotation angle for rotating the region R111 as a read image based on the tilt information from the detector 42, and supplies the calculated rotation angle to the processing section 72.
The image sensor 32 reads, as the read image, the region R111 indicated by the reading position (reading start position and reading size) from the controller 71, and outputs the read image.
The region R111, which is a read image output by the image sensor 32, is supplied to the processing section 72.
The processing section 72 rotates the region R111 by the rotation angle from the controller 71, which results in the region R112, in which the horizontal line appears to extend horizontally (fig. 12(B)). Further, the processing section 72 cuts out, as the BM image, a region R113 having the same size as the region R101 from the region R112 (figs. 12(B) and 12(C)).
< display processing >
Fig. 13 is a flowchart describing an example of display processing of displaying a BM image performed in the second example of the configuration of the observation system of fig. 9.
In step S21, the acquisition section 41 acquires the suspension information as the vehicle information, and supplies the vehicle information to the detector 42. Then, the process advances to step S22.
In step S22, the detector 42 detects the inclination of the vehicle 10 in the roll direction using the vehicle information from the acquisition section 41. Further, the detector 42 uses the vehicle information from the acquisition section 41 to detect (calculate) inclination information relating to the vehicle 10 (mainly information indicating the leftward and rightward inclination, i.e., the inclination in the roll direction) when the vehicle 10 is inclined in the roll direction, and supplies the detected inclination information to the controller 71. Then, the process advances from step S22 to step S23.
In step S23, the controller 71 calculates a reading start position and a reading size of the read image from the image sensor 32 based on the tilt information from the detector 42, and supplies the reading start position and the reading size as the reading position to the image sensor 32. Further, the controller 71 calculates a rotation angle for rotating the read image based on the tilt information from the detector 42, and supplies the calculated rotation angle to the processing section 72. Then, the process advances from step S23 to step S24.
In step S24, the image sensor 32 reads the pixel signals of the pixels at the reading position supplied from the controller 71, acquires a read image having pixel values corresponding to the pixel signals, and outputs the acquired read image. The read image output by the image sensor 32 is supplied to the processing section 72, and the process proceeds from step S24 to step S25.
In step S25, the processing section 72 rotates the read image from the image sensor 32 by the rotation angle from the controller 71. Further, the processing section 72 cuts out the BM image from the rotated read image, and supplies the BM image to the output section 33. Then, the process advances from step S25 to step S26.
In step S26, the output section 33 transfers the BM image from the processing section 72 to the BM display section 21, and displays the BM image on the BM display section 21. This causes the BM image to be displayed on the BM display section 21, and the display process is terminated.
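The flow of steps S21 to S25 can be sketched as follows. The suspension-to-roll model and all constants are hypothetical assumptions, since the patent does not specify how the inclination is computed from the suspension strokes:

```python
import math

TRACK_MM = 1500          # assumed distance between left and right suspension mounts
BM_W, BM_H = 1280, 720   # assumed BM image size in pixels

def detect_roll_deg(left_stroke_mm, right_stroke_mm):
    # Step S22 (hypothetical model): estimate the roll inclination from
    # the difference between the left and right suspension strokes.
    return math.degrees(math.atan2(right_stroke_mm - left_stroke_mm, TRACK_MM))

def rotation_angle_deg(roll_deg):
    # Step S23: the processing section rotates the read image by the
    # opposite of the detected roll so the horizontal line extends horizontally.
    return -roll_deg

def center_crop_box(read_w, read_h, bm_w=BM_W, bm_h=BM_H):
    # Step S25: box (left, top, right, bottom) of the BM-sized region cut
    # out from the center of the rotated read image.
    left = (read_w - bm_w) // 2
    top = (read_h - bm_h) // 2
    return left, top, left + bm_w, top + bm_h
```

With equal strokes (flat road) the detected roll and the rotation angle are both zero, and the crop degenerates to the flat-road region R101.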
As described above, in the second example of the configuration of the observation system, the inclination information mainly indicating the leftward and rightward inclinations, that is, the inclination in the roll direction is detected, and the reading of the read image from the image sensor 32 is controlled in accordance with the inclination information. Further, in the second example of the configuration of the observation system, the rotation of the read image is controlled in accordance with the tilt information, and the BM image is cut out from the read image after the rotation. Therefore, the second example of the configuration of the observation system makes it possible to easily provide an image suitable for the operation of the vehicle 10. In other words, when the vehicle 10 is on a road where there is a height difference between the left and right sides, it is possible to easily provide BM images having a similar amount of information to those obtained when the vehicle 10 is on a flat road.
Further, in the second example of the configuration of the observation system, a read image having a minimum size such that a BM image can be cut out from the rotated read image is read from the image sensor 32 in the read control. This makes it possible to reduce the possibility of a frame rate reduction of the BM image.
An example in which the suspension information is used as the vehicle information has been described in the second configuration example. However, as in the first configuration example described above or in the third configuration example described later, gyroscope information, GPS information together with a 3D map, a front camera image, and the like may be used as the vehicle information, and the inclination of the vehicle may be detected using them.
< third example of configuration of Observation System >
Fig. 14 is a block diagram showing a third example of the configuration of the observation system included in the vehicle 10.
Note that in this figure, portions corresponding to those in fig. 3 or 9 are denoted by the same reference numerals as those in fig. 3 or 9, and description thereof is omitted hereinafter.
In fig. 14, the observation system includes a camera unit 11, a BM display section 21, and an RV display section 22. Further, the camera unit 11 in fig. 14 includes an optical system 31, an image sensor 32, an output section 33, an acquisition section 41, a detector 42, a controller 71, and a processing section 72.
Therefore, the third example of the configuration of the observation system of fig. 14 is similar to the second example of the configuration of the observation system of fig. 9.
However, in the third example of the configuration of the observation system of fig. 14, the acquisition section 41 acquires, for example, suspension information or a front camera image as vehicle information, and supplies the acquired vehicle information to the detector 42.
In this case, the detector 42 detects the uneven state of the road surface using the suspension information or the front camera image from the acquisition section 41. Further, the detector 42 detects the forward and backward (i.e., in the pitch direction) inclination of the vehicle 10 and the leftward and rightward (i.e., in the roll direction) inclination of the vehicle 10 due to the unevenness of the road surface, and detects (calculates) inclination information indicating (the level of) the inclination. The detector 42 then provides the tilt information to the controller 71.
Here, in the first example of the configuration of the observation system, the reading of the read image from the image sensor 32 is controlled in accordance with the forward and backward (i.e., in the pitch direction) inclination of the vehicle 10. In the second example of the configuration of the observation system, the reading of the read image from the image sensor 32 and the rotation of the read image by the processing section 72 are controlled in accordance with the leftward and rightward (i.e., in the roll direction) inclination of the vehicle 10.
On the other hand, in the third example of the configuration of the observation system, the reading of the read image from the image sensor 32 and the rotation of the read image by the processing section 72 are controlled in accordance with both the forward and backward (i.e., in the pitch direction) inclination and the leftward and rightward (i.e., in the roll direction) inclination of the vehicle 10.
Therefore, in the third example of the configuration of the observation system, control obtained by combining the reading control of the read image from the image sensor 32, which is performed separately in the first example and the second example of the configuration of the observation system, is performed as the reading control of the read image from the image sensor 32. Further, in the third example of the configuration of the observation system, the same control as the rotation control of the read image performed in the second example of the configuration of the observation system is performed by the processing portion 72 as the rotation control of the read image.
< control of reading of read image by controller 71 based on inclination information >
Fig. 15 is a diagram describing an example of reading control of a read image by the controller 71 according to the inclination information.
Fig. 15(a) shows reading of a read image from the image sensor 32 having a size larger than the BM image. Fig. 15(B) shows rotation and cutting-out of the read image by the processing section 72.
As described with reference to figs. 6 to 11, it is assumed that, when the vehicle 10 is on a flat road surface, the controller 71 performs reading control such that a rectangular region R101 having the same size as the BM image is read as the read image (pixel signals of pixels) from the captured image captured by the image sensor 32 (from the light-receiving surface of the image sensor 32).
For example, when the vehicle 10 is on an uneven road surface and is inclined in the pitch direction and the roll direction, the controller 71 controls reading by the image sensor 32 such that the reading position specifying the region R121 is calculated from the inclination information and the pixel signals at the calculated reading position are read. The region R121 is larger in size than the region R101 by the number of pixels corresponding to the inclination angle of the vehicle 10 in the roll direction, and is offset upward or downward from the region R101 by the number of pixels corresponding to the inclination angle of the vehicle 10 in the pitch direction.
Further, for example, the controller 71 calculates a rotation angle for rotating the region R121 corresponding to the read image output by the image sensor 32 so that a horizontal line appearing in the region R121 corresponding to the read image appears to extend horizontally, based on the inclination information, and the controller 71 supplies the calculated rotation angle to the processing section 72.
According to the reading control by the controller 71, the rectangular region R121 is read as the read image by the image sensor 32 and is supplied to the processing section 72.
The processing section 72 rotates the region R121, which is the read image from the image sensor 32, by the rotation angle from the controller 71. Then, the processing section 72 cuts out, as the BM image, a region R123 having the same size as the BM image and a rotation angle of 0 degrees from the region R122. The region R122 is obtained by rotating the region R121.
In fig. 15, the vehicle 10 is tilted backward, i.e., in the pitch direction, and tilted rightward, i.e., in the roll direction. Therefore, the region R121, which is larger in size than the region R101 and is shifted upward from the region R101, is read from the image sensor 32 as the read image. Further, because the vehicle 10 is inclined rightward, i.e., in the roll direction, the horizontal line appearing in the region R121, which is the read image, is inclined diagonally upward to the right. Therefore, the processing section 72 rotates the region R121 clockwise so that the horizontal line inclined diagonally upward to the right extends horizontally, and cuts out the region R123 as the BM image from the region R122, which is obtained by rotating the region R121.
Therefore, an image suitable for the operation of the vehicle 10 can be easily obtained as the BM image. In other words, a BM image exhibiting an image representation similar to that obtained when the vehicle 10 is on a flat road surface can be easily obtained regardless of the inclination of the vehicle 10 in the pitch direction and the roll direction. This makes it possible to display an image in which the horizontal line extends horizontally regardless of the inclination of the vehicle 10. Therefore, there is no difference in the displayed image (no difference in the position of the horizontal line) caused by a temporary difference in inclination relative to a following vehicle. This enables the driver to grasp the state behind the vehicle without being distracted by differences in the displayed image (differences in the position of the horizontal line) caused during driving.
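As a rough sketch, the combined reading control amounts to enlarging the region for the roll angle (as in the second example) and shifting it vertically for the pitch angle (as in the first example). The pixels-per-degree scale and image sizes below are illustrative assumptions:

```python
import math

BM_W, BM_H = 1280, 720   # assumed BM image size in pixels
PIX_PER_DEG_V = 20.0     # assumed vertical pixels per degree of pitch

def combined_read_region(center_x, center_y, pitch_deg, roll_deg):
    # Region R121: enlarged by the roll angle and offset upward or
    # downward by the pitch angle relative to the flat-road region R101.
    th = math.radians(abs(roll_deg))
    read_w = math.ceil(BM_W * math.cos(th) + BM_H * math.sin(th))
    read_h = math.ceil(BM_W * math.sin(th) + BM_H * math.cos(th))
    # a backward tilt (positive pitch_deg here, by assumption) shifts
    # the read region upward on the sensor
    offset_y = int(round(-pitch_deg * PIX_PER_DEG_V))
    start_x = center_x - read_w // 2
    start_y = center_y - read_h // 2 + offset_y
    return start_x, start_y, read_w, read_h
```

With zero pitch and zero roll this reduces to reading the region R101 itself, matching the flat-road case.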
Note that, in the third example of the configuration of the observation system, the detector 42 may detect the tilt of the vehicle 10 in the pitch direction and the roll direction from, for example, suspension information.
Further, the detector 42 may recognize unevenness of the road surface in front of the vehicle 10 from, for example, the front camera image, and may estimate, from the result of recognizing the unevenness, the inclination of the vehicle 10 in the pitch direction and the roll direction when the vehicle 10 travels on the road surface with the recognized unevenness.
< display processing >
Fig. 16 is a flowchart describing an example of display processing of displaying a BM image performed in the third example of the configuration of the observation system of fig. 14.
In step S31, the acquisition section 41 acquires suspension information or a front camera image as vehicle information, and supplies the vehicle information to the detector 42. Then, the process advances to step S32.
In step S32, the detector 42 detects the uneven state of the road surface on which the vehicle 10 is located using the vehicle information from the acquisition section 41. In addition, the detector 42 detects (calculates) inclination information indicating the inclination of the vehicle 10 (forward and backward inclination, i.e., inclination in the pitch direction, and leftward and rightward inclination, i.e., inclination in the roll direction) due to the unevenness of the road surface using the vehicle information from the acquisition section 41, and supplies the detected inclination information to the controller 71. Then, the process advances from step S32 to step S33.
In step S33, the controller 71 calculates a reading start position and a reading size of the read image from the image sensor 32 based on the tilt information from the detector 42, and supplies the reading start position and the reading size as the reading position to the image sensor 32. Further, the controller 71 calculates a rotation angle for rotating the read image based on the tilt information from the detector 42, and supplies the calculated rotation angle to the processing section 72. Then, the process advances from step S33 to step S34.
In step S34, the image sensor 32 reads the pixel signals of the pixels at the reading position supplied from the controller 71, acquires a read image having pixel values corresponding to the pixel signals, and outputs the acquired read image. The read image output by the image sensor 32 is supplied to the processing section 72, and the process proceeds from step S34 to step S35.
In step S35, the processing section 72 rotates the read image from the image sensor 32 by the rotation angle from the controller 71. Further, the processing section 72 cuts out the BM image from the rotated read image, and supplies the BM image to the output section 33. Then, the process advances from step S35 to step S36.
In step S36, the output section 33 transfers the BM image from the processing section 72 to the BM display section 21, and displays the BM image on the BM display section 21. This causes the BM image to be displayed on the BM display section 21, and the display process is terminated.
As described above, in the third example of the configuration of the observation system, the inclination information indicating the forward and backward inclinations (i.e., the inclination in the pitch direction) and the leftward and rightward inclinations (i.e., the inclination in the roll direction) is detected, and the reading of the read image from the image sensor 32 is controlled in accordance with the inclination information. Further, in the third example of the configuration of the observation system, the rotation of the read image is controlled according to the inclination information, and the BM image is cut out from the read image after the rotation. Therefore, the third example of the configuration of the observation system makes it possible to easily provide an image suitable for the operation of the vehicle 10. In other words, when the vehicle 10 is on an uneven road surface, it is possible to easily provide a BM image having a similar amount of information to that obtained when the vehicle 10 is on a flat road surface.
Further, in the third example of the configuration of the observation system, a read image having a minimum size such that a BM image can be cut out from the read image after rotation is read from the image sensor 32 in the read control, as in the second example of the configuration of the observation system. This makes it possible to reduce the possibility of a frame rate reduction of the BM image.
In the third configuration example, the suspension information and the front camera image have been described as examples of the vehicle information. However, as in the first or second configuration example described above, gyroscope information, GPS information, 3D maps, and the like may be used as the vehicle information. Then, the inclination of the vehicle may be detected using the gyro information, the GPS information, the 3D map, and the like.
Note that the present technology is applicable not only to a case where the vehicle 10 is inclined according to the state of the road surface on which the vehicle 10 is located, but also to a case where the vehicle 10 is inclined according to, for example, the state of an object mounted on the vehicle 10 or an occupant of the vehicle 10. In other words, the technique is applicable regardless of the cause of the inclination of the vehicle 10.
< fourth example of configuration of Observation System >
Fig. 17 is a block diagram showing a fourth example of the configuration of the observation system included in the vehicle 10.
The observation system includes the camera unit 11, the BM display section 21, and the RV display section 22 described with reference to fig. 1 and 2.
The camera unit 11 includes an optical system 31, an image sensor 32, a data amount adjuster 133, an output section 134, an acquisition section 135, and a controller 136.
The optical system 31 includes optical components such as a condenser and an aperture, and collects light entering the optical system 31 onto the image sensor 32.
The image sensor 32 receives light from the optical system 31 and performs photoelectric conversion to capture a captured image. Then, according to control by the controller 136, the image sensor 32 extracts the BM image and the RV image from the captured image to output the extracted images. The BM image and the RV image output by the image sensor 32 are supplied to the data amount adjuster 133.
In accordance with control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image output by the image sensor 32, and supplies the BM image and the RV image, whose respective data amounts have been adjusted, to the output section 134.
The output section 134 is an output Interface (IF) that transmits the BM image and the RV image from the data amount adjuster 133 to the outside of the camera unit 11. The output section 134 transfers the BM image to the BM display section 21 and transfers the RV image to the RV display section 22. The BM display section 21 displays the BM image from the output section 134 in accordance with the specification of the BM display section 21, and the RV display section 22 displays the RV image from the output section 134 in accordance with the specification of the RV display section 22. The output section 134 can perform format conversion and other image processing on the BM image and the RV image as necessary.
The acquisition section 135 acquires the vehicle information from the vehicle 10, and supplies the acquired vehicle information to the controller 136.
Examples of the vehicle information acquired by the acquisition section 135 include traveling information, the specifications of the BM display section 21 and the RV display section 22, the line of sight and head position of the driver of the vehicle 10, and gyroscope information.
The running information is information indicating a running state of the vehicle 10, and specifically indicates a vehicle speed and a running direction (forward or backward). For example, when the vehicle 10 includes a speed sensor, the vehicle speed may be obtained from an output of the speed sensor. For example, the direction of travel may be derived from the state of the transmission.
For example, the specifications of the BM display section 21 and the RV display section 22 are the resolution of the BM display section 21 and the resolution of the RV display section 22, and can be acquired from the BM display section 21 and the RV display section 22.
The line of sight and the head position of the driver of the vehicle 10 are obtained from the image captured by the onboard camera 23.
The gyro information is information indicating the pose (inclination angle) of the vehicle 10. When the vehicle 10 includes a gyroscope, the gyro information may be obtained from the output of the gyroscope. The use of the gyro information makes it possible to identify whether the vehicle 10 is on an incline.
The controller 136 controls the image sensor 32 and the data amount adjuster 133 according to the vehicle information supplied from the acquisition section 135.
In other words, for example, the controller 136 performs read control similar to the read control performed by the controller 43 according to the vehicle information. Therefore, the controller 136 performs extraction control for controlling extraction of the BM image and the RV image from the captured image captured by the image sensor 32. Examples of the range in which the BM image and the RV image are read include the range shown in fig. 4. Further, based on the vehicle information, the controller 136 performs adjustment control for controlling adjustment of the data amounts of the BM image and the RV image by the data amount adjuster 133.
Therefore, it can be said that the image sensor 32 extracts the BM image and the RV image from the captured image according to the vehicle information, and the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image according to the vehicle information.
< image that can be output by the image sensor 32 >
Fig. 18 shows an example of an image that can be output by the image sensor 32.
Here, the highest-resolution captured image that can be output by the image sensor 32 is referred to as a highest-resolution image. Assume, for example, that the image sensor 32 has the capability of outputting the highest resolution image having a resolution (number of pixels) of Rmax at (a frame rate of) 60fps (frames per second) or more.
Here, it is assumed that the resolution RBM for the highest resolution BM image (BM image of the largest number of pixels) extracted from the highest resolution image is equal to or less than 1/2 of the resolution Rmax of the highest resolution image. It is also assumed that the resolution RRV for the highest resolution RV image extracted from the highest resolution image is equal to or less than the resolution RBM of the BM image.
In the present embodiment, for example, it is assumed that the sum RBM + RRV of the resolution RBM for the BM image and the resolution RRV for the RV image is equal to or less than 1/2 of the resolution Rmax of the highest resolution image. In this case, using the image sensor 32 capable of outputting the highest resolution image with the resolution Rmax at 60fps (or more), it is possible to output both a BM image with the resolution RBM and an RV image with the resolution RRV, which are obtained by partially reading the highest resolution image, at 120 fps.
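The frame-rate claim follows from simple pixel-throughput arithmetic, assuming the sensor's readout rate (Rmax pixels at 60fps) is the binding constraint and scales inversely with the number of pixels read. A sketch with hypothetical pixel counts:

```python
def max_joint_frame_rate(rmax_px, sensor_fps, rbm_px, rrv_px):
    # Frame rate at which both partial images can be read, assuming the
    # sensor's total pixel throughput (rmax_px * sensor_fps) is the limit.
    pixel_rate = rmax_px * sensor_fps
    return pixel_rate // (rbm_px + rrv_px)

# With RBM + RRV <= Rmax / 2, both partial images come out at >= 120 fps.
RMAX = 8_000_000                     # hypothetical highest-resolution image
RBM, RRV = 2_000_000, 2_000_000      # hypothetical partial-image resolutions
assert max_joint_frame_rate(RMAX, 60, RBM, RRV) >= 120
```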
< example of vehicle transfer Bandwidth available for data transfer in vehicle 10>
Fig. 19 is a diagram describing an example of a vehicle transmission bandwidth that can be used for data transmission in the vehicle 10.
In other words, fig. 19 shows an example of the BM image and the RV image that can be output by the camera unit 11 without adjusting the data amount by the data amount adjuster 133.
For example, the camera unit 11 may output a color image with a resolution RBM in YUV4:2:2 format, in which the number of bits per pixel is eight (with respect to each of the luminance and color difference), as a BM image. Further, for example, the camera unit 11 may output a color image with a resolution of RRV as an RV image in YUV4:2:2 format, in which the number of bits per pixel is eight.
A BM image with the resolution RBM in YUV4:2:2 format with eight bits per pixel is referred to as a highest quality BM image, and an RV image with the resolution RRV in YUV4:2:2 format with eight bits per pixel is referred to as a highest quality RV image.
In the present embodiment, it is assumed that the vehicle transmission bandwidth (the bandwidth in which data is transmitted from the camera unit 11) is a transmission bandwidth in which, for example, two screens of the highest quality BM image of 60fps can be transmitted (in real time). In the present embodiment, since RBM ≧ RRV, the transmission bandwidth in which two screens of the highest quality BM image of 60fps can be transmitted also makes it possible to transmit, for example, two screens of the highest quality RV image of 60fps. Further, the vehicle transmission bandwidth is such that, for example, a total of two screens, i.e., a single screen of the highest quality BM image of 60fps and a single screen of the highest quality RV image of 60fps, can be transmitted.
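These bandwidth relationships can be checked numerically. YUV4:2:2 with eight bits per sample averages 16 bits per pixel (an 8-bit luminance sample per pixel plus 8-bit color-difference samples shared between two pixels); the concrete resolutions below are illustrative assumptions, with RBM ≧ RRV as in the text:

```python
BITS_PER_PIXEL_YUV422 = 16   # 8-bit luma per pixel + 8-bit chroma shared 2:1

def stream_bps(resolution_px, fps):
    # Uncompressed bit rate of one YUV4:2:2 video stream.
    return resolution_px * BITS_PER_PIXEL_YUV422 * fps

RBM, RRV = 2_000_000, 1_500_000        # hypothetical resolutions, RBM >= RRV
vehicle_bw = 2 * stream_bps(RBM, 60)   # two screens of highest quality BM at 60fps

# the other combinations named in the text fit in the same bandwidth:
assert 2 * stream_bps(RRV, 60) <= vehicle_bw                     # two RV screens
assert stream_bps(RBM, 60) + stream_bps(RRV, 60) <= vehicle_bw   # one of each
```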
As described with reference to fig. 18 and 19, the camera unit 11 is capable of outputting both the highest quality BM image and the highest quality RV image of up to 120 fps.
However, in the present embodiment, the vehicle transmission bandwidth is such that only two screens of the highest quality BM image (or RV image) of 60fps can be transmitted.
The increase in the vehicle transmission bandwidth makes it possible to transmit both the highest quality BM image and the highest quality RV image of 120fps that can be output by the camera unit 11. However, the increase in vehicle transmission bandwidth results in an increase in the cost of the viewing system.
In the present technique, the camera unit 11 appropriately adjusts the data amounts of the BM image and the RV image to transmit the BM image and the RV image in the vehicle transmission bandwidth, thereby suppressing an increase in the cost of the observation system.
< control of adjustment of data amounts of BM image and RV image when two pictures of the highest quality BM image of 60fps can be transmitted in the vehicle transmission bandwidth >
Fig. 20 is a diagram describing a first example of control of adjustment of the data amounts of the BM image and the RV image by the controller 136.
Fig. 20(A) shows the adjustment control performed when the vehicle 10 is traveling forward or backward at a high speed equal to or greater than a first speed threshold.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 so that a BM image with a resolution of RBM of 120fps is output and the output of an RV image is limited. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32, thereby outputting a BM image with a resolution of RBM of 120fps and limiting the output of the RV image.
Therefore, in this case, a BM image with a resolution RBM of 120fps is output from the camera unit 11, and an RV image is not output from the camera unit 11. Therefore, a BM image with a resolution RBM of 120fps is displayed on the BM display section 21, and an RV image is not displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling forward or backward at a high speed, the driver can confirm a region further rearward than a region immediately behind the rear portion of the vehicle 10 using a BM image with a resolution RBM of 120fps (i.e., a high-resolution BM image with a high frame rate).
Note that when the vehicle 10 is traveling forward or backward at a high speed, the RV image including the image of the area immediately behind the rear of the vehicle 10 is not displayed.
Further, the BM image with the resolution RBM of 120fps output by the camera unit 11 can be transmitted in the vehicle transmission bandwidth in which two screens of the highest quality BM image (with the resolution RBM) of 60fps can be transmitted.
Here, examples of the data amount adjusting method for adjusting the data amount of an image by the data amount adjuster 133 include a method for reducing the resolution (the number of pixels), a method for reducing the gradation (the number of bits per pixel), a method for reducing the frame rate, and a compression method using a specified compression encoding scheme, in addition to the method for limiting the output of an image (not outputting an image) as described above.
In addition to being performed by the data amount adjuster 133, the limiting of image output, the lowering of resolution, the lowering of gradation, and the lowering of frame rate among the data amount adjustment methods may be performed by the image sensor 32 under control (such as the control of extraction performed by the image sensor 32) performed by the controller 136.
In other words, limiting image output (for example, limiting the output of an RV image) may be performed by the controller 136 controlling the reading of data from the image sensor 32 so that pixel signals corresponding to the RV image are not read from the pixels 51 (fig. 7), which limits the extraction of the RV image from the captured image. When the vehicle 10 is traveling forward or backward at a high speed, the controller 136 may control the extraction by the image sensor 32 according to the vehicle information, thereby restricting the extraction of the RV image from the captured image. Of course, the output of the RV image may instead be limited by the data amount adjuster 133 rather than by the image sensor 32.
For example, the image sensor 32 may be controlled by the controller 136 to reduce the resolution of the image, so that the number of pixels 51 from which pixel signals are read is reduced, or so that combining (binning), which adds the pixel signals of a plurality of pixels 51, is performed using, for example, so-called Source Follower (SF) addition or Floating Diffusion (FD) addition.
For example, the image sensor 32 may be controlled by the controller 136 to reduce the number of bits required for AD conversion by the AD converter 55 (fig. 7) to lower the gradation.
For example, the frame rate may be reduced by controlling the image sensor 32 by the controller 136 so that the rate of reading the pixel signal from the pixel 51 or the rate of AD conversion by the AD converter 55 is reduced.
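As a rough illustration of these data-amount adjustment methods (the function names and the list-based image representation are assumptions for this sketch, not part of the camera unit 11 or the image sensor 32), the three reductions can be written as:

```python
def bin_2x2(image):
    """Reduce resolution by adding (binning) each 2x2 block of pixel values,
    analogous to SF/FD addition inside the image sensor."""
    h, w = len(image), len(image[0])
    return [[image[y][x] + image[y][x + 1] + image[y + 1][x] + image[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def reduce_gradation(image, bits_from=10, bits_to=8):
    """Lower the gradation by dropping least significant bits,
    analogous to reducing the number of bits of AD conversion."""
    shift = bits_from - bits_to
    return [[p >> shift for p in row] for row in image]

def reduce_frame_rate(frames, factor=2):
    """Lower the frame rate by keeping every `factor`-th frame."""
    return frames[::factor]
```

In the actual device, the binning would be realized inside the image sensor 32 by SF or FD addition and the gradation reduction by lowering the bit count of the AD converter 55; the pure-Python versions above only mirror the arithmetic.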
B of fig. 20 shows the adjustment control performed when the vehicle 10 is traveling backward at a medium speed that is less than the first threshold value and equal to or greater than a second threshold value, the second threshold value being less than the first threshold value.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 such that a BM image with a resolution RBMM of 120fps is output and an RV image with a resolution RRVM of 30fps is output, wherein the resolution RBMM is smaller than the resolution RBM and the resolution RRVM is smaller than the resolution RRV. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 to output a BM image with a resolution of 120fps as the RBMM and output an RV image with a resolution of 30fps as the RRVM.
Therefore, in this case, the camera unit 11 outputs a BM image with a resolution of 120fps being RBMM and an RV image with a resolution of 30fps being RRVM. Therefore, a BM image with a resolution RBMM of 120fps is displayed on the BM display section 21, and an RV image with a resolution RRVM of 30fps is displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling backward at a medium speed, the driver can confirm the region further backward than the region immediately behind the rear of the vehicle 10 using the BM image with the resolution RBMM of 120fps (i.e., the medium-resolution BM image with the high frame rate). Further, the driver can confirm the region immediately behind the rear of the vehicle 10 using an RV image with a resolution RRVM of 30fps (i.e., a medium-resolution RV image at a low frame rate).
Note that it is assumed that the transmission bandwidth required to transmit an RV image of resolution RRVM at 30fps is equal to or smaller than the transmission bandwidth corresponding to the difference between the maximum transmission rate (here, the transmission rate required to transmit a BM image of resolution RBM at 120fps without compressing the BM image) and the transmission rate (first transmission rate) required to transmit a BM image of resolution RBMM at 120fps. In this case, both the BM image with a resolution of 120fps being RBMM and the RV image with a resolution of 30fps being RRVM output by the camera unit 11 can be transmitted in the vehicle transmission bandwidth in which two screens of the highest quality BM image of 60fps can be transmitted.
Here, when the transmission bandwidth required to transmit the RV image with a resolution RRVM of 30fps exceeds the transmission bandwidth corresponding to the difference between the maximum transmission rate and the first transmission rate, one or both of the BM image and the RV image may be compressed (compression-encoded) by the data amount adjuster 133 so that the RV image with the resolution RRVM can be transmitted. For example, a part or all of the BM image with a resolution RBMM of 120fps is compressed (compression-encoded). This makes it possible to reduce the data amount of the BM image. The RV image with a resolution RRVM of 30fps is compression-encoded while remaining a color image, or is converted into a black-and-white image and then compression-encoded. This makes it possible to reduce the data amount of the RV image.
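The feasibility check described here is simple arithmetic. A sketch, using placeholder pixel counts and an assumed raw bit depth (the patent only names the resolutions RBM, RBMM, and RRVM, so every number below is an illustrative assumption):

```python
BITS_PER_PIXEL = 12  # assumed raw bit depth per pixel

def rate(pixels, fps, bits=BITS_PER_PIXEL):
    """Uncompressed transmission rate in bits per second."""
    return pixels * bits * fps

# Placeholder pixel counts for the named resolutions.
R_BM, R_BMM, R_RVM = 1920 * 1080, 1280 * 720, 1280 * 720

# Vehicle transmission bandwidth of the first example (Fig. 20):
# two screens of the highest quality BM image at 60 fps.
max_rate = 2 * rate(R_BM, 60)

bm_rate = rate(R_BMM, 120)  # medium-speed BM image at 120 fps
rv_rate = rate(R_RVM, 30)   # medium-speed RV image at 30 fps

# The RV image can be sent uncompressed only if it fits in the
# bandwidth left over after the BM image; otherwise compression is needed.
fits = rv_rate <= max_rate - bm_rate
```

With these placeholder numbers the RV image happens to fit; with larger resolutions the same check would instead trigger the compression fallback described above.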
C of fig. 20 shows the adjustment control performed when the vehicle 10 is traveling backward at a low speed that is less than the second threshold value.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 so that a BM image with a resolution of RBM at 60fps is output and an RV image with a resolution of RRV at 60fps is output. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 to output a BM image with a resolution RBM of 60fps and an RV image with a resolution RRV of 60 fps.
Therefore, in this case, the camera unit 11 outputs a BM image with a resolution of RBM of 60fps and an RV image with a resolution of RRV of 60 fps. Therefore, a BM image with a resolution RBM of 60fps is displayed on the BM display section 21, and an RV image with a resolution RRV of 60fps is displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling backward at a low speed, the driver can confirm the area further backward than the area immediately behind the rear of the vehicle 10 using the BM image with the resolution RBM of 60fps (i.e., the high-resolution BM image at the medium frame rate). In addition, the driver can confirm the region immediately behind the rear of the vehicle 10 using an RV image with a resolution RRV of 60fps (i.e., a high-resolution RV image at a medium frame rate).
The case where the vehicle 10 travels backward at a low speed is, for example, a case where the driver is about to park the vehicle 10, and it is important to confirm an area immediately behind the rear of the vehicle 10, which is a blind spot from the driver's perspective. Therefore, when the vehicle 10 travels backward at a low speed, the RV image is displayed at a higher resolution and a higher frame rate than when the vehicle 10 travels backward at a high speed or a medium speed. This makes it possible to easily determine the region of the blind spot and to easily control the vehicle according to the state of the blind spot.
Note that both the BM image with the resolution of 60fps being RBM and the RV image with the resolution of 60fps being RRV output by the camera unit 11 can be transmitted in the vehicle transmission bandwidth capable of transmitting two screens of the highest quality BM image of 60fps.
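The first example of the adjustment control (A to C of fig. 20) can be summarized as a small decision function. The numeric thresholds and the behavior for forward travel below the first threshold are assumptions for this sketch, since the patent leaves them unspecified:

```python
FIRST_THRESHOLD = 60.0   # km/h, assumed value of the first threshold
SECOND_THRESHOLD = 10.0  # km/h, assumed value of the second threshold

def adjust_outputs(speed_kmh, reversing):
    """Return (BM resolution, BM fps, RV resolution, RV fps) following
    Fig. 20; None means the output of that image is limited."""
    if speed_kmh >= FIRST_THRESHOLD:
        # A: high speed, forward or backward -> BM only, full quality
        return ("RBM", 120, None, None)
    if reversing:
        if speed_kmh >= SECOND_THRESHOLD:
            # B: medium-speed reverse -> medium-resolution BM and RV
            return ("RBMM", 120, "RRVM", 30)
        # C: low-speed reverse (e.g. parking) -> high-resolution RV
        return ("RBM", 60, "RRV", 60)
    # Forward travel below the first threshold is not covered by Fig. 20;
    # as a placeholder, output only the BM image at full quality.
    return ("RBM", 120, None, None)
```

The second example (fig. 21) would follow the same structure with the 60fps/RBML settings substituted in.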
< control of adjustment of data amounts of BM image and RV image when a single screen of the highest quality BM image of 60fps can be transmitted in the vehicle transmission bandwidth >
Fig. 21 is a diagram describing a second example of adjustment control of the data amounts of the BM image and the RV image by the controller 136.
Here, in the second example of the adjustment control of the data amounts of the BM image and the RV image, it is assumed that the vehicle transmission bandwidth is, for example, a transmission bandwidth in which a single screen of the highest quality BM image (with the resolution RBM) of 60fps (or more) can be transmitted.
A of fig. 21 shows the adjustment control performed when the vehicle 10 is traveling forward or backward at a high speed equal to or greater than the first threshold value of speed.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 so that a BM image with a resolution of RBM of 60fps (or more) is output and the output of an RV image is limited. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 to output a BM image with a resolution RBM of 60fps and limit the output of the RV image.
Therefore, in this case, a BM image with a resolution RBM of 60fps is output from the camera unit 11, and an RV image is not output from the camera unit 11. Therefore, a BM image with a resolution RBM of 60fps is displayed on the BM display section 21, and an RV image is not displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling forward or backward at a high speed, the driver can confirm the area further rearward than the area immediately behind the rear portion of the vehicle 10 using the BM image with the resolution RBM of 60fps (i.e., the high resolution BM image with the medium frame rate).
Note that when the vehicle 10 is traveling forward or backward at a high speed, as described with reference to a of fig. 20, the RV image including the image of the area immediately behind the rear of the vehicle 10 is not displayed.
Further, the BM image with the resolution of RBM of 60fps output by the camera unit 11 can be transmitted in the vehicle transmission bandwidth capable of transmitting a single screen of the highest quality BM image (with the resolution of RBM) of 60 fps.
B of fig. 21 shows the adjustment control performed when the vehicle 10 is traveling backward at a medium speed that is less than the first threshold value and equal to or greater than a second threshold value that is smaller than the first threshold value.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 such that a BM image of resolution RBMM, which is smaller than the resolution RBM, is output at 60fps and an RV image of resolution RRVM, which is smaller than the resolution RRV, is output at 30 fps. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 to output a BM image with a resolution RBMM of 60fps and an RV image with a resolution RRVM of 30 fps.
Therefore, in this case, the camera unit 11 outputs a BM image with a resolution of RBMM at 60fps and an RV image with a resolution of RRVM at 30 fps. Therefore, a BM image with a resolution RBMM of 60fps is displayed on the BM display section 21, and an RV image with a resolution RRVM of 30fps is displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling backward at a medium speed, the driver can confirm the area further backward than the area immediately behind the rear of the vehicle 10 using the BM image with the resolution RBMM of 60fps (i.e., the medium-resolution BM image at the medium frame rate). In addition, the driver can confirm the region immediately behind the rear of the vehicle 10 using an RV image with a resolution RRVM of 30fps (i.e., a medium-resolution RV image at a low frame rate).
Note that the transmission bandwidth required to transmit both the BM image with a resolution RBMM of 60fps and the RV image with a resolution RRVM of 30fps output by the camera unit 11 is hereinafter also referred to as the necessary transmission bandwidth. When the necessary transmission bandwidth is not within the vehicle transmission bandwidth capable of transmitting a single screen of the highest quality BM image of 60fps, the BM image may be compressed at a first compression rate for medium speed, and the RV image may be compressed at a second compression rate for medium speed (which provides higher compression than the first compression rate for medium speed) while remaining a color image, or the RV image may be converted into a black-and-white image and then compressed at the second compression rate for medium speed, so that the necessary transmission bandwidth is within the vehicle transmission bandwidth.
C of fig. 21 shows the adjustment control performed when the vehicle 10 is traveling backward at a low speed that is less than the second threshold value.
In this case, the controller 136 performs adjustment control as control of the data amount adjuster 133 so that a BM image of a resolution RBML of 60fps is output and an RV image of a resolution RRVM of 30fps is output, where the resolution RBML is smaller than the resolution RBMM. In accordance with the adjustment control by the controller 136, the data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 to output a BM image with a resolution of RBML at 60fps and an RV image with a resolution of RRVM at 30 fps.
Therefore, in this case, the camera unit 11 outputs a BM image with a resolution of RBML at 60fps and an RV image with a resolution of RRVM at 30fps. Therefore, a BM image with a resolution RBML of 60fps is displayed on the BM display section 21, and an RV image with a resolution RRVM of 30fps is displayed on the RV display section 22.
As described above, when the vehicle 10 is traveling backward at a low speed, the driver can confirm the area further backward than the area immediately behind the rear of the vehicle 10 using the BM image with the resolution RBML of 60fps (i.e., the low-resolution BM image at the medium frame rate). In addition, the driver can confirm the region immediately behind the rear of the vehicle 10 using an RV image with a resolution RRVM of 30fps (i.e., a medium-resolution RV image at a low frame rate).
Note that, when the necessary transmission bandwidth required to transmit both the BM image with a resolution RBML of 60fps and the RV image with a resolution RRVM of 30fps output by the camera unit 11 is not within the vehicle transmission bandwidth capable of transmitting a single screen of the highest quality BM image of 60fps, the BM image may be compressed at a first compression rate for low speed, and the RV image may be compressed (while remaining a color image) at a second compression rate for low speed (providing higher compression than the first compression rate for low speed) so that the necessary transmission bandwidth is within the vehicle transmission bandwidth.
Here, a compression rate that provides higher compression than the first compression rate for medium speed may be adopted as the first compression rate for low speed. The same compression rate may be adopted as the second compression rate for medium speed and the second compression rate for low speed. In this case, assuming that a larger compression rate value provides higher compression, the compression rates satisfy the following relationship: second compression rate for low speed = second compression rate for medium speed > first compression rate for low speed > first compression rate for medium speed. Accordingly, the data amounts obtained by compression at the respective compression rates satisfy the opposite relationship: data amount obtained at the second compression rate for low speed = data amount obtained at the second compression rate for medium speed < data amount obtained at the first compression rate for low speed < data amount obtained at the first compression rate for medium speed.
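Because the stated ordering of the compression rates directly determines the opposite ordering of the resulting data amounts, the relationship can be checked in a few lines; the numeric compression rates below are illustrative assumptions only:

```python
# Assumed illustrative compression rates (larger value = higher compression).
FIRST_MEDIUM = 2.0    # first compression rate for medium speed (BM image)
FIRST_LOW = 4.0       # first compression rate for low speed (BM image)
SECOND_MEDIUM = 8.0   # second compression rate for medium speed (RV image)
SECOND_LOW = 8.0      # second compression rate for low speed (RV image)

# The ordering stated in the text.
assert SECOND_LOW == SECOND_MEDIUM > FIRST_LOW > FIRST_MEDIUM

def compressed_size(raw_bytes, rate):
    """Data amount after compression at the given rate."""
    return raw_bytes / rate

# With equal raw sizes, the resulting data amounts order the opposite way.
raw = 1_000_000
sizes = [compressed_size(raw, r)
         for r in (SECOND_LOW, SECOND_MEDIUM, FIRST_LOW, FIRST_MEDIUM)]
assert sizes[0] == sizes[1] < sizes[2] < sizes[3]
```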
The adjustment control for adjusting the data amounts of the BM image and the RV image in accordance with the vehicle speed and the traveling direction (forward or backward) of the vehicle 10 has been described above. The adjustment control method is not limited to the method described with reference to fig. 20 and 21. In other words, the adjustment control method may be appropriately set according to, for example, the vehicle transmission bandwidth, the capability of the image sensor 32, and the specifications of the BM display portion 21 and the RV display portion 22. The amount of data is related to the quality of the displayed BM image and the displayed RV image, respectively. Therefore, it can be said that the quality of the BM image and the quality of the RV image vary depending on the vehicle speed and the traveling direction (forward or backward) of the vehicle 10.
Further, in the adjustment control, the resolution of the BM image and the resolution of the RV image may be reduced by reducing the number of pixels contained in the BM image and the RV image, or by (irreversibly) compressing the BM image and the RV image without changing the number of pixels.
When the resolution of the BM image and the resolution of the RV image are reduced by compressing the BM image and the RV image, the following compression ratio may be adopted as the compression ratio for compressing the BM image and the RV image: the compression rate is such that the BM image and the RV image obtained by performing compression and decompression lose a part of the frequency components (e.g., high frequency components) but each still has a sufficiently high resolution (highest frequency component) equivalent to the resolution (number of pixels) described with reference to fig. 20 or 21.
< display processing >
Fig. 22 is a flowchart describing an example of display processing for displaying a BM image and an RV image performed in the observation system of fig. 17.
In step S111, the image sensor 32 captures a captured image, and the process advances to step S112.
In step S112, the acquisition portion 135 acquires the vehicle information from the vehicle 10, and supplies the vehicle information to the controller 136. Then, the process advances to step S113.
In step S113, the controller 136 controls the extraction by the image sensor 32 based on the vehicle information from the acquisition unit 135. The image sensor 32 extracts the BM image and the RV image from the captured image according to extraction control by the controller 136. Then, the image sensor 32 supplies the BM image and the RV image to the data amount adjuster 133, and the process proceeds from step S113 to step S114.
In step S114, the controller 136 controls the adjustment by the data amount adjuster 133 according to the vehicle information from the acquisition section 135. The data amount adjuster 133 adjusts the data amount of the BM image and the RV image from the image sensor 32 according to adjustment control by the controller 136. Then, the data amount adjuster 133 supplies the BM image and the RV image, whose respective data amounts have been adjusted, to the output section 134, and the process proceeds from step S114 to step S115.
In step S115, the output section 134 outputs the BM image and the RV image, which have been adjusted in the respective data amounts, supplied from the data amount adjuster 133 to the outside of the camera unit 11, transfers the BM image to the BM display section 21, and transfers the RV image to the RV display section 22. Then, the process advances to step S116.
In step S116, the BM display section 21 displays thereon the BM image from the output section 134 according to the specification of the BM display section 21, and the RV display section 22 displays thereon the RV image from the output section 134 according to the specification of the RV display section 22.
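The flow of steps S111 to S116 can be sketched as a single pass through a pipeline, with each block of fig. 17 stood in for by a caller-supplied function (all stage callables here are assumptions for illustration, not the actual interfaces of the camera unit 11):

```python
def display_processing(capture, acquire_vehicle_info, extract, adjust,
                       transfer, display):
    """One pass of the display processing of Fig. 22 (steps S111-S116)."""
    captured = capture()                  # S111: image sensor 32 captures
    info = acquire_vehicle_info()         # S112: acquisition section 135
    bm, rv = extract(captured, info)      # S113: extraction under control
    bm, rv = adjust(bm, rv, info)         # S114: data amount adjuster 133
    transfer(bm, rv)                      # S115: output section 134
    display(bm, rv)                       # S116: BM/RV display sections
    return bm, rv
```

In the real system this pass repeats for every frame, and the extraction and adjustment stages are steered by the controller 136 according to the vehicle information.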
< fifth example of configuration of Observation System >
Fig. 23 is a block diagram showing a fifth example of the configuration of the observation system included in the vehicle 10.
Note that in this figure, portions corresponding to those in fig. 17 are denoted by the same reference numerals as in fig. 17, and description thereof is omitted below.
In fig. 23, the observation system includes the camera unit 11, the BM display section 21, the RV display section 22, and the extraction section 182, and the camera unit 11 includes the optical system 31, the image sensor 32, the data amount adjuster 133, the output section 134, the acquisition section 135, and the controller 181.
Therefore, the observation system of fig. 23 is similar to that of fig. 17 in that it includes the camera unit 11, the BM display section 21, and the RV display section 22, and in that it includes components from the optical system 31 to the acquisition section 135 in the camera unit 11.
However, the observation system of fig. 23 is different from the observation system of fig. 17 in that an extraction section 182 is newly included, and a controller 181 is included in the camera unit 11 instead of the controller 136.
Note that the extraction section 182 may be provided inside the camera unit 11, although the extraction section 182 is provided outside the camera unit 11 in fig. 23.
The vehicle information is supplied from the acquisition unit 135 to the controller 181. Examples of the vehicle information include traveling information, specifications of the BM display portion 21 and the RV display portion 22, and gyro information. However, in this example, the vehicle information provided by the acquisition section 135 does not include the line of sight of the driver and the position of the head. Note that the line of sight of the driver of the vehicle 10 and the position of the head are input to the extraction portion 182 as part of the vehicle information.
As in the case of the controller 136, the controller 181 controls the extraction by the image sensor 32 and the adjustment by the data amount adjuster 133 according to the vehicle information supplied from the acquisition section 135.
However, in the extraction control, instead of controlling (the position of) the region R11 extracted as a BM image in accordance with one or both of the line of sight of the driver and the position of the head, the controller 181 causes a region larger in size than the region R11 to be extracted as a BM image.
Therefore, the size of the BM image output by the output section 134 in fig. 23 is larger than the size of the BM image output by the output section 134 in fig. 17 and 12.
In fig. 23, the BM image having a size larger than the region R11 output by the output section 134 is supplied to the extraction section 182.
In addition to the BM image having a size larger than the region R11 provided to the extraction unit 182 by the output unit 134, the line of sight of the driver and the position of the head in the vehicle information are provided to the extraction unit 182.
The extraction section 182 extracts, from the BM image having a size larger than the region R11 supplied from the output section 134, a region having the same size as the region R11 (at a position determined by one or both of the line of sight of the driver and the position of the head) as a final BM image to be displayed on the BM display section 21, and supplies the final BM image to the BM display section 21.
< control of extraction of BM image and RV image from captured image >
Fig. 24 is a diagram describing an example of control of extraction of the BM image and the RV image from the captured image by the controller 181.
As in the case of fig. 4, fig. 24 shows the image circle of the optical system 31 and a captured image captured by the image sensor 32 (the light receiving surface of the image sensor 32).
In the extraction control, as in the case of the controller 136, the controller 181 controls reading of data from the image sensor 32 so that the region R12 is extracted as an RV image.
Further, in the extraction control, the controller 181 controls reading of data from the image sensor 32 so that not the region R11 but the region R31 having a size larger than the region R11 is extracted as a BM image from the captured image.
If the interior rear view mirror is mounted in the vehicle 10, the region R31 is a region including the maximum range that the driver can see using the interior rear view mirror by moving his/her line of sight or his/her head. The region R11 is a variable region whose position changes according to the line of sight of the driver and the position of the head, while the region R31 is a fixed region.
A region of the same size as the region R11, at a position determined by one or both of the line of sight of the driver and the position of the head, is extracted from the region R31 by the extraction section 182 as a final BM image to be displayed on the BM display section 21. In other words, if the interior rear view mirror is mounted in the vehicle 10, the region R11 that would be observed by the driver using the interior rear view mirror is extracted from the region R31 by the extraction section 182 as a BM image.
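The extraction performed by the extraction section 182 amounts to cropping an R11-sized window out of the fixed region R31 at a gaze-dependent position. A minimal sketch, assuming a linear mapping from the line of sight / head position to a pixel offset and a simple clamping policy (both assumptions; the patent does not specify them):

```python
def crop_r11(r31, r11_w, r11_h, gaze_dx, gaze_dy):
    """Extract an R11-sized window from the larger fixed region R31
    (given as a list of rows) at a gaze-shifted position."""
    h, w = len(r31), len(r31[0])
    # Center the R11-sized window, then apply the gaze offset, clamped
    # so the window stays inside R31.
    x0 = max(0, min(w - r11_w, (w - r11_w) // 2 + gaze_dx))
    y0 = max(0, min(h - r11_h, (h - r11_h) // 2 + gaze_dy))
    return [row[x0:x0 + r11_w] for row in r31[y0:y0 + r11_h]]
```

Because R31 covers the maximum range visible via the interior rear view mirror, clamping guarantees the cropped window never leaves the captured region.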
< display processing >
Fig. 25 is a flowchart describing an example of display processing for displaying a BM image and an RV image performed in the observation system of fig. 23.
In step S121, the image sensor 32 captures a captured image, and the process advances to step S122.
In step S122, the acquisition portion 135 acquires the vehicle information from the vehicle 10, and supplies the vehicle information to the controller 181. Then, the process advances to step S123.
In step S123, the controller 181 controls the extraction by the image sensor 32. As described with reference to fig. 24, according to the extraction control by the controller 181, the image sensor 32 extracts the region R31 and the region R12 from the captured image as a BM image and an RV image, respectively. Then, the image sensor 32 supplies the BM image and the RV image to the data amount adjuster 133, and the process proceeds from step S123 to step S124.
In step S124, the controller 181 controls the adjustment by the data amount adjuster 133 according to the vehicle information from the acquisition section 135. The data amount adjuster 133 adjusts the data amounts of the BM image and the RV image from the image sensor 32 according to the adjustment control by the controller 181. Then, the data amount adjuster 133 supplies the BM image and the RV image, whose respective data amounts have been adjusted, to the output section 134, and the process proceeds from step S124 to step S125.
In step S125, the output section 134 outputs the BM image and the RV image, whose respective data amounts have been adjusted, supplied from the data amount adjuster 133 to the outside of the camera unit 11, and the process advances to step S126. Therefore, in fig. 23, the BM image is supplied to the extraction section 182, and the RV image is transferred to the RV display section 22.
In step S126, the extraction portion 182 acquires the line of sight of the driver and the position of the head included in the vehicle information from the vehicle 10, and the processing proceeds to step S127.
In step S127, the extraction section 182 extracts, from the BM image from the output section 134, a region having the same size as the region R11 and located at a position determined by the line of sight of the driver and the position of the head, as a final BM image to be displayed on the BM display section 21. Then, the extraction section 182 transfers the final BM image to the BM display section 21, and the process proceeds from step S127 to step S128.
In step S128, the BM display section 21 displays thereon the BM image from the extraction section 182 in accordance with the specification of the BM display section 21, and the RV display section 22 displays thereon the RV image from the output section 134 in accordance with the specification of the RV display section 22.
< description of computer to which the present technology is applied >
The series of processes described above may be performed using hardware or software. When the series of processes is performed using software, a program included in the software is installed on, for example, a general-purpose computer.
Fig. 26 is a block diagram showing a configuration example of an embodiment of a computer on which a program for performing the series of processes described above is installed.
The program may be recorded in advance in the hard disk 905 or a Read Only Memory (ROM)903 as a recording medium included in the computer.
Alternatively, the program may be stored (recorded) in a removable recording medium 911 driven by a drive 909. Such a removable recording medium 911 may be provided as so-called packaged software. Here, examples of the removable recording medium 911 include a flexible disk, a compact disc read-only memory (CD-ROM), a magneto-optical (MO) disk, a Digital Versatile Disc (DVD), a magnetic disk, and a semiconductor memory.
Note that, in addition to being installed from the above-described removable recording medium 911 to a computer, a program may also be downloaded to the computer through a communication network or a broadcast network to be installed on a hard disk 905 included in the computer. In other words, for example, the program may be wirelessly transferred from a download site to the computer through a satellite for digital satellite broadcasting, or may be transferred to the computer by wire through a network such as a Local Area Network (LAN) or the internet.
The computer includes a Central Processing Unit (CPU)902, and an input/output interface 910 is connected to the CPU 902 through a bus 901.
When a user inputs a command by operating the input portion 907 through the input/output interface 910, the CPU 902 executes a program stored in the ROM 903 according to the input command. Alternatively, the CPU 902 loads a program stored in the hard disk 905 into a Random Access Memory (RAM)904 and executes the program.
This causes the CPU 902 to execute processing according to the above-described flowchart or processing executed based on the configuration of the above-described block diagram. Then, for example, the CPU 902 outputs the processing result using the output section 906 or transmits the processing result using the communication section 908 through the input/output interface 910 as necessary, and the CPU 902 further records the processing result in the hard disk 905.
Note that the input portion 907 includes, for example, a keyboard, a mouse, and a microphone. Further, the output section 906 includes, for example, a Liquid Crystal Display (LCD) and a speaker.
Here, in the specification, the processing performed by the computer according to the program is not necessarily performed chronologically in the order described in the flowcharts. In other words, the processing performed by the computer according to the program includes processing performed in parallel or individually (for example, processing performed using an object or parallel processing).
Further, the program may be processed by a single computer (processor), or may be processed in a distributed manner by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed there.
Further, a system as used herein refers to a collection of multiple components (e.g., devices and modules (parts)), and it is not important whether all of the components are in a single housing. Therefore, a plurality of devices accommodated in separate housings and connected to each other via a network, and a single device accommodating a plurality of modules in a single housing, may each constitute a system.
Note that the embodiments of the present technology are not limited to the above-described examples, and various modifications may be made thereto without departing from the scope of the present technology.
For example, the present technology may also adopt a cloud-computing configuration in which a single function is shared among, and processed jointly by, a plurality of devices via a network.
Further, each step described using the above flowcharts may be performed by a single device or shared among a plurality of devices.
Further, when a single step includes a plurality of processes, those processes may likewise be performed by a single device or shared among a plurality of devices.
Note that the effects described herein are not restrictive, but merely illustrative, and other effects may be provided.
Note that the present technology can adopt the following configuration.
<1> an image capturing apparatus comprising:
an image sensor that captures an image displayed on a display portion of a vehicle; and
a controller that controls reading of an image from the image sensor based on vehicle information acquired by the vehicle.
<2> the image capturing apparatus according to <1>, further comprising:
a detector that detects inclination information related to the vehicle using the vehicle information, wherein,
the controller controls reading of an image from the image sensor according to the tilt information.
<3> the image photographing apparatus according to <2>, wherein,
the detector detects the inclination information indicating inclination to the front side and the rear side of the vehicle.
<4> the image photographing apparatus according to <2>, wherein,
the detector detects the inclination information indicating inclination to the left and right sides of the vehicle.
<5> the image photographing apparatus according to <2>, wherein,
the detector detects the inclination information indicating inclination to the front and rear sides of the vehicle and inclination to the left and right sides of the vehicle.
<6> the image pickup apparatus according to any one of <1> to <5>, wherein,
the controller controls a reading position at which an image is read in the image sensor.
<7> the image photographing apparatus according to <6>, wherein,
the reading position is specified by a size of an image read in the image sensor and a reading start position, which is a position where reading of the image in the image sensor is started.
<8> the image capturing apparatus according to any one of <1> to <7>, wherein,
the controller controls reading of the image from the image sensor so that the proportion of the road surface appearing in the image displayed on the display portion is maintained at a specified proportion.
<9> the image capturing apparatus according to any one of <1> to <8>, further comprising:
a processing section that rotates the image read from the image sensor and cuts out an image to be displayed on the display portion from the rotated image.
<10> the image photographing apparatus according to <9>, wherein,
the controller controls rotation of the image.
<11> the image photographing apparatus according to <10>, wherein,
the controller calculates a rotation angle for rotating the image according to inclination information related to the vehicle, and controls the rotation of the image to rotate the image by the rotation angle.
<12> the image capturing apparatus according to any one of <1> to <11>, wherein,
the vehicle information is at least one of: gyroscope information obtained from a gyroscope included in the vehicle, suspension information relating to a suspension of the vehicle, a front camera image obtained from a front camera that captures an image of an area ahead of the vehicle, or Global Positioning System (GPS) information obtained from a GPS receiver.
<13> the image pickup apparatus according to any one of <1> to <12>, wherein,
the display portion is a substitute for a Class I mirror.
<14> a control method comprising:
reading of an image from an image sensor that captures an image, which is displayed on a display portion of a vehicle, is controlled based on vehicle information acquired by the vehicle.
<15> a program that causes a computer to operate as a controller that controls reading of an image from an image sensor that captures an image, the image being displayed on a display portion of a vehicle, based on vehicle information acquired by the vehicle.
<16> the control method according to <14>, further comprising:
detecting inclination information related to the vehicle using the vehicle information, wherein,
controlling reading of an image from the image sensor according to the tilt information.
<17> the program according to <15>, further causing the computer to operate as a detector that detects inclination information related to the vehicle using the vehicle information, wherein,
controlling reading of an image from the image sensor according to the tilt information.
<18> the image capturing apparatus according to any one of <1> to <13>, wherein,
the vehicle information includes at least one of: speed information relating to the vehicle or gear information relating to the vehicle.
<19> the control method according to <14> or <16>, wherein,
the vehicle information includes at least one of: speed information relating to the vehicle or gear information relating to the vehicle.
<20> the program according to <15> or <17>, wherein,
the vehicle information includes at least one of: speed information relating to the vehicle or gear information relating to the vehicle.
List of reference numerals
10 vehicle
11 Camera Unit
21 BM display unit
22 RV display part
23 vehicle camera
31 optical system
32 image sensor
33 output unit
41 acquisition part
42 detector
43 controller
51 pixel array
52 input circuit
53 row selection circuit
54 column select circuit
55 AD converter
56 line buffer
57 output circuit
61 pixels
71 controller
72 processing section
133 data volume adjuster
134 output unit
135 acquisition part
136 controller
181 controller
182 extraction part
901 bus
902 CPU
903 ROM
904 RAM
905 hard disk
906 output section
907 input portion
908 communication section
909 drive
910 input/output interface
911 removable recording medium

Claims (20)

1. An image capturing apparatus comprising:
an image sensor that captures an image displayed on a display portion of a vehicle; and
a controller that controls reading of an image from the image sensor based on vehicle information acquired by the vehicle.
2. The image capturing apparatus according to claim 1, further comprising:
a detector that detects inclination information about the vehicle using the vehicle information, wherein,
the controller controls reading of an image from the image sensor according to the tilt information.
3. The image capturing apparatus according to claim 2,
the detector detects the inclination information indicating inclination to the front side and the rear side of the vehicle.
4. The image capturing apparatus according to claim 2,
the detector detects the inclination information indicating inclination to the left and right sides of the vehicle.
5. The image capturing apparatus according to claim 2,
the detector detects the inclination information indicating inclination to the front and rear sides of the vehicle and inclination to the left and right sides of the vehicle.
6. The image capturing apparatus according to claim 1,
the controller controls a reading position at which an image is read in the image sensor.
7. The image capturing apparatus according to claim 6,
the reading position is specified by a size of an image read in the image sensor and a reading start position, which is a position where reading of the image in the image sensor is started.
8. The image capturing apparatus according to claim 1,
the controller controls reading of the image from the image sensor so that the proportion of the road surface appearing in the image displayed on the display portion is maintained at a specified proportion.
9. The image capturing apparatus according to claim 1, further comprising:
a processing section that rotates the image read from the image sensor and cuts out an image to be displayed on the display portion from the rotated image.
10. The image capturing apparatus according to claim 9,
the controller controls rotation of the image.
11. The image capturing apparatus according to claim 10,
the controller
Calculating a rotation angle for rotating the image according to the inclination information on the vehicle, and
controlling rotation of the image such that the image is rotated by the rotation angle.
12. The image capturing apparatus according to claim 1,
the vehicle information is at least one of: gyroscope information obtained from a gyroscope included in the vehicle, suspension information about a suspension of the vehicle, a front camera image obtained from a front camera that captures an image of an area in front of the vehicle, or Global Positioning System (GPS) information obtained from a GPS receiver.
13. The image capturing apparatus according to claim 1,
the display portion is a substitute for a Class I mirror.
14. A control method, comprising:
reading of an image from an image sensor that captures an image, which is displayed on a display portion of a vehicle, is controlled based on vehicle information acquired by the vehicle.
15. A program that causes a computer to operate as a controller that controls reading of an image from an image sensor that captures an image, the image being displayed on a display portion of a vehicle, based on vehicle information acquired by the vehicle.
16. The control method according to claim 14, further comprising:
detecting inclination information about the vehicle using the vehicle information, wherein,
controlling reading of an image from the image sensor according to the tilt information.
17. The program of claim 15, further causing the computer to operate as a detector that detects inclination information about the vehicle using the vehicle information, wherein,
controlling reading of an image from the image sensor according to the tilt information.
18. The image capturing apparatus according to claim 1,
the vehicle information includes at least one of: speed information about the vehicle or gear information about the vehicle.
19. The control method according to claim 14, wherein,
the vehicle information includes at least one of: speed information about the vehicle or gear information about the vehicle.
20. The program according to claim 15, wherein,
the vehicle information includes at least one of: speed information about the vehicle or gear information about the vehicle.
CN201980070029.8A 2018-10-31 2019-10-18 Image capturing apparatus, control method, and medium recording program Active CN112956183B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2018204836 2018-10-31
JP2018-204836 2018-10-31
JP2019039216 2019-03-05
JP2019-039216 2019-03-05
PCT/JP2019/041037 WO2020090512A1 (en) 2018-10-31 2019-10-18 Imaging device, control method, and program

Publications (2)

Publication Number Publication Date
CN112956183A true CN112956183A (en) 2021-06-11
CN112956183B CN112956183B (en) 2024-03-12

Family

ID=70463134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980070029.8A Active CN112956183B (en) 2018-10-31 2019-10-18 Image capturing apparatus, control method, and medium recording program

Country Status (6)

Country Link
US (1) US20210400209A1 (en)
EP (1) EP3876515A4 (en)
JP (1) JP7342024B2 (en)
KR (1) KR20210083256A (en)
CN (1) CN112956183B (en)
WO (1) WO2020090512A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113039778B (en) 2018-10-31 2024-01-02 索尼集团公司 Image capturing apparatus, image processing method, and recording medium
KR102284128B1 (en) * 2020-03-23 2021-07-30 삼성전기주식회사 Camera for vehicle
JP7505398B2 (en) * 2020-12-23 2024-06-25 トヨタ自動車株式会社 VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD, CONTROL DEVICE, AND PROGRAM
WO2023006832A1 (en) * 2021-07-28 2023-02-02 Motherson Innovations Company Ltd. Camera monitor system, vehicle therewith and method for adjusting the same

Citations (9)

Publication number Priority date Publication date Assignee Title
JP2006254318A (en) * 2005-03-14 2006-09-21 Omron Corp Vehicle-mounted camera, vehicle-mounted monitor and forward road area imaging method
US20080253616A1 (en) * 2004-05-10 2008-10-16 Seiichiro Mizuno Imaging System and Imaging Method
CN102582516A * 2011-01-05 2012-07-18 Denso Corp Rearward view assistance apparatus
JP2014201146A (en) * 2013-04-03 2014-10-27 市光工業株式会社 Image display device for vehicle
US20160263997A1 (en) * 2013-12-03 2016-09-15 Fujitsu Limited Apparatus and method for controlling display
JP2018012439A (en) * 2016-07-21 2018-01-25 株式会社富士通ゼネラル Rearward visibility assist device
US20180242451A1 (en) * 2017-02-22 2018-08-23 Xerox Corporation Hybrid nanosilver/liquid metal ink composition and uses thereof
CN109076163A (en) * 2016-04-27 2018-12-21 索尼公司 Imaging control apparatus, image formation control method and imaging device
CN109417611A (en) * 2016-07-13 2019-03-01 索尼公司 Image forming apparatus, image generating method and program

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JPS55126723A (en) 1979-03-24 1980-09-30 Nippon Steel Corp Preventive method for generation of black smoke after repairing furnace wall of carbonizing chamber of coke oven
EP1637836A1 (en) * 2003-05-29 2006-03-22 Olympus Corporation Device and method of supporting stereo camera, device and method of detecting calibration, and stereo camera system
JP4955471B2 * 2007-07-02 2012-06-20 Denso Corp Image display device and in-vehicle image display device
JP5451497B2 (en) * 2010-04-08 2014-03-26 パナソニック株式会社 Driving support display device
KR102135427B1 (en) * 2015-06-22 2020-07-17 젠텍스 코포레이션 Systems and methods for processing streamed video images to correct flicker of amplitude-modulated light
JP6551336B2 * 2016-08-12 2019-07-31 Denso Corp Periphery monitoring device
CN110312641A (en) * 2017-03-17 2019-10-08 金泰克斯公司 Dual display reversing camera system
US10392013B2 (en) * 2017-09-30 2019-08-27 A-Hamid Hakki Collision detection and avoidance system
KR102470298B1 (en) * 2017-12-01 2022-11-25 엘지이노텍 주식회사 A method of correcting cameras and device thereof
JP7280006B2 * 2019-08-06 2023-05-23 Alpine Electronics Inc. Image processing device, image processing method and image processing program

Also Published As

Publication number Publication date
US20210400209A1 (en) 2021-12-23
JP7342024B2 (en) 2023-09-11
KR20210083256A (en) 2021-07-06
EP3876515A4 (en) 2021-12-22
JPWO2020090512A1 (en) 2021-10-07
WO2020090512A1 (en) 2020-05-07
CN112956183B (en) 2024-03-12
EP3876515A1 (en) 2021-09-08

Similar Documents

Publication Publication Date Title
CN112956183B (en) Image capturing apparatus, control method, and medium recording program
US11722645B2 (en) Image-capturing apparatus, image processing method, and program
TWI600559B (en) System and method for image processing
JP4596978B2 (en) Driving support system
US11184602B2 (en) Image processing apparatus and image processing method
WO2017190351A1 (en) Systems and methods for video processing and display
KR20140019575A (en) Around view monitor system and monitoring method
WO2018179671A1 (en) Image processing device and image processing method, and image capturing device
US20100283589A1 (en) Image processing system, image capture device and method thereof
KR20190034200A (en) Image processing apparatus and image processing method
US20130176436A1 (en) Image Capturing Device Applied in Vehicle and Image Superimposition Method Thereof
JP2005184395A (en) Method, system and apparatus for image processing, and photographing equipment
CN111038383A (en) Vehicle and control method thereof
US10834294B2 (en) Imaging device, imaging element, and method of controlling imaging device
JP5865726B2 (en) Video display device
US20220408062A1 (en) Display control apparatus, display control method, and program
CN112784656A (en) Image acquisition system, method, storage medium, and vehicle
JP2008042664A (en) Image display device
JP7459608B2 (en) Display control device, display control method and program
EP4228247A1 (en) Camera system for motor vehicle and method to operate
JP7459607B2 (en) Display control device, display control method and program
US11615582B2 (en) Enclosed multi-view visual media representation
US20230100099A1 (en) Image processing system, image processing method, and storage medium
JP2023179496A (en) Image processing device and image processing method
JP2008306496A (en) Onboard imaging system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant