WO2018221209A1 - Image processing device, image processing method, and program - Google Patents

Image processing device, image processing method, and program

Info

Publication number
WO2018221209A1
WO2018221209A1 (application PCT/JP2018/018840)
Authority
WO
WIPO (PCT)
Prior art keywords
image
conversion
unit
viewpoint
subject
Prior art date
Application number
PCT/JP2018/018840
Other languages
English (en)
Japanese (ja)
Inventor
智詞 中山
Original Assignee
Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Priority to US16/615,976 priority Critical patent/US11521395B2/en
Priority to EP18809783.6A priority patent/EP3633598B1/fr
Priority to JP2019522095A priority patent/JP7150709B2/ja
Priority to CN201880033861.6A priority patent/CN110651295B/zh
Publication of WO2018221209A1 publication Critical patent/WO2018221209A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • the present technology relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program that make it easy to recognize a standing object, for example.
  • As processing for captured images, there are distortion correction processing (for example, Patent Documents 1 and 2) that corrects the distortion of a captured image captured using a wide-angle lens, and camera calibration (for example, Patent Document 3) for obtaining lens distortion parameters and vanishing point positions.
  • As in-vehicle systems, there are systems (viewing systems) that capture and display a front image with a camera (front camera) installed on the front grille, or a rear image with a camera (rear camera) installed near the rear license plate or the handle for opening and closing the trunk.
  • An image provided by the viewing system (hereinafter also referred to as a viewing image) is generated using a captured image captured by an in-vehicle camera.
  • For example, viewpoint conversion can be performed to convert a captured image captured by an in-vehicle camera into a viewpoint conversion image viewed from a predetermined viewpoint, such as the driver's viewpoint, and the viewpoint conversion image obtained by the viewpoint conversion can be displayed as the viewing image.
  • the viewpoint conversion of the photographed image can be performed by, for example, affine transformation of the photographed image.
  • The affine transformation is performed on the assumption that all subjects appearing in the captured image exist on some defined plane or curved surface, such as a road. Therefore, when it is assumed, for example, that the subjects lie in the plane of the road, a standing object in the captured image that is not actually in the road plane (such as an automobile or a motorcycle) is converted by the viewpoint conversion into an image that appears inclined (tilted) with respect to the road.
  • As a result, a user such as a driver who views the viewpoint conversion image may not easily recognize the standing object as a standing object (that is, as a three-dimensional object).
  • This technology has been made in view of such a situation, and makes it easy to recognize standing objects.
  • The image processing device of the present technology is an image processing device including an acquisition unit that acquires a target image to be processed, and a movement conversion unit that performs movement conversion to move a subject position in the target image according to the subject distance from a vanishing point in the target image to the subject position where the subject appears; the program of the present technology is a program for causing a computer to function as such an image processing device.
  • The image processing method of the present technology is an image processing method including acquiring a target image to be processed, and performing movement conversion to move a subject position in the target image according to the subject distance from a vanishing point in the target image to the subject position where the subject appears.
  • In the image processing device, the image processing method, and the program of the present technology, a target image to be processed is acquired, and movement conversion is performed to move a subject position in the target image according to the subject distance from a vanishing point in the target image to the subject position where the subject appears.
  • the image processing apparatus may be an independent apparatus or an internal block constituting one apparatus.
  • the program can be provided by being transmitted through a transmission medium or by being recorded on a recording medium.
  • FIG. 3 is a block diagram illustrating a configuration example of the image processing unit 22.
  • A diagram showing an example of how a captured image is captured.
  • Diagrams (multiple figures) explaining the movement conversion of the movement conversion unit 44.
  • A diagram showing an example of a movement conversion image obtained by movement conversion of a viewpoint conversion image, and a diagram explaining an example of switching the conversion characteristic used for the movement conversion.
  • FIG. 18 is a block diagram illustrating a configuration example of an embodiment of a computer to which the present technology is applied.
  • FIG. 1 is a block diagram illustrating a configuration example of an embodiment of an image processing system to which the present technology is applied.
  • the image processing system includes a camera unit 10, an operation unit 21, an image processing unit 22, a setting storage memory 23, an image output unit 24, and a display unit 25.
  • the camera unit 10 includes an optical system 11, an image sensor 12, and an image quality control unit 13.
  • the optical system 11 is composed of optical components such as a lens and a diaphragm (not shown), and makes light from a subject incident on the image sensor 12.
  • the image sensor 12 takes an image. That is, the image sensor 12 receives the light incident from the optical system 11 and photoelectrically converts it, thereby generating an image signal as an electrical signal corresponding to the light.
  • a captured image (image signal) that is an image obtained by the image sensor 12 is supplied from the image sensor 12 to the image quality control unit 13.
  • The image quality control unit 13 performs control when the image sensor 12 captures an image, for example, exposure time control, binning control, and the like. In addition, the image quality control unit 13 performs signal processing related to image quality, such as NR (Noise Reduction) and development (demosaicing), on the captured image supplied from the image sensor 12, and supplies the processed captured image to the image processing unit 22.
  • the operation unit 21 is operated by a user and supplies an operation signal corresponding to the operation to the image processing unit 22.
  • The operation of the operation unit 21 includes not only physical operations (for example, touching a touch panel) but also all user actions that serve as alternative means of issuing commands, such as speech utterances, gestures, and movements of the line of sight.
  • The image processing unit 22 performs image processing on the captured image supplied from the camera unit 10 (its image quality control unit 13), and supplies an image obtained by the image processing to the image output unit 24.
  • the image processing of the image processing unit 22 is performed according to an operation signal from the operation unit 21 and information stored in the setting storage memory 23.
  • the setting storage memory 23 stores image processing settings of the image processing unit 22 and other necessary information.
  • the image output unit 24 performs display control for outputting and displaying the image supplied from the image processing unit 22 on the display unit 25.
  • the display unit 25 is configured by, for example, an LCD (Liquid Crystal Display) or other display device, and displays an image output from the image output unit 24 as a viewing image, for example.
  • LCD Liquid Crystal Display
  • the supply of the captured image from the image quality control unit 13 to the image processing unit 22 is performed by digital transmission.
  • When the captured image is supplied from the image quality control unit 13 to the image processing unit 22 by digital transmission, the image quality can be maintained even if the connection line between the image quality control unit 13 and the image processing unit 22 is drawn relatively long, because the image signal of the captured image does not deteriorate in digital transmission between the image quality control unit 13 and the image processing unit 22.
  • Alternatively, the camera unit 10 can be configured to include the image processing unit 22 in addition to the optical system 11 and the image quality control unit 13, with the captured image supplied from the image quality control unit 13 to the image processing unit 22 by digital transmission. In this configuration, the connection line between the image quality control unit 13 and the image processing unit 22 can be made short, so that deterioration of the image signal is suppressed; the digital signal is input directly from the image quality control unit 13 to the image processing unit 22, and after processing such as image clipping is performed, it can be converted to an analog signal while suppressing a reduction in resolution.
  • The image after the image processing by the image processing unit 22 is displayed as a viewing image, and it can also be used for driving assistance and other operations.
  • FIG. 2 is a perspective view showing an example of the external configuration of a moving body on which the image processing system of FIG. 1 can be mounted.
  • As a moving body on which the image processing system of FIG. 1 can be mounted, for example, an automobile, an electric car, a hybrid electric car, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, or the like can be adopted.
  • a vehicle 30 which is an automobile is employed as a moving body on which the image processing system of FIG. 1 can be mounted.
  • the camera unit 31 is provided in the front center part of the vehicle 30, and the camera unit 32 is provided in the rear center part.
  • A camera unit 33 is provided at the portion of the vehicle 30 where the left side mirror is provided, and a camera unit 34 at the portion where the right side mirror is provided.
  • the dotted line represents the shooting range of the camera units 31 to 34.
  • the camera unit 31 photographs the front of the vehicle 30, and the camera unit 32 photographs the rear of the vehicle 30.
  • the camera unit 33 images the left side of the vehicle 30, and the camera unit 34 images the right side of the vehicle 30.
  • a wide-angle lens such as a fish-eye lens can be adopted so that photographing can be performed in as wide a photographing range as possible.
  • Hereinafter, the image processing system of FIG. 1 will be described assuming that, of the camera units 31 to 34, the camera unit 10 is employed as, for example, the camera unit 32 at the rear center.
  • an overhead image that is a viewing image of the vehicle 30 viewed from above is generated for parking assistance or the like.
  • In that case, the camera unit 31 needs to capture an image of the surroundings as a super-wide-angle camera, using a fisheye lens or the like, that can capture a wide-angle image with an angle of view close to 180 degrees centered on the camera unit 31. The same applies to the camera units 32 to 34.
  • Also, when generating a wide-range image so that the user can check other vehicles approaching from the left and right rear of the vehicle 30, it is necessary to capture wide-angle images not only with the camera unit 32 at the rear center of the vehicle 30 but also with the left and right camera units 33 and 34 of the vehicle 30.
  • the vehicle 30 can be provided with a plurality of camera units having different angles of view of images that can be taken for each application, but in this case, the number of camera units provided in the vehicle 30 increases.
  • Therefore, it is desirable that the vehicle 30 be provided with a smaller number of camera units that can capture wide-angle images, for example, the four camera units 31 to 34, and that various viewing images be generated from the wide-angle images captured by the camera units 31 to 34.
  • Since the wide-angle images captured by the camera units 31 to 34 are captured using, for example, a fisheye lens, they are distorted compared with images as perceived by human eyes. Therefore, when generating a viewing image to be presented to the user from the wide-angle images captured by the camera units 31 to 34, it is necessary to correct the distortion of the wide-angle image as needed, that is, to convert its projection method, so that the user can easily view the image.
  • The central projection method is suitable as a projection method for images that represent the world as people are accustomed to seeing it.
  • The projection method of an image captured with a pinhole camera is the central projection method.
  • On the other hand, the projection method of the images captured by the camera units 31 to 34 using fisheye lenses is, for example, the equidistant projection method.
  • However, an image after projective conversion, obtained by projectively converting an equidistant projection image into a central projection image, suffers from perspective problems and loses the sense of visual depth.
  • For example, a standing object on the ground may appear inclined, and the farther it is from the optical axis, the more it is stretched outward.
  • In the equidistant projection method, the image height y of a point is given by y = fθ, whereas in the central projection method it is given by y = f·tan θ, where f represents the focal length of the lens at the time of image capture, and θ represents the angle formed by the light ray incident on the lens and the optical axis.
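The difference between the two projection methods can be illustrated numerically. The sketch below uses the standard image-height models y = fθ (equidistant) and y = f·tan θ (central); the function names are illustrative and not taken from the patent:

```python
import math

def equidistant_image_height(f, theta):
    """Image height under the equidistant projection method: y = f * theta."""
    return f * theta

def central_image_height(f, theta):
    """Image height under the central projection method: y = f * tan(theta)."""
    return f * math.tan(theta)

def equidistant_to_central(f, y_eq):
    """Convert an equidistant-projection image height to the central-projection
    image height for the same incident ray, whose angle is theta = y_eq / f."""
    theta = y_eq / f
    return f * math.tan(theta)

# The farther a point is from the optical axis (the larger theta), the more
# the central projection stretches it outward relative to the equidistant one.
f = 1.0
for deg in (10, 40, 70):
    theta = math.radians(deg)
    print(deg, equidistant_image_height(f, theta), central_image_height(f, theta))
```

This is why, as noted above, subjects far from the optical axis are stretched outward after the projective conversion.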
  • Furthermore, suppose that the optical axis of the virtual camera unit that would capture the image after the projective conversion from the equidistant projection method to the central projection method is set to point downward.
  • When the viewpoint is changed in this way, a three-dimensional object on the road (road surface) appears at a position farther from the optical axis and is stretched outward (in the direction away from the optical axis).
  • FIG. 3 is a block diagram illustrating a configuration example of the image processing unit 22 in FIG.
  • the image processing unit 22 includes an image acquisition unit 41, a projection conversion unit 42, a viewpoint conversion unit 43, a movement conversion unit 44, a switching unit 45, a display area cutout unit 46, and a superimposition unit 47.
  • the image acquisition unit 41 acquires a captured image supplied from the image quality control unit 13 of the camera unit 10 as a target image to be processed by the image processing, and supplies the acquired image to the projective conversion unit 42.
  • the projection conversion unit 42 converts the projection method of the captured image as the target image supplied from the image acquisition unit 41.
  • As the lens constituting the optical system 11 of the camera unit 10, a wide-angle lens such as a fisheye lens is employed, for example, so that an image covering a wide photographing range can be obtained as the captured image.
  • Here, a captured image of the equidistant projection method is captured.
  • Since the image of a subject shown in an equidistant projection captured image is distorted, the projective conversion unit 42 performs projective conversion that converts the equidistant projection captured image into, for example, a central projection captured image in which the subject's image is not distorted. The projective conversion of the projective conversion unit 42 is performed in consideration of the image height characteristics of the lens constituting the optical system 11 of the camera unit 10.
  • the projective conversion unit 42 supplies the viewpoint conversion unit 43 with a projective conversion image obtained by projective conversion of the captured image as the target image from the image acquisition unit 41.
  • Here, the captured image of the equidistant projection method is converted into a captured image of the central projection method, but the projection method of the captured image subject to the projective conversion is not limited to the equidistant projection method.
  • Also, the projection method of the captured image after projective conversion (the projective conversion image) is not limited to the central projection method; for example, the stereographic projection method or any other projection method that has vanishing points can be adopted.
  • the viewpoint conversion unit 43 performs viewpoint conversion for converting the projection conversion image supplied from the projection conversion unit 42 into a viewpoint conversion image viewed from a predetermined viewpoint.
  • the viewpoint includes the viewpoint position and the line-of-sight direction.
  • the viewpoint position and the line-of-sight direction as the viewpoint of the captured image captured by the camera unit 10 are represented by the position of the camera unit 10 and the optical axis direction of the camera unit 10, respectively.
  • the viewpoint conversion unit 43 for example, sets (assumes) a virtual camera unit, and performs viewpoint conversion for converting the projected conversion image into a viewpoint conversion image viewed (captured) from the virtual camera unit.
  • the virtual camera unit is set so that the optical axis of the virtual camera unit matches the vanishing point in the viewpoint conversion image.
  • the viewpoint conversion of the viewpoint conversion unit 43 is performed so that the optical axis when photographing from the virtual camera unit matches the vanishing point in the viewpoint conversion image.
  • the fact that the optical axis matches the vanishing point in the viewpoint converted image means that the intersection of the optical axis and the viewpoint converted image matches the vanishing point in the viewpoint converted image.
  • the viewpoint conversion unit 43 supplies the viewpoint conversion image obtained by the viewpoint conversion of the projective conversion image to the movement conversion unit 44.
  • viewpoint conversion is performed after the projective conversion is performed, but the order of the projective conversion and the viewpoint conversion may be reversed. That is, projective transformation can be performed after performing viewpoint transformation.
  • the movement conversion unit 44 performs movement conversion for moving the subject position in the viewpoint conversion image according to the subject distance from the vanishing point in the viewpoint conversion image supplied from the viewpoint conversion unit 43 to the subject position where the subject appears.
  • Here, the subject distance from the vanishing point in the viewpoint conversion image to the subject position where the subject appears is used as the parameter of the conversion, but an equivalent parameter may be used instead of the distance. For example, when the optical axis at the time of capturing from the virtual camera unit passes through the vanishing point in the viewpoint conversion image, the angle relative to the direction of the virtual camera unit can be used in place of the subject distance.
  • For example, the movement conversion unit 44 performs movement conversion on the viewpoint conversion image so that the subject position is moved according to the subject distance, and the subject distance after the movement conversion becomes shorter than before the movement conversion.
  • the movement conversion unit 44 performs movement conversion in accordance with the conversion characteristics supplied from the switching unit 45.
  • the conversion characteristic represents the relationship between the subject distance before movement conversion and the subject distance after movement conversion.
  • the movement conversion unit 44 supplies the movement conversion image obtained by the movement conversion of the viewpoint conversion image to the display area cutout unit 46.
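As a sketch of how such a movement conversion could be implemented (the patent gives no implementation; the inverse-warping approach, the function names, and the example characteristic below are assumptions), each output pixel can be resampled along the radial direction from the vanishing point:

```python
import numpy as np

def movement_conversion(image, vanishing_point, characteristic_inv):
    """Move each pixel along the radial direction from the vanishing point.
    `characteristic_inv` maps the distance R from the vanishing point in the
    OUTPUT image back to the source distance r, so every output pixel can be
    sampled from the input (inverse warping avoids holes in the result)."""
    h, w = image.shape[:2]
    vy, vx = vanishing_point
    ys, xs = np.indices((h, w), dtype=np.float64)
    dy, dx = ys - vy, xs - vx
    R = np.hypot(dy, dx)              # distance from the vanishing point
    r = characteristic_inv(R)         # corresponding distance before conversion
    # Scale the radial offset; the vanishing point itself stays fixed.
    scale = np.divide(r, R, out=np.ones_like(R), where=R > 0)
    src_y = np.clip(np.rint(vy + dy * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(vx + dx * scale), 0, w - 1).astype(int)
    return image[src_y, src_x]

# Example conversion characteristic: after conversion, distances from the
# vanishing point are 20% shorter (R = 0.8 * r), so the inverse is r = R / 0.8.
img = np.arange(25).reshape(5, 5)
out = movement_conversion(img, (2, 2), lambda R: R / 0.8)
```

The conversion characteristic supplied by the switching unit 45 would play the role of the `characteristic_inv` mapping here.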
  • the switching unit 45 acquires (reads out) the conversion characteristics stored in the setting storage memory 23 and supplies the conversion characteristics to the movement conversion unit 44.
  • In addition, the switching unit 45 switches the conversion characteristic read from the setting storage memory 23 in accordance with a user operation, that is, an operation signal supplied from the operation unit 21, and supplies the switched conversion characteristic to the movement conversion unit 44.
  • the image processing unit 22 can switch conversion characteristics used for movement conversion in accordance with a user operation.
  • the display area cutout unit 46 cuts out a display area image to be displayed on the display unit 25 from the movement conversion image supplied from the movement conversion unit 44 as a cutout image and supplies the cutout image to the superimposition unit 47.
  • the display area cutout unit 46 can cut out all of the movement conversion image as a cutout image in addition to a part of the movement conversion image.
  • the superimposing unit 47 superimposes OSD information to be displayed by OSD (On-Screen Display) on the cut-out image supplied from the display area cut-out unit 46 as necessary, and supplies it to the image output unit 24.
  • OSD On-Screen Display
  • the projection conversion of the projection conversion unit 42, the viewpoint conversion of the viewpoint conversion unit 43, the movement conversion of the movement conversion unit 44, and the cutout of the cutout image of the display area cutout unit 46 may be performed in any order. Alternatively, all of these processes may be performed comprehensively in a single process.
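A minimal sketch of performing these conversions "comprehensively in a single process" is to compose the per-pixel coordinate lookups of each stage and sample the source image only once. The helper names and the toy stages below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def compose_lookups(*lookups):
    """Compose per-pixel coordinate lookups so that projective conversion,
    viewpoint conversion, movement conversion, and display-area cropping can
    all be applied in one sampling pass. Each lookup maps output (y, x)
    coordinate arrays to input (y, x) coordinate arrays."""
    def composed(ys, xs):
        # Inverse warping composes back-to-front: the final stage's output
        # coordinates are traced back through every earlier stage.
        for lookup in reversed(lookups):
            ys, xs = lookup(ys, xs)
        return ys, xs
    return composed

def apply_remap(image, lookup):
    """Sample the input image once, using the composed coordinate lookup."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float64)
    sy, sx = lookup(ys, xs)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return image[sy, sx]

# Two toy stages standing in for the real conversions, fused into one pass:
img = np.arange(16).reshape(4, 4)
width = img.shape[1]
shift = lambda ys, xs: (ys, xs - 1)           # shift content right by one pixel
flip = lambda ys, xs: (ys, (width - 1) - xs)  # mirror horizontally
out = apply_remap(img, compose_lookups(shift, flip))
```

Fusing the stages this way samples the source image only once, avoiding repeated interpolation losses between the separate conversions.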
  • FIG. 4 is a diagram showing an example of the state of taking a photographed image by the camera unit 10 and a photographed image obtained by the photographing.
  • the camera unit 10 is installed at the rear center of the vehicle 30 with the optical axis facing slightly downward.
  • The vehicle 30 is on a three-lane road (or on a road in a parking lot divided into three sections), and on the road to the left and right behind the vehicle 30, near the lane markings that divide the lanes, pylons as standing objects are placed.
  • the lane markings that divide the lane include the lane center line, the lane boundary line, the lane outside line, and the like.
  • Therefore, the captured image captured by the camera unit 10 is an image showing the road behind the vehicle 30 and the pylons as standing objects placed on the road.
  • The optical axis of the camera unit 10 does not always pass through the vanishing point of the lane markings on the road extending behind the vehicle 30.
  • In that case, the captured image may not maintain the linearity of subjects that are straight in the real world, such as lane markings heading toward the vanishing point, and the user may feel a sense of incongruity.
  • Therefore, in the viewpoint conversion, the virtual camera unit is set so that its optical axis passes through the vanishing point in the viewpoint conversion image, and the captured image (the projective conversion image obtained by the projective conversion of the projective conversion unit 42) is converted into a viewpoint conversion image captured from that virtual camera unit.
  • Note that the captured image captured by the camera unit 10 is, for example, a wide-angle image showing a range of an angle of view wider than the range shown in the captured image of FIG. 4; in FIG. 4, only a part of that range, including the vanishing point, is shown as the captured image. Therefore, the actual captured image continues outside the rectangle shown as the captured image in FIG. 4. The same applies to the viewpoint conversion image and other images.
  • FIG. 5 is a diagram for explaining viewpoint conversion of the viewpoint conversion unit 43.
  • In the viewpoint conversion, the viewpoint conversion unit 43 sets a virtual viewpoint, that is, a virtual camera unit, so that the optical axis of the virtual camera unit passes through the vanishing point in the viewpoint conversion image.
  • In FIG. 5, the virtual camera unit is set forward of the camera unit 10, toward the front of the vehicle 30, and its optical axis is set parallel to the road.
  • That is, the virtual camera unit is set farther than the camera unit 10 from subjects such as the pylons and the road shown in the captured image, and its optical axis is higher than that of the camera unit 10.
  • the viewpoint conversion unit 43 converts the captured image (projection conversion image obtained by the projection conversion of the projection conversion unit 42) into a viewpoint conversion image that is captured from a virtual camera unit.
  • the viewpoint conversion of the viewpoint conversion unit 43 can be performed by affine transformation, for example.
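Under the road-plane assumption, such a viewpoint change between two camera poses can be expressed as a 3×3 planar homography on image coordinates, of which the affine transformation named here is a special case (last row [0, 0, 1]). The matrix values below are purely illustrative:

```python
import numpy as np

def warp_point(H, x, y):
    """Apply a 3x3 planar homography H to an image point (x, y).
    When every subject lies in the road plane, viewpoint conversion between
    two camera poses maps image points exactly by such a homography; an
    affine transformation is the special case with last row [0, 0, 1]."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Illustrative affine viewpoint change: uniform scaling plus a translation.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
x2, y2 = warp_point(H, 4.0, 8.0)  # maps (4, 8) to (12, 24)
```

Warping every pixel of the projective conversion image through such a mapping yields the viewpoint conversion image; the falling-over of standing objects described next is a direct consequence of applying this planar mapping to subjects that are not actually in the road plane.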
  • The viewpoint conversion image obtained in this way is, for example, as shown in FIG.
  • In the viewpoint conversion image, the linearity of subjects that are straight in the real world, such as lane markings heading toward the vanishing point, is maintained.
  • the viewpoint transformation of the projection transformation image is performed by, for example, affine transformation without considering the subject position in the three-dimensional space.
  • The affine transformation as the viewpoint conversion is performed on the assumption that all subjects shown in the projective conversion image subject to the viewpoint conversion exist in the plane of the road. Therefore, when the virtual camera unit is set at a position away from subjects such as the pylons and the road shown in the projective conversion image, in the viewpoint conversion image obtained by the viewpoint conversion, standing objects to the left and right of positions away from the vanishing point (positions close to the camera unit 10 in the real world) fall over onto the road.
  • In the viewpoint conversion image, the pylons as standing objects at the left and right positions close to the camera unit 10 have fallen further toward the road than in the captured image before the viewpoint conversion shown in FIG.
• When a standing object is reflected in a fallen state in the viewpoint conversion image, it may be difficult for a user viewing the viewpoint conversion image to recognize the standing object as a standing object (or a three-dimensional object).
  • the movement conversion unit 44 performs movement conversion so that the user can easily recognize the standing object.
• FIGS. 6, 7, 8, and 9 are diagrams for explaining the movement conversion of the movement conversion unit 44.
  • FIG. 6 is a diagram illustrating an example of a viewpoint conversion image in which a standing object has fallen down due to viewpoint conversion.
• In FIG. 6, one pylon as a standing object is placed at each of the left and right positions on the near side of the camera unit 10 on the road, and two pylons as standing objects are placed at the center on the far side.
• The pylons as standing objects placed at the left and right positions on the near side are in a fallen state due to the viewpoint conversion and are difficult to recognize as standing objects.
  • FIG. 7 is a diagram showing an example of movement conversion for the viewpoint conversion image of FIG.
• In the movement conversion, the subject position in the viewpoint conversion image is moved according to the subject distance from the vanishing point of the viewpoint conversion image to the subject position where the subject appears.
• In FIG. 7, the subject distance (the distance from the vanishing point to the pylon) of (the tip of) each pylon as a standing object placed at the left and right positions on the near side is the distance r1.
• By the movement conversion for the viewpoint conversion image, the subject positions of the pylons placed at the left and right positions are moved according to their subject distance r1 before the movement conversion.
• Specifically, the subject positions of the pylons placed at the left and right positions on the near side are moved so that their subject distance becomes a distance R1 shorter than the subject distance r1 before the conversion.
• A standing object that appears near the vanishing point in the viewpoint conversion image (that is, a standing object far from the virtual camera unit) does not fall over, whereas a standing object reflected at a position away from the vanishing point falls over more the farther it is from the vanishing point.
• Therefore, in the movement conversion, the subject position is not moved for a subject reflected near the vanishing point, while for a subject reflected at a position away from the vanishing point, the subject position is moved closer to the vanishing point the farther it is from the vanishing point.
  • FIG. 8 is a diagram illustrating an example of a movement conversion image obtained by movement conversion of the viewpoint conversion image of FIG.
• In the movement conversion image, the pylons as standing objects placed at the left and right positions on the near side, which were in a fallen state due to the viewpoint conversion, are raised as shown in FIG.
• As a result, the pylons placed at the left and right positions on the near side are easily recognized as standing objects on the road.
  • the movement of the subject position from the position A to the position B can be performed, for example, by changing the pixel value of the pixel at the subject position A to the pixel value of the pixel at the subject position B.
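The per-pixel copy described above can be sketched as follows. The pairing direction (the value at A written to B so that the subject appears at B) is an assumption for illustration; reading from an unmodified copy makes the result independent of the order in which the moves are applied.

```python
import numpy as np

def apply_moves(img, moves):
    """Apply subject-position moves, each given as ((va, ua), (vb, ub)):
    the pixel value at position A is written to position B."""
    out = img.copy()
    for (va, ua), (vb, ub) in moves:
        out[vb, ub] = img[va, ua]  # read from the original, not from `out`
    return out
```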
  • the movement conversion can be performed according to conversion characteristics representing the relationship between the subject distance before the movement conversion and the subject distance after the movement conversion.
  • FIG. 9 is a diagram showing an example of conversion characteristics.
• In FIG. 9, the horizontal axis represents the subject distance r before the movement conversion, and the vertical axis represents the subject distance R after the movement conversion.
• The conversion characteristics in FIG. 9 indicate that when the subject distance r before the movement conversion is equal to or less than the threshold THr, the subject distance R after the movement conversion is equal to the subject distance r before the movement conversion, and that when the subject distance r before the movement conversion exceeds the threshold THr, the subject distance R after the movement conversion is shorter than the subject distance r before the movement conversion.
• Therefore, in the movement conversion according to the conversion characteristics in FIG. 9, a subject position whose subject distance r before the movement conversion is equal to or less than the threshold THr is moved so that the subject distance R after the movement conversion is equal to the subject distance r before the movement conversion; that is, such a subject position does not substantially move.
• On the other hand, a subject position whose subject distance r before the movement conversion exceeds the threshold THr is moved toward the vanishing point so that the subject distance R after the movement conversion is shorter than the subject distance r before the movement conversion.
• Such a subject position is moved according to the subject distance r so that the larger the subject distance r, the closer it is brought to the vanishing point.
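The conversion characteristic just described can be sketched as a piecewise function R(r). The document does not fix the functional form beyond THr (coefficients a, b, and c are mentioned later in connection with equation (1)); the linear compression with slope 1/k below is one assumption that satisfies the stated properties: identity within the threshold, R shorter than r beyond it, and a movement amount that grows with r.

```python
def subject_distance_after(r, thr, k=2.0):
    """Conversion characteristic R(r): identity within the threshold `thr`,
    and beyond it a compression toward the vanishing point whose movement
    amount r - R grows with r (continuous at r = thr; requires k > 1)."""
    if r <= thr:
        return float(r)
    return thr + (r - thr) / k
```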
  • FIG. 10 is a diagram illustrating an example of a movement conversion image obtained by movement conversion of the viewpoint conversion image illustrated in FIG.
• In that viewpoint conversion image, the pylons as standing objects at the left and right of the positions close to the camera unit 10 are in a state of having fallen onto the road.
  • FIG. 11 is a diagram illustrating an example of switching of conversion characteristics used for movement conversion.
  • the switching unit 45 can switch the conversion characteristics that are read from the setting storage memory 23 and supplied to the movement conversion unit 44 in accordance with, for example, a user operation.
  • a first characteristic and a second characteristic are prepared as conversion characteristics.
• According to the first characteristic, the subject distance R after the movement conversion is equal to the subject distance r before the movement conversion over the entire range of r; therefore, in the movement conversion according to the first characteristic, the subject position does not substantially move.
• According to the second characteristic, when the subject distance r before the movement conversion exceeds the threshold THr, the subject distance R after the movement conversion is shorter than the subject distance r before the movement conversion.
• Therefore, in the movement conversion according to the first characteristic, a movement conversion image similar to the viewpoint conversion image, that is, a movement conversion image in which the pylons as standing objects remain in the state of having fallen due to the viewpoint conversion as shown in FIG., is obtained.
• As conversion characteristics, besides the first and second characteristics, a characteristic whose threshold THr differs from that of the second characteristic, or a characteristic in which the relationship between the subject distances r and R beyond the threshold THr is steeper or gentler than in the second characteristic, can also be prepared.
  • the conversion characteristics may be adjusted before shipment (in a factory) / sale (in a dealer) of a vehicle equipped with the camera system of the present invention.
  • the conversion characteristics may be adjusted by the car user after the sale.
• The adjustment values may be stored for each user and read out appropriately according to the user.
• In the movement conversion, regardless of which of the first and second characteristics is used, a subject position within the threshold THr from the vanishing point does not substantially move. Therefore, in the movement conversion images, the appearance within the circle of radius THr centered on the vanishing point is the same.
• On the other hand, a subject position at a subject distance exceeding the threshold THr from the vanishing point does not substantially move with the first characteristic, but with the second characteristic it moves toward the vanishing point, and the larger the subject distance r, the more it moves.
• Therefore, between the movement conversion image obtained by the movement conversion according to the first characteristic and the movement conversion image obtained by the movement conversion according to the second characteristic, the range of the scene reflected in the area outside the circle of radius THr centered on the vanishing point differs.
• That is, with the first characteristic the subject position does not move, whereas with the second characteristic the subject position moves toward the vanishing point as the subject distance r increases.
• As a result, the movement conversion image obtained by the movement conversion according to the second characteristic shows a wider angle-of-view range than the movement conversion image obtained by the movement conversion according to the first characteristic.
  • FIG. 12 is a flowchart illustrating an example of processing of the image processing system of FIG.
• In step S11, the camera unit 10 captures a captured image and supplies it to the image processing unit 22, and the process proceeds to step S12.
• In step S12, the image processing unit 22 acquires the captured image from the camera unit 10 and performs image processing on it. The image processing unit 22 then supplies the image obtained as a result of the image processing to the image output unit 24, and the process proceeds from step S12 to step S13.
• In step S13, the image output unit 24 displays the image after the image processing from the image processing unit 22 by outputting it to the display unit 25, and the process ends.
  • FIG. 13 is a flowchart illustrating an example of image processing in step S12 of FIG.
• In step S21, the image acquisition unit 41 acquires the captured image supplied from the camera unit 10 as the target image to be subjected to the image processing, supplies it to the projective conversion unit 42, and the process proceeds to step S22.
• In step S22, the projective conversion unit 42 performs projective conversion of the captured image as the target image from the image acquisition unit 41, supplies the projection conversion image obtained by the projective conversion to the viewpoint conversion unit 43, and the process proceeds to step S23.
• In step S23, the viewpoint conversion unit 43 sets (assumes) the virtual camera unit and performs viewpoint conversion of the projection conversion image from the projection conversion unit 42 so that the optical axis of the virtual camera unit coincides with the vanishing point in the viewpoint conversion image. The viewpoint conversion unit 43 then supplies the viewpoint conversion image obtained by the viewpoint conversion of the projection conversion image to the movement conversion unit 44, and the process proceeds from step S23 to step S24.
• In step S24, the switching unit 45 acquires the conversion characteristics from the setting storage memory 23 and supplies them to the movement conversion unit 44, and the process proceeds to step S25.
• In step S25, the movement conversion unit 44 performs movement conversion of the viewpoint conversion image from the viewpoint conversion unit 43 according to the (latest) conversion characteristics supplied from the switching unit 45 in the immediately preceding step S24. The movement conversion unit 44 then supplies the movement conversion image obtained by the movement conversion of the viewpoint conversion image to the display area cutout unit 46, and the process proceeds from step S25 to step S26.
• In step S26, the display area cutout unit 46 cuts out, as a cut-out image, the image of the display area to be displayed on the display unit 25 from the movement conversion image from the movement conversion unit 44. The display area cutout unit 46 then supplies the cut-out image to the superimposing unit 47, and the process proceeds from step S26 to step S27.
• In step S27, the switching unit 45 determines whether to switch the conversion characteristics used for the movement conversion.
• When it is determined in step S27 that the conversion characteristics used for the movement conversion are to be switched, that is, for example, when the operation unit 21 has been operated so as to switch the conversion characteristics, the process returns to step S24.
• In step S24, the switching unit 45 newly acquires conversion characteristics different from the previously acquired conversion characteristics from the setting storage memory 23 and supplies them to the movement conversion unit 44, and thereafter the same processing is repeated.
• The conversion characteristics newly acquired by the switching unit 45 are supplied to the movement conversion unit 44, whereby the conversion characteristics used for the movement conversion are switched.
• When it is determined in step S27 that the conversion characteristics used for the movement conversion are not to be switched, that is, for example, when the operation unit 21 has not been operated so as to switch the conversion characteristics, the process proceeds to step S28.
• In step S28, the superimposing unit 47 superimposes OSD information on the cut-out image from the display area cutout unit 46 as necessary, supplies the result to the image output unit 24, and the process returns.
• The projective conversion in step S22, the viewpoint conversion in step S23, the acquisition of the conversion characteristics and the movement conversion using them in steps S24 and S25, and the cutting out of the cut-out image (the image of the display area) in step S26 may be performed in any order, or all of these processes may be performed collectively as a single process.
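The flow of steps S21 through S28 can be summarized as a functional sketch. The callables stand in for the (hypothetical, simplified) units 42 through 47; this only illustrates the sequencing, not the actual processing of each unit.

```python
def image_processing(captured, projective, viewpoint, move, characteristic,
                     cutout, superimpose=None, osd=None):
    """Sketch of steps S21-S28; each stage is an injected callable."""
    img = projective(captured)          # S22: projective conversion
    img = viewpoint(img)                # S23: viewpoint conversion
    img = move(img, characteristic)     # S24-S25: movement conversion
    img = cutout(img)                   # S26: cut out the display area
    if superimpose is not None:         # S28: superimpose OSD information
        img = superimpose(img, osd)
    return img
```

Because each stage here is a pure function of the image, the stages could also be fused into a single lookup, as discussed later with the coordinate correspondence information.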
  • FIG. 14 is a diagram illustrating an example of viewpoint conversion and how an image is viewed.
  • FIG. 14 shows an example of a captured image captured by the central projection method and a viewpoint converted image obtained by viewpoint conversion of the captured image.
• FIG. 14A shows an example of a shooting situation when the camera unit 10 performs shooting by the central projection method, and a captured image captured by the camera unit 10; the figure showing the shooting situation is a plan view.
• In FIG. 14A, a road on which pylons as standing objects are placed is photographed by the central projection method, and a captured image in which the road and the pylons placed on the road are reflected is obtained.
• FIG. 14B shows the shooting situation when the virtual camera unit is set at a position approaching the subject (pylons) (in the present embodiment, behind the vehicle 30), and an example of the viewpoint conversion image obtained by the viewpoint conversion of the captured image in that case.
• FIG. 14C shows the shooting situation when the virtual camera unit is set at a position away from the subject (in the present embodiment, in front of the vehicle 30), and an example of the viewpoint conversion image obtained by the viewpoint conversion of the captured image in that case.
• When the virtual camera unit is set at a position away from the subject, the pylons at positions away from the vanishing point fall outward (toward the side opposite to the vanishing point), as shown in the viewpoint conversion image of FIG.
  • FIG. 15 is a diagram illustrating another example of viewpoint conversion and how an image is viewed.
  • FIG. 15 shows another example of a captured image captured by the central projection method and a viewpoint converted image obtained by viewpoint conversion of the captured image.
• FIG. 15A shows an example of a shooting situation when the camera unit 10 performs shooting by the central projection method, and a captured image captured by the camera unit 10; the figure showing the shooting situation is a view seen from the side.
• In FIG. 15A, as in FIG. 14A, a road on which pylons as standing objects are placed is photographed by the central projection method, and a captured image in which the road and the pylons placed on the road are reflected is obtained.
• FIG. 15B shows the shooting situation when the virtual camera unit is set at a position moved downward from the camera unit 10 (in the present embodiment, in the direction approaching the road), and an example of the viewpoint conversion image obtained by the viewpoint conversion of the captured image in that case.
• FIG. 15C shows the shooting situation when the virtual camera unit is set at a position moved upward from the camera unit 10 (in the present embodiment, in the direction away from the road), and an example of the viewpoint conversion image obtained by the viewpoint conversion of the captured image in that case.
• When the virtual camera unit is set at a position moved upward, the pylons at positions away from the vanishing point fall slightly (outward), as shown in the viewpoint conversion image of FIG.
  • FIG. 16 is a diagram for explaining a pylon as a standing object reflected in a photographed image.
  • FIG. 16 shows an example of a shooting situation when shooting a road on which a pylon as a standing object is placed, and a shot image obtained by the shooting.
• In FIG. 16, as figures showing the shooting situation, a plan view and a view of the shooting situation seen from the left side facing the front of the vehicle 30 are shown.
• From the captured image alone, it cannot be distinguished whether a pylon reflected in the captured image is a standing object placed on the road or a projected image obtained by projecting the pylon as a standing object onto the road.
  • FIG. 17 is a diagram illustrating an example of a shooting range and a shot image shot by the equidistant projection type camera unit 10.
• In order to keep the vicinity of the vehicle 30 (and thus the vicinity of the camera unit 10) within the angle of view of the captured image, the camera unit 10 employs an equidistant projection type wide-angle lens such as a fisheye lens.
• With the equidistant projection method, a captured image with a wide angle of view can be shot, but distortion caused by the equidistant projection occurs in the subjects reflected in the captured image; for example, a straight lane marking appears curved.
• Moreover, in the equidistant projection type captured image, the farther a subject (for example, a pylon) is from the vehicle 30, the smaller it appears compared with how it would look in a central projection image.
• As a result, the sense of distance to a subject shown in the captured image may be difficult to grasp.
  • the image processing unit 22 performs projection conversion and viewpoint conversion.
  • FIG. 18 is a diagram illustrating an example of a viewpoint conversion image obtained by projective conversion and viewpoint conversion of the photographed image of FIG. 17 and a virtual camera unit setting performed by the viewpoint conversion.
• By the projective conversion, the equidistant projection type captured image is converted into a central projection type projection conversion image, whereby the distortion that occurred in the subjects reflected in the captured image is corrected.
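The relation underlying this projective conversion can be sketched numerically. An equidistant (fisheye) lens images a ray at incidence angle theta at radius r = f*theta from the image center, while the central projection images it at r = f*tan(theta); here f is a focal length assumed, for simplicity, to be common to both models. Inverse-mapping each output radius to a source radius gives:

```python
import math

def source_radius(r_central, f):
    """For a radius in the central-projection output image, return the radius
    in the equidistant (r = f * theta) source image that sees the same ray."""
    theta = math.atan(r_central / f)  # central projection: r = f * tan(theta)
    return f * theta                  # equidistant projection: r = f * theta
```

Since atan(x) < x for x > 0, the source radius is always smaller than the output radius; the periphery of the fisheye image is stretched outward, which is the distortion correction described above.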
• Furthermore, so that subjects near the vehicle 30 fall within the angle of view, the virtual camera unit is set at a position away from the subjects, as shown by the dotted arrow in the drawing, that is, at a position in front of the vehicle 30 relative to the position of the camera unit 10 in FIG., and a viewpoint conversion image is obtained.
• However, when the virtual camera unit is set at a position away from the subjects, in the viewpoint conversion image obtained by the viewpoint conversion, the pylons as standing objects, particularly those at positions near the vehicle 30, fall outward, as shown in FIG.
• As described with reference to FIG. 16, it cannot be distinguished whether a pylon reflected in the captured image is a standing object placed on the road or a projected image obtained by projecting the pylon as a standing object onto the road.
• The affine transformation as the viewpoint conversion of the captured image is therefore performed on the assumption that the pylons reflected in the captured image are projected images of pylons.
• As a result, the pylons reflected in the viewpoint conversion image obtained by the viewpoint conversion are in a fallen state.
  • FIG. 19 is a diagram illustrating the direction in which the pylon reflected in the viewpoint conversion image falls.
  • FIG. 19A is a diagram for explaining a case where the pylon reflected in the viewpoint conversion image falls to the inside (the vanishing point side). That is, FIG. 19A is a plan view showing a shooting situation and an example of a viewpoint converted image.
• As shown in FIG. 19A, when the projection of a pylon reflected in the captured image faces the inside of the angle of view representing the boundary of the shooting range of the virtual camera unit (faces the optical axis side of the virtual camera unit, that is, when the projection of the pylon does not protrude beyond the angle of view), the pylon falls inward in the viewpoint conversion image.
• The projection of a pylon may thus face the inside of the angle of view relative to the angle of view.
  • FIG. 19B is a diagram for explaining a case where the pylon reflected in the viewpoint conversion image falls outside (the side opposite to the vanishing point). That is, B in FIG. 19 is a plan view showing a shooting situation and an example of a viewpoint conversion image.
• As shown in FIG. 19B, when the projection of a pylon reflected in the captured image faces the outside of the angle of view representing the boundary of the shooting range of the virtual camera unit (faces the side opposite to the optical axis of the virtual camera unit, that is, when the projection of the pylon protrudes beyond the angle of view), the pylon falls outward in the viewpoint conversion image.
• The projection of a pylon may thus face the outside of the angle of view relative to the angle of view.
  • FIG. 20 is a diagram illustrating an example of a viewpoint conversion image that is a target of movement conversion and a movement conversion image obtained by movement conversion of the viewpoint conversion image.
• In the viewpoint conversion image of FIG. 20, the pylons as standing objects placed on the road are reflected in a state of having fallen outward.
• In the movement conversion, subjects at subject distances exceeding the threshold THr from the vanishing point are brought closer to the vanishing point, so that the pylons that had fallen outward are raised compared with before the movement conversion; the movement conversion image therefore makes the standing objects easy to recognize.
• Moreover, in the movement conversion, subjects at subject distances within the threshold THr from the vanishing point are left as they are, so the user can see those subjects in a state in which their distortion has been corrected by the projective conversion.
• As described above, in the movement conversion, a subject position is moved so as to approach the vanishing point, whereby a standing object in a fallen state is raised.
• In the movement conversion, a subject position can also be moved away from the vanishing point according to the subject distance.
  • FIG. 21 is a diagram for explaining image conversion performed by the image processing unit 22.
• Here, image conversion means the combination of the projective conversion, the viewpoint conversion, and the movement conversion performed by the image processing unit 22, or a conversion combining two or more of these conversions.
• Hereinafter, an image subject to image conversion is also called an input image, and an image obtained by the image conversion of an input image is also called an output image.
• The input image Input is converted into the output image Output according to coordinate correspondence information indicating which point (pixel) at each coordinate (position) of the input image Input corresponds to which point (pixel) of the output image Output.
  • the coordinate correspondence information can be held in the form of a LUT (Look Up Table) or a mathematical expression, for example. That is, the coordinate correspondence information can be stored as a LUT in a memory (for example, the setting storage memory 23), or can be generated (calculated by calculation) in the image processing unit 22 according to an equation.
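Coordinate correspondence information held as a LUT can be sketched as an array that stores, for each output pixel, the source coordinates to read. This is a minimal nearest-neighbour sketch; the mirror LUT below is a hypothetical example, not a conversion from the document.

```python
import numpy as np

def apply_lut(src, lut):
    """`lut` has shape (H, W, 2); lut[y, x] = (source y, source x) for the
    output pixel (y, x).  Advanced indexing gathers all pixels at once."""
    return src[lut[..., 0], lut[..., 1]]

# Hypothetical example: a LUT that mirrors a 2x3 image horizontally.
src = np.array([[1, 2, 3],
                [4, 5, 6]])
ys, xs = np.mgrid[0:2, 0:3]
lut = np.stack([ys, 2 - xs], axis=-1)
```

Storing the LUT in a memory such as the setting storage memory 23 trades memory for the cost of evaluating the mathematical expression per pixel.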
  • FIG. 22 is a diagram for further explaining the image conversion performed by the image processing unit 22.
• In FIG. 22, an image Image0 is a captured image, an image Image1 is a viewpoint conversion image obtained by performing projective conversion and viewpoint conversion of the captured image Image0, and an image Image2 is a movement conversion image obtained by performing movement conversion of the viewpoint conversion image Image1.
  • the coordinate correspondence information for converting the captured image Image0 into the viewpoint conversion image Image1 is set as the coordinate correspondence information 1
  • the coordinate correspondence information for converting the viewpoint conversion image Image1 into the movement conversion image Image2 is set as the coordinate correspondence information 2.
• By storing in a memory, or generating according to a mathematical expression, the coordinate correspondence information 1 and the coordinate correspondence information 2, or the coordinate correspondence information 3 obtained as the product of the coordinate correspondence information 1 and the coordinate correspondence information 2, the captured image Image0 can be converted into the movement conversion image Image2 according to the coordinate correspondence information 1 and the coordinate correspondence information 2, or according to the coordinate correspondence information 3.
• When the coordinate correspondence information is expressed in the form of a matrix, the coordinate correspondence information 3 can be represented as the product of a matrix as the coordinate correspondence information 1 and a matrix as the coordinate correspondence information 2.
• The image processing unit 22 can generate the viewpoint conversion image Image1 from the captured image Image0 according to the coordinate correspondence information 1 and then generate the movement conversion image Image2 from the viewpoint conversion image Image1 according to the coordinate correspondence information 2.
• Alternatively, the movement conversion image Image2 can be generated directly from the captured image Image0 according to the coordinate correspondence information 3; in this case, the viewpoint conversion image Image1 is not generated.
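When the coordinate correspondence information is held as LUTs, the "product" of the coordinate correspondence information 1 and 2 is function composition: look up each Image2 pixel's Image1 coordinates, then look those up in LUT 1. A sketch (the mirror LUTs are hypothetical stand-ins for the actual conversions):

```python
import numpy as np

def compose_luts(lut1, lut2):
    """lut1 maps Image1 pixels to Image0 coordinates; lut2 maps Image2 pixels
    to Image1 coordinates.  The result maps Image2 pixels directly to Image0,
    playing the role of coordinate correspondence information 3."""
    return lut1[lut2[..., 0], lut2[..., 1]]
```

Applying the composed LUT once gives the same result as applying LUT 1 and then LUT 2, without materializing the intermediate image Image1.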
  • FIG. 23 is a diagram for explaining the movement conversion performed by the image processing unit 22.
• In FIG. 23, the viewpoint conversion image Image1 is represented by a two-dimensional UV coordinate system whose origin is the upper left point, whose horizontal axis is the U axis, and whose vertical axis is the V axis, and the coordinates of each point (pixel) of the viewpoint conversion image Image1 are expressed in this coordinate system.
• Similarly, the movement conversion image Image2 obtained by the movement conversion of the viewpoint conversion image Image1 is represented by a two-dimensional XY coordinate system whose origin is the upper left point, whose horizontal axis is the X axis, and whose vertical axis is the Y axis, and the coordinates of each point (pixel) of the movement conversion image Image2 are expressed in this coordinate system.
• In the viewpoint conversion image Image1, a point (coordinates) whose distance (subject distance) from the vanishing point (U vp , V vp ) is r is represented as (U s , V t ).
• By the movement conversion, the point (U s , V t ) of the viewpoint conversion image Image1 is converted into the point (X s , Y t ) of the movement conversion image Image2.
• The point (U s , V t ) and the point (X s , Y t ) are thus in a relationship in which (U s , V t ) is converted into (X s , Y t ); such points are called the corresponding point (U s , V t ) and point (X s , Y t ).
• For the corresponding point (U s , V t ) and point (X s , Y t ), equation (1) holds.
• In the movement conversion, the point (U s , V t ) at a subject distance r within the threshold THr from the vanishing point (U vp , V vp ) of the viewpoint conversion image Image1 is moved to the point (X s , Y t ) of the movement conversion image Image2 at the same position as that point (U s , V t ).
• On the other hand, the point (U s , V t ) at a subject distance r exceeding the threshold THr from the vanishing point (U vp , V vp ) of the viewpoint conversion image Image1 is moved to a point (X s , Y t ) that is brought closer to the vanishing point (X vp , Y vp ) of the movement conversion image Image2 the larger the subject distance r is. As a result, a standing object that had fallen at a position far from the vanishing point (U vp , V vp ) in the viewpoint conversion image Image1 is raised.
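The per-point movement just described can be sketched in the UV/XY coordinates of FIG. 23: the corresponding point keeps the direction of the ray from the vanishing point, and only its distance r is replaced by R. The conversion characteristic is passed in as a function, and the vanishing point is assumed, for this sketch, to have the same coordinates in Image1 and Image2.

```python
import math

def move_point(u, v, vp, characteristic):
    """Map a point (u, v) of Image1 to Image2: keep the direction from the
    vanishing point `vp` and replace the subject distance r by R = f(r)."""
    du, dv = u - vp[0], v - vp[1]
    r = math.hypot(du, dv)
    if r == 0.0:
        return (u, v)  # the vanishing point itself does not move
    scale = characteristic(r) / r
    return (vp[0] + du * scale, vp[1] + dv * scale)
```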
  • the conversion characteristics can be switched by changing the coefficients a, b, and c.
• The conversion characteristics may directly relate the subject distance r in the viewpoint conversion image Image1 and the subject distance R in the movement conversion image Image2, or may relate the subject distance r and the subject distance R indirectly using, for example, an angle θ as a parameter.
  • FIG. 24 is a diagram for explaining an example of a method for determining the threshold value THr.
• The threshold THr can be determined, for example, according to a user operation, or according to the standing objects that appear in the viewpoint conversion image.
  • FIG. 24 is a plan view showing an example of a road (parking lot) where a standing object is placed.
• For example, a standing object such as a pylon is placed on the road within a predetermined distance, such as within 500 mm, from both ends of the vehicle width of the vehicle 30 provided with the camera unit 10, and the threshold THr can be determined as the distance between the point where a standing object that falls over by a predetermined angle, such as 45 degrees, appears and the vanishing point of the viewpoint conversion image.
  • FIG. 25 is a diagram for explaining an example of switching of conversion characteristics.
  • conversion characteristics p and p ′ are prepared as conversion characteristics used for movement conversion.
• According to both of the conversion characteristics p and p′, when the subject distance r before the movement conversion (in the viewpoint conversion image) is equal to or less than the threshold THr, the subject distance r before the movement conversion and the subject distance R after the movement conversion (in the movement conversion image) match.
• When the subject distance r before the movement conversion exceeds the threshold THr, the movement conversion is performed so that the subject distance R after the movement conversion brings the subject position closer to the vanishing point of the movement conversion image.
• Beyond the threshold THr, the conversion characteristic p has a smaller rate of change of the subject distance R than the conversion characteristic p′.
• Therefore, when the conversion characteristic p is used, the subject position is brought closer to the vanishing point in the movement conversion than when the conversion characteristic p′ is used (a standing object in a fallen state is moved to a more raised state).
  • FIG. 26 is a diagram illustrating image conversion performed by the image processing unit 22 when conversion characteristics can be switched.
• In FIG. 26, images Image0, Image1, and Image2 are a captured image, a viewpoint conversion image, and a movement conversion image, respectively.
• An image Image2′ is a movement conversion image different from the movement conversion image Image2.
  • Here, the coordinate correspondence information for converting the captured image Image0 into the viewpoint conversion image Image1 is referred to as coordinate correspondence information 1. The coordinate correspondence information for converting the viewpoint conversion image Image1 into the movement conversion image Image2 according to the conversion characteristic p of FIG. 25 is referred to as coordinate correspondence information 2, and the coordinate correspondence information for converting the viewpoint conversion image Image1 into the movement conversion image Image2′ according to the conversion characteristic p′ of FIG. 25 is referred to as coordinate correspondence information 2′.
  • In the image processing unit 22, the coordinate correspondence information 1 is stored in a memory or generated according to a mathematical formula. Likewise, the coordinate correspondence information 2, or the coordinate correspondence information 3 obtained by multiplying the coordinate correspondence information 1 by the coordinate correspondence information 2, is stored in the memory or generated according to a mathematical formula, and the coordinate correspondence information 2′, or the coordinate correspondence information 3′ obtained by multiplying the coordinate correspondence information 1 by the coordinate correspondence information 2′, is stored in the memory or generated according to a mathematical formula. By switching between the conversion characteristics p and p′ in this way, the image conversion can be performed as described with reference to FIG. 22.
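A coordinate correspondence table maps each output pixel back to its source pixel, so converting with information 1 and then with information 2 can be collapsed into a single table (information 3) by composing the two lookups. The sketch below shows this composition; the dictionary representation and the assumption that each table is an inverse (output-to-source) map are illustrative choices, not details from the document.

```python
def compose_maps(map_a, map_b):
    """Compose two coordinate correspondence tables.

    map_a : output pixel -> intermediate pixel (e.g. information 2:
            movement conversion image -> viewpoint conversion image)
    map_b : intermediate pixel -> source pixel (e.g. information 1:
            viewpoint conversion image -> captured image)
    The result maps output pixels directly to source pixels
    (information 3), so both conversions run as a single lookup.
    """
    return {dst: map_b[mid] for dst, mid in map_a.items() if mid in map_b}

# Tiny hypothetical tables covering two pixels.
info2 = {(0, 0): (0, 1), (0, 1): (0, 2)}   # movement conv. <- viewpoint conv.
info1 = {(0, 1): (5, 5), (0, 2): (6, 6)}   # viewpoint conv. <- captured
info3 = compose_maps(info2, info1)
# info3 == {(0, 0): (5, 5), (0, 1): (6, 6)}
```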
  • FIG. 27 is a diagram for explaining an example of superimposing OSD information in the superimposing unit 47 (FIG. 3) when the conversion characteristics can be switched.
  • When the conversion characteristics can be switched, the superimposing unit 47 generates different OSD information according to the conversion characteristic used for the movement conversion, and superimposes it on the movement conversion image (the cut-out image cut out by the display area cut-out unit 46).
  • FIG. 27 shows an example of OSD information as a guideline for assisting driving of the vehicle 30 when the vehicle 30 moves backward, and a superimposed image obtained by superimposing the OSD information on the movement conversion image.
  • FIG. 27A shows an example of OSD information (converted OSD information) when the conversion characteristic p of FIG. 25 is used for movement conversion, and a superimposed image on which the OSD information is superimposed.
  • FIG. 27B shows an example of OSD information when the conversion characteristic p ′ of FIG. 25 is used for movement conversion and a superimposed image on which the OSD information is superimposed.
  • In the superimposing unit 47, OSD information (hereinafter also referred to as standard OSD information; not shown) serving as a guideline appropriate for (matched with) the viewpoint conversion image is prepared, and the standard OSD information is converted according to the conversion characteristic used for the movement conversion. That is, when the conversion characteristic p of FIG. 25 is used for the movement conversion, the standard OSD information is converted according to the coordinate correspondence information 2 of FIG. 26, and when the conversion characteristic p′ of FIG. 25 is used for the movement conversion, the standard OSD information is converted according to the coordinate correspondence information 2′ of FIG. 26.
  • The converted OSD information is OSD information appropriate for (matched with) the movement conversion image obtained by the movement conversion. In FIG. 27, the converted OSD information obtained by converting the standard OSD information according to the conversion characteristic p or p′ is superimposed on the movement conversion image obtained by the movement conversion according to the same conversion characteristic p or p′.
  • As described with reference to FIG. 25, when the conversion characteristic p is used, the subject position is moved closer to the vanishing point in the movement conversion than when the conversion characteristic p′ is used. Accordingly, the guideline serving as the converted OSD information when the conversion characteristic p is used (FIG. 27A), in particular its lower line L, is closer to the vanishing point than when the conversion characteristic p′ is used (FIG. 27B).
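This matching between guideline and image can be sketched by applying the same radial characteristic to each OSD guideline point about the vanishing point. The piecewise characteristic and all coordinates below are assumptions, since the document defines p and p′ only graphically.

```python
import math

def convert_osd_point(point, vanishing_point, thr, k):
    """Move one standard-OSD guideline point with the same radial
    characteristic as the movement conversion, so the converted OSD
    information stays matched with the movement conversion image.
    The piecewise characteristic (identity up to thr, rate k beyond
    it) is an assumed stand-in for the characteristics p and p'."""
    vx, vy = vanishing_point
    px, py = point
    r = math.hypot(px - vx, py - vy)
    if r == 0.0:
        return point
    R = r if r <= thr else thr + k * (r - thr)
    scale = R / r
    return (vx + (px - vx) * scale, vy + (py - vy) * scale)

# A lower guideline point moves closer to the vanishing point under
# characteristic p (assumed k = 0.5) than under p' (assumed k = 0.8).
vp = (640.0, 360.0)
p_pt  = convert_osd_point((640.0, 860.0), vp, 300.0, 0.5)
p2_pt = convert_osd_point((640.0, 860.0), vp, 300.0, 0.8)
```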
  • the present technology can be applied to an image processing system that can be mounted on a moving body, an application for processing an image, and any other apparatus that handles an image.
  • the series of processing of the image processing unit 22 described above can be performed by hardware or software.
  • a program constituting the software is installed in a general-purpose computer or the like.
  • FIG. 28 is a block diagram illustrating a configuration example of an embodiment of a computer in which a program for executing the above-described series of processes is installed.
  • the program can be recorded in advance in a hard disk 105 or a ROM 103 as a recording medium built in the computer.
  • the program can be stored (recorded) in the removable recording medium 111.
  • a removable recording medium 111 can be provided as so-called package software.
  • examples of the removable recording medium 111 include a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a magnetic disc, and a semiconductor memory.
  • The program can be installed on the computer from the removable recording medium 111 as described above, or can be downloaded to the computer via a communication network or a broadcast network and installed on the built-in hard disk 105. That is, the program can be transferred wirelessly from a download site to the computer via an artificial satellite for digital satellite broadcasting, or transferred by wire to the computer via a network such as a LAN (Local Area Network) or the Internet.
  • the computer includes a CPU (Central Processing Unit) 102, and an input / output interface 110 is connected to the CPU 102 via the bus 101.
  • For example, when a command is input by the user operating the input unit 107 via the input/output interface 110, the CPU 102 executes a program stored in the ROM (Read Only Memory) 103 accordingly.
  • Alternatively, the CPU 102 loads a program stored in the hard disk 105 into the RAM (Random Access Memory) 104 and executes it.
  • The CPU 102 thereby performs processing according to the flowchart described above, or processing performed by the configuration of the block diagram described above. Then, as necessary, the CPU 102 outputs the processing result via the input/output interface 110: the result is output from the output unit 106, transmitted from the communication unit 108, or recorded on the hard disk 105, for example.
  • the input unit 107 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 106 includes an LCD (Liquid Crystal Display), a speaker, and the like.
  • The processing performed by the computer according to the program does not necessarily have to be performed in chronological order in the order described in the flowchart. That is, the processing performed by the computer according to the program also includes processing executed in parallel or individually (for example, parallel processing or processing by objects).
  • the program may be processed by one computer (processor), or may be distributedly processed by a plurality of computers. Furthermore, the program may be transferred to a remote computer and executed.
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • the present technology can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is jointly processed.
  • each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
  • Furthermore, when a plurality of processes are included in one step, the plurality of processes can be executed by one apparatus or shared and executed by a plurality of apparatuses.
  • Note that the present technology can also take the following configurations.
  • <1> An image processing apparatus comprising: a movement conversion unit configured to perform movement conversion for moving a subject position in a target image according to a subject distance from a vanishing point in the target image to the subject position where a subject appears.
  • <2> The image processing apparatus according to <1>, wherein the movement conversion unit performs the movement conversion of moving the subject position such that the subject distance becomes shorter than before the movement conversion, according to the subject distance.
  • <3> The image processing apparatus according to <2>, wherein the movement conversion unit performs the movement conversion of moving the subject position so that the longer the subject distance is, the closer the subject position is moved to the vanishing point.
  • <4> The image processing apparatus according to <3>, wherein the movement conversion unit performs the movement conversion of moving the subject position whose subject distance is equal to or greater than a threshold.
  • <6> The image processing apparatus according to <5>, wherein the viewpoint conversion unit performs the viewpoint conversion so that an optical axis when shooting from the predetermined viewpoint matches the vanishing point in the viewpoint conversion image.
  • <7> The image processing apparatus according to any one of <1> to <6>, wherein the movement conversion unit performs the movement conversion according to a conversion characteristic representing a relationship between the subject distance before the movement conversion and the subject distance after the movement conversion, the image processing apparatus further comprising a switching unit that switches the conversion characteristic used for the movement conversion.
  • <8> The image processing apparatus according to <7>, further comprising: a superimposing unit that superimposes different OSD (On Screen Display) information on the movement conversion image obtained by the movement conversion, according to the conversion characteristic used for the movement conversion.
  • <9> An image processing method comprising: performing movement conversion for moving a subject position in a target image according to a subject distance from a vanishing point in the target image to the subject position where a subject appears.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to an image processing device, an image processing method, and a program that enable easier recognition of standing objects. Movement conversion is performed in which the position of a subject in an image to be processed is moved according to the subject distance from the vanishing point to the position of the subject in said image. The present technology can be applied to uses such as image processing of an image captured with a camera unit installed in a vehicle or other mobile body.
PCT/JP2018/018840 2017-05-30 2018-05-16 Dispositif de traitement d'image, procédé de traitement d'image et programme WO2018221209A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US16/615,976 US11521395B2 (en) 2017-05-30 2018-05-16 Image processing device, image processing method, and program
EP18809783.6A EP3633598B1 (fr) 2017-05-30 2018-05-16 Dispositif de traitement d'image, procédé de traitement d'image et programme
JP2019522095A JP7150709B2 (ja) 2017-05-30 2018-05-16 画像処理装置、画像処理方法、及び、プログラム
CN201880033861.6A CN110651295B (zh) 2017-05-30 2018-05-16 图像处理设备、图像处理方法和程序

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017106645 2017-05-30
JP2017-106645 2017-05-30

Publications (1)

Publication Number Publication Date
WO2018221209A1 true WO2018221209A1 (fr) 2018-12-06

Family

ID=64455905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/018840 WO2018221209A1 (fr) 2017-05-30 2018-05-16 Dispositif de traitement d'image, procédé de traitement d'image et programme

Country Status (5)

Country Link
US (1) US11521395B2 (fr)
EP (1) EP3633598B1 (fr)
JP (1) JP7150709B2 (fr)
CN (1) CN110651295B (fr)
WO (1) WO2018221209A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12033283B2 (en) 2021-07-12 2024-07-09 Toyota Jidosha Kabushiki Kaisha Virtual reality simulator and virtual reality simulation program

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017210845A1 (de) * 2017-06-27 2018-12-27 Conti Temic Microelectronic Gmbh Kameravorrichtung sowie Verfahren zur umgebungsangepassten Erfassung eines Umgebungsbereichs eines Fahrzeugs
US11158088B2 (en) 2017-09-11 2021-10-26 Tusimple, Inc. Vanishing point computation and online alignment system and method for image guided stereo camera optical axes alignment
US11089288B2 (en) * 2017-09-11 2021-08-10 Tusimple, Inc. Corner point extraction system and method for image guided stereo camera optical axes alignment
WO2020014683A1 (fr) * 2018-07-13 2020-01-16 Kache.AI Systèmes et procédés de détection autonome d'objet et de suivi de véhicule
EP4087236A1 (fr) * 2021-05-03 2022-11-09 Honeywell International Inc. Système de surveillance vidéo avec transformation de point de vue

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009081496A (ja) 2007-09-25 2009-04-16 Hitachi Ltd 車載カメラ
JP2011185753A (ja) 2010-03-09 2011-09-22 Mitsubishi Electric Corp 車載カメラのカメラキャリブレーション装置
JP2013110712A (ja) 2011-11-24 2013-06-06 Aisin Seiki Co Ltd 車両周辺監視用画像生成装置
JP2013197816A (ja) * 2012-03-19 2013-09-30 Nippon Soken Inc 車載画像表示装置
JP2015061292A (ja) * 2013-09-20 2015-03-30 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7893963B2 (en) * 2000-03-27 2011-02-22 Eastman Kodak Company Digital camera which estimates and corrects small camera rotations
US7239331B2 (en) * 2004-02-17 2007-07-03 Corel Corporation Method and apparatus for correction of perspective distortion
JP2005311868A (ja) * 2004-04-23 2005-11-04 Auto Network Gijutsu Kenkyusho:Kk 車両周辺視認装置
JP5003946B2 (ja) 2007-05-30 2012-08-22 アイシン精機株式会社 駐車支援装置
JP5046132B2 (ja) 2008-12-24 2012-10-10 株式会社富士通ゼネラル 画像データ変換装置
JP5052708B2 (ja) 2010-06-18 2012-10-17 三菱電機株式会社 運転支援装置、運転支援システム、および運転支援カメラユニット
JP5321540B2 (ja) 2010-06-25 2013-10-23 株式会社Jvcケンウッド 画像補正処理装置、画像補正処理方法、及び画像補正処理プログラム
JP2012147149A (ja) 2011-01-11 2012-08-02 Aisin Seiki Co Ltd 画像生成装置
JP6310652B2 (ja) 2013-07-03 2018-04-11 クラリオン株式会社 映像表示システム、映像合成装置及び映像合成方法
US10652466B2 (en) * 2015-02-16 2020-05-12 Applications Solutions (Electronic and Vision) Ltd Method and device for stabilization of a surround view image
WO2017122552A1 (fr) * 2016-01-15 2017-07-20 ソニー株式会社 Dispositif et procédé de traitement d'image, programme, et système de traitement d'image
CN109313018B (zh) * 2016-06-08 2021-12-10 索尼公司 成像控制装置和方法、以及车辆

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009081496A (ja) 2007-09-25 2009-04-16 Hitachi Ltd 車載カメラ
JP2011185753A (ja) 2010-03-09 2011-09-22 Mitsubishi Electric Corp 車載カメラのカメラキャリブレーション装置
JP2013110712A (ja) 2011-11-24 2013-06-06 Aisin Seiki Co Ltd 車両周辺監視用画像生成装置
JP2013197816A (ja) * 2012-03-19 2013-09-30 Nippon Soken Inc 車載画像表示装置
JP2015061292A (ja) * 2013-09-20 2015-03-30 カシオ計算機株式会社 画像処理装置、画像処理方法及びプログラム

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3633598A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12033283B2 (en) 2021-07-12 2024-07-09 Toyota Jidosha Kabushiki Kaisha Virtual reality simulator and virtual reality simulation program
US12033269B2 (en) 2021-07-12 2024-07-09 Toyota Jidosha Kabushiki Kaisha Virtual reality simulator and virtual reality simulation program

Also Published As

Publication number Publication date
JP7150709B2 (ja) 2022-10-11
EP3633598B1 (fr) 2021-04-14
JPWO2018221209A1 (ja) 2020-04-02
CN110651295B (zh) 2024-04-26
EP3633598A4 (fr) 2020-04-22
EP3633598A1 (fr) 2020-04-08
US11521395B2 (en) 2022-12-06
CN110651295A (zh) 2020-01-03
US20200110947A1 (en) 2020-04-09

Similar Documents

Publication Publication Date Title
WO2018221209A1 (fr) Dispositif de traitement d'image, procédé de traitement d'image et programme
JP5222597B2 (ja) 画像処理装置及び方法、運転支援システム、車両
JP4642723B2 (ja) 画像生成装置および画像生成方法
JP4257356B2 (ja) 画像生成装置および画像生成方法
JP5454674B2 (ja) 画像生成装置、画像生成プログラム、合成テーブル生成装置および合成テーブル生成プログラム
JP5321711B2 (ja) 車両用周辺監視装置および映像表示方法
WO2015194501A1 (fr) Système de synthèse d'image, dispositif de synthèse d'image associé et procédé de synthèse d'image
JP4975592B2 (ja) 撮像装置
WO2010137265A1 (fr) Dispositif de surveillance d'une zone autour d'un véhicule
WO2013094588A1 (fr) Dispositif de traitement d'images, procédé de traitement d'images, programme pour dispositif de traitement d'images, support à mémoire et dispositif d'affichage d'images
JP2009081664A (ja) 車両用周辺監視装置および映像表示方法
JP2009118416A (ja) 車両周辺画像生成装置および車両周辺画像の歪み補正方法
WO2017179722A1 (fr) Dispositif de traitement d'image et dispositif d'imagerie
JP2015097335A (ja) 俯瞰画像生成装置
US10362231B2 (en) Head down warning system
US20230191994A1 (en) Image processing apparatus, image processing method, and image processing system
JP6700935B2 (ja) 撮像装置、その制御方法、および制御プログラム
JP7081265B2 (ja) 画像処理装置
JP6269014B2 (ja) フォーカス制御装置およびフォーカス制御方法
JP7029350B2 (ja) 画像処理装置および画像処理方法
JP2012222664A (ja) 車載カメラシステム
WO2021131481A1 (fr) Dispositif d'affichage, procédé d'affichage et programme d'affichage
JPWO2020022373A1 (ja) 運転支援装置および運転支援方法、プログラム
JP2010178018A (ja) 車両の周辺を画像表示するための装置
JP2012191380A (ja) カメラ、画像変換装置、及び画像変換方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18809783

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019522095

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2018809783

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018809783

Country of ref document: EP

Effective date: 20200102