WO2018072908A1 - Controlling a vehicle for human transport with a surround view camera system - Google Patents

Controlling a vehicle for human transport with a surround view camera system

Info

Publication number
WO2018072908A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
vehicle
detection system
evaluation device
human transport
Prior art date
Application number
PCT/EP2017/069922
Other languages
French (fr)
Inventor
Swaroop Kaggere Shivamurthy
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd.
Publication of WO2018072908A1

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62JCYCLE SADDLES OR SEATS; AUXILIARY DEVICES OR ACCESSORIES SPECIALLY ADAPTED TO CYCLES AND NOT OTHERWISE PROVIDED FOR, e.g. ARTICLE CARRIERS OR CYCLE PROTECTORS
    • B62J27/00Safety equipment
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62KCYCLES; CYCLE FRAMES; CYCLE STEERING DEVICES; RIDER-OPERATED TERMINAL CONTROLS SPECIALLY ADAPTED FOR CYCLES; CYCLE AXLE SUSPENSIONS; CYCLE SIDE-CARS, FORECARS, OR THE LIKE
    • B62K11/00Motorcycles, engine-assisted cycles or motor scooters with one or two wheels
    • B62K11/007Automatic balancing machines with single main ground engaging wheel or coaxial wheels supporting a rider
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0891Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

Self-driving vehicles for human transport are to be enabled to navigate more safely in their environment. To this end, there is provided a camera detection system for controlling a vehicle for human transport with a first camera (8) for obtaining a first image signal and an evaluation device for detecting an object and providing an appropriate control signal for the vehicle for human transport (1). At least a second camera (9) for obtaining at least a second image signal is contained in the camera detection system. The first camera (8) and the at least one second camera (9) as well as the evaluation device are configured to generate a surround view on the basis of the image signals. Finally, the evaluation device is also configured to detect the object in the surround view and to produce the control signal in dependence thereon.

Description

Controlling a vehicle for human transport with a surround view camera system
The present invention relates to a camera detection system for controlling a vehicle for human transport with a camera for obtaining an image signal and an evaluation device for detecting an object and providing an appropriate control signal for the vehicle for human transport. The present invention further relates to a corresponding vehicle for human transport with such a camera detection system. The present invention also relates to a method for controlling a vehicle for human transport by obtaining an image signal by means of a camera and detecting an object and providing an appropriate control signal for the vehicle for human transport.
Autonomous vehicles usually detect obstacles in their environment through the use of radar and lidar and on the basis of GPS, odometry and computer-assisted imaging systems. Modern control systems interpret sensor information to identify suitable navigation routes as well as obstacles and relevant traffic signs. Often, however, these vehicles are not equipped to protect the transported person himself. However, protection of said person is of the utmost importance.
Numerous types of vehicles for human transport are known. However, the present application is mostly directed to single-person transport vehicles, which can be uniaxial or multiaxial. Examples of such single-person transport vehicles are "Segways" and "Hoverboards", but also wheelchairs.
A "Segway" is a self-balancing, single-axis motor vehicle which uses gyroscopes for maintaining balance. It mainly serves to transport people from one place to another. Segways are electric two-wheelers which constitute environmentally friendly transport vehicles with low energy consumption. Whereas Segways usually feature a central holding rod, the so-called "Hoverboards" are two-wheeled vehicles with only a split standing surface. However, the latter are also included among the self-balancing vehicles for human transport.
Safety of the transported persons is also an issue in hospitals with electric wheelchairs, for example. Such electric wheelchairs are used in hospitals for the automatic or automated transport of patients from one place to another. From document KR 2015 0027987 there is known an autonomous control method comprising a camera and ultrasonic sensors for an electric wheelchair. The wheelchair receives distance information and image information from the sensors and the camera with regard to objects in front of the wheelchair.
Further, document WO 15192610 describes a method in which a wheelchair is controllable by means of a brain-computer interface. Objects between a starting position and a target position of the wheelchair are detected by a webcam. Calculation of a route from the starting position to the target position is performed and the wheelchair follows the calculated route.
It is the object of the present invention to improve the safety of vehicles for human transport.
According to the invention, this object is solved by a camera detection system according to claim 1. There is moreover provided a corresponding vehicle for human transport according to claim 8. The above-mentioned object is also solved by a method according to patent claim 12. Advantageous developments of the invention are apparent from the dependent claims.
According to the present invention, there is thus provided a camera detection system for controlling a vehicle for human transport with a first camera for obtaining a first image signal and an evaluation device for detecting an object and providing an appropriate control signal for the vehicle for human transport. The system thus features a camera for optical capture of the environment, but also an evaluation device for evaluating the image signal of the camera. Therein, the evaluation device is configured to detect one or more predefined objects or object types. If an object of a predefined object type is detected, a control signal can be provided for the vehicle for human transport, which takes into consideration the detected object. In particular, the vehicle for human transport can thereby, for example, be controlled (steered, decelerated or accelerated) such that it does not collide with the object.
The camera detection system features at least a second camera for obtaining at least a second image signal. This means that the environment of the camera detection system can be observed by at least two cameras. The camera detection system can also comprise a third camera for obtaining a third image signal, a fourth camera for obtaining a fourth image signal etc. The first camera and the at least one second camera as well as the evaluation device are configured to generate a surround view on the basis of the image signals. Thus, on the basis of the image signals of the plurality of cameras of the camera detection system, total surround vision of 360° is produced. Accordingly, the camera detection system can also be referred to as surround view camera system.
The evaluation device is moreover configured to detect the object in the surround view and to produce the control signal in dependence thereon. Thus, the obtained surround view is used to search therein for the predefined object type or object types, for example by means of image processing. If a corresponding object of the sought-for object type is detected, the control signal is formed accordingly. Thus, the vehicle for human transport can advantageously be controlled such that objects in the entire environment of the camera detection system or of said vehicle are taken into account. As a result, the safety of the persons to be transported can be improved significantly.
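By way of illustration only, the following Python sketch shows one conceivable form of such a stitching step. It assumes pre-calibrated 3x3 ground-plane homographies for each camera and a fixed canvas size, none of which are specified in the application, and it reduces blending at overlaps to simple overwriting:

```python
# Minimal sketch: fusing several camera frames into one top-view ("surround
# view") canvas via per-camera ground-plane homographies. Homographies and
# canvas geometry are illustrative placeholders, not calibration data from
# the application.
import cv2
import numpy as np

CANVAS = (400, 400)  # (width, height) of the bird's-eye canvas in pixels

def surround_view(frames, homographies):
    """Warp each camera frame onto a common ground-plane canvas and overlay
    the results; later cameras overwrite earlier ones where views overlap."""
    canvas = np.zeros((CANVAS[1], CANVAS[0], 3), dtype=np.uint8)
    for frame, H in zip(frames, homographies):
        warped = cv2.warpPerspective(frame, H, CANVAS)
        covered = warped.any(axis=2)     # pixels this camera actually fills
        canvas[covered] = warped[covered]
    return canvas
```

A production system would additionally blend overlapping regions and compensate lens distortion; the sketch keeps only the projection-and-overlay idea described above.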
In one embodiment of the camera detection system it is possible to detect a curb or road edge as the object by means of the evaluation device. This means that the predefined object type is a curb or road edge, for example. In order to detect such a curb or road edge, the evaluation device for example features an edge detector or the like. Concretely, if a curb or road edge is detected, it can be interpreted as a boundary of a drivable area (e.g. road). The control signal can then be formed such that the vehicle for human transport can drive only in said drivable area.
In a specific development, in the surround view a side road branching off from a road is identifiable by means of the evaluation device on the basis of one or more detected curbs or side roads. Navigation can thus be facilitated, e.g. on the basis of registered side roads.
In a further embodiment, the evaluation device is configured such that a pedestrian crossing or a pedestrian walkway can be detected as the object. The object type can thus be stated as "pedestrian crossing" or "pedestrian walkway". Such detection can be based on more complex image processing. Here, likewise, edge detectors can be used. If the evaluation device is capable of detecting such pedestrian crossings or pedestrian walkways, it is not, however, precluded that it is also capable of detecting curbs or road edges. The evaluation device is thus capable of detecting a plurality of objects or object types, particularly different objects or object types, and of evaluating them for the control signal.
In a further embodiment it is provided that the camera detection system features a GPS module whose signal is used by the evaluation device for producing the control signal. Thus, with the aid of satellites, position data of the camera detection system can be determined which can then be used for route planning. On the basis of its own detected position and a predefined or predefinable target position, the vehicle for human transport can thus be enabled to find its way to the target position autonomously or to drive there automatically. For this purpose, the camera detection system or the evaluation device can feature a specific route planning unit by means of which a route to a predefined destination can be determined, wherein the control signal is produced in correspondence with the route.
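As a minimal, hypothetical building block for such route planning, the initial bearing from the current GPS fix towards the target position can be computed with the standard great-circle formula; the function below is this description's own illustration, not part of the disclosure:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2) in
    degrees, e.g. to compare the route direction with sector headings."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) \
        - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

# e.g. heading from one street corner to the next:
print(round(bearing_deg(53.27, -9.05, 53.28, -9.04), 1))
```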
Preferably, by means of the evaluation device in the surround view a drivable ground surface is divisible into sectors, each sector is classifiable for the production of the control signal according to one or more criteria and the control signal is producible in dependence on the classification. This means that the image produced of the environment, which corresponds to an all-round view or surround view, is divided into individual sectors. For example, the entire angular range of 360° is divided into 10, 20, 30 or more sectors. Any other number in between is also possible. The individual sectors need not have an identical opening angle. The angle could be made dependent on the length of the sector in the radial direction, for example. Therein, the length can be limited by an object which is registered by the camera detection system in the respective sector.
The sectors can be classified, i.e. they are assigned to predefined classes. One class of sectors can, for example, be constituted by the occurrence of one or more obstacles. Driving in a sector thus classified should be avoided. A control signal for the vehicle for human transport or its drive unit could be generated accordingly. A further class of sectors could be characterized in that the respective sector is drivable at least beyond a predefined boundary. According to the evaluation of the evaluation device, such a sector is empty and safe driving is enabled therein. A further class of sectors could contain those sectors which are by and large, or within a specified bandwidth, positioned in the direction of a route calculated for a predefined destination. Sectors thus classified should preferably be envisaged for travel. It has already been suggested that a vehicle for human transport can preferably be equipped with said camera detection system. Thus, the camera detection system with surround vision can be utilized for safe control of the vehicle for human transport.
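A compact Python sketch of such a sector classification might look as follows; the sector count, route tolerance and class names are illustrative assumptions of this description:

```python
from enum import Enum

class SectorClass(Enum):
    BLOCKED = "obstacle detected in sector"
    FREE = "drivable beyond the safety boundary"
    ON_ROUTE = "free and roughly aligned with the planned route"

def classify_sectors(obstacle_bearings_deg, route_heading_deg,
                     n_sectors=20, route_tolerance_deg=30.0):
    """Split the 360 degree surround into equal sectors and classify each.
    Obstacle bearings are given in the vehicle frame, in degrees."""
    width = 360.0 / n_sectors
    classes = []
    for i in range(n_sectors):
        lo, hi = i * width, (i + 1) * width
        mid = lo + width / 2.0
        if any(lo <= b % 360.0 < hi for b in obstacle_bearings_deg):
            classes.append(SectorClass.BLOCKED)
        elif abs((mid - route_heading_deg + 180.0) % 360.0 - 180.0) \
                <= route_tolerance_deg:
            classes.append(SectorClass.ON_ROUTE)
        else:
            classes.append(SectorClass.FREE)
    return classes
```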
The vehicle for human transport can, for example, be a single-axis, self-balancing personal transportation device or a wheelchair. Specifically, the single-axis, self-balancing personal transportation device can be a Segway or a Hoverboard. The wheelchair can be a common electric wheelchair, a wheelchair used in a hospital for automatic transportation or a wheelchair with a brain-computer interface and autopilot technology.
If the vehicle for human transport is a single-axis, self-balancing personal transportation device having two wheels, each with a splash guard, each of the cameras can be attached to a splash guard. Such a splash guard is generally highly stable and static with regard to the centre of gravity of the vehicle for human transport. Thus, surround vision can be reliably attained.
In a specific embodiment, the first camera is a front camera, the second camera is a rear camera and a third or fourth camera are side cameras each. Thus, surround vision is attainable by four cameras which are oriented towards each other at an angle of (about) 90° for example. Consequently, surround vision of high quality can be attained.
According to the invention, the above-mentioned object can also be solved by a method for controlling a vehicle for human transport, by
- obtaining a first image signal by means of a first camera,
- detecting an object and providing an appropriate control signal for the vehicle for human transport,
- obtaining at least a second image signal by means of at least a second camera,
- generating a surround view on the basis of the image signals and
- detecting the object in the surround view, wherein in dependence thereon the control signal is produced.
This method can be developed further with corresponding functional features of the above-mentioned camera detection system or of the above-mentioned vehicle for human transport. Accordingly, with regard to the methods thus obtained, the advantages of the corresponding devices apply analogously. Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.
The present invention is explained below in more detail on the basis of the enclosed drawings.
Therein show:
Fig. 1 a view of a Segway with a surround view camera system;
Fig. 2 a splash guard of a Segway with a plurality of cameras;
Fig. 3 a software design for an evaluation device of a camera detection system;
Fig. 4 a flow chart for controlling a vehicle for human transport with a camera detection system;
Fig. 5 a top view of a real-life situation with road or curb detection, pedestrian walkway detection and free space detection; and
Fig. 6 part of a surround view with side road detection.
The embodiments explained in more detail in the following represent preferred embodiments of the present invention.
Fig. 1 shows a Segway 1 as an example of a vehicle for human transport. The Segway 1 is a single-axis transport vehicle, which is self-balancing. Between two wheels 2 and 3, which are arranged on the single axis, there is a standing surface 4. A holding rod 5 is attached to the standing surface 4. The two wheels 2 and 3 are enclosed by splash guards 6 and 7, respectively. Thereby, the standing surface 4 is protected from dirt and splash water which can be swirled up by the wheels 2 and 3. The splash guards 6, 7 are usually injection-molded plastic parts which are firmly attached to the standing surface 4.
The Segway 1 is equipped with a surround view camera system. In the present example, this surround view camera system features four individual cameras: a front camera 8, a rear camera 9, a first side camera 10 and a second side camera 11. The front camera 8 faces forward in the forward driving direction of the Segway 1 and the rear camera 9 in the opposite reverse direction. Both the front camera 8 and the rear camera 9 are firmly attached to the splash guard 7. They are arranged thereon in positions which permit said orientations towards the front and the rear, respectively. The second side camera 11, which is basically oriented in the axial direction of the axis of the Segway 1, is also positioned on the splash guard 7.
The first side camera 10 is arranged on the opposite splash guard 6 for the wheel 2. It is also parallel to the axis of the Segway 1. Its orientation is opposite to the orientation of the second side camera 11.
Fig. 2 shows the splash guard 7 with the cameras separately. It is indicated therein that the front camera 8 can be arranged at the front end of the splash guard 7 and the rear camera 9 at the rear end of the splash guard 7. The second side camera 11 is positioned in the middle of the splash guard 7, preferably at the highest point. If required, other positions of the cameras 8, 9, 11 can also be selected. This applies analogously to the camera 10 of the splash guard 6.
In the example of Fig. 1 and Fig. 2, the surround view camera system consists of four cameras 8 to 11. Needless to say, the number of cameras can vary. However, there should be at least two cameras to avoid shadowing. Thus, for example, it is also possible to use three or more than four cameras. In the present case, the camera detection system of the Segway 1 comprises the surround view camera system with the individual cameras 8 to 11 and an evaluation device which can be integrated into the bottom plate or standing surface 4 and receives the image signals of the individual cameras. From these individual image signals, the evaluation device generates the corresponding surround view and detects therein objects of a predefined type. In addition, it provides an appropriate control signal for the drives of the wheels 2 and 3 of the Segway 1 in dependence on the detected objects. It is also possible that respective partial control signals are generated for the individual drives.
Fig. 3 shows a basic software architecture of at least part of the evaluation device for autonomous driving. A self-driving control module 12 (SDM) has a path planning module 13 (PPM). Said path planning module 13 in turn comprises a free space scenario descriptor 14 (FSSD). Besides, the path planning module 13 comprises a GPS module 15 (GPS). The self-driving control module 12 further comprises a free space detector 16 (FSD) and a pedestrian walkway detector 17 (PWD).
The self-driving control module 12 receives an input signal 18 (IP) from outside. The input signal 18 is provided both to the free space detector 16 and to the pedestrian walkway detector 17 in the self-driving control module 12. The input signal 18 can contain video images or image signals and/or map information. In particular, the input signal 18 can comprise a surround view of the environment of the vehicle for human transport and specifically of the Segway 1.
The free space detector 16 detects, for example, road edges and/or curbs. If required, it also provides corresponding signals to the pedestrian walkway detector 17. The latter detects, for example, pedestrian crossings and/or pedestrian walkways. The output signals of the free space detector 16, the pedestrian walkway detector 17 and the GPS module 15 are provided to the free space scenario descriptor 14 in order to provide one or more output signals 19 for one or more drives of the vehicle for human transport in accordance with the detected scenario in the environment of said vehicle. The output signal 19 (OP) can comprise instructions for braking, accelerating, change of direction and the like.
A corresponding method for determining free space or freely drivable space could be designed according to the flow chart of Fig. 4. Presently, the process starts in a step S1. In a subsequent step S2 the self-driving control module 12 of Fig. 3 can, for example, allocate and initialize necessary resources like memory buffers, data structures and the like. In the following step S3 the functionality of the self-driving control module 12 can be defined by a set of configuration parameters. Accordingly, at the start of the process the configuration parameters have to be initialized. Steps S2 and S3 may be interchanged.
In a further step S4, analysis of the free space on the road or on the drivable ground surface is performed. For this purpose, the self-driving control module can, for example, identify a desired road area and from the input signal or input signals 18 determine boundaries of said road area (e.g. road edges or curbs). According to step S5, the self-driving control module 12 can analogously determine pedestrian crossings and pedestrian walkways from the input signal 18 (e.g. surround view). On the basis of said obtained information and, optionally, the information of a GPS module 15 (e.g. GPS tracker), which is capable of continuously providing updates of direction for a movement from a starting point to an endpoint, the free space scenario descriptor 14 plans a route, for instance, in accordance with step S6. Therein, the free space scenario descriptor 14 is capable of performing temporal analysis of the data and of constructing a free space scenario. This free space scenario represents the currently drivable area from the perspective of the vehicle for human transport in the current situation. In said free space scenario safe sectors can be identified, for instance, in which the Segway or vehicle for human transport can drive. Corresponding control signals can then be provided to the motor control unit of the vehicle. In step S7, the free space analysis ends with the feedback of one or more control signals to the drive system.
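Purely as an illustration of the S1 to S7 flow, the following Python skeleton mirrors the initialise/detect/plan/actuate structure of Fig. 4; all detector bodies are stubs and every function name is this sketch's own invention, not terminology from the application:

```python
def detect_free_space(view):                 # S4 stub: road edges / curbs
    return {"curbs": []}

def detect_walkways(view, road):             # S5 stub: walkway regions
    return []

def plan_route(road, walkways, gps_fix):     # S6 stub: free space scenario
    return {"target_heading": gps_fix.get("heading", 0.0)}

def to_control_signal(scenario):             # S7 stub: command for the drives
    return {"speed_mps": 0.0, "heading_deg": scenario["target_heading"]}

def run(frames, gps_fixes):
    config = {"n_sectors": 20}               # S1-S3: start and initialise
    for view, fix in zip(frames, gps_fixes):
        road = detect_free_space(view)
        walkways = detect_walkways(view, road)
        scenario = plan_route(road, walkways, fix)
        yield to_control_signal(scenario)    # fed back to the drive system
```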
The functioning of the self-driving control module 12 or of the above-mentioned method is explained in greater detail below with reference to Fig. 5. The drivable space or the drivable ground surface for the vehicle for human transport is to be determined by means of the free space detector. It is thus, for example, the aim to detect drivable free space in the surround view. To this end, the following steps, detailed below, can be performed (a minimal sketch follows after step e):
a) The scene captured in the surround view is classified into areas of different intensity and texture.
b) In front of the Segway or the vehicle for human transport an area is determined which is most probably a road surface.
c) Other areas are connected which have been classified as ground surfaces on the basis of intensity and texture relationships.
d) Boundary and curb pixel positions of the road area are determined.
e) On the basis of the road edge detection the free space detector analyses the free space in world coordinates, for instance, and expresses the free space in meters in front of the camera. Finally, the consolidated free space scenario is provided to the drive system.
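The following Python sketch condenses steps b) to e) into a miniature: a road region is grown from a seed pixel in a grayscale top view by intensity similarity, and the free distance straight ahead is then reported in metres. The tolerance and the metres-per-pixel scale are assumptions made purely for illustration:

```python
import numpy as np
from collections import deque

def grow_road_region(topview_gray, seed, tol=12, metres_per_px=0.05):
    """Grow a road region from `seed` (row, col) by intensity similarity
    (steps b and c), then measure the free distance ahead of the seed
    (steps d and e). Tolerance and scale are illustrative only."""
    h, w = topview_gray.shape
    road = np.zeros((h, w), dtype=bool)
    ref = int(topview_gray[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if road[y, x]:
            continue
        road[y, x] = True
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not road[ny, nx]
                    and abs(int(topview_gray[ny, nx]) - ref) <= tol):
                queue.append((ny, nx))
    # free distance straight ahead (towards the top of the image)
    ahead = road[:seed[0], seed[1]][::-1]
    free_px = len(ahead) if ahead.all() else int(np.argmin(ahead))
    return road, free_px * metres_per_px
```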
Fig. 5 shows such a free space scenario from a bird's eye view. A Segway driver 20 is on a road 21 with a turning. The free space detector of the camera detection system, which is capable of providing a surround view, has detected road edges or curbs 22. Said road edges or curbs 22 are objects which limit the drivable space. Further objects such as parked or moving vehicles 23 are likewise identified by the evaluation device. They also limit the drivable area, i.e. the free space scenario. Still further objects in the surround view, e.g. a container 24, are identified to exactly define the drivable area. Once the boundaries of the drivable area are known, the evaluation device or the free space detector divides the identified drivable space into sectors 25. These sectors can be used for further analysis.
In the present example the camera detection system or the self-driving control module also features a pedestrian walkway detector for detecting pedestrian walkways 29 or pedestrian crossings in the surround view. For this purpose, the following steps may be necessary (a brief sketch follows after step e):
a) Analysis of the areas beyond the curbs and road edges 22.
b) Detection of homogenous areas parallel to the curbs, which are indicative of potential pedestrian walkways.
c) Analysis of curbs associated with these homogenous areas as to their parallel alignment to curbs associated with the road area.
d) A pedestrian crossing can be determined if parallel white stripes are identified on a road.
e) In case of high traffic volume the vehicle for human transport or the Segway is driven onto a safe pedestrian walkway and the driving speed is limited.
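Step d) can be illustrated with a tiny stripe counter: a zebra crossing appears as several bright stripes along a scanline across the road. The brightness threshold and minimum stripe count below are assumptions of this sketch, not values from the application:

```python
import numpy as np

def looks_like_crossing(row_intensities, bright=200, min_stripes=4):
    """Count dark-to-bright transitions along one scanline; several such
    transitions suggest the parallel white stripes of a zebra crossing."""
    bright_mask = np.asarray(row_intensities) > bright
    stripes = int(np.count_nonzero(bright_mask[1:] & ~bright_mask[:-1]))
    return stripes >= min_stripes

# e.g. a synthetic scanline with five stripes:
line = ([30] * 10 + [230] * 8) * 5
print(looks_like_crossing(line))  # True
```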
Finally, in the present example the free space scenario descriptor 14 is utilized. Its aim is to exactly determine the free space scenario on the basis of the signals of the free space detector 16, the GPS module 15 and the pedestrian walkway detector 17. Thus, a control signal for a safe driving path or a safe route can be produced. To achieve this, the following steps are required or optional (a short sketch follows after step d):
a) Temporal analysis of the free space in each sector to describe the relevant occurrences. The continuous decrease of free space in a sector indicates approach to an obstacle. Such a sector 26 with the container 24 as an obstacle is illustrated in Fig. 5. The evaluation device further indicates sectors 27 in which there are no obstacles (at least within a predefined minimum range) and in which it is potentially possible to drive as far as the edge 22 of the road 21. If necessary, the Segway is required to switch to a sector that indicates a larger free space.
b) A route planner can continuously receive signals from the GPS module to enable autonomous movements between different places. Fig. 5 shows a GPS direction 28 which is meant to be followed by the Segway driver 20 in accordance with his route or path planner. In correspondence with this GPS direction 28 the possible free sectors are then provided or one of them is selected for the movement of the Segway.
c) Real-time control as regards acceleration, braking deceleration and change of direction is utilized for the Segway control device.
d) In collision scenarios the Segway is able to intelligently switch paths for safety reasons on the basis of a rate of change of the free space sectors.
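A minimal sketch of the rate-of-change test in steps a) and d): compare the free distance per sector between consecutive frames and flag sectors that shrink faster than a threshold. The threshold value is an assumption of this illustration:

```python
def shrinking_sectors(prev_free_m, curr_free_m, shrink_rate=0.5):
    """Indices of sectors whose free distance (metres) drops by more than
    `shrink_rate` per frame - a likely sign of a closing obstacle."""
    return [i for i, (p, c) in enumerate(zip(prev_free_m, curr_free_m))
            if p - c > shrink_rate]

# e.g. sector 2 loses 1.2 m between frames:
print(shrinking_sectors([5.0, 4.0, 3.2], [4.9, 3.9, 2.0]))  # [2]
```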
It is also possible, for instance, to use a VSLAM module (Visual Simultaneous Localization and Mapping) instead of a GPS module. In a further development, the camera detection system including the evaluation device can offer a training mode which, for example, is used the first few times the driver manually drives from home to the office, the marketplace or playground. The system learns in correspondence to the relevant information and simulates the path, with deviations, as the case may be.
In a development of the system or method it is possible that side roads are additionally detected to obtain the control signal for the vehicle for human transport. A road 30 as well as a side road 31 is visible in the image of a front camera according to Fig. 6. In order to be able to detect the side road 31, the horizontal road edges 32 on the left and right side of the road 30 are determined. Subsequently, a search is performed for unusual protrusions on the left and right edge of the road in order to detect side roads 31. An interruption of the straight line of the road edge, as shown in Fig. 6, is indicative of a side road.
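A hypothetical sketch of this edge-interruption test: track the detected edge column per image row and flag rows where the edge disappears or jumps laterally by more than a threshold, which would indicate a possible side road. The pixel threshold and data layout are assumptions of this sketch:

```python
def edge_gaps(edge_x_by_row, jump_px=25):
    """Rows where the tracked road edge vanishes (None) or jumps sideways
    by more than `jump_px` pixels - candidate rows of a branching side road."""
    gaps, prev = [], None
    for row, x in enumerate(edge_x_by_row):
        if x is None or (prev is not None and abs(x - prev) > jump_px):
            gaps.append(row)
        if x is not None:
            prev = x
    return gaps

# e.g. an edge that disappears for a few rows where a side road opens:
print(edge_gaps([100, 101, 103, None, None, None, 104, 105]))  # [3, 4, 5]
```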
The detection of side roads can also be used for training purposes. Thus, for example, the algorithm can detect the number of left and right side roads before an actual turnoff is taken. This can also serve as a reference for autonomous driving.
The camera detection system with surround vision can also be used to avoid collisions. This is not only achievable by determining free sectors for driving but also by detecting trajectories of moving objects towards the vehicle for human transport. Therefrom, an appropriate control signal is generated to avoid a possible collision.
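As an illustrative stand-in for such trajectory analysis, a constant-velocity time-to-collision test can be used: given an object's position and velocity relative to the vehicle, compute the earliest time it enters a safety radius. The radius and the constant-velocity model are assumptions of this sketch, not details from the application:

```python
import numpy as np

def time_to_collision(rel_pos, rel_vel, radius=1.0):
    """Earliest t >= 0 (seconds) at which an object with the given position
    (m) and velocity (m/s) relative to the vehicle enters the safety radius;
    None if it never does under a constant-velocity assumption."""
    p = np.asarray(rel_pos, dtype=float)
    v = np.asarray(rel_vel, dtype=float)
    a, b, c = v @ v, 2.0 * (p @ v), p @ p - radius ** 2
    if c <= 0.0:
        return 0.0                 # already inside the safety radius
    if a == 0.0:
        return None                # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                # closest approach stays outside the radius
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t if t >= 0.0 else None

# e.g. an object 10 m ahead closing at 2 m/s:
print(time_to_collision([10.0, 0.0], [-2.0, 0.0]))  # 4.5
```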
Such camera detection systems can be used not only for Segways, Hoverboards and the like but also for autonomously controlled wheelchairs and other vehicles for human transport.

Claims

Patent Claims
1. Camera detection system for controlling a vehicle for human transport (1) with
- a first camera (8) for obtaining a first image signal and
- an evaluation device (12) for detecting an object (22, 23, 24) and providing an appropriate control signal (19) for the vehicle for human transport (1), characterized in that
- at least a second camera (9, 10, 11) for obtaining at least a second image signal is contained in the camera detection system,
- the first camera (8) and the at least one second camera (9, 10, 11) as well as the evaluation device (12) are jointly configured to generate a surround view on the basis of the image signals, and
- the evaluation device (12) is further configured to detect the object (22, 23, 24) in the surround view and to produce the control signal (19) in dependence thereon.
2. Camera detection system according to claim 1,
characterized in that
by means of the evaluation device (12) a curb or road edge (22) can be detected as the object (22, 23, 24).
3. Camera detection system according to claim 2,
characterized in that
in the surround view a side road (31) branching off from a road (30) is identifiable by means of the evaluation device (12) on the basis of one or more detected curbs or road edges (22).
4. Camera detection system according to any one of the preceding claims,
characterized in that
by means of the evaluation device (12) a pedestrian crossing or pedestrian walkway (29) can be detected as the object (22, 23, 24).
5. Camera detection system according to any one of the preceding claims,
characterized in that it features a GPS module (15) whose signal is used by the evaluation device (12) for producing the control signal (19).
6. Camera detection system according to any one of the preceding claims,
characterized in that
the evaluation device (12) features a route planning unit by means of which a route to a predefined destination can be determined, and produces the control signal (19) in correspondence with said route.
7. Camera detection system according to any one of the preceding claims,
characterized in that
by means of the evaluation device (12) in the surround view a drivable ground surface is divisible into sectors (25, 26, 27), each sector (25, 26, 27) is classifiable for the production of the control signal (19) according to one or more criteria, and the control signal (19) is producible in dependence on the classification.
8. Vehicle for human transport (1) with a camera detection system according to any one of the preceding claims.
9. Vehicle for human transport (1) according to claim 8, which is realized as a single-axis, self-balancing personal transportation device or as a wheelchair.
10. Vehicle for human transport (1) according to claim 8, which is realized as a single-axis, self-balancing personal transportation device having two wheels (2, 3), each with a splash guard (6, 7), wherein each of the cameras (8 to 11) is attached to one of the splash guards (6, 7).
11. Vehicle for human transport (1) according to claim 8, wherein the first camera (8) is a front camera, the second camera (9) is a rear camera, and a third camera (10) and a fourth camera (11) are each side cameras.
12. Method for controlling a vehicle for human transport (1) by
- obtaining a first image signal by means of a first camera (8) and
- detecting an object (22, 23, 24) and providing an appropriate control signal (19) for the vehicle for human transport (1), characterized by
- obtaining at least one second image signal by means of at least one second camera (9, 10, 11),
- generating a surround view on the basis of the image signals and
- detecting the object (22, 23, 24) in the surround view, wherein in dependence thereon the control signal (19) is produced.
PCT/EP2017/069922 2016-10-17 2017-08-07 Controlling a vehicle for human transport with a surround view camera system WO2018072908A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102016119729.7 2016-10-17
DE102016119729.7A DE102016119729A1 (en) 2016-10-17 2016-10-17 Controlling a passenger transport vehicle with all-round vision camera system

Publications (1)

Publication Number Publication Date
WO2018072908A1 true WO2018072908A1 (en) 2018-04-26

Family

ID=59702670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/069922 WO2018072908A1 (en) 2016-10-17 2017-08-07 Controlling a vehicle for human transport with a surround view camera system

Country Status (2)

Country Link
DE (1) DE102016119729A1 (en)
WO (1) WO2018072908A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109774847B (en) * 2018-12-07 2020-11-03 纳恩博(北京)科技有限公司 Scooter
CN109533154B (en) 2018-12-07 2020-10-16 纳恩博(北京)科技有限公司 Scooter

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102012000724A1 (en) * 2011-06-29 2013-01-03 Volkswagen Ag Method and device for partially automated driving of a motor vehicle
DE102011116169A1 (en) * 2011-10-14 2013-04-18 Continental Teves Ag & Co. Ohg Device for assisting a driver when driving a vehicle or for autonomously driving a vehicle
DE102011087894A1 (en) * 2011-12-07 2013-06-13 Robert Bosch Gmbh Method and vehicle assistance system for active warning and / or navigation aid for avoiding a collision of a vehicle body part and / or a vehicle wheel with an object
FR3017207B1 (en) * 2014-01-31 2018-04-06 Groupe Gexpertise GEOREFERENCE DATA ACQUISITION VEHICLE, CORRESPONDING DEVICE, METHOD AND COMPUTER PROGRAM

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080162027A1 (en) * 2006-12-29 2008-07-03 Robotic Research, Llc Robotic driving system
DE102008027590A1 (en) * 2007-06-12 2009-01-08 Fuji Jukogyo K.K. Driving assistance system for vehicles
EP2779024A1 (en) * 2013-03-15 2014-09-17 Ricoh Company, Ltd. Intersection recognizing apparatus and computer-readable storage medium
KR20150027987A (en) 2013-09-05 2015-03-13 한국전자통신연구원 Control device and method of electrical wheel-chair for autonomous moving
WO2015192610A1 (en) 2014-06-17 2015-12-23 华南理工大学 Intelligent wheel chair control method based on brain computer interface and automatic driving technology

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110001840A (en) * 2019-03-12 2019-07-12 浙江工业大学 Vision-sensor-based motion control method for a two-wheeled self-balancing vehicle under various road conditions
CN110001840B (en) * 2019-03-12 2021-01-01 浙江工业大学 Two-wheeled self-balancing vehicle motion control method based on visual sensor under various road conditions
CN110203312A (en) * 2019-06-14 2019-09-06 灵动科技(北京)有限公司 Movable device
CN114663397A (en) * 2022-03-22 2022-06-24 小米汽车科技有限公司 Method, device, equipment and storage medium for detecting travelable area
CN114663397B (en) * 2022-03-22 2023-05-23 小米汽车科技有限公司 Method, device, equipment and storage medium for detecting drivable area

Also Published As

Publication number Publication date
DE102016119729A1 (en) 2018-04-19

Similar Documents

Publication Publication Date Title
JP6651678B2 (en) Neural network system for autonomous vehicle control
US20210397185A1 (en) Object Motion Prediction and Autonomous Vehicle Control
US10899345B1 (en) Predicting trajectories of objects based on contextual information
WO2018072908A1 (en) Controlling a vehicle for human transport with a surround view camera system
JP5536125B2 (en) Image processing apparatus and method, and moving object collision prevention apparatus
US9915951B2 (en) Detection of overhanging objects
Ziegler et al. Making Bertha drive—an autonomous journey on a historic route
US9637118B2 (en) Processing apparatus, processing system, and processing method
Bauer et al. The autonomous city explorer: Towards natural human-robot interaction in urban environments
US20150332103A1 (en) Processing apparatus, computer program product, and processing method
US20170359561A1 (en) Disparity mapping for an autonomous vehicle
WO2020041214A1 (en) Steerable camera for perception and vehicle control
WO2015178497A1 (en) Processing apparatus, processing system, processing program, and processing method
US11904906B2 (en) Systems and methods for prediction of a jaywalker trajectory through an intersection
US11880203B2 (en) Methods and system for predicting trajectories of uncertain road users by semantic segmentation of drivable area boundaries
JP2020126612A (en) Method and apparatus for providing advanced pedestrian assistance system for protecting pedestrian using smartphone
JP2022543355A (en) Object Localization for Autonomous Driving with Visual Tracking and Image Reprojection
Gläser et al. Environment perception for inner-city driver assistance and highly-automated driving
Nie et al. Camera and lidar fusion for road intersection detection
EP3679441B1 (en) Mobile robot having collision avoidance system for crossing a road from a pedestrian pathway
US20230043601A1 (en) Methods And System For Predicting Trajectories Of Actors With Respect To A Drivable Area
US20190155306A1 (en) Hybrid Maps - Boundary Transition
EP4131181A1 (en) Methods and system for predicting trajectories of actors with respect to a drivable area
Chand et al. Vision and laser sensor data fusion technique for target approaching by outdoor mobile robot
Bansal et al. Vision-based perception for autonomous urban navigation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17757694

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17757694

Country of ref document: EP

Kind code of ref document: A1