CN102555905B - Method and apparatus for producing an image of at least one object in the surroundings of a vehicle - Google Patents

Method and apparatus for producing an image of at least one object in the surroundings of a vehicle

Info

Publication number
CN102555905B
CN102555905B (granted publication of application CN201110307335.4A; application publication CN102555905A)
Authority
CN
China
Prior art keywords
vehicle
sensor
data
image
invisible area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110307335.4A
Other languages
Chinese (zh)
Other versions
CN102555905A (en)
Inventor
T. Fock
A. Hinrich
M. Respondek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of CN102555905A publication Critical patent/CN102555905A/en
Application granted granted Critical
Publication of CN102555905B publication Critical patent/CN102555905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view
    • B60R1/27: Real-time viewing arrangements for viewing an area outside the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60: Details of viewing arrangements characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/607: Details of viewing arrangements characterised by monitoring and displaying vehicle exterior scenes from a bird's eye viewpoint

Abstract

The invention relates to a method and an apparatus for producing an image of at least one object in the surroundings of a vehicle. The method comprises the following steps: a determination step (410), in which, based on data of at least one sensor (540) of the vehicle (100), in particular a camera, the area of an object (120, 130) that is invisible to the sensor is determined; a provision step (420), in which a synthetic graphic element (310) is provided for insertion into the invisible area (210) of the object (120, 130); and an insertion step (430), in which the synthetic graphic element (310) is inserted into the invisible area (210) of the object (120, 130) in order to produce, from a virtual camera position, in particular from a bird's-eye view, the image of the at least one object (120, 130) in the surroundings of the vehicle (100).

Description

Method and apparatus for producing an image of at least one object in the surroundings of a vehicle
Technical field
The present invention relates to a method for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position according to the independent claims, and to a corresponding apparatus.
Background
In current driver information systems, multi-camera systems in particular are used to allow the driver to observe the surroundings of his own vehicle. These systems are preferably used during low-speed maneuvers in order to show the driver possible obstacles or boundary lines on a display. One possible form of presentation is to show the individual camera images simultaneously, or to show a 360° panoramic image composed of several individual images. Another form of presentation is the so-called "bird's-eye view", in which the viewpoint of the cameras is changed virtually by a transformation, so that the impression of looking down on the scene from above is produced.
DE 10 2007 053 305 A1 discloses a method for outputting visual driving recommendations in a vehicle based on a predetermined driving route. The method comprises the steps of recording an image of the real surroundings of the vehicle along the driving direction and detecting the current geographic position of the vehicle. The method further comprises determining an indicator symbol from the predetermined driving route and the detected geographic position, and correcting the perspective of the determined indicator symbol so as to obtain an indicator-symbol image that corresponds in perspective to the image of the roadway markings on the display. Finally, the method comprises combining the recorded image with the perspective-corrected indicator symbol and presenting the combined image on a display.
Summary of the invention
Against this background, the present invention, according to the independent claims, proposes a method for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, an apparatus that uses this method, and finally a corresponding computer program product. Advantageous embodiments follow from the respective dependent claims and the description below.
The invention proposes a method for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, in particular from a bird's-eye view, the method comprising the following steps:
- a determination step, in which the area of the object that is invisible to the sensor is determined based on data of at least one sensor of the vehicle, in particular a camera;
- a provision step, in which a synthetic graphic element is provided for insertion into the invisible area of the object; and
- an insertion step, in which the synthetic graphic element is inserted into the invisible area of the object in order to produce, from a virtual camera position, in particular from a bird's-eye view, the image of the at least one object in the surroundings of the vehicle.
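As a purely illustrative sketch (not part of the claims), the three steps above can be expressed in code. The grid-cell representation of an object, the helper names and the element library are all hypothetical assumptions for the example:

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    kind: str                                          # e.g. "truck", "bollard"
    visible_mask: set = field(default_factory=set)     # grid cells the sensor sees
    footprint: set = field(default_factory=set)        # full top-view footprint

def determine_invisible_area(obj):
    """Step 1 (410): cells of the footprint the sensor cannot see."""
    return obj.footprint - obj.visible_mask

# hypothetical library of stored synthetic top-view elements
GRAPHIC_LIBRARY = {"truck": "truck_top_view.png", "bollard": "bollard_top_view.png"}

def provide_graphic_element(obj):
    """Step 2 (420): pick a synthetic element matching the object's class."""
    return GRAPHIC_LIBRARY.get(obj.kind, "generic_box.png")

def insert_graphic_element(image, area, element):
    """Step 3 (430): fill the invisible cells with the synthetic element."""
    for cell in area:
        image[cell] = element
    return image

truck = DetectedObject("truck",
                       visible_mask={(0, 0), (0, 1)},
                       footprint={(0, 0), (0, 1), (1, 0), (1, 1)})
birds_eye = insert_graphic_element({}, determine_invisible_area(truck),
                                   provide_graphic_element(truck))
# cells (1, 0) and (1, 1) were invisible and now carry the synthetic element
```

In this toy model the "image" is just a dict from grid cell to content; a real system would composite pixel data, but the control flow of the three steps is the same.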
The method can be used, for example, in conjunction with a navigation system of the vehicle. The image can be shown, for instance, on the screen of the navigation system. The image can, for example, correctly reproduce a parking scene from a bird's-eye view, the scene comprising the vehicle together with the objects in its surroundings which the driver must heed in order, for example, to park correctly and without damage. The object or objects can be, for example, another vehicle of a particular type, bollards, buildings or plants which the vehicle must drive around during the parking maneuver, or from which the parked vehicle should keep a certain distance so that, for example, the doors can be opened without risk of damaging the vehicle and/or the objects. The surroundings of the vehicle can, for example, comprise all objects within a 360° angle around the vehicle that are relevant to, and thus very important for, the parking maneuver. The virtual camera position is obtained, for example, by transforming the detection data of at least one sensor of the vehicle. The virtual camera position can lie centrally above the vehicle or laterally above the vehicle, so that all relevant objects in the surroundings of the vehicle are visible to the virtual camera from this position.
The data provided by the sensor can represent the distance, size and shape of the objects detected in the surroundings of the vehicle. Besides a camera, the sensor can also be another optical transducer. In particular for determining the distance and the three-dimensional characteristics of objects, the vehicle can have several sensors, for example a sensor in the front region and a sensor in the rear region of the vehicle. An optical sensor can be combined with a distance sensor, for example a radar sensor or an ultrasonic sensor. If only one sensor is available in the vehicle, the distance between the vehicle and an object can, for example, also be derived from previously determined values, which can be stored, for instance, in a memory of the navigation system. The invisible area of an object such as a truck can, for example, result from the vehicle structure together with the detection range of the sensor mounted on the vehicle body. The determination step can be carried out, for example, with a suitable algorithm.
The invention is based on the insight that, by synthetically depicting the objects detected in the surroundings of the vehicle, the surroundings and the scene can be shown to the driver completely from a bird's-eye view, so that the driver can grasp them essentially intuitively.
The proposed approach takes the following problem into account: cameras mounted on a vehicle are naturally installed at a very low height, so that tall objects and the region behind an object cannot be fully detected. Advantageously, the approach proposed here avoids regions that cannot be shown, in other words "black regions" (schwarze Flecken), in the depiction of the vehicle surroundings by inserting synthetically generated image objects in place of these "black regions". This also applies, for example, to objects that are known and could in principle be shown completely, but which, owing to perspective distortion, can no longer be grasped intuitively by the driver.
In order to fill these "black" regions with content that can be grasped intuitively, the outlines of these regions can advantageously be filled, for example, with synthetic graphic elements. If, by means of object classification, it is even known what kind of object is involved, for example a pedestrian, a car, a truck or a traffic sign, the recognized and classified object can, for example, be drawn into the depiction at the correct position by a corresponding graphic element.
The synthetic graphic element can be, for example, a partial region typical of a certain vehicle type, such as a vehicle body. It can also be a part of a building: for example, by transforming the data of the building region detected by the sensor, this part of the building can be made displayable. The synthetic graphic element can, for example, be retrieved from a memory or, as the case may be, be derived indirectly from the detected data and known information about the object in question. The insertion step can be carried out by a suitable algorithm. In the inserted state, the synthetic graphic element can be designed to cover the entire invisible area of the object or a part of the invisible area. In this way, the image of the at least one object can, for example, be shown on the screen of a navigation system so that the driver of the vehicle can grasp the object intuitively at a glance.
According to one embodiment, the method additionally comprises a step of classifying the object. In the provision step, a synthetic graphic element in the form of a symbol can then be provided for at least a partial region of the classified object. In the classification step it can be determined whether the partially detected object is a pedestrian, a traffic sign or another vehicle. The type of a partially detected vehicle, for example a truck or a car, can be determined in the classification step, for instance by comparing the detected partial region with sample objects stored in a database of the navigation system. In this way, the insertion step can advantageously be completed considerably faster, and all invisible regions can be substituted at once.
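The comparison against stored sample objects described above could look like the following sketch; the overlap measure, the grid-cell outlines and the sample database are purely illustrative assumptions:

```python
def classify_partial_object(partial_outline, samples):
    """Return the sample class whose stored outline best overlaps the
    partially detected outline (Jaccard-style similarity on grid cells)."""
    def similarity(a, b):
        return len(a & b) / max(len(a | b), 1)
    return max(samples, key=lambda kind: similarity(partial_outline, samples[kind]))

# hypothetical sample outlines as sets of occupied grid cells
samples = {
    "truck": {(x, y) for x in range(6) for y in range(2)},  # long footprint
    "car":   {(x, y) for x in range(4) for y in range(2)},  # shorter footprint
}
detected = {(x, 0) for x in range(6)}  # only one long edge was observed
kind = classify_partial_object(detected, samples)
```

Here the long visible edge of the partially hidden object matches the stored truck outline better than the car outline, so the truck element would be chosen for insertion.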
According to another embodiment, the method can further comprise a step of detecting further data about the object from another position, which differs from the position from which the sensor detected the data for the determination step. The further data can relate at least to a part of the area of the object that is invisible to the sensor. In addition, the method can comprise a step of at least partially replacing the synthetic graphic element with a view of the object region detected from the other position. The other position can be the position of another sensor on the vehicle, or a position, offset in time and/or place, of the same sensor that provides the data about the object. The position offset in time and/or place results, for example, from the vehicle driving past the object. In this way, the synthetic element is progressively replaced by views of the actual parts of the object, the object remaining intuitively comprehensible to the driver of the vehicle at all times.
Furthermore, the step of at least partially replacing can be carried out when the further data represent a predetermined portion of the presumed total surface of the object. In this way it can advantageously be avoided that the driver of the vehicle is confused by, for example, too frequently changing views on the screen of the navigation device.
The step of at least partially replacing can also be suppressed when it is recognized that the object extends beyond the vertical detection range of the sensor. Advantageously, the object can thus be made easier for the driver to grasp intuitively, because so-called black edges of the area invisible to the sensor can be avoided.
According to another embodiment of the invention, the step of at least partially replacing can be suppressed when, in the presence of at least one further object in the surroundings of the vehicle, it is recognized that this further object extends beyond the vertical detection range of the sensor. This offers the advantage of avoiding excessive demands on the driver's vision caused by too frequent changes of the view.
According to one embodiment of the invention, the step of at least partially replacing can be carried out depending on a setting made by the driver of the vehicle. The setting can be made, for example, manually via a key of the navigation system or by voice command, and offers the driver the possibility of influencing the degree of realism of the object reproduction, so that it corresponds best to his personal perception.
The invention further proposes an apparatus for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, in particular from a bird's-eye view, the apparatus having the following features:
- a device for determining, which determines the area of the object that is invisible to the sensor based on data of at least one sensor of the vehicle, in particular data of a camera;
- a device for providing, which provides a synthetic graphic element for insertion into the invisible area of the object; and
- a device for inserting, which inserts the synthetic graphic element into the invisible area of the object in order to produce, from a virtual camera position, in particular from a bird's-eye view, the image of the at least one object in the surroundings of the vehicle.
The apparatus can be connected, for example, to a navigation system of the vehicle. The devices can be part of a signal processing unit that is integrated into the apparatus or connected to it.
The apparatus can be designed to carry out or implement the steps of the method according to the invention. In particular, each device of the apparatus is designed to carry out one step of the method. This variant of the invention in the form of an apparatus likewise allows the object of the invention to be achieved quickly and efficiently.
An apparatus can be understood here as an electronic device that processes sensor signals and outputs corresponding display signals. The apparatus can have an interface that can be implemented in hardware and/or software. In a hardware implementation, the interface can, for example, be part of a so-called system ASIC that comprises various functions of the apparatus. However, it is also possible for the interface to be a separate integrated circuit or to consist at least partly of discrete components. In a software implementation, the interface can consist of software modules that are present, for example, on a microprocessor alongside other software modules.
A computer program product with program code is also advantageous, the program code being stored on a machine-readable carrier such as a semiconductor memory, a hard disk or an optical memory, and carrying out the method according to any of the embodiments described above when the program is executed on an apparatus.
Brief description of the drawings
The invention is described in detail below by way of example with reference to the accompanying drawings, in which:
Fig. 1 shows a view of a scene from a bird's-eye view with objects shown distorted;
Fig. 2 shows a view of the scene of Fig. 1 from a bird's-eye view with perspective-corrected objects and regions that cannot be shown;
Fig. 3 shows, according to an embodiment of the invention, a view of the scene of Fig. 1 from a bird's-eye view with perspective-corrected objects and synthetically drawn-in objects;
Fig. 4 shows a flow chart of a method for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, according to an embodiment of the invention;
Fig. 5 shows a block diagram of an apparatus for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, according to an embodiment of the invention.
Specific embodiments
Identical or similar elements are denoted by identical or similar reference numerals in the figures, and repeated description is omitted. Furthermore, the figures, their description and the claims contain numerous features in combination. It is clear to a person skilled in the art that these features can also be considered individually or combined into further combinations not described in detail here. Moreover, the invention may be explained in the following description using different measures and dimensions, although the invention is not to be understood as being limited to these measures and dimensions. Furthermore, method steps according to the invention can be repeated or carried out in an order different from the order described. If an embodiment comprises an "and/or" conjunction between a first feature/step and a second feature/step, this is to be read as meaning that the embodiment according to one variant has both the first feature/step and the second feature/step and, according to another variant, has either only the first feature/step or only the second feature/step.
Fig. 1 shows a view of a scene from a bird's-eye view with objects shown distorted. Shown are the host vehicle 100 (i.e. the vehicle in which the invention is to be implemented), a parking space 110 indicated by ground markings, a truck (LKW = Lastkraftwagen) 120 and a bollard 130. As can be seen from the top view of the parking scene in Fig. 1, the parking space 110 lies to the left of the host vehicle 100, and the truck 120 is parked to the left of the parking space 110. The bollard 130 is located to the right of the host vehicle 100. The parking scene shown in Fig. 1 can be shown, for example, on the display unit of a navigation system. For this purpose, the parking space 110, the truck 120 and the bollard 130 are recorded by two sensors of the host vehicle not shown in Fig. 1, for example two cameras. The sensors can, for example, be arranged on the left and right sides of the front region of the host vehicle 100. As the host vehicle 100 drives past the objects 120 and 130, the sensors detect the objects 120 and 130 successively from different viewing angles, so that the truck 120 and the bollard 130 are shown on the display unit distorted in perspective. In this form of depiction, the objects 120 and 130 cannot be recognized by the driver of the vehicle 100.
In order to compute and display a bird's-eye view from the video images of cameras recording parallel to the driving plane, three-dimensional information is necessary, for example the distance of each image pixel from the camera. This can be obtained to a limited extent by additional sensors (such as ultrasonic or radar), or by stereo video algorithms, the so-called "structure from motion" approach, which recovers the three-dimensional scene structure from motion information. If stereo information about the scene is available, the bird's-eye view of each individual camera can be computed by a coordinate transformation. If the bird's-eye views of all cameras are stitched together at the correct positions, a bird's-eye view of the entire scene around the vehicle 100 is obtained.
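For pixels that lie on the road plane, the coordinate transformation into the top view reduces to intersecting the camera ray with the ground. The pinhole model below is a simplified sketch under that assumption, not the patent's implementation; the parameter names are chosen for the example:

```python
import math

def ground_point_from_pixel(u, v, fx, fy, cx, cy, cam_height, pitch=0.0):
    """Project a pixel assumed to lie on the road plane into top-view
    coordinates (x forward in metres, y to the left), for a pinhole camera
    mounted cam_height above the road and pitched down by `pitch` radians."""
    # ray direction in camera coordinates (x right, y down, z forward)
    dx, dy, dz = (u - cx) / fx, (v - cy) / fy, 1.0
    # rotate the ray by the camera pitch about the x axis
    dy, dz = (dy * math.cos(pitch) - dz * math.sin(pitch),
              dy * math.sin(pitch) + dz * math.cos(pitch))
    if dy <= 0:
        return None          # ray points at or above the horizon: no ground hit
    t = cam_height / dy      # scale factor at which the ray reaches the road
    return (t * dz, -t * dx)  # forward distance, lateral offset
```

A pixel below the principal point of a level camera maps to a finite forward distance; a pixel at or above it returns `None`, which corresponds to the "black regions" the description discusses: image content with no valid ground-plane position.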
Fig. 2 shows a view of the scene of Fig. 1 from a bird's-eye view with perspective-corrected objects and regions that cannot be shown. As can be seen from the view in Fig. 2, the transformation of the sensor data yields a perspective-corrected right side of the truck 120 and the bollard 130. The region 200 of the truck 120 that is visible to the sensors of the vehicle 100 appears in the top view as the right side of the truck 120. The truck 120 and the bollard 130 are therefore now recognizable as such to the driver of the vehicle 100 in the view. Because of the limited coverage of the scene and the change of perspective towards the bird's-eye view, so-called "black" (invisible) regions arise in the view. Accordingly, Fig. 2 shows as a hatched area the region 210 of the truck 120 that is invisible to the sensors, i.e. hidden and therefore not actually displayable.
Fig. 3 shows, according to an embodiment of the invention, the view of the scene of Fig. 1 from a bird's-eye view with perspective-corrected objects and a synthetically drawn-in object. Fig. 3 again shows the scene of Fig. 1 from a bird's-eye view with perspective-corrected objects. Additionally, the invisible area of the truck 120 shown hatched in Fig. 2 is replaced by a synthetic graphic element 310. In the embodiment shown in Fig. 3, the synthetic graphic element 310 is a partial region of a top view of a truck and is suitable for insertion into the region of the truck 120 that is invisible to the sensors. Thus, for the display on, for example, a screen of the navigation system of the vehicle 100, a complete image of the truck 120 is generated, composed of the "real" region of the truck 120 detected by the sensors of the vehicle 100 and the synthetic graphic element 310. Alternatively, the synthetic graphic element 310 can also be a complete top view of a truck and stand in for the truck 120 entirely.
The synthetic graphic element 310 can, for example, be taken after a classification step from a database stored, for instance, in a memory of the navigation system, the visible region 200 having been classified as a truck in the classification step. The database can, in particular, contain synthetic graphic elements for various vehicle types, buildings, traffic signs or persons. Based on the classification, a matching synthetic graphic element can then be selected from the database and inserted into the corresponding invisible region, thereby providing the observer's eyes with a complete image of the object that can be grasped directly and intuitively.
The decision as to which regions of an object are shown synthetically and which are shown "real" can be made adaptively. The degree of completeness of the information about an object can thus determine the degree to which the object is shown synthetically. For example, for a truck 120 that is taller than the vehicle 100, a real view is not feasible here because a real top view is missing. If the algorithm therefore recognizes that an object is taller than the vertical field of view of the camera, this object is shown synthetically. On the other hand, for example a flowerpot, or the bollard 130 in the view of Fig. 3, is initially also shown synthetically; but as soon as sufficient information about the flowerpot or the region of the bollard 130 has been received as the vehicle 100 drives past, this information can be converted into a real view. Alternatively, this adaptivity can be applied to the entire scene: as soon as even one object of the scene must be shown synthetically as described, all objects of the scene, or rather the entire scene, are shown synthetically. This serves primarily to avoid confusing the driver by too frequent changes of the display type. In addition, the driver of the vehicle 100 can match the degree of adaptivity to his needs.
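The adaptive scheme described here can be captured in a small decision function; the height limit, the completeness threshold and the data layout are illustrative assumptions, not values from the patent:

```python
def choose_display(objects, max_visible_height, whole_scene_adaptive=True):
    """objects: {name: (height_m, info_completeness in 0..1)}.
    Returns a per-object mode, 'synthetic' or 'real'. Objects taller than
    the camera's vertical coverage are always synthetic; with
    whole_scene_adaptive, one such object forces the entire scene to
    synthetic to avoid mixed display types."""
    modes = {}
    any_forced = False
    for name, (height, completeness) in objects.items():
        if height > max_visible_height:
            modes[name] = "synthetic"   # no real top view can ever be recorded
            any_forced = True
        elif completeness < 0.8:        # illustrative threshold
            modes[name] = "synthetic"   # not yet observed well enough
        else:
            modes[name] = "real"
    if whole_scene_adaptive and any_forced:
        modes = {name: "synthetic" for name in modes}
    return modes

modes = choose_display({"truck": (3.5, 0.3), "bollard": (1.0, 0.9)},
                       max_visible_height=2.0)
```

With the whole-scene option on, the tall truck drags the already well-observed bollard into synthetic display too, which is exactly the "entire scene" behaviour described above.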
Fig. 4 shows a flow chart of a method for producing an image of one object or several objects in the surroundings of a vehicle from a virtual camera position, according to an embodiment of the invention.
As the vehicle drives past, for example, a parking gap, at least one sensor of the vehicle obtains different viewing angles of the scene, from which the extent of at least one object of the scene can be derived. Accordingly, in a first step 410 of the method 400, the region of one or more objects that is invisible to the sensor is determined based on the data of the at least one sensor, and an attempt is made to classify the object by means of a suitable algorithm. In particular, the sensor can be a camera arranged, for example, on the body of the vehicle. In a second step 420, a synthetic graphic element is provided that is suitable for insertion into the region of the object that is invisible to the sensor, or into the region hidden by at least one object. Then, in a third step 430, the synthetic graphic element is inserted into the invisible area of the object. An image of the scene in the surroundings of the vehicle from the virtual camera position is thereby produced in which, owing to the inserted synthetic graphic elements, the object or objects can be seen intuitively by an observer of the scene. In particular, the virtual camera position can be a bird's-eye view.
The steps 410, 420 and 430 of the method 400 can be carried out repeatedly, because the region hidden by an object becomes smaller and smaller while the vehicle drives past and detects the scene from different viewing angles. The synthetic view of an object of the scene can therefore gradually be replaced by a stereo view computed from video images taken at the object's location.
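The repeated execution of steps 410 to 430 during the drive-past can be modelled as a simple coverage loop; the grid-cell representation and the 80% replacement threshold are assumptions for illustration only:

```python
def progressive_replacement(footprint, observations, threshold=0.8):
    """Merge visibility from successive sensor passes and switch from the
    synthetic element to the real view once a predetermined share of the
    object's surface has been observed. Returns (mode, coverage)."""
    seen = set()
    for obs in observations:           # one set of newly visible cells per pass
        seen |= obs & footprint
        coverage = len(seen) / len(footprint)
        if coverage >= threshold:
            return "real", coverage    # enough of the object has been recorded
    return "synthetic", len(seen) / len(footprint)

footprint = {(0, 0), (0, 1), (1, 0), (1, 1)}
mode, coverage = progressive_replacement(
    footprint, [{(0, 0), (0, 1)}, {(1, 0), (1, 1)}])
```

Checking the threshold once per pass, rather than per cell, also reflects the earlier remark that the view should not flip too often: the display changes at most once, when the predetermined portion of the surface is reached.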
Fig. 5 shows a block diagram of a video rear-view mirror (Videorückspiegel) or apparatus 500 for producing an image of at least one object in the surroundings of a vehicle from a virtual camera position, according to an embodiment of the invention. Shown is a schematic view of the vehicle 100, which has the apparatus 500 comprising a device 510 for determining, a device 520 for providing and a device 530 for inserting. Also shown are two sensors 540, for example cameras, one of which is arranged on the left side and one on the right side of the vehicle body in the front region.
The sensors 540 are designed to detect at least one object as the vehicle 100 drives past and to pass data about the object, such as its size, its distance from the sensor or its surface characteristics, via an interface to the device 510 for determining. Based on these data, the device 510 determines the region of the object that is invisible to the sensor, using a suitable algorithm. Information about the invisible area of the object is passed via another interface to the device 520. There, using this information, a synthetic graphic element suitable for insertion into the invisible area of the object is provided; it can, for example, be selected from objects or object parts stored in a database. The device 520 passes the information about the synthetic graphic element via another suitable interface to the device 530, which is designed to insert the synthetic graphic element into the invisible area of the object, so that an image of the at least one object in the surroundings of the vehicle 100, seen from a virtual camera position, in particular from a bird's-eye view, is produced, for example on the display unit of a navigation system of the vehicle 100.

Claims (11)

1. A method (400) for producing, from a virtual camera position, an image of at least one object (120, 130) in the surroundings of a vehicle (100), characterized in that the method comprises the following steps:
a determination step (410), in which the area (210) of the object (120, 130) that is invisible to the sensor (540) is determined based on data of at least one sensor (540) of the vehicle (100);
a provision step (420), in which a synthetic graphic element (310) is provided for insertion into the invisible area (210) of the object (120, 130); and
an insertion step (430), in which the synthetic graphic element (310) is inserted into the invisible area (210) of the object (120, 130) in order to produce, from the virtual camera position, the image of the at least one object (120, 130) in the surroundings of the vehicle (100);
wherein the method further comprises a step of detecting further data about the object (120, 130) from another position, which differs from the position of the sensor (540) that detected the data for the determination step (410), the further data relating at least to a part of the area (210) of the object (120, 130) that is invisible to the sensor (540); and wherein the method further comprises a step of at least partially replacing the synthetic graphic element (310) with a view of the region of the object (120, 130) detected from the other position.
2. The method (400) according to claim 1, characterized in that the method further has a step of classifying the object (120, 130), wherein in the providing step (420) a synthetic graphic element (310) in symbol form is provided for at least one subregion of the classified object (120, 130).
3. The method (400) according to claim 1, characterized in that the step of at least partially replacing is carried out when the further data represent a predetermined part of an imagined total surface of the object (120, 130).
4. The method (400) according to claim 1 or 3, characterized in that the step of at least partially replacing is suspended when it is recognized that the object (120, 130) extends beyond a vertical detection range of the sensor (540).
5. The method (400) according to any one of claims 1 to 3, characterized in that the step of at least partially replacing is suspended when at least one further object is present in the surroundings of the vehicle (100) and it is recognized that this further object extends beyond the vertical detection range of the sensor (540).
6. The method (400) according to any one of claims 1 to 3, characterized in that the step of at least partially replacing is carried out in accordance with a setting made by a driver of the vehicle (100).
7. The method (400) according to any one of claims 1 to 3, characterized in that the virtual camera position is a bird's-eye view.
8. The method (400) according to any one of claims 1 to 3, characterized in that the data of the sensor (540) are data of a video camera.
9. A device (500) for generating, from a virtual camera position, an image of at least one object (120, 130) in the surroundings of a vehicle (100), characterized in that the device has the following features:
a device (510) for determining (410), which determines, based on data of at least one sensor (540) of the vehicle (100), a region (210) of the object (120, 130) that is invisible to the sensor;
a device (520) for providing (420), which provides a synthetic graphic element (310) for insertion into the invisible region (210) of the object (120, 130);
a device (530) for inserting (430), which inserts the synthetic graphic element (310) into the invisible region (210) of the object (120, 130) in order to generate, from the virtual camera position, an image of the at least one object (120, 130) in the surroundings of the vehicle (100);
a device for detecting from another position, which detects further data about the object (120, 130) from another position, this other position differing from the position of the sensor (540) whose data are detected for the determining step (410), wherein the further data relate at least to a part of the region (210) of the object (120, 130) that is invisible to the sensor (540); and
a device for at least partially replacing, which at least partially replaces the synthetic graphic element (310) with a view of the region of the object (120, 130) detected from the other position.
10. The device (500) according to claim 9, characterized in that the virtual camera position is a bird's-eye view.
11. The device (500) according to claim 9 or 10, characterized in that the data of the sensor (540) are data of a video camera.
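The final feature of claim 1 — progressively replacing the synthetic element with real views captured from a second sensor position as the vehicle moves — could be sketched as follows. The tile representation and names are illustrative assumptions; the patent leaves the replacement mechanism open.

```python
def replace_partially(synthetic: list, real_patches: dict) -> list:
    """Replace synthetic tiles with real ones wherever a second sensor
    position has meanwhile observed that part of the object.

    `synthetic` is a list of tile labels making up the invisible region;
    `real_patches` maps tile index -> view detected from the other position.
    Tiles without a real observation keep their synthetic content.
    """
    return [real_patches.get(i, tile) for i, tile in enumerate(synthetic)]

# As the vehicle passes the object, the second position fills in tiles 0 and 2;
# tile 1 remains synthetic until it, too, is observed.
tiles = ["synth", "synth", "synth"]
observed = {0: "real_0", 2: "real_2"}
patched = replace_partially(tiles, observed)
```

The result mixes real and synthetic content, which is exactly the "at least partially" character of the replacing step.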
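The conditions of claims 3 to 6 — a coverage threshold that enables replacement, suspension when an object exceeds the sensor's vertical detection range, and a driver setting — amount to a small gating function. A hypothetical sketch (the 0.5 threshold is an illustrative choice, not a value from the patent):

```python
def replacement_allowed(covered_fraction: float,
                        object_exceeds_vertical_range: bool,
                        driver_enabled: bool,
                        threshold: float = 0.5) -> bool:
    """Gate the 'at least partially replacing' step per claims 3-6.

    - claim 6: respect the driver's setting first
    - claims 4/5: suspend if an object exceeds the vertical detection range
    - claim 3: require a predetermined part of the imagined total surface
      to have been observed from the other position
    """
    if not driver_enabled:
        return False
    if object_exceeds_vertical_range:
        return False
    return covered_fraction >= threshold

print(replacement_allowed(0.7, False, True))   # enough coverage -> True
print(replacement_allowed(0.7, True, True))    # object too tall -> False
```

Ordering the checks this way means the driver setting and the vertical-range suspension always override the coverage criterion.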
CN201110307335.4A 2010-10-06 2011-10-08 Method and apparatus for producing an image of at least one object in the surroundings of a vehicle Active CN102555905B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102010042026.3A DE102010042026B4 (en) 2010-10-06 2010-10-06 Method for generating an image of at least one object in the surroundings of a vehicle
DE102010042026.3 2010-10-06

Publications (2)

Publication Number Publication Date
CN102555905A CN102555905A (en) 2012-07-11
CN102555905B true CN102555905B (en) 2016-05-04

Family

ID=45872183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110307335.4A Active CN102555905B (en) 2010-10-06 2011-10-08 Method and apparatus for producing an image of at least one object in the surroundings of a vehicle

Country Status (4)

Country Link
CN (1) CN102555905B (en)
DE (1) DE102010042026B4 (en)
FR (1) FR2965765B1 (en)
IT (1) ITMI20111752A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011087901A1 (en) * 2011-12-07 2013-06-13 Robert Bosch Gmbh Method for displaying a vehicle environment
KR101916467B1 (en) * 2012-10-30 2018-11-07 현대자동차주식회사 Apparatus and method for detecting obstacle for Around View Monitoring system
DE102014205511A1 (en) * 2014-03-25 2015-10-01 Conti Temic Microelectronic Gmbh METHOD AND DEVICE FOR DISPLAYING OBJECTS ON A VEHICLE INDICATOR
DE102014014662A1 (en) 2014-09-19 2016-03-24 Mekra Lang North America, Llc Display device for vehicles, in particular commercial vehicles
CN104883536A (en) * 2015-01-13 2015-09-02 济南高达信息技术有限公司 Driver panoramic video surveillance safe driving system
CN104539908A (en) * 2015-01-13 2015-04-22 济南高达信息技术有限公司 Driver panoramic video self-monitoring system based on traffic television network
DE102015217258A1 (en) * 2015-09-10 2017-03-16 Robert Bosch Gmbh Method and device for representing a vehicle environment of a vehicle
DE102017201000A1 (en) * 2017-01-23 2018-07-26 Robert Bosch Gmbh Method for combining a plurality of camera images

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60207655T2 (en) * 2001-09-07 2006-06-08 Matsushita Electric Industrial Co., Ltd., Kadoma Device for displaying the environment of a vehicle and system for providing images
EP1717757A1 (en) * 2005-04-28 2006-11-02 Bayerische Motoren Werke Aktiengesellschaft Method for graphically displaying the surroundings of a motor vehicle
JP2006341641A (en) * 2005-06-07 2006-12-21 Nissan Motor Co Ltd Image display apparatus and image display method
DE102007053305A1 (en) 2007-11-08 2009-05-14 Robert Bosch Gmbh Method and device for outputting optical driving recommendations in a vehicle
FR2932595B1 (en) * 2008-06-13 2010-12-24 Valeo Vision Sas METHOD FOR DISPLAYING PARKING ASSIST.
JP5112998B2 (en) * 2008-09-16 2013-01-09 本田技研工業株式会社 Vehicle perimeter monitoring device
FR2937775B1 (en) * 2008-10-24 2010-12-31 Valeo Vision Sas METHOD FOR DETECTING A TARGET OBJECT FOR A MOTOR VEHICLE
JP5035284B2 (en) * 2009-03-25 2012-09-26 株式会社日本自動車部品総合研究所 Vehicle periphery display device

Also Published As

Publication number Publication date
CN102555905A (en) 2012-07-11
FR2965765A1 (en) 2012-04-13
DE102010042026A1 (en) 2012-04-12
DE102010042026B4 (en) 2020-11-26
FR2965765B1 (en) 2018-11-02
ITMI20111752A1 (en) 2012-04-07

Similar Documents

Publication Publication Date Title
CN102555905B (en) Method and apparatus for producing an image of at least one object in the surroundings of a vehicle
JP5208203B2 (en) Blind spot display device
US11587267B2 (en) Imaging apparatus, image processing apparatus, display system, and vehicle
CN110758243B (en) Surrounding environment display method and system in vehicle running process
CN1878299B (en) Apparatus and method for displaying images
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
JP3286306B2 (en) Image generation device and image generation method
US20160240012A1 (en) Method and Device for Augmented Depiction
CN104859538A (en) Vision-based object sensing and highlighting in vehicle image display systems
CN103692973A (en) Image generating apparatus, image display system and image generating method
CN104442567A (en) Object Highlighting And Sensing In Vehicle Image Display Systems
CA2590346A1 (en) Navigation and inspection system
CN103959341A (en) Method for representing a motor vehicle environment
KR20190047027A (en) How to provide a rearview mirror view of the vehicle's surroundings in the vehicle
JP2008048345A (en) Image processing unit, and sight support device and method
JP2012023658A (en) Image display device for vehicle
JP2012164157A (en) Image synthesizer
JP3301421B2 (en) Vehicle surrounding situation presentation device
CN107972585A (en) Scene rebuilding System and method for is looked around with reference to the adaptive 3 D of radar information
KR20150041334A (en) Image processing method of around view monitoring system
US20150158430A1 (en) Operating a Head-Up Display of a Vehicle and Image Determining System for the Head-Up Display
WO2018134897A1 (en) Position and posture detection device, ar display device, position and posture detection method, and ar display method
CN109690558A (en) Method, driver assistance system and the motor vehicles of driver's driving maneuver vehicle for auxiliary maneuvering vehicle
Park et al. Panoramic vision system to eliminate driver’s blind spots using a laser sensor and cameras
CN114842172A (en) Display control method, display control device, display control medium, head-up display system, and vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant