EP4213113A1 - A scheme for monitoring an object loaded on a carrier of a vehicle - Google Patents
- Publication number: EP4213113A1 (application EP22151599.2)
- Authority: European Patent Office (EP)
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
Definitions
- In Fig. 1A, a bicycle 1 has a rear carrier 3 for carrying an object 30, and the bicycle 1 is travelling on a road 2.
- the bicycle 1 is equipped with an apparatus according to an embodiment of the invention.
- the apparatus comprises a camera 10 and a data processing device 20.
- the camera 10 is mounted beneath the rear carrier 3 such that it may capture a plurality of image frames each containing an image of part of the road 2 which the bicycle 1 has already passed. It is to be noted that the camera 10 may be mounted at another part of the bicycle 1 in so far as it can take an image of part of the road which the bicycle 1 has already passed.
- the data processing device 20 is mounted on a handlebar 5 of the bicycle 1 and is configured to perform a process according to an embodiment of the invention as described herein. The apparatus comprising the camera 10 and the data processing device 20 will be discussed below in more detail.
- Fig. 1B shows a schematic top view depicting the road 2 on which the bicycle 1 is travelling.
- the rectangular-shaped area 4 in Fig. 1B represents part of the road 2 contained in an image frame captured by the camera 10.
- all image frames captured by the camera 10 contain images covering the same distance of the road 2. That is, in Fig. 1B, the distance "s" is the same for all image frames unless the calibration and the location of the camera 10 are changed.
- Fig. 2 shows a schematic block diagram depicting a configuration of the apparatus in accordance with an exemplary embodiment of the invention.
- the apparatus comprises the camera 10 and the data processing device 20.
- the camera 10 comprises a camera module 10a and a communication module 10b.
- the data processing device 20 comprises a processor 22, a communication module 24, a memory module 26, and a user interface 28.
- the user interface 28 comprises an input module 28a, a display module 28b, and a speaker 28c.
- a configuration of the apparatus as shown in Fig. 2 is exemplary. A different configuration may also be considered.
- the processor 22, the communication module 24, the memory module 26, and the camera 10 may be integrated into a single entity which can be mounted beneath the rear carrier 3 of the bicycle 1.
- the user interface 28 may be mounted alone on the handlebar 5 of the bicycle 1.
- a bicycle rider may input an instruction via the input module 28a for the apparatus to start monitoring the object 30 loaded on the rear carrier 3 of the bicycle 1.
- the instruction is sent to the camera 10 via the communication modules 24, 10b, which may communicate with each other by wire or wirelessly using a known communication technology.
- the camera module 10a starts capturing image frames each containing an image of part of the road 2.
- the image frames may be captured by the camera module 10a at a constant rate, which may be changed under control of the processor 22 as necessary.
- the captured image frames are sent to the data processing device 20 via the communication modules 10b, 24 and are stored in the memory module 26.
- the processor 22 performs a process according to an embodiment of the invention to determine if the object 30 has dropped down from the rear carrier 3 to the road 2. The process to be performed by the processor 22 will be discussed below in detail.
- if it is determined that the object 30 has dropped down on the road 2, the processor 22 instructs the user interface 28 to give a warning to the bicycle rider.
- the user interface 28 may give a visual warning, e.g. by displaying a warning sentence or picture on the display 28b.
- an auditory warning, e.g. an alarming sound, may be given via the speaker 28c. Both the visual and auditory warnings may also be given through the display 28b and the speaker 28c at the same time.
- Fig. 3A is a flow chart depicting a process to be performed by the processor 22 in accordance with an exemplary embodiment of the invention.
- the camera module 10a captures a plurality of image frames at S31, each containing an image of part of the road 2 which the bicycle 1 has already passed and the captured image frames are stored in the memory module 26 at S32.
- the processor 22 obtains a speed of the bicycle 1.
- the speed of the bicycle 1 may be measured in a conventional manner, e.g. using a speedometer or a GPS module (not shown).
- the processor 22 selects a first image frame 4a and a second image frame 4b among the plurality of image frames stored in the memory module 26 such that the first image frame and the second image frame have image sections, respectively, containing an image of the same part of the road 2.
- Fig. 3B is a schematic diagram for explaining an example of how to select the first and second image frames by the processor 22.
- the rectangular-shaped areas 4a, 4b in Fig. 3B represent part of the road 2 contained in the first image frame 4a and the second image frame 4b, respectively.
- all image frames captured by the camera 10 contain images covering the same distance of the road 2. That is, the distance "s" on the road 2 contained in all image frames captured by the camera 10 is the same.
- the processor 22 may have prior knowledge of both the capturing speed of the camera module 10a and the time period between two subsequent capturing points of time.
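The frame-selection step can be sketched as follows (a minimal illustration only; the patent does not specify an implementation, and the function name, buffer layout, and units are assumptions). With a known capture interval, a known road distance per frame, and the measured speed, the processor can pick the earlier stored frame whose first image section shows the same part of the road as the second image section of the newest frame:

```python
def select_frame_pair(frames, v, dt_cap, s, n_sections=2):
    """Pick a pair of stored frames whose image sections overlap on the road.

    frames:     stored frames ordered oldest -> newest
    v:          vehicle speed (m/s)
    dt_cap:     capture interval of the camera (s)
    s:          road distance covered by one frame (m)
    n_sections: number of virtual image sections per frame

    The vehicle travels one section length (s / n_sections) in
    (s / n_sections) / v seconds, i.e. 'offset' capture intervals,
    so the matching frame lies that many positions back in the buffer.
    """
    section_len = s / n_sections
    offset = round(section_len / (v * dt_cap))
    if offset < 1 or offset >= len(frames):
        return None  # no stored frame lies far enough back in time
    return frames[-1], frames[-1 - offset]
```

For example, at v = 5 m/s, a 0.1 s capture interval, and frames covering 2 m of road split into two sections, the matching frame is two capture intervals back.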
- Each image frame captured by the camera module 10a has two image sections, i.e. a first image section and a second image section, which are virtually partitioned by the processor 22.
- the lower-ranked image section, i.e. the first image section 4a', 4b', contains an image of part of the road 2 which the bicycle 1 has passed earlier.
- it is to be noted that each image frame having two image sections is exemplary and that the processor 22 may virtually partition each image frame into more than two image sections.
- the speed (v) of the bicycle 1 may be an average speed during the time period ( ⁇ t) between the first and second points of time. Alternatively, the speed (v) may be an instantaneous speed measured at any point of time during the time period.
- the processor 22 may control a capturing speed of the camera 10 capturing image frames based on the obtained speed of the bicycle 1 such that the first image frame 4a and the second image frame 4b are two subsequent image frames which have the second image section 4a" and the first image section 4b', respectively, containing an image of the same part of the road 2 which the bicycle 1 has already passed.
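The capture-rate control described above can be sketched as follows (illustrative only; the function names and units are assumptions). Since each frame covers a fixed road distance s, capturing a frame every (s / n_sections) / v seconds makes consecutive frames shift by exactly one image section along the road:

```python
def capture_interval(v, s, n_sections=2):
    """Capture interval (s) so that two subsequent frames shift by exactly
    one image section: the vehicle travels s / n_sections metres between
    two subsequent frames."""
    return (s / n_sections) / v

def capture_rate(v, s, n_sections=2):
    """Frames per second corresponding to the interval above."""
    return 1.0 / capture_interval(v, s, n_sections)
```

At 5 m/s with 2 m frames split into two sections, this yields a frame every 0.2 s, i.e. 5 frames per second; the rate is re-adapted whenever the measured speed changes.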
- the processor 22 compares the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b, which contain an image of the same part of the road 2.
- the processor 22 determines if the object 30 has dropped down on the road 2 based on the comparison result.
- the processor 22 determines that the object 30 has dropped down on the road 2 when at least one difference between the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b is detected.
- since the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain the image of the same part of the road 2 with a relatively short time difference, any difference between the two image sections 4a", 4b' can be easily detected, which may result in an easy detection of the object 30 if it has fallen down on the road 2, without using a high-profile image recognition technology.
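As an illustration of such a comparison (the pixel representation and the threshold value are assumptions; the patent does not prescribe a particular difference metric), a mean absolute pixel difference between the two image sections could serve as the detection criterion:

```python
def sections_differ(sec_a, sec_b, threshold=10.0):
    """Compare two greyscale image sections given as 2-D lists of pixel
    intensities; report a difference (a possibly dropped object) when the
    mean absolute pixel difference exceeds the threshold."""
    total = count = 0
    for row_a, row_b in zip(sec_a, sec_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count > threshold
```

A small threshold tolerates sensor noise between the two captures while still flagging a dropped object, which changes many pixels at once.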
- when the image frames contain an image of at least part of the bicycle 1, the processor 22 is configured to remove the image of the at least part of the bicycle 1 from the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b.
- alternatively, when the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain an image of at least part of the bicycle 1, they may be compared without considering the image of the at least part of the bicycle 1. That is, since the processor 22 may have prior knowledge about the image of the at least part of the bicycle 1 contained in each image frame, the image of the at least part of the bicycle 1 may be simply disregarded when comparing the two image sections to enhance the quality of the comparison.
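Disregarding the known vehicle pixels can be sketched with a binary mask (the mask representation and names are assumptions; the patent only states that the processor has prior knowledge of which part of each frame shows the bicycle):

```python
def sections_differ_masked(sec_a, sec_b, vehicle_mask, threshold=10.0):
    """Mean-absolute-difference comparison of two greyscale image sections,
    excluding pixels flagged True in vehicle_mask (pixels known to show
    part of the vehicle), so the vehicle's own image cannot trigger a
    false detection."""
    total = count = 0
    for row_a, row_b, row_m in zip(sec_a, sec_b, vehicle_mask):
        for pa, pb, masked in zip(row_a, row_b, row_m):
            if masked:
                continue  # pixel shows the vehicle: disregard it
            total += abs(pa - pb)
            count += 1
    return count > 0 and total / count > threshold
```

The mask itself could be learned over time, in line with the self-learning approach the text mentions, by observing which pixels stay constant across many frames.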
- a self learning algorithm may be used for the processor 22 to obtain the knowledge about the image of the at least part of the bicycle 1 contained in each image frame.
- if it is determined that the object has dropped down on the road, the processor 22 instructs the user interface 28 to give at least one of an auditory or visual warning via the display 28b and/or the speaker 28c at S38. If it is determined that the object has not dropped down on the road, on the other hand, the process returns to the step of S33.
- Fig. 4A is a flow chart depicting a process to be performed by the processor 22 in accordance with another exemplary embodiment of the invention.
- The process of Fig. 4A is analogous to that of Fig. 3A other than the steps of S44 and S45, which relate to selecting a plurality of image frames and comparing image sections of different image frames. Therefore, the other steps of S41-S43 and S46-S48 will not be explained below in detail and reference can be made to the corresponding steps of Fig. 3A.
- Fig. 4B is a schematic diagram for explaining an example of how to select the plurality of image frames and how to compare image sections of different image frames by the processor 22.
- four image frames, i.e. first to fourth image frames 4a-4d, are selected and each image frame is virtually partitioned into four image sections, i.e. first to fourth image sections.
- the first to fourth image frames 4a-4d have been captured by the camera module 10a at four different points of time, i.e. at t4, t3, t2, and t1, respectively, among which t1 is the earliest point of time and t4 is the latest point of time.
- a time period between two subsequent points of time may be a constant value ( ⁇ t).
- the first to fourth image frames 4a-4d are selected such that the second to fourth image sections 4a"-4a"" of the first image frame 4a and the first image sections 4b'-4d' of the second to fourth image frames 4b-4d contain an image of the same part of the road 2, respectively.
- the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain an image of the same part of the road 2 with a time difference ⁇ t.
- the third image section 4a'" of the first image frame 4a and the first image section 4c' of the third image frame 4c contain an image of the same part of the road 2 with a time difference 2 x ⁇ t.
- the fourth image section 4a"" of the first image frame 4a and the first image section 4d' of the fourth image frame 4d contain an image of the same part of the road 2 with a time difference 3 x ⁇ t.
- the processor 22 may select the first to fourth image frames 4a-4d among the plurality of image frames stored in the memory module 26 using the speed of the bicycle 1 and the time period ( ⁇ t) between the two subsequent points of time.
- the speed (v) of the bicycle 1 may be an average speed during the time period (3 x ⁇ t) between the first and fourth points of time.
- the speed (v) may be an instantaneous speed measured at any point of time during the same time period.
- the processor 22 may control a capturing speed of the camera 10 capturing image frames based on the obtained speed of the bicycle 1 such that the first to fourth image frames 4a-4d are four subsequent image frames.
- the processor 22 compares the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b, the third image section 4a'" of the first image frame 4a and the first image section 4c' of the third image frame 4c, and the fourth image section 4a"" of the first image frame 4a and the first image section 4d' of the fourth image frame 4d.
- the processor 22 determines if the object 30 has dropped down on the road 2 based on the comparison results. In particular, the processor 22 determines that the object 30 has dropped down on the road 2 if there is at least one difference in any of the compared pairs of image sections.
- otherwise, the processor 22 determines that the object 30 has not dropped down on the road 2.
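The pairwise comparisons described above can be sketched as a loop (illustrative only; the section lists, their ordering, and the differ predicate are assumptions): image section k+1 of the first, most recently captured frame is compared with the first image section of frame k+1, for each of the earlier frames:

```python
def object_dropped(selected_frames, differ):
    """selected_frames[0] holds the N image sections of the first (most
    recently captured) frame; selected_frames[k] holds the sections of the
    (k+1)-th frame, captured k capture intervals earlier.  Section k+1 of
    the first frame and section 1 of frame k+1 show the same part of the
    road, so any pair reported different by the differ predicate indicates
    a dropped object."""
    first = selected_frames[0]
    for k, other in enumerate(selected_frames[1:], start=1):
        if differ(first[k], other[0]):
            return True
    return False
```

With M = 4 frames of N = 4 sections, this performs the three comparisons 4a"/4b', 4a'"/4c', and 4a""/4d' from the text.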
- in this way, the accuracy of the monitoring apparatus can be enhanced by expanding both the monitored distance of the road and the monitored time span.
- the number of the image frames and image sections in each image frame may be more than four.
- Fig. 5 shows a schematic diagram depicting a car equipped with an apparatus in accordance with an embodiment of the invention.
- a car 100 comprises a rooftop carrier 3 for carrying an object 30 on it.
- the car 100 is equipped with an apparatus according to an embodiment of the invention.
- the apparatus comprises a camera 10 and a data processing device (not shown).
- the camera 10 is mounted at a rear part of the car 100 such that it may capture a plurality of image frames each containing an image of part of a road which the car 100 has already passed.
- the data processing device may be integrated into a center console of the car 100 and is configured to perform a process according to an embodiment of the invention as described herein.
- an apparatus according to an embodiment of the invention may be combined with any vehicle having an outside carrier, such as a motorcycle with a rear carrier, a car with a rear carrier, or a boat with a rear carrier.
Abstract
Description
- The present invention relates to a method and an apparatus for monitoring an object loaded on a carrier of a vehicle, a vehicle equipped with the apparatus, and a computer program product.
- Most vehicles like a bicycle, a motorcycle, a car, and a boat have carriers for loading objects like luggage thereon. While driving a vehicle, it is not easy for a driver to pay close attention to an object loaded on its carrier all the time. Thus, the object may sometimes drop down from the carrier to the road on which the vehicle is travelling without the driver recognising it.
- Accordingly, it would be beneficial to provide an enhanced method and apparatus for monitoring an object loaded on a carrier of a vehicle and to provide a vehicle equipped with the apparatus and a computer program product therefor.
- Exemplary embodiments of the invention include a method of monitoring, by a data processing device comprising a processor, an object loaded on a carrier of a vehicle, the method comprising obtaining a plurality of image frames of a road on which the vehicle is travelling, captured by at least one camera mounted at a rear portion of the vehicle at different points of time; selecting a first image frame and at least one second image frame such that the first image frame and the at least one second image frame have a first image section and at least one second image section, respectively, containing an image of the same part of the road; comparing the first image section and the at least one second image section; and determining if the object has dropped down on the road based on the comparison result. In an embodiment, it is determined that the object has dropped down on the road when at least one difference between the first image section and the at least one second image section is detected.
- At least one camera mounted at a rear portion of the vehicle may capture a plurality of image frames each having an image of the road which the vehicle has already passed and/or is about to pass at different points of time. In this document, the term "image frame" may be either a "still image" which is a single static image or a "still frame" which is a still image derived from one frame of a moving image such as a video. Likewise, the at least one camera may be a photographic camera or a video camera. A first image frame and at least one second image frame are selected such that the first image frame and the at least one second image frame, captured at different points of time while the vehicle is travelling on the road, share at least part of an image section containing an image of the same part of the road. Here, an image section may be either part of a single image frame or a whole image frame. The image sections of the first image frame and the at least one second image frame, which have an image of the same part of the road, are compared with each other to determine if there is at least one difference between them. It may be determined that the object has dropped down on the road when at least one difference between the first image section and the at least one second image section is detected. In this way, whether or not an object has dropped down from the vehicle to the road can be monitored in an efficient manner without using a precise image recognition technology.
- According to a further embodiment, selecting the first image frame and the at least one second image frame comprises obtaining a speed of the vehicle and selecting the first image frame and the at least one second image frame among the plurality of image frames based on the speed of the vehicle. When the plurality of image frames are captured by a camera mounted at a certain rear portion of the vehicle so as to capture images of the road which the vehicle has already passed, each image frame contains an image covering a fixed amount of distance of the road. A capturing speed of the at least one camera capturing image frames may be a known value. In this regard, it is possible for the first image frame and the at least one second image frame to be selected such that they share at least part of an image section containing an image of the same part of the road. The speed of the vehicle may be measured in a conventional manner.
- According to a further embodiment, selecting the first image frame and the at least one second image frame comprises obtaining a speed of the vehicle and controlling a capturing speed of the camera capturing image frames based on the obtained speed such that the first image frame and the at least one second image frame are subsequent image frames. Since the speed of the vehicle can be measured and the fixed distance of the road contained in each image frame is a known value, the capturing speed of the camera capturing image frames can be adapted in a way that the first image frame and the at least one second image frame are subsequent image frames sharing at least part of an image section containing an image of the same part of the road.
- According to a further embodiment, when the first image frame and the at least one second image frame contain an image of at least part of the vehicle, the method further comprises removing the image of the at least part of the vehicle from the first image frame and the at least one second image frame. If the image frames contain an image of at least part of the vehicle, quality of the comparison of the image frames may be deteriorated by the image of at least part of the vehicle. In this regard, by removing the image of at least part of the vehicle, the quality of the comparison can be enhanced.
- According to a further embodiment, when the first image frame and the at least one second image frame contain an image of at least part of the vehicle, the first image section and the at least one second image section are compared without considering the image of the at least part of the vehicle. Since the processor may have prior knowledge about the image of the at least part of the vehicle contained in each image frame, the image of the at least part of the vehicle can be simply disregarded when comparing the first image section and the at least one second image section to enhance the quality of the comparison. A self-learning algorithm may be used for the processor to obtain the knowledge about the image of the at least part of the vehicle contained in each image frame.
- According to a further embodiment, the plurality of image frames are captured by a single camera and each image frame has a plurality of image sections which result from virtually partitioning each image frame.
- According to a further embodiment, each of the plurality of image frames has N number of image sections from a first image section to an Nth image section, a lower-ranked image section containing an image of part of the road which the vehicle has passed earlier, wherein M number of image frames are selected among the plurality of image frames such that second to Nth image sections of the first image frame and first image sections of the second to Mth image frames contain an image of the same part of the road, respectively, wherein M is greater than two and is equal to or less than N, and wherein the second to the Nth image sections of the first image frame are compared with the first image sections of the second to Mth image frames, respectively, wherein a higher-ranked image frame means being captured at an earlier point of time. In this way, a broader area of the road can be monitored to check if the object has dropped down from the carrier of the vehicle to the road. Here, regarding the terms "lower-ranked" and "higher-ranked", for example, a first image section is "lower-ranked" than second, third, ..., and the Nth image sections, while the Mth image frame is "higher-ranked" than the (M-1)th, (M-2)th, ..., and first image frame.
- According to a further embodiment, the first image frame and the at least one second image frame are captured by separate cameras, respectively. A plurality of cameras may be mounted at the rear portion of the vehicle such that the plurality of cameras can capture a plurality of image frames having an image of the same part of the road captured at different points of time while the vehicle is travelling on the road.
- Exemplary embodiments of the invention further include an apparatus for monitoring an object loaded on a carrier of a vehicle, the apparatus comprising at least one camera mountable at a rear portion of the vehicle so as to capture a plurality of image frames of a road on which the vehicle is travelling; a data processing device comprising a processor and configured to perform the method according to any of the embodiments as discussed herein; and means for giving a warning when it is determined that the object has dropped down on the road.
- According to a further embodiment, the warning is at least one of an auditory warning or a visual warning. In an embodiment, the means for giving the warning comprises a monitor for giving the visual warning and/or a speaker for giving the auditory warning.
- Exemplary embodiments of the invention further include a vehicle equipped with the apparatus according to any of the embodiments discussed herein. In an embodiment, the vehicle is a bicycle, a motorcycle, a car or a boat. In case the vehicle is a boat, the term road means a sea road, a sea lane or a marine route.
- Exemplary embodiments of the invention further include a computer program product comprising instructions which, when executed by a computer, cause the computer to carry out the method according to any of the embodiments as discussed herein.
- Further exemplary embodiments of the invention are described with respect to the accompanying drawings, wherein:
-
Fig. 1A is a schematic diagram depicting a bicycle equipped with an apparatus in accordance with an embodiment of the invention; -
Fig. 1B is a schematic top view depicting a road on which a bicycle equipped with an apparatus in accordance with an embodiment of the invention is travelling; -
Fig. 2 is a block diagram depicting a configuration of an apparatus in accordance with an exemplary embodiment of the invention; -
Fig. 3A is a flow chart depicting a process performed by a processor in accordance with an exemplary embodiment of the invention; -
Fig. 3B is a schematic diagram for explaining an example of how to select first and second image frames by a processor in accordance with an exemplary embodiment of the invention; -
Fig. 4A is a flow chart depicting a process performed by a processor in accordance with another exemplary embodiment of the invention; -
Fig. 4B is a schematic diagram for explaining an example of how to select a plurality of image frames and how to compare image sections of different image frames by a processor in accordance with another exemplary embodiment of the invention. -
Fig. 5 is a schematic diagram depicting a car equipped with an apparatus in accordance with an embodiment of the invention. -
Fig. 1A shows a schematic diagram depicting a bicycle equipped with an apparatus in accordance with an embodiment of the invention. - Referring to
Fig. 1A, a bicycle 1 has a rear carrier 3 for carrying an object 30 on it and the bicycle 1 is travelling on a road 2. The bicycle 1 is equipped with an apparatus according to an embodiment of the invention. The apparatus comprises a camera 10 and a data processing device 20. The camera 10 is mounted beneath the rear carrier 3 such that it may capture a plurality of image frames each containing an image of part of the road 2 which the bicycle 1 has already passed. It is to be noted that the camera 10 may be mounted at another part of the bicycle 1 in so far as it can take an image of part of the road which the bicycle 1 has already passed. The data processing device 20 is mounted on a handlebar 5 of the bicycle 1 and is configured to perform a process according to an embodiment of the invention as described herein. The apparatus comprising the camera 10 and the data processing device 20 will be discussed below in more detail. -
Fig. 1B shows a schematic top view depicting the road 2 on which the bicycle 1 is travelling. The rectangular-shaped area 4 in Fig. 1B represents part of the road 2 contained in an image frame captured by the camera 10. Without changing the calibration and the location of the camera 10, all image frames captured by the camera 10 contain images covering the same distance of the road 2. That is, in Fig. 1B, the distance "s" is the same for all image frames unless the calibration or the location of the camera 10 is changed. -
Fig. 2 shows a schematic block diagram depicting a configuration of the apparatus in accordance with an exemplary embodiment of the invention. - As described above, the apparatus comprises the
camera 10 and the data processing device 20. The camera 10 comprises a camera module 10a and a communication module 10b. The data processing device 20 comprises a processor 22, a communication module 24, a memory module 26, and a user interface 28. The user interface 28 comprises an input module 28a, a display module 28b, and a speaker 28c. It is to be noted that the configuration of the apparatus as shown in Fig. 2 is exemplary. A different configuration may also be considered. For example, the processor 22, the communication module 24, the memory module 26, and the camera 10 may be integrated into a single entity which can be mounted beneath the rear carrier 3 of the bicycle 1. The user interface 28 may be mounted alone on the handlebar 5 of the bicycle 1. - In operation, before or while riding the
bicycle 1, a bicycle rider (not shown) may input an instruction via the input module 28a for the apparatus to start monitoring the object 30 loaded on the rear carrier 3 of the bicycle 1. The instruction is sent to the camera 10 via the communication modules 24, 10b, and the camera module 10a starts capturing image frames each containing an image of part of the road 2. The image frames may be captured by the camera module 10a at a constant speed which may be changed by a control of the processor 22 as necessary. The captured image frames are sent to the data processing device 20 via the communication modules 10b, 24 and are stored in the memory module 26. The processor 22 performs a process according to an embodiment of the invention to determine if the object 30 has dropped down from the rear carrier 3 to the road 2. The process to be performed by the processor 22 will be discussed below in detail. When it is determined that the object 30 has dropped down on the road 2, the processor 22 instructs the user interface 28 to give a warning to the bicycle rider. The user interface 28 may give a visual warning, e.g. by displaying a warning sentence or picture on the display 28b. Alternatively, an auditory warning, e.g. an alarming sound, may be given via the speaker 28c. Both the visual and auditory warnings may also be given through the display 28b and the speaker 28c at the same time. -
Fig. 3A is a flow chart depicting a process to be performed by the processor 22 in accordance with an exemplary embodiment of the invention. - Referring to
Fig. 3A, as discussed above, the camera module 10a captures a plurality of image frames at S31, each containing an image of part of the road 2 which the bicycle 1 has already passed, and the captured image frames are stored in the memory module 26 at S32. At S33, the processor 22 obtains a speed of the bicycle 1. The speed of the bicycle 1 may be measured in a conventional manner, e.g. using a speedometer or a GPS module (not shown). - At S34, the
processor 22 selects a first image frame 4a and a second image frame 4b among the plurality of image frames stored in the memory module 26 such that the first image frame and the second image frame have image sections, respectively, containing an image of the same part of the road 2. -
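The selection step S34 can be sketched as follows. This is illustrative Python, not from the patent: it assumes the stored frames carry capture timestamps, and it uses the time-period relation Δt = s / v discussed further below (s: road distance covered by one frame, v: vehicle speed).

```python
def select_second_frame(timestamps, first_index, s, v):
    """Among stored frame capture timestamps (in seconds), return the index
    of the frame captured closest to t1 + dt, where t1 is the capture time
    of the first frame and dt = s / v is the time period after which a frame
    shows the same part of the road."""
    target = timestamps[first_index] + s / v
    return min(range(len(timestamps)), key=lambda i: abs(timestamps[i] - target))

# Frames captured every 0.2 s; frame covers s = 2 m of road, speed v = 5 m/s,
# so dt = 0.4 s and the frame at t = 0.4 s is selected.
stamps = [0.0, 0.2, 0.4, 0.6, 0.8]
print(select_second_frame(stamps, 0, 2.0, 5.0))  # 2
```

Picking the nearest timestamp rather than an exact match reflects that the capture rate and the vehicle speed are generally not synchronized.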
Fig. 3B is a schematic diagram for explaining an example of how to select the first and second image frames by the processor 22. - As discussed above with reference to
Fig. 1B, the rectangular-shaped areas 4a, 4b in Fig. 3B represent part of the road 2 contained in the first image frame 4a and the second image frame 4b, respectively. Without changing the calibration and the location of the camera 10, all image frames captured by the camera 10 contain images covering the same distance of the road 2. That is, the distance "s" on the road 2 contained in all image frames captured by the camera 10 is the same. In Fig. 3B, the first and second image frames 4a, 4b are captured by the camera module 10a at a first point of time (t = t1) and at a second point of time (t = t2), respectively. There may be other image frames captured by the camera module 10a during the time period (Δt) between the first and second points of time. The processor 22 may have prior information about not only a capturing speed of the camera module 10a capturing the image frames, but also a time period between two subsequent points of time. - Each image frame captured by the
camera module 10a has two image sections, i.e. a first image section and a second image section, which are virtually partitioned by the processor 22. Here, the lower-ranked image section, i.e. the first image section 4a', 4b', contains an image of part of the road 2 which the bicycle 1 has passed earlier than the higher-ranked image section, i.e. the second image section 4a", 4b". It is to be noted that each image frame having two image sections is exemplary and that the processor 22 may virtually partition each image frame into more than two image sections. - The
processor 22 may select the first image frame 4a and the second image frame 4b among the plurality of image frames stored in the memory module 26 such that the first image frame 4a and the second image frame 4b have the second image section 4a" and the first image section 4b', respectively, containing an image of the same part of the road 2 which the bicycle 1 has already passed. This may be done by calculating the time period (Δt) between the first and second points of time when the first and second image frames 4a, 4b are captured, respectively. In particular, the processor 22 may calculate the time period (Δt) between the first and second points of time using the equation Δt = s / v, wherein v is the speed of the bicycle 1. The speed (v) of the bicycle 1 may be an average speed during the time period (Δt) between the first and second points of time. Alternatively, the speed (v) may be an instantaneous speed measured at any point of time during the time period. - As an alternative embodiment for selecting the first and second image frames 4a, 4b, the
processor 22 may control a capturing speed of the camera 10 capturing image frames based on the obtained speed of the bicycle 1 such that the first image frame 4a and the second image frame 4b are two subsequent image frames which have the second image section 4a" and the first image section 4b', respectively, containing an image of the same part of the road 2 which the bicycle 1 has already passed. In particular, based on the speed (v) of the bicycle 1 at the first point of time, the processor 22 may calculate the time period (Δt) using the equation Δt = s / v and thereafter the processor 22 may instruct the camera module 10a to capture the second image frame 4b. - At S35, the
processor 22 compares the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b which contain an image of the same part of the road 2. At S36, the processor 22 determines if the object 30 has dropped down on the road 2 based on the comparison result. In particular, the processor 22 determines that the object 30 has dropped down on the road 2 when at least one difference between the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b is detected. Since the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain the image of the same part of the road 2 with a relatively short time difference, any difference between the two image sections 4a", 4b' can be easily detected, which may result in an easy detection of the object 30 if it has fallen down on the road 2, without using a high-profile image recognition technology. - If the
second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain an image of at least part of the bicycle 1, e.g. part of the rear wheel, the processor 22 is configured to remove the image of the at least part of the bicycle 1 from the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b. - Alternatively, the
second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b may contain an image of at least part of the bicycle 1; in that case, they may be compared without considering the image of the at least part of the bicycle 1. That is, since the processor 22 may have prior knowledge about the image of the at least part of the bicycle 1 contained in each image frame, the image of the at least part of the bicycle 1 may be simply disregarded when comparing the two image sections to enhance the quality of the comparison. A self-learning algorithm may be used for the processor 22 to obtain the knowledge about the image of the at least part of the bicycle 1 contained in each image frame. - If it is determined that the
object 30 has dropped down on the road 2 at S37, the processor 22 instructs the user interface 28 to give at least one of an auditory or visual warning via the display 28b and/or the speaker 28c at S38. If it is determined that the object has not dropped down on the road, on the other hand, the process returns to the step of S33. -
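The comparison and determination steps S35-S36 can be sketched with a simple pixel-difference test. This is illustrative Python, not from the patent: the thresholds and the grey-value row-list image representation are assumptions, since the text only requires that at least one difference between the two sections be detected.

```python
def sections_differ(sec_a, sec_b, pixel_tol=10, min_changed_pixels=20):
    """Compare two image sections (rows of grey values) that show the same
    part of the road; return True when enough pixels differ to suggest a
    dropped object. pixel_tol absorbs sensor noise between the two captures
    taken a short time apart; min_changed_pixels ignores isolated outliers."""
    changed = sum(
        1
        for row_a, row_b in zip(sec_a, sec_b)
        for pa, pb in zip(row_a, row_b)
        if abs(pa - pb) > pixel_tol
    )
    return changed >= min_changed_pixels

flat_road = [[100] * 8 for _ in range(8)]
with_object = [row[:] for row in flat_road]
for r in range(2, 7):            # a 5x5 block of much darker pixels
    for c in range(2, 7):
        with_object[r][c] = 20
print(sections_differ(flat_road, flat_road))    # False
print(sections_differ(flat_road, with_object))  # True
```

Because the two sections show the same road surface captured only a short time apart, this crude differencing suffices and no heavyweight image-recognition model is needed, which mirrors the point made above.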
Fig. 4A is a flow chart depicting a process to be performed by the processor 22 in accordance with another exemplary embodiment of the invention. - The process of
Fig. 4A is analogous to that of Fig. 3A other than the steps of S44 and S45, which relate to selecting a plurality of image frames and comparing image sections of different image frames. Thus, the other steps of S41-S43 and S46-S48 will not be explained below in detail and reference can be made to the corresponding steps of Fig. 3A. -
Fig. 4B is a schematic diagram for explaining an example of how to select the plurality of image frames and how to compare image sections of different image frames by the processor 22. In Figs. 4A and 4B, four image frames, i.e. first to fourth image frames 4a-4d, are selected and each image frame is virtually partitioned into four image sections, i.e. first to fourth image sections. - Referring to
Fig. 4B, the first to fourth image frames 4a-4d have been captured by the camera module 10a at four different points of time, i.e. at t4, t3, t2, and t1, respectively, among which t1 is the earliest point of time and t4 is the latest point of time. A time period between two subsequent points of time may be a constant value (Δt). The first to fourth image frames 4a-4d are selected such that the second to fourth image sections 4a"-4a"" of the first image frame 4a and the first image sections 4b'-4d' of the second to fourth image frames 4b-4d contain an image of the same part of the road 2, respectively. In particular, the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b contain an image of the same part of the road 2 with a time difference Δt. The third image section 4a'" of the first image frame 4a and the first image section 4c' of the third image frame 4c contain an image of the same part of the road 2 with a time difference 2 x Δt. The fourth image section 4a"" of the first image frame 4a and the first image section 4d' of the fourth image frame 4d contain an image of the same part of the road 2 with a time difference 3 x Δt. - The
processor 22 may select the first to fourth image frames 4a-4d among the plurality of image frames stored in the memory module 26 using the speed of the bicycle 1 and the time period (Δt) between the two subsequent points of time. In particular, the processor 22 may calculate the time period (Δt) between the two subsequent points of time using the equation Δt = s / 3v, wherein v is the speed of the bicycle 1. The speed (v) of the bicycle 1 may be an average speed during the time period (3 x Δt) between the first and fourth points of time. Alternatively, the speed (v) may be an instantaneous speed measured at any point of time during the same time period. - As an alternative embodiment for selecting the first to fourth image frames 4a-4d, the
processor 22 may control a capturing speed of the camera 10 capturing image frames based on the obtained speed of the bicycle 1 such that the first to fourth image frames 4a-4d are four subsequent image frames. In particular, based on the speed (v) of the bicycle 1 at the first point of time, the processor 22 may calculate the time period (Δt) between two subsequent points of time using the equation Δt = s / 3v and thereafter the processor 22 may instruct the camera module 10a to capture the second to fourth image frames 4b-4d with a time difference of Δt between two subsequent points of time. - At S45, the
processor 22 compares the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b, the third image section 4a'" of the first image frame 4a and the first image section 4c' of the third image frame 4c, and the fourth image section 4a"" of the first image frame 4a and the first image section 4d' of the fourth image frame 4d. At S46, the processor 22 determines if the object 30 has dropped down on the road 2 based on the comparison result. In particular, the processor 22 determines that the object 30 has dropped down on the road 2 if there is at least one difference within any compared pair of image sections, i.e. the second image section 4a" of the first image frame 4a and the first image section 4b' of the second image frame 4b, the third image section 4a'" of the first image frame 4a and the first image section 4c' of the third image frame 4c, and the fourth image section 4a"" of the first image frame 4a and the first image section 4d' of the fourth image frame 4d. Otherwise, it is determined by the processor 22 that the object 30 has not dropped down on the road 2. In this way, the accuracy of the monitoring apparatus can be enhanced by expanding both the monitored distance of the road and the time span. It is to be noted that the number of the image frames and image sections in each image frame may be more than four. -
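The multi-pair determination at S45/S46 can be sketched as follows. This is illustrative Python, not from the patent; the function names and the injected two-section comparison function are assumptions.

```python
def object_dropped(first_frame_sections, later_first_sections, differ):
    """Decide as at S46: the object is judged to have dropped when at least
    one compared pair of sections differs. first_frame_sections holds
    sections 1..N of the first frame (index 0 = first section);
    later_first_sections holds the first section of each of the second to
    M-th frames; differ is any two-section comparison, e.g. a
    pixel-difference test."""
    return any(
        differ(first_frame_sections[k], later_first_sections[k - 1])
        for k in range(1, len(later_first_sections) + 1)
    )

# Toy example with strings standing in for image sections: one of the
# three compared pairs differs in the first call, none in the second.
differs = lambda a, b: a != b
print(object_dropped(["a", "b", "c", "d"], ["b", "c", "x"], differs))  # True
print(object_dropped(["a", "b", "c", "d"], ["b", "c", "d"], differs))  # False
```

Passing the comparison in as a function keeps the decision logic independent of how sections are actually compared, so the same loop serves the two-frame and the four-frame variants.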
Fig. 5 shows a schematic diagram depicting a car equipped with an apparatus in accordance with an embodiment of the invention. - Referring to
Fig. 5, a car 100 comprises a rooftop carrier 3 for carrying an object 30 on it. The car 100 is equipped with an apparatus according to an embodiment of the invention. The apparatus comprises a camera 10 and a data processing device (not shown). The camera 10 is mounted at a rear part of the car 100 such that it may capture a plurality of image frames each containing an image of part of a road which the car 100 has already passed. The data processing device may be integrated into a center console of the car 100 and is configured to perform a process according to an embodiment of the invention as described herein. - It is to be noted that an apparatus according to an embodiment of the invention may be combined with any vehicle having an outside carrier, such as a motorcycle with a rear carrier, a car with a rear carrier, and a boat with a rear carrier.
- While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
Claims (15)
- A method of monitoring, by a data processing device comprising a processor, an object loaded on a carrier of a vehicle, the method comprising:obtaining a plurality of image frames of a road on which the vehicle is travelling, captured by at least one camera mounted at a rear portion of the vehicle at different points of time;selecting a first image frame and at least one second image frame such that the first image frame and the at least one second image frame have a first image section and at least one second image section, respectively, containing an image of the same part of the road;comparing the first image section and the at least one second image section; anddetermining if the object has dropped down on the road based on the comparison result.
- The method of claim 1, wherein it is determined that the object has dropped down on the road when at least one difference between the first image section and the at least one second image section is detected.
- The method of claim 1 or 2, wherein selecting the first image frame and the at least one second image frame comprises:obtaining a speed of the vehicle; andselecting the first image frame and the at least one second image frame among the plurality of image frames based on the speed of the vehicle.
- The method of claim 1 or 2, wherein selecting the first image frame and the at least one second image frame comprises:obtaining a speed of the vehicle; andcontrolling a capturing speed of the camera capturing image frames based on the obtained speed such that the first image frame and the at least one second image frame are subsequent image frames.
- The method of any of the preceding claims, when the first image frame and the at least one second image frame contain an image of at least part of the vehicle, further comprising removing the image of the at least part of the vehicle from the first image frame and the at least one second image frame.
- The method of any of claims 1-4, wherein, when the first image frame and the at least one second image frame contain an image of at least part of the vehicle, the first image section and the at least one second image section are compared without considering the image of the at least part of the vehicle.
- The method of any of the preceding claims, wherein the plurality of image frames are captured by a single camera and each image frame has a plurality of image sections.
- The method of claim 7, wherein each of the plurality of image frames has N number of image sections from a first image section to an Nth image section, a lower-ranked image section containing an image of part of the road which the vehicle has passed earlier,wherein M number of image frames are selected among the plurality of image frames such that second to Nth image sections of the first image frame and first image sections of the second to Mth image frames contain an image of the same part of the road, respectively, wherein M is greater than two and is equal to or less than N, andwherein the second to the Nth image sections of the first image frame are compared with the first image sections of the second to Mth image frames, respectively, wherein a higher-ranked image frame means being captured at an earlier point of time.
- The method of any of claims 1-6, wherein the first image frame and the at least one second image frame are captured by separate cameras, respectively.
- An apparatus for monitoring an object loaded on a carrier of a vehicle, the apparatus comprising:at least one camera mountable at a rear portion of the vehicle so as to capture a plurality of image frames of a road on which the vehicle is travelling;a data processing device comprising a processor and configured to perform the method of any of claims 1-9; andmeans for giving a warning when it is determined that the object has dropped down on the road.
- The apparatus of claim 10, wherein the warning is at least one of an auditory warning or a visual warning.
- The apparatus of claim 11, wherein the means for giving the warning comprises a monitor for giving the visual warning, particularly wherein the means for giving the warning comprises a speaker for giving the auditory warning.
- A vehicle equipped with the apparatus of any of claims 10-12.
- The vehicle of claim 13, wherein the vehicle is a bicycle, a motorcycle, a car or a boat.
- A computer program product comprising instructions which, when executed by a computer, cause the computer to carry out the method according to any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22151599.2A EP4213113A1 (en) | 2022-01-14 | 2022-01-14 | A scheme for monitoring an object loaded on a carrier of a vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22151599.2A EP4213113A1 (en) | 2022-01-14 | 2022-01-14 | A scheme for monitoring an object loaded on a carrier of a vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4213113A1 true EP4213113A1 (en) | 2023-07-19 |
Family
ID=79730431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22151599.2A Pending EP4213113A1 (en) | 2022-01-14 | 2022-01-14 | A scheme for monitoring an object loaded on a carrier of a vehicle |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP4213113A1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200180608A1 (en) * | 2018-12-07 | 2020-06-11 | Hyundai Motor Company | Vehicle control method and system based on detection of falling of load |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
2024-01-18 | 17P | Request for examination filed | Effective date: 20240118
| RBV | Designated contracting states (corrected) | Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR