WO2019098729A1 - Vehicle monitoring method and device - Google Patents

Vehicle monitoring method and device

Info

Publication number
WO2019098729A1
Authority
WO
WIPO (PCT)
Prior art keywords
state
user
alarm signal
vehicle
images
Prior art date
Application number
PCT/KR2018/014060
Other languages
English (en)
Korean (ko)
Inventor
비벡소니
아베드라마
Original Assignee
주식회사 아르비존
Priority date
Filing date
Publication date
Priority claimed from KR1020170153145A external-priority patent/KR20190056089A/ko
Priority claimed from KR1020180042543A external-priority patent/KR102069735B1/ko
Application filed by 주식회사 아르비존
Publication of WO2019098729A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/18 Propelling the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • The present invention relates to a vehicle monitoring method and, more particularly, to a vehicle monitoring method for generating a panoramic image and monitoring the inside and outside of a vehicle.
  • Since the driver has a limited viewing angle, he must turn his head or rely on a black box to recognize the surroundings of the vehicle. Because a conventional black box has a viewing angle of only about 120 degrees, it cannot capture events that occur outside that angle.
  • The viewing angle can be extended by generating a panoramic image from two cameras placed a certain distance apart.
  • A panoramic camera can be mounted in a subway car or on a building wall and used not only for advertising but also as a surveillance camera or black box. In surveillance cameras and black boxes, panoramic images provide a wider field of view. In addition, by reconstructing the environment around the vehicle, a panoramic image can compensate for the narrow viewing angle of a single camera in vehicle recognition, vehicle tracking, and pedestrian recognition.
  • A conventional panoramic system, however, has the disadvantage that its camera and lens are fixed, so only a preset region can be photographed, and a separate black box must be installed to cover the panorama's blind spots.
  • A conventional technique for addressing these drawbacks is the vehicle black box system of Korean Patent No. 10-1374211.
  • That system generates a panoramic image from image data classified according to angle data obtained by rotating a single camera. However, because only one camera is used, it is cumbersome to acquire the angle-by-angle image data needed for the panorama by rotating the camera.
  • The present invention is directed to solving the above-mentioned problem and aims to generate a panoramic image using dual cameras.
  • Another object of the present invention is to prevent drowsy driving by identifying the user's face in a video image of the vehicle interior.
  • The present invention also makes it possible to prevent accidents by using the panoramic image to assess blind spots that the user cannot see.
  • According to one aspect, a method of monitoring a vehicle comprises: acquiring first and second images from a first camera and a second camera, respectively; stitching the first and second images to generate a 360-degree panoramic image; and transmitting the 360-degree panoramic image to a mobile device, wherein the first and second images are wide-angle images.
  • Generating the 360-degree panoramic image may include extracting first and second feature points from the first and second images, setting feature-point pairs at corresponding positions of the first and second feature points, spatially correcting the first and second images using the feature-point pairs, and merging the corrected first and second images to generate the panoramic image. A sketch of these steps follows.
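  • For illustration, the stitching steps recited above can be sketched with OpenCV. The patent names no library, so the ORB detector, brute-force matcher, and homography warp below are assumptions made for illustration, not the claimed implementation.

```python
# Hypothetical sketch of the claimed stitching steps using OpenCV. ORB
# features and a homography warp are illustrative choices; the patent does
# not specify a feature detector or a warping model.
import cv2
import numpy as np

def stitch_pair(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create()
    # Extract first and second feature points from the two wide-angle images.
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Set feature-point pairs at corresponding positions (brute-force matching).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Spatially correct the first image onto the second image's plane.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img2.shape[:2]
    warped = cv2.warpPerspective(img1, H, (w * 2, h))

    # Merge the corrected images into one panoramic image.
    warped[0:h, 0:w] = img2
    return warped
```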
  • The method may further comprise: identifying the user's face in the panoramic image; extracting facial feature points, including the eyes and mouth, from the face; and analyzing the facial feature points to determine a first state of the user, where the first state is a sleep state, a drowsy state, or a normal state.
  • The method may further comprise providing an alarm signal according to the first state, wherein the alarm signal includes at least one of an interior light, a steering-wheel vibration, and a voice message.
  • The mobile device may stream the 360-degree panoramic image to a media platform in real time using the RTSP or RTMP protocol, as sketched below.
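  • A minimal sketch of such live streaming follows, piping panorama frames to an external ffmpeg process over RTMP. The stream URL, resolution, and encoder settings are placeholder assumptions; RTSP delivery would follow the same pattern with a different output format.

```python
# Hypothetical sketch of real-time RTMP streaming of the panorama by piping
# raw frames to ffmpeg. The URL and stream key are placeholders, and ffmpeg
# must be installed separately.
import subprocess
import numpy as np

WIDTH, HEIGHT, FPS = 1920, 960, 30
RTMP_URL = "rtmp://live.example.com/app/STREAM_KEY"  # placeholder

ffmpeg = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
    "-i", "-",                      # read raw frames from stdin
    "-c:v", "libx264", "-preset", "veryfast",
    "-f", "flv", RTMP_URL,          # FLV container over RTMP
], stdin=subprocess.PIPE)

def push_frame(panorama: np.ndarray) -> None:
    """Write one BGR panorama frame to the live stream."""
    ffmpeg.stdin.write(panorama.astype(np.uint8).tobytes())
```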
  • The present invention also provides a system for monitoring a vehicle, comprising: a camera unit including first and second cameras for acquiring first and second images; a panoramic image generation unit for stitching the first and second images to generate a 360-degree panoramic image; and a communication unit for transmitting the 360-degree panoramic image to a mobile device, wherein the first and second images are wide-angle images.
  • The panoramic image generation unit may extract first and second feature points from the first and second images, set feature-point pairs at corresponding positions of the first and second feature points, spatially correct the first and second images using the feature-point pairs, and merge the corrected first and second images to generate the panoramic image.
  • The system may further comprise: a user state recognition unit that identifies the user's face in the panoramic image, extracts facial feature points including the eyes and mouth from the face, and analyzes the facial feature points to determine a first state of the user; an alarm signal output unit that provides a strong alarm signal when the first state is the sleep state and a weak alarm signal when the first state is the drowsy state; and a voice control unit that receives a response to the alarm signal from the user and determines a second state of the user.
  • The first state of the user is one of a sleep state, a drowsy state, and a normal state; the alarm signal includes at least one of an interior light, a steering-wheel vibration, and a voice message; and the second state is one of a recognition state and an awake state.
  • An alarm signal controller stops the alarm signal if the second state of the user is determined to be the awake state; if the second state is determined to be the recognition state, it stops the alarm signal and provides it again after a predetermined time; and if no response is received from the user, it determines that the user is incapacitated and maintains the alarm signal.
  • The vehicle control unit may determine whether another vehicle is present in a blind spot of the vehicle when the first state of the user is the sleep state or the drowsy state, determine whether the vehicle is attempting to change lanes using a sensor attached to the vehicle, and control the steering wheel when another vehicle is present in the blind spot while a lane change is attempted.
  • According to the present invention, a panoramic image can be generated using dual cameras.
  • The present invention can simultaneously photograph the outside and the inside of the vehicle using rotatable dual cameras.
  • The present invention can also prevent drowsy driving by identifying the user's face in a video image of the vehicle interior.
  • FIG. 1 is a view illustrating a monitoring system for a vehicle according to an embodiment of the present invention.
  • FIG. 2 is a perspective view of a vehicle monitoring system according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a configuration of a monitoring system for a vehicle according to an embodiment of the present invention.
  • FIG. 4 is a view for explaining a vehicle monitoring method according to an embodiment of the present invention.
  • FIG. 5 is a view for explaining a drowsy driving prevention method according to an embodiment of the present invention.
  • FIG. 6 is a diagram for explaining an alarm signal management method according to an embodiment of the present invention.
  • FIG. 7 is a view for explaining a lane change control method according to an embodiment of the present invention.
  • FIG. 8 is a diagram for explaining a method of generating a panoramic image according to an embodiment of the present invention.
  • FIG. 9 is a diagram for explaining a method of extracting facial feature points of a user according to an embodiment of the present invention.
  • FIG. 10 is a diagram for explaining a configuration for analyzing eye feature points of a user according to an embodiment of the present invention.
  • FIG. 11 is a view for explaining a blind spot of a vehicle according to an embodiment of the present invention.
  • Each of the components described below may be implemented as a hardware processor, the components may be integrated into one hardware processor, or combinations of the components may be implemented as a plurality of hardware processors.
  • FIG. 1 is a view showing a monitoring system for a vehicle according to an embodiment of the present invention
  • FIG. 2 is a perspective view of a monitoring system for a vehicle according to an embodiment of the present invention. More specifically, FIG. 2(a) is a plan perspective view, FIG. 2(b) is a front perspective view, and FIG. 2(c) is a side perspective view of the vehicle monitoring system.
  • The vehicle monitoring system includes two cameras 110 and 120 that are rotatable by 360 degrees.
  • The present invention can obtain a wide-angle image of about 190 degrees by using, as the camera lens, a fisheye lens with an angle of view of more than 180 degrees.
  • A fisheye lens is an ultra-wide-angle lens that leaves barrel distortion largely uncorrected; it forms an image with a field of view of 180 degrees or more within a limited image circle, with negative (barrel) distortion.
  • Fisheye lenses can have a deep depth of field and strongly exaggerated perspective. They can therefore be used in surveillance cameras to monitor a large area, obtaining the maximum monitoring coverage with a minimum of equipment. An illustrative rectification step is sketched below.
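  • A ~190-degree frame captured through such a lens can be rectified with OpenCV's fisheye model before stitching. The intrinsic matrix K and distortion coefficients D below are placeholder values that would come from a prior calibration; the patent does not describe a calibration procedure.

```python
# Illustrative sketch of rectifying a fisheye frame with OpenCV's fisheye
# model. K and D are placeholder calibration values, not data from the patent.
import cv2
import numpy as np

K = np.array([[300.0, 0.0, 960.0],
              [0.0, 300.0, 480.0],
              [0.0, 0.0, 1.0]])          # placeholder intrinsics
D = np.array([0.05, -0.01, 0.002, 0.0])  # placeholder distortion (k1..k4)

def undistort_fisheye(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Build the per-pixel remapping tables for the fisheye model.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```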
  • The first camera 110 and the second camera 120 are rotatable by 360 degrees so as to selectively photograph the outside and the inside of the vehicle. Pointing a camera outside makes it easier to assess the environment around the vehicle, while pointing it inside allows incidents occurring in the cabin to be recorded and drowsy driving to be prevented by recognizing the user's face.
  • The vehicle monitoring system may further include a control button 1100, a micro SD (memory card) slot 1200, and an LTE SIM slot 1300.
  • Referring to FIG. 3, the vehicle monitoring system 10 includes a camera unit 100, a sensor unit 200, a panoramic image generation unit 300, a user state recognition unit 400, an alarm signal output unit 500, an alarm signal control unit 600, a vehicle control unit 700, a voice control unit 800, a storage unit 900, and a communication unit 1000.
  • The camera unit 100 may include a first camera 110 and a second camera 120, as shown in FIGS. 1 and 2.
  • The first camera 110 and the second camera 120 may be coupled to a first image controller 130 and a second image controller 140, respectively.
  • The first image controller 130 may acquire the first image from the first camera 110, and the second image controller 140 may acquire the second image from the second camera 120.
  • The first image controller 130 and the second image controller 140 may extract a video image using a complementary metal-oxide-semiconductor (CMOS) sensor and an image signal processor (ISP).
  • A CMOS sensor converts light into electric charge to acquire an image, and each pixel has an analog-to-digital converter to process its data.
  • The first image controller 130 and the second image controller 140 can use backside illumination (BSI), a CMOS arrangement in which light enters from the rear surface of the sensor, increasing sensitivity and the achievable frames per second (fps).
  • The first image controller 130 and the second image controller 140 can acquire a high-quality image with a high dynamic range (DR), a high signal-to-noise ratio (SNR), and a pixel size of 2.2 µm.
  • The first image controller 130 and the second image controller 140 can use an ISP chip to perform image processing including optical-system correction, defect correction, and per-pixel image processing.
  • The sensor unit 200 may include a GPS sensor 210, a gyro sensor 220, and a speed sensor 230.
  • The vehicle monitoring system 10 uses the sensor unit 200 to predict the vehicle's direction of travel and to determine its current speed and position. For example, if another vehicle is located in the user's blind spot, the vehicle monitoring system 10 can use the gyro sensor 220 to determine whether the user intends to change lanes; in that case it blocks the lane change to avoid an accident with the vehicle in the blind spot.
  • The panoramic image generation unit 300 may generate a panoramic image by stitching the first image obtained from the first image controller 130 and the second image obtained from the second image controller 140.
  • Stitching is a technique of combining two images to generate one image.
  • The panoramic image generation unit 300 extracts first and second feature points from the first and second images, sets feature-point pairs at corresponding positions of the first and second feature points, spatially corrects the first and second images using the feature-point pairs, and merges the corrected images to generate the panoramic image. More specifically, the panoramic image generation unit 300 may detect one or more feature points in each of the first and second images. Feature-point pairs are set between feature points located at corresponding positions in the two images, and one or more pairs may be set. That is, each feature-point pair contains one feature point from the first image and one from the second image.
  • The panoramic image generation unit 300 may spatially correct the first and second images by performing a bundle adjustment on the feature-point pairs.
  • The bundle adjustment can optimize the position of each feature-point pair by referring to the difference between a feature point's position in the previous frame and its position in the current frame.
  • The panoramic image generation unit 300 may correct the first and second images by optimizing the positions of the feature-point pairs in this way.
  • The panoramic image generation unit 300 may then merge the corrected first and second images to generate the panoramic image. Referring to FIG. 8, the panoramic image generation unit 300 extracts feature points from the first and second images, matches each feature point of the first image with the feature point of the second image at the corresponding position, and generates the panoramic image using the matched pairs. A simplified stand-in for the frame-to-frame optimization is sketched below.
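  • The frame-to-frame optimization described above can be approximated very simply; the sketch below merely damps each matched feature point toward its position in the previous frame. This is a stand-in under stated assumptions, not the full bundle adjustment, whose details the patent does not give.

```python
# Simplified stand-in for the bundle adjustment described above: each matched
# feature point is damped toward its previous-frame position, stabilizing the
# pair positions used for spatial correction. A real bundle adjustment would
# jointly optimize camera parameters and point positions.
import numpy as np

def refine_pairs(prev_pts, curr_pts: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Blend current feature positions (N x 2) with the previous frame's."""
    if prev_pts is None:
        return curr_pts
    # Small displacements are treated as jitter and smoothed away; large
    # displacements (real motion) keep most of the current measurement.
    disp = np.linalg.norm(curr_pts - prev_pts, axis=1, keepdims=True)
    weight = np.where(disp < 2.0, alpha, 0.95)  # pixel threshold is illustrative
    return weight * curr_pts + (1.0 - weight) * prev_pts
```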
  • The user state recognition unit 400 can identify the user's face in the panoramic image generated from video of the vehicle interior. Referring to FIG. 9, the user state recognition unit 400 may extract facial feature points, including the eyes and mouth, from the user's face to determine the first state of the user.
  • The first state may be a sleep state, a drowsy state, or a normal state.
  • One method for determining the first state of the user is to store the facial feature points of the user's normal face and compare them with the facial feature points extracted from the user's current face. The facial feature points can be analyzed continuously to assess the user's condition from the pixel information of the pupils, eye blinking, the angle of the face, and head movement.
  • Referring to FIG. 10, the facial feature points located around the user's eyes can be extracted and the pupil region identified to determine the first state of the user. This method is only one example for explaining the present invention, and other methods may be used.
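  • One common concrete form of this eye analysis is the eye aspect ratio (EAR) computed over eye landmarks, sketched below. The patent does not name this metric, and the thresholds are illustrative assumptions.

```python
# One possible realization of the eye analysis above: the eye aspect ratio
# (EAR) over six eye landmarks. The EAR metric and all thresholds are
# assumptions; the patent only says eye feature points and the pupil region
# are analyzed.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 (x, y) landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def first_state(ear_history: list) -> str:
    """Map recent EAR samples to the three first states described above."""
    closed = sum(e < 0.2 for e in ear_history)   # illustrative threshold
    ratio = closed / max(len(ear_history), 1)
    if ratio > 0.8:
        return "sleep"     # eyes closed almost continuously
    if ratio > 0.4:
        return "drowsy"    # frequent or prolonged eye closure
    return "normal"
```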
  • The alarm signal output unit 500 may include an audio output unit 510, an interior lighting unit 520, a steering vibration unit 530, and a display unit 540.
  • The alarm signal output unit 500 may provide the user with at least one of a voice output, an interior light, a steering-wheel vibration, and a display notification according to the first state determined by the user state recognition unit 400.
  • The alarm signal output unit 500 can provide a strong alarm signal when the first state of the user is the sleep state and a weak alarm signal when the first state is the drowsy state. The difference between the strong and weak alarm signals lies in the intensity of the output provided by the alarm signal output unit 500.
  • The alarm signal output unit 500 may also provide a voice message to the user, and the voice message may take the form of a query to which the user can respond.
  • The voice control unit 800 can receive the user's response to the voice message provided through the alarm signal output unit 500. The voice control unit 800 can then determine the second state of the user by analyzing the response, for example from the content of the response and the quality of the user's voice.
  • The second state may be either a recognition state or an awake state.
  • The awake state means that the user answered the voice message correctly and in a clear voice; the recognition state means that a response was received but does not properly correspond to the voice message. For example, when the alarm signal output unit 500 asks the user 'What is 1 + 3?', the voice control unit 800 determines the second state to be the awake state if the user promptly and clearly answers '4'. If a response other than '4' is received, or a response of '4' arrives only after a predetermined delay, the second state of the user is determined to be the recognition state.
  • The voice control unit 800 may also receive a response such as 'I am sleepy.' In that case, the voice control unit 800 may analyze the sentence and determine that the user's second state is the recognition state. A sketch of this decision follows.
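  • A minimal sketch of the second-state decision is given below. The expected answer and state labels mirror the '1 + 3' example above, while the 3-second delay threshold and the function signature itself are hypothetical conveniences.

```python
# Hypothetical sketch of the second-state decision from the user's spoken
# response to the query 'What is 1 + 3?'. The delay threshold is an
# illustrative value; the patent only says a predetermined time is used.
from typing import Optional

AWAKE, RECOGNITION, INCAPACITATED = "awake", "recognition", "incapacitated"

def second_state(response: Optional[str], asked_at: float,
                 answered_at: float, expected: str = "4",
                 max_delay: float = 3.0) -> str:
    if response is None:
        return INCAPACITATED              # no response: keep the alarm on
    delayed = (answered_at - asked_at) > max_delay
    correct = response.strip() == expected
    if correct and not delayed:
        return AWAKE                      # prompt, correct, clear answer
    return RECOGNITION                    # wrong answer, or correct but slow
```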
  • The alarm signal control unit 600 may stop the alarm signal provided by the alarm signal output unit 500 when it determines that the user is in the awake state.
  • The alarm signal control unit 600 may stop the alarm signal and provide it again after a predetermined time if it determines that the user is in the recognition state. Each time the alarm signal is provided, a voice message may be given to the user so that the second state can be analyzed again.
  • If no response is received from the user, the alarm signal control unit 600 determines that the user is incapacitated and keeps the alarm signal output unit 500 providing the alarm signal continuously.
  • In this way, the alarm signal output unit 500 and the alarm signal control unit 600 can provide an alarm signal suited to the user's state, helping the user avoid or recover from drowsy driving.
  • The vehicle control unit 700 can determine whether another vehicle is present in a blind spot of the vehicle when the first state determined by the user state recognition unit 400 is the sleep state or the drowsy state.
  • Referring to FIG. 11, the blind spot refers to all sectors other than the roughly 110 to 150 degrees covered to the front and rear of the vehicle.
  • The vehicle control unit 700 can determine from the sensor unit 200 whether the user is attempting to change lanes when another vehicle is present in the blind spot.
  • In that case, the vehicle control unit 700 controls the steering wheel so that the vehicle cannot change lanes. This prevents accidents that could result from inappropriate actions, such as changing lanes while another vehicle is in the blind spot.
  • The storage unit 900 may store the first and second images obtained through the first image controller 130 and the second image controller 140. In addition, the storage unit 900 stores the panoramic image generated by merging the first and second images, so that the user can retrieve past driving footage at any time.
  • The communication unit 1000 transmits the panoramic image to the mobile device over wireless communication so that the panoramic image can also be viewed on the mobile device.
  • Mobile devices can include not only smartphones and tablet PCs but any other portable electronic device.
  • The mobile device can play the received panoramic image and can stream it to a media platform in real time using the RTSP or RTMP protocol.
  • The media platform can be any platform that supports live streaming, such as YouTube or Facebook Live.
  • Referring to FIG. 4, the vehicle monitoring system (hereinafter, the system) may acquire a first image from the first camera (S10) and a second image from the second camera (S11).
  • Upon acquiring the first and second images, the system extracts a first frame from the first image (S20) and a second frame from the second image (S21).
  • The system extracts first feature points in the first frame (S30) and second feature points in the second frame (S31).
  • The system can correct the first image using the extracted first feature points (S40) and correct the second image using the second feature points (S41). More specifically, the system can set feature-point pairs at corresponding positions of the first and second feature points.
  • Each feature-point pair contains one feature point from the first image and one from the second image.
  • The system can spatially correct the first and second images using the feature-point pairs.
  • After correcting the first and second images, the system combines the corrected images to generate a panoramic image (S50).
  • The system can then transmit the generated panoramic image to the mobile device (S60).
  • FIG. 5 is a view for explaining a drowsy driving prevention method according to an embodiment of the present invention.
  • The system can detect the user's face in the panoramic image of the vehicle interior (S100).
  • The system may extract one or more facial feature points, including information on the eyes and mouth, from the user's face (S110).
  • The extracted facial feature points are used to determine the first state of the user, which may be a sleep state, a drowsy state, or a normal state.
  • The system can store the facial feature points of the user's normal state and compare them with the facial feature points of the user's current face to determine the first state (S120).
  • If the first state is determined to be the sleep state, the system can provide a strong alarm signal to the user (S135).
  • The alarm signal may include one or more of a voice output, an interior light, a steering-wheel vibration, and a display alert.
  • If the first state is determined to be the drowsy state, the system may provide a weak alarm signal to the user (S145).
  • The strong and weak alarm signals are distinguished by the intensity of the alarm output.
  • If the first state is the normal state, the system determines the user's first state again without providing an alarm signal, thereby monitoring the first state continuously. This branch is sketched below.
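  • Steps S120 through S145 reduce to a simple mapping from the determined first state to an alarm action. The channel names below are placeholders for the output units of the alarm signal output unit 500 described earlier.

```python
# Sketch of the S120-S145 branch: map the first state to alarm channels.
# Channel names are placeholders for the voice, interior-light, steering-
# vibration, and display outputs described above.
def alarm_for_first_state(state: str) -> list:
    if state == "sleep":
        # Strong alarm signal: every channel at high intensity.
        return ["voice", "interior_light", "steering_vibration", "display"]
    if state == "drowsy":
        # Weak alarm signal: the same kinds of channels at lower intensity.
        return ["voice", "display"]
    return []  # normal state: no alarm, keep monitoring the first state
```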
  • Referring to FIG. 6, the vehicle monitoring system (hereinafter, the system) may determine the first state of the user and provide an alarm signal when the user is in the sleep state or the drowsy state (S200).
  • After providing the alarm signal, the system may provide a voice message in the form of a query (S210).
  • The user can answer the voice message received from the system by voice (S220).
  • The system may recognize the response message received from the user (S230). If the user responds but the system does not recognize the response, or if the user fails to respond, the voice message can be provided to the user again.
  • The system may analyze the received response message (S240) to determine the second state of the user.
  • If the second state is the recognition state, the system stops the alarm signal and then provides an alarm signal at predetermined intervals (S255) to prevent the user from slipping back into a drowsy state. For example, the system can provide an alarm signal once every five minutes until the user in the recognition state reaches the awake state.
  • If the second state is the awake state, the system may stop the alarm signal (S260). The whole loop is sketched below.
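  • The S200-S260 flow amounts to the small control loop sketched below. The callables sound_alarm, query_user, and stop_alarm are placeholders for the alarm and voice units described above, and the five-minute interval follows the example in the text.

```python
# Sketch of the S200-S260 alarm-management loop under the assumptions named
# above. The callables stand in for the alarm signal output unit 500, the
# voice control unit 800, and the alarm signal control unit 600.
import time

def manage_alarm(sound_alarm, query_user, stop_alarm,
                 interval_s: float = 300.0) -> None:
    while True:
        sound_alarm()                  # S200: alarm for a sleep/drowsy state
        state = query_user()           # S210-S240: voice query and analysis
        if state == "awake":
            stop_alarm()               # S260: user is awake, stop the alarm
            return
        if state == "recognition":
            stop_alarm()               # S255: pause, then re-alarm later,
            time.sleep(interval_s)     # e.g. once every five minutes
        # incapacitated: loop again immediately, keeping the alarm active
```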
  • Referring to FIG. 7, the vehicle monitoring system (hereinafter, the system) may determine the first state of the user (S300).
  • The system may determine whether the first state of the user is the drowsy state or the sleep state (S310). If so, the system checks whether another vehicle is present in a blind spot of the vehicle (S320). If the first state is the normal state, the system continues to monitor the first state until it is determined to be the drowsy state or the sleep state.
  • When another vehicle is present in the blind spot, the system determines whether the user is attempting to change lanes. If the user attempts a lane change (S340), the system controls the steering wheel (S350) so that the vehicle cannot change lanes. In this way, the system can hold the steering wheel when a user in the drowsy or sleep state attempts a lane change, or when the vehicle is about to drift out of its lane, so that the vehicle keeps its current lane. Accidents that could otherwise befall a user in the drowsy or sleep state can therefore be prevented in advance. This branch is sketched below.
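  • The S300-S350 branch can be summarized as the guard below. The boolean inputs stand in for the blind-spot check and the lane-change intent from the sensor unit 200, and hold_steering is a placeholder for the steering control of the vehicle control unit 700.

```python
# Sketch of the S300-S350 lane-change guard. Inputs are assumed to come from
# the blind-spot check and the sensor unit 200; hold_steering stands in for
# the steering control of the vehicle control unit 700.
def lane_change_guard(first_state: str, vehicle_in_blind_spot: bool,
                      lane_change_attempted: bool, hold_steering) -> None:
    if first_state not in ("sleep", "drowsy"):
        return                    # S310: user is alert; keep monitoring
    if vehicle_in_blind_spot and lane_change_attempted:
        hold_steering()           # S350: keep the current lane
```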

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a method for monitoring a vehicle by a server, the method comprising: a step of obtaining first and second images from first and second cameras, respectively; a step of stitching the first and second images to generate a 360-degree panoramic image; and a step of transmitting the 360-degree panoramic image to a mobile device, the first and second images being wide-angle images.
PCT/KR2018/014060 2017-11-16 2018-11-16 Vehicle monitoring method and device WO2019098729A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2017-0153145 2017-11-16
KR1020170153145A KR20190056089A (ko) 2017-11-16 2017-11-16 Dual-lens dash camera and live streaming method using the same
KR10-2018-0042543 2018-04-12
KR1020180042543A KR102069735B1 (ko) 2018-04-12 2018-04-12 Vehicle monitoring method and apparatus

Publications (1)

Publication Number Publication Date
WO2019098729A1 (fr) 2019-05-23

Family

Family ID: 66538734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/014060 WO2019098729A1 (fr) 2017-11-16 2018-11-16 Vehicle monitoring method and device

Country Status (1)

Country Link
WO (1) WO2019098729A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050040400A (ko) * 2003-10-28 2005-05-03 엘지전자 주식회사 이동 통신 단말기의 졸음 방지 장치 및 방법
JP2007241729A (ja) * 2006-03-09 2007-09-20 Toyota Central Res & Dev Lab Inc 運転支援装置及び運転支援システム
KR20090109437A (ko) * 2008-04-15 2009-10-20 주식회사 만도 차량주행중 영상정합 방법 및 그 시스템
KR20120005751U (ko) * 2011-01-31 2012-08-16 주식회사 하이드 차량 내부감시 장치
KR101374211B1 (ko) * 2013-04-18 2014-03-13 이종훈 차량용 블랙박스 시스템

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112406877A (zh) * 2019-08-20 2021-02-26 比亚迪股份有限公司 车辆的控制方法及装置
CN112406877B (zh) * 2019-08-20 2022-09-09 比亚迪股份有限公司 车辆的控制方法及装置
CN115134491A (zh) * 2022-05-27 2022-09-30 深圳市有方科技股份有限公司 图像处理方法和装置
CN115134491B (zh) * 2022-05-27 2023-11-24 深圳市有方科技股份有限公司 图像处理方法和装置


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18877875

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18877875

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.01.2021)
