WO2023242888A1 - Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method

Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method

Info

Publication number
WO2023242888A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
image
target image
area
vehicle interior
Prior art date
Application number
PCT/JP2022/023561
Other languages
English (en)
Japanese (ja)
Inventor
Taro Kumagai (熊谷 太郎)
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2022/023561
Publication of WO2023242888A1


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems

Definitions

  • Patent Document 1 discloses an in-vehicle device that includes a driver camera and transmits, to a driving evaluation device, data used for evaluating the driving of the vehicle's driver, the transmitted data including images of the driver captured while driving.
  • A vehicle interior monitoring device according to the present disclosure outputs monitoring information based on a captured image, taken by an imaging device, of a range of the vehicle interior where occupants of at least two seats can be present. The device includes: an image acquisition unit that acquires the captured image; a target image generation unit that generates a target image by invalidating the areas other than the target area among the areas on the captured image acquired by the image acquisition unit; and an image processing unit that outputs, to an output device, monitoring information based on the target image generated by the target image generation unit.
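To make this processing chain concrete, the following is a minimal Python sketch of the three units described above. All names (`generate_target_image`, `output_monitoring_info`, the rectangle format) are hypothetical illustrations, not taken from the publication.

```python
import numpy as np

def generate_target_image(captured: np.ndarray, target_areas: list) -> np.ndarray:
    """Invalidate every pixel outside the target areas.

    captured     -- H x W x 3 image of the cabin from the imaging device
    target_areas -- list of (x, y, w, h) rectangles associated with seats
    """
    target = np.zeros_like(captured)  # non-target pixels end up overwritten with a single value (black)
    for x, y, w, h in target_areas:
        target[y:y + h, x:x + w] = captured[y:y + h, x:x + w]
    return target

def output_monitoring_info(captured: np.ndarray, target_areas: list) -> dict:
    """Stand-in for the image processing section: build monitoring information
    from the target image only, never from the full captured image."""
    target = generate_target_image(captured, target_areas)
    return {"target_image": target}

# Example: keep only the driver's-seat and passenger-seat regions of a frame
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
info = output_monitoring_info(frame, [(40, 120, 200, 300), (400, 120, 200, 300)])
```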
  • FIG. 1 is a diagram showing a configuration example of a vehicle interior monitoring device according to a first embodiment
  • FIG. 2 is a diagram illustrating an example of the interior of the vehicle captured by an imaging device installed near the overhead console in the vehicle interior in the first embodiment.
  • FIG. 3 is a diagram illustrating an example of a captured image obtained by capturing an image of the interior of a vehicle as shown in FIG. 2 from an installation position of the imaging device in the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a target image generated by the target image generation unit by invalidating the non-target area, that is, by overwriting the pixel values of the non-target area with a different value, in the captured image shown in FIG. 3.
  • FIG. 10A is a diagram for explaining an example of the flow from when an occupant confirms the target image candidates displayed on the display device to when the occupant specifies a target area, and FIGS. 10B and 10C are diagrams for explaining an example of the flow from confirming the displayed target image candidates to specifying a non-target area.
  • Also shown is a flowchart for explaining the operation of the vehicle interior monitoring device configured to present a target image candidate to a vehicle occupant, receive information regarding the target area specified by the occupant based on the candidate, and generate the target image based on the received information regarding the target area.
  • FIGS. 12A and 12B are diagrams illustrating an example of the hardware configuration of the vehicle interior monitoring device according to the first embodiment.
  • the imaging device 2 is, for example, a near-infrared camera or a visible light camera, and images an occupant present in the vehicle.
  • the imaging device 2 may be shared with a so-called "Driver Monitoring System (DMS)", for example.
  • DMS Driver Monitoring System
  • the imaging device 2 is installed so as to be able to image an area where occupants in at least two seats can be present in the cabin of the vehicle.
  • the range in which the occupant can exist in the vehicle interior is, for example, a range corresponding to the space near the backrest of the seat and the front of the headrest.
  • the imaging device 2 is installed, for example, in the center of the vehicle in the vehicle width direction.
  • the "center” in the vehicle width direction is not limited to the “center” strictly, but also includes “substantially the center.”
  • the imaging device 2 is installed in the center console of the vehicle or near the dashboard center where a car navigation system or the like is provided. Note that this is just an example, and the imaging device 2 may be installed, for example, on a meter panel in front of the driver, an instrument panel, an overhead console, a mirror such as a room mirror, the ceiling, or a seat.
  • the imaging device 2 may be a wide-angle camera, installed so as to be able to image the area of the vehicle interior where at least two seats exist, that is, the area where occupants of at least two seats may be present.
  • the server 32 is an external server provided outside the vehicle interior monitoring device 1; it is a computing device that acquires, from outside the vehicle, the monitoring information output by the vehicle interior monitoring device 1 and either performs calculations based on the acquired monitoring information or stores it.
  • the server 32 is a personal computer provided outside the vehicle.
  • the calculations performed by the server 32 based on the acquired monitoring information include, for example, image processing of the captured images included in the monitoring information, processing of the monitoring information itself, and analyses based on the monitoring information, such as the number of times an occupant has driven inattentively from the past to the present.
  • the vehicle interior monitoring device 1 can also output the monitoring information in association with information that allows identification of the occupant.
  • the server 32 can transmit the stored monitoring information or calculation results based on the monitoring information to a vehicle or an in-vehicle device, or a personal computer or a mobile device owned by an individual, for example, by communication.
  • a vehicle or in-vehicle device, or a personal computer or mobile device owned by an individual can receive monitoring information or calculation results based on the monitoring information from the server 32, and can perform various processes.
  • the storage device 31 and the server 32 may be a common device.
  • the display device 33 displays monitoring information output from the vehicle, specifically, the vehicle interior monitoring device 1.
  • the display device 33 is assumed to be, for example, a touch panel display included in a navigation device (not shown) installed in the vehicle. Note that this is just an example; the display device 33 may be any of various display devices installed in the vehicle other than the navigation device, or a display device of a personal computer or mobile device owned by an individual.
  • the vehicle interior monitoring device 1 includes an image acquisition section 11, a target image generation section 12, and an image processing section 13.
  • the image processing section 13 includes a first processing section 131 and a second processing section 132.
  • the image acquisition unit 11 acquires a captured image from the imaging device 2.
  • the image acquisition unit 11 outputs the acquired captured image to the target image generation unit 12.
  • the target area is an area on the captured image that is associated with a seat in the vehicle interior.
  • the area associated with the seat is, for example, the smallest rectangular area that includes the entire seat on the captured image, or the smallest rectangular area that includes the headrest of the seat on the captured image.
  • the installation position and angle of view of the imaging device 2 are known.
  • the range of seat positions within the vehicle interior is known.
  • it is assumed that an administrator or the like sets in advance which area on the captured image corresponds to each seat in the vehicle interior, and that information regarding the area corresponding to each seat is stored in a location that the vehicle interior monitoring device 1 can reference.
  • the area associated with the driver's seat is an area on the captured image where the driver is assumed to be imaged.
  • the area associated with the passenger seat is an area on the captured image where it is assumed that the passenger in the passenger seat (hereinafter referred to as "passenger seat occupant”) is imaged.
  • the area associated with the rear seat is an area where it is assumed that a passenger in the rear seat (hereinafter referred to as "rear seat passenger”) is imaged.
  • the target image generation unit 12 specifies a target area on the captured image based on preset information regarding the area corresponding to each seat in the vehicle interior.
  • for example, the target areas are the area associated with the driver's seat on the captured image, in other words the area where the driver is assumed to be imaged, and the area associated with the passenger seat, in other words the area where the passenger-seat occupant is assumed to be imaged.
  • when the target image generation unit 12 has identified the target area in the captured image, it invalidates the non-target area, that is, the area other than the target area, by overwriting its pixel values with a different value, and sets the captured image whose non-target pixel values have been overwritten as the target image.
  • the target image generation unit 12 may fill in the non-target area by overwriting its pixel values with a single value, may overwrite them with random pixel values, or may overwrite them with pixel values that follow a preset rule.
  • the target image generation unit 12 fills in the non-target area on the captured image by, for example, overwriting the pixel values of the non-target area with a single value.
  • the target image generation unit 12 creates a gradation image in the non-target area on the captured image by, for example, overwriting the pixel values of the non-target area with random pixel values.
  • for example, the target image generation unit 12 overwrites the pixel values of the non-target area so that, on the captured image, its pixels appear in a color order according to a preset rule, such as red, white, and yellow. The target image generation unit 12 may also overwrite the pixel values of the non-target area so that its pixels show a pattern or an object according to a preset rule.
  • the preset rules are appropriately set by an administrator or the like, and are stored in a location where the vehicle interior monitoring device 1 can refer to them.
  • for example, an administrator or the like sets objects other than people as the objects shown in the non-target area. This is because, if a person were shown in the non-target area and the vehicle interior monitoring device 1 performed image recognition processing on the image to generate monitoring information, that person could be mistakenly recognized as an occupant.
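The three overwrite strategies just described (a single value, random values, or values following a preset rule) might look like the following sketch. The function name and the stripe rule are hypothetical, and, per the note above, a patterned fill should avoid depicting anything resembling a person.

```python
import numpy as np

def invalidate_region(captured: np.ndarray, region: tuple, mode: str = "fill") -> np.ndarray:
    """Overwrite the pixel values of one non-target rectangle.

    region -- (x, y, w, h) rectangle to invalidate
    mode   -- "fill": a single value; "random": random pixel values;
              "pattern": values following a preset rule (here: colored stripes)
    """
    out = captured.copy()
    x, y, w, h = region
    if mode == "fill":
        out[y:y + h, x:x + w] = 0  # fill the region with black
    elif mode == "random":
        out[y:y + h, x:x + w] = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    elif mode == "pattern":
        stripes = np.zeros((h, w, 3), dtype=np.uint8)
        stripes[0::3] = (255, 0, 0)      # red
        stripes[1::3] = (255, 255, 255)  # white
        stripes[2::3] = (255, 255, 0)    # yellow, per the example color order above
        out[y:y + h, x:x + w] = stripes
    return out
```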
  • the image processing unit 13 generates the monitoring information. Details of the image processing section 13 will be described later.
  • FIGS. 2, 3, and 4 are diagrams for explaining an example of a target image generated by the target image generation unit 12 in the first embodiment.
  • FIG. 2 is a diagram showing an example of the interior of the vehicle captured by the imaging device 2 installed near the overhead console in the vehicle.
  • FIG. 3 is a diagram showing an example of a captured image of the vehicle interior shown in FIG. 2, taken by the imaging device 2 from its installation position.
  • FIG. 4 is an example of a target image generated by the target image generation unit 12 by invalidating the non-target area by overwriting the pixel values of the non-target area with different values for the captured image as shown in FIG. 3.
  • IR indicates the imaging range of the imaging device 2; DR indicates the driver; P1 indicates the front-passenger-seat occupant; P2 and P3 indicate the rear-seat occupants; DS indicates the driver's seat; PS indicates the passenger seat; and BS indicates the rear seat.
  • In the subsequent figures, DR likewise indicates the driver, P1 the passenger-seat occupant, DS the driver's seat, and PS the passenger seat.
  • here, the target areas are set in advance as the area associated with the driver's seat and the area associated with the passenger seat. The target image generation unit 12 therefore overwrites, with a different value, the pixel values of the non-target area, that is, the area of the captured image (indicated by Im in FIG. 3) other than the area associated with the driver's seat (TA1 in FIG. 3) and the area associated with the passenger seat (TA2 in FIG. 3), and sets the captured image with the overwritten pixel values as the target image (indicated by TIm1 in FIG. 4).
  • the target image shown in FIG. 4 is one in which the target image generation unit 12 has overwritten the pixel values of the non-target area with a single value indicating black, filling the non-target area with black.
  • in this way, the target image generation unit 12 overwrites with a different value, for example by filling it in, the pixel values of the non-target area, that is, the area of the captured image other than the target areas associated with certain seats (in the above example, the areas associated with the driver's seat and the passenger seat), and generates the captured image after the pixel values have been overwritten as the target image.
  • FIG. 5 is a diagram for explaining another example of the target image generated by the target image generation unit 12 in the first embodiment.
  • FIG. 5 is a diagram showing an example of target images generated by the target image generation unit 12 by cutting out a plurality of target areas from the captured image shown in FIG. 3, thereby invalidating the non-target areas.
  • In FIG. 5, DR indicates the driver, P1 the passenger-seat occupant, DS the driver's seat, and PS the passenger seat.
  • if three or more target areas are set, the target image generation unit 12 cuts out each of them from the captured image and sets the cut-out images of the plurality of target areas as target images. Only one target area may also be set on the captured image; in this case, the target image generation unit 12 cuts out that target area from the captured image and sets the cut-out image as the target image.
  • the target image generation unit 12 cuts out a target region associated with a certain seat from the captured image captured by the imaging device 2, and generates the cut out captured image as a target image.
  • alternatively, the target image generation unit 12 sets as the target image the image of the cut-out region (indicated by TIm4 in FIG. 6) after overwriting the pixel values of the non-target area within it (indicated by NA in FIG. 6) with a different value.
  • that is, the target image generation unit 12 cuts out from the captured image one cropping region including the plurality of target areas, invalidates the non-target area, that is, the area of the cropping region other than the target areas, by overwriting its pixel values with a different value, and generates as the target image the image of the cropping region after the non-target pixel values have been overwritten.
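The two cutting-out variants (separate crops per target area, or one cropping region with the gap between target areas overwritten) could be sketched as follows; the function names and rectangle format are hypothetical.

```python
import numpy as np

def crop_each_target(captured: np.ndarray, target_areas: list) -> list:
    """Variant 1: cut out each target area and use the crops as target images."""
    return [captured[y:y + h, x:x + w].copy() for x, y, w, h in target_areas]

def crop_bounding_region(captured: np.ndarray, target_areas: list) -> np.ndarray:
    """Variant 2: cut out one region spanning all target areas, with the
    non-target pixels inside it overwritten (here with black)."""
    x0 = min(x for x, _, _, _ in target_areas)
    y0 = min(y for _, y, _, _ in target_areas)
    x1 = max(x + w for x, _, w, _ in target_areas)
    y1 = max(y + h for _, y, _, h in target_areas)
    crop = np.zeros((y1 - y0, x1 - x0, captured.shape[2]), dtype=captured.dtype)
    for x, y, w, h in target_areas:  # copy back only the target pixels
        crop[y - y0:y - y0 + h, x - x0:x - x0 + w] = captured[y:y + h, x:x + w]
    return crop
```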
  • the target image generation unit 12 generates the target image at the timing at which the target image should be generated (hereinafter referred to as "target image generation timing").
  • the target image generation timing is, for example, set in advance.
  • for example, the target image generation unit 12 generates the target image when the preset target image generation timing arrives, in other words, when a preset time (hereinafter referred to as the "target image generation time") has elapsed.
  • the target image generation timing may be, for example, when the image recognition result for the captured image satisfies the target image generation condition.
  • that is, the case where the image recognition result for the captured image satisfies a preset target image generation condition may be set as the target image generation timing. The target image generation conditions include, for example, detecting, as a result of image recognition processing on the captured image, that a passenger is getting into the vehicle or that a door has been opened.
  • the target image generation unit 12 performs known image recognition processing on the captured image output from the image acquisition unit 11.
  • the target image generation unit 12 determines the target image generation timing by comparing the result of image recognition processing with the target image generation conditions.
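A sketch of this timing decision, combining the preset elapsed-time trigger with the event conditions named above (all parameter names are hypothetical):

```python
import time

def is_target_image_generation_timing(last_generated: float, interval_s: float,
                                      boarding_detected: bool, door_opened: bool) -> bool:
    """True when a target image should be generated: either the preset target
    image generation time has elapsed, or an image-recognition result satisfies
    a target image generation condition (occupant boarding, door opened)."""
    time_elapsed = (time.monotonic() - last_generated) >= interval_s
    condition_met = boarding_detected or door_opened
    return time_elapsed or condition_met
```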
  • after generating the target image, the target image generation unit 12 outputs the generated target image to the image processing unit 13.
  • the second processing unit 132 stores the generated monitoring information in the storage device 31, outputs it to the server 32, or displays it on the display device 33.
  • the second processing unit 132 can generate monitoring information for each occupant.
  • the second processing unit 132 may detect the occupant's face or facial parts using, for example, a method using machine learning. Specifically, the second processing unit 132 inputs the target image to a trained machine learning model that receives the captured image and outputs information regarding the face or facial parts of the occupant in the captured image, and Obtain information about the passenger's face or facial parts. The second processing unit 132 may detect the face or facial parts of the occupant on the target image, for example, by pattern matching.
  • the second processing unit 132 also detects the position of the occupant's head on the target image, in other words the two-dimensional position of the occupant's head, by performing well-known image recognition processing on the target image, for example by inputting the target image to a trained machine learning model that receives a captured image and outputs information indicating the position of the occupant's head in it.
  • the second processing unit 132 detects the positions of the occupant's eyes on the target image by the facial-part detection method described above and, based on the distance between the occupant's eyes on the target image, estimates the depth distance from the imaging device 2 to the position of the occupant's head.
  • the second processing unit 132 then detects the position of the occupant's head in real space based on the detected two-dimensional position of the occupant's head and the estimated depth distance from the imaging device 2 to the position of the occupant's head.
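The publication does not give the formula for the depth estimate, but a common way to realize it is the pinhole-camera relation depth = f · D / d with an assumed average interpupillary distance. The sketch below makes that assumption explicit; all names are hypothetical.

```python
def estimate_head_depth_m(eye_left_px, eye_right_px, focal_length_px,
                          interpupillary_m=0.063):
    """Estimate the depth distance from the camera to the occupant's head.

    Assumes an average real interpupillary distance of about 63 mm; the exact
    formula is an assumption, since the source only says depth is estimated
    from the distance between the eyes on the target image.
    """
    dx = eye_right_px[0] - eye_left_px[0]
    dy = eye_right_px[1] - eye_left_px[1]
    eye_distance_px = (dx * dx + dy * dy) ** 0.5
    return focal_length_px * interpupillary_m / eye_distance_px

def head_position_3d(head_px, principal_point_px, focal_length_px, depth_m):
    """Back-project the 2-D head position into real space at the estimated depth."""
    cx, cy = principal_point_px
    x = (head_px[0] - cx) * depth_m / focal_length_px
    y = (head_px[1] - cy) * depth_m / focal_length_px
    return (x, y, depth_m)
```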
  • the second processing unit 132 calculates the positional relationship between the structure in the vehicle interior and the body of the occupant by performing a known image recognition process on the target image, and based on the calculated positional relationship, Detects the position of the occupant's body.
  • the structure inside the vehicle interior is, for example, a window frame or a pillar. Since the real-space positional relationship between such a structure and the imaging device 2 is known in advance, the second processing unit 132 can estimate the real-space position of the occupant's body from the positional relationship between the structure and the occupant's body in the target image.
  • the second processing unit 132 determines the occupant's physique based on the height of the occupant's face in the target image, for example. Note that this is just an example, and the second processing unit 132 may determine the occupant's physique from the target image using a known image-based physique determination technique.
  • for example, the second processing unit 132 may store the target images generated by the target image generation unit 12 in chronological order and detect the occupant's movement by performing known image recognition processing on the stored time-series target images.
  • the occupant's biological information includes, for example, information regarding the occupant's pulse rate.
  • the second processing unit 132 may calculate the occupant's pulse rate using, for example, a known technique that estimates the pulse rate from minute changes in the pixel values of the face image caused by blood flow.
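One well-known realization of this pulse estimation (remote photoplethysmography) tracks the mean pixel value of the face region frame by frame and reads off the dominant frequency in the physiologically plausible band. The sketch below assumes that approach; it is not stated in the publication.

```python
import numpy as np

def estimate_pulse_bpm(face_region_means: np.ndarray, fps: float) -> float:
    """Estimate pulse rate (beats per minute) from per-frame mean pixel values
    of the face region (e.g. the green channel, which reacts most to blood flow).

    Detrend, take the FFT, and pick the strongest frequency between
    0.7 and 3.0 Hz (42-180 bpm).
    """
    signal = face_region_means - np.mean(face_region_means)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    dominant_hz = freqs[band][np.argmax(spectrum[band])]
    return dominant_hz * 60.0

# Example: 10 seconds of a 30 fps signal oscillating at 1.2 Hz -> about 72 bpm
t = np.arange(300) / 30.0
print(estimate_pulse_bpm(128 + 0.5 * np.sin(2 * np.pi * 1.2 * t), fps=30.0))
```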
  • the personal identification information for identifying the occupant includes, for example, information regarding the occupant's name or a facial image.
  • the second processing unit 132 may identify the occupant using, for example, a known individual identification technique: it obtains personal authentication information including a person's pre-registered facial features and compares those features against the occupant's facial features detected through known image recognition processing on the target image.
  • the occupant attribute information includes, for example, information regarding whether the occupant is an adult or a child, or information regarding gender.
  • the second processing unit 132 detects the attributes of the occupant by, for example, performing a known image recognition process on the target image.
  • the second processing unit 132 may detect the occupant's attributes using a method based on machine learning, for example. Specifically, the second processing unit 132 inputs the target image to a trained machine learning model that receives a captured image as input and outputs information regarding the attributes of the occupant in it, and thereby obtains information regarding the occupant's attributes.
  • the second processing unit 132 may detect the attributes of the occupant on the target image, for example, by pattern matching.
  • the information indicating the state of the occupant includes, for example, information indicating whether the occupant is inattentive or in a state where the alertness level is decreased.
  • the information indicating the condition of the occupant may also be, for example, information indicating the occupant's emotions (joy, anger, sorrow, pleasure), information indicating whether the occupant is smoking, information indicating whether the occupant is eating or drinking, or information indicating whether the occupant is talking on a mobile phone or the like.
  • the second processing unit 132 detects the state of the occupant by, for example, performing known image recognition processing on the target image.
  • the second processing unit 132 may detect the condition of the occupant using a method using machine learning, for example.
  • specifically, the second processing unit 132 inputs the target image to a trained machine learning model that receives a captured image and outputs information indicating the state of the occupant in it, and thereby obtains information indicating the occupant's state.
  • the second processing unit 132 may detect the state of the occupant on the target image by pattern matching, for example.
  • the second processing unit 132 may first determine whether an occupant's face is captured in the target image (for example, no occupant may be captured at all) and, only when it determines that a face is captured, generate as monitoring information, based on the target image, at least one of occupant characteristic information indicating the occupant's bodily characteristics, the occupant's biological information, personal identification information identifying the occupant, occupant attribute information, or occupant state information indicating the occupant's condition. The example method of detecting the occupant's face from the target image has already been explained, so a redundant explanation is omitted.
  • after generating the monitoring information, the second processing unit 132 stores it in the storage device 31, outputs it to the server 32, or displays it on the display device 33. The second processing unit 132 may, for example, include the target image in the monitoring information.
  • FIG. 7 is a flowchart for explaining the operation of the vehicle interior monitoring device 1 according to the first embodiment.
  • when the vehicle is powered on, the vehicle interior monitoring device 1 repeatedly performs the process shown in the flowchart of FIG. 7 until the vehicle is powered off.
  • the image acquisition unit 11 acquires a captured image from the imaging device 2 (step ST10).
  • the image acquisition unit 11 outputs the acquired captured image to the target image generation unit 12.
  • the target image generation unit 12 determines whether the target image generation timing has arrived and, when it has, generates a target image by invalidating the areas other than the target area on the captured image, that is, the non-target areas (step ST20). After generating the target image, the target image generation unit 12 outputs it to the image processing unit 13. When the target image generation unit 12 determines that it is not the target image generation timing, the vehicle interior monitoring device 1 skips the processes of steps ST30-1 and ST30-2 described below.
  • the first processing unit 131 of the image processing unit 13 performs a first process of storing the target image generated in step ST20 in the storage device 31 as monitoring information, outputting it to the server 32, or displaying it on the display device 33 (step ST30-1).
  • the second processing unit 132 of the image processing unit 13 performs a second process of generating monitoring information by performing various image processing on the target image generated in step ST20 and outputting the generated monitoring information to the output device 3 (step ST30-2).
  • FIG. 8 is a flowchart for explaining a detailed example of the second process of step ST30-2 in FIG. 7.
  • the second processing unit 132 determines whether or not the face of the occupant could be detected from the target image (step ST301).
  • if the occupant's face could not be detected, the second processing unit 132 ends the second process.
  • the second processing unit 132 detects the body of the occupant based on the target image generated by the target image generation unit 12 in step ST20. At least one of occupant characteristic information indicating characteristics, occupant biological information, individual identification information specifying the occupant, occupant attribute information, or occupant state information indicating the condition of the occupant is generated as monitoring information (step ST302).
  • as described above, the vehicle interior monitoring device 1 acquires a captured image in which the imaging device 2 has imaged the range of the vehicle interior where occupants of at least two seats can be present, and generates a target image by invalidating the areas other than the target area (the non-target areas) among the areas on the captured image. The vehicle interior monitoring device 1 then performs the first process and the second process on the generated target image and outputs monitoring information based on the target image to the output device 3.
  • in this way, the vehicle interior monitoring device 1 can prevent monitoring information about an occupant who has not consented to the output of monitoring information about themselves, or who is not aware of the vehicle interior monitoring system 100 to which the monitoring information is output, from being output externally without permission. Thus, even if the imaging device 2 that captures the images used for monitoring the vehicle interior is a wide-angle camera, the vehicle interior monitoring device 1 can prevent the personal information of vehicle occupants from being unintentionally provided to a third party.
  • the vehicle interior monitoring device 1 may specify an area that is set in advance to be associated with a seat as the target area.
  • the vehicle interior monitoring device 1 does not need to perform processing to identify the range in which the occupant is present on the captured image, for example, by edge detection or the like. Thereby, the vehicle interior monitoring device 1 can reduce the load of processing for identifying the target area.
  • alternatively, the target image generation unit 12 may perform known image recognition processing on the captured image, detect the headrest of each seat, and set the smallest rectangle including the detected headrest as the area corresponding to that seat.
  • the target image generation unit 12 specifies a target area from among the areas corresponding to each seat set by performing image recognition processing.
  • the region corresponding to which seat on the captured image is to be the target region is set in advance.
  • the vehicle occupant can specify the target area while actually viewing the captured image.
  • the vehicle interior monitoring device presents target image candidates to the vehicle occupant, receives information regarding the target area designated by the vehicle occupant based on the target image candidate, and determines the target area based on the received information regarding the target area. Generate an image.
  • when a target image candidate is displayed on the display device 33, the occupant confirms it and specifies which area on the captured image is to be the target area. Specifically, the occupant specifies the target area by touching the display device 33 and then confirms the specified target area by touching, for example, a confirm button (not shown) displayed on the display device 33. When the confirm button is touched, the reception unit 14 receives information regarding the specified target area.
  • FIG. 10A is a diagram for explaining an example of a flow from when a passenger checks target image candidates displayed on the display device 33 to designating a target area.
  • as an example, FIG. 10A shows screens of the display device 33 in a flow in which the reception unit 14 generates, as a target image candidate, an image in which the area corresponding to the driver's seat, out of the areas on the captured image corresponding to the driver's seat, the passenger seat, and the center of the rear seat, is the target area candidate, displays it on the display device 33, and accepts designation of the target area.
  • the upper diagram in FIG. 10A shows the target image candidate. Note that in FIG. 10A, as an example, the reception unit 14 fills in the non-target areas of the target image candidate with white.
  • suppose the driver wants to specify the area corresponding to the driver's seat and the area corresponding to the center of the rear seat as target areas. The driver touches one point in each area of the displayed target image candidate that should become a target area. Here, the area corresponding to the driver's seat in the candidate is already not filled in, so the driver touches one point in the area corresponding to the center of the rear seat so that this area is also included as a target area, in other words, so that it is not filled in.
  • the reception unit 14 then un-fills the filled-in area corresponding to the touched position, here the area corresponding to the center of the rear seat, in the target image candidate displayed on the display device 33. Specifically, the reception unit 14 returns the pixel values of the area corresponding to the center of the rear seat in the candidate to their values before overwriting. As a result, the display device 33 displays a target image candidate in which the area corresponding to the driver's seat and the area corresponding to the center of the rear seat are the target area candidates (see the lower diagram of FIG. 10A). After confirming the displayed candidate, the driver touches a confirm button (not shown) to confirm the target areas.
  • the reception unit 14 accepts as the target areas the areas specified by the driver, in other words, the areas of the target image candidate currently displayed without overwritten pixel values, and outputs the received information regarding the target areas to the target image generation unit 12.
  • the target image generation unit 12 generates a target image based on the target area received by the reception unit 14. Note that the target image generation unit 12 may acquire from the reception unit 14 the target image candidate displayed on the display device 33 by the reception unit 14 when the confirm button is touched, and may use this as the target image.
  • FIGS. 10B and 10C are diagrams for explaining an example of the flow from when the occupant confirms the target image candidates displayed on the display device 33 to designating a non-target area.
  • FIGS. 10B and 10C show examples of screens on the display device 33 in a flow in which the reception unit 14 displays the captured image as a target image candidate on the display device 33 and receives the designation of a target region, as an example.
  • the upper diagrams in FIGS. 10B and 10C show target image candidates.
  • the driver designates an area corresponding to the passenger seat and an area corresponding to the center of the rear seat as non-target areas.
  • the driver touches one point in the target image candidates displayed on the display device 33 in the area that is desired to be set as a non-target area.
  • in the target image candidate, the content of the captured image is displayed in the area corresponding to the passenger seat and the area corresponding to the center of the rear seat.
  • that is, the driver touches one point in the area corresponding to the passenger seat and one point in the area corresponding to the center of the rear seat so as to set them as non-target areas, in other words, so that they are filled in (see the top diagram of FIG. 10B).
  • the reception unit 14 then fills in, for the target image candidate displayed on the display device 33, the areas where the content of the captured image was displayed and that correspond to the touched positions, that is, the area corresponding to the passenger seat and the area corresponding to the center of the rear seat. Specifically, the reception unit 14 overwrites the pixel values of those two areas in the target image candidate with a different value.
  • the display device 33 displays a target image candidate in which only the area corresponding to the driver's seat is set as a target area candidate (see the lower diagram of FIG. 10B).
  • the reception unit 14 overwrites the pixel values of the area corresponding to the passenger seat and the pixel values of the area corresponding to the center of the rear seat with pixel values indicating white.
  • the driver touches a confirm button (not shown) to confirm the target area.
  • the reception unit 14 accepts as target areas the areas other than the non-target areas specified by the driver, in other words, the areas of the target image candidate currently displayed without overwritten pixel values. The reception unit 14 then outputs the received information regarding the target areas to the target image generation unit 12.
  • the target image generation unit 12 generates a target image based on the target area received by the reception unit 14. Note that the target image generation unit 12 may acquire from the reception unit 14 the target image candidate displayed on the display device 33 by the reception unit 14 when the confirm button is touched, and may use this as the target image.
  • the driver designates only the area corresponding to the passenger seat as the non-target area.
  • the driver touches one point in the target image candidates displayed on the display device 33 in the area that is desired to be set as a non-target area.
  • the content of the captured image is displayed in the area corresponding to the passenger seat in the target image candidate. Therefore, the driver touches one point in the area corresponding to the passenger seat among the target image candidates so as to set the area corresponding to the passenger seat as a non-target area, in other words, to fill in the area (see top diagram in Figure 10C).
  • the reception unit 14 then fills in, for the target image candidate displayed on the display device 33, the area where the content of the captured image was displayed and that corresponds to the touched position, for example the area corresponding to the passenger seat. Specifically, the reception unit 14 overwrites the pixel values of the area corresponding to the passenger seat in the target image candidate with a different value. As a result, the display device 33 displays a target image candidate in which the area corresponding to the driver's seat and the area corresponding to the center of the rear seat are the target area candidates (see the lower diagram of FIG. 10C). In the lower diagram of FIG. 10C, the reception unit 14 has overwritten the pixel values of the area corresponding to the passenger seat with pixel values indicating white.
  • the driver touches a confirm button (not shown) to confirm the target area.
  • the reception unit 14 accepts as target areas the areas other than the non-target area specified by the driver, in other words, the areas of the target image candidate currently displayed without overwritten pixel values.
  • the reception unit 14 outputs the received information regarding the target area to the target image generation unit 12.
  • the target image generation unit 12 generates a target image based on the target area received by the reception unit 14. Note that the target image generation unit 12 may acquire from the reception unit 14 the target image candidate displayed on the display device 33 by the reception unit 14 when the confirm button is touched, and may use this as the target image.
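The touch-driven selection flow of FIGS. 10A to 10C amounts to toggling each seat region between target (shown) and non-target (filled in). A minimal sketch follows; the region names, rectangle format, and function name are hypothetical.

```python
def toggle_on_touch(seat_regions: dict, is_target: dict, touch_xy: tuple) -> None:
    """Flip the touched seat region between target and non-target.

    seat_regions -- {"driver": (x, y, w, h), ...} rectangles on the candidate image
    is_target    -- {"driver": True, ...}; True means the region is a target area
    """
    tx, ty = touch_xy
    for seat, (x, y, w, h) in seat_regions.items():
        if x <= tx < x + w and y <= ty < y + h:
            is_target[seat] = not is_target[seat]
            break

# Example mirroring FIG. 10A: the driver touches the rear-center region so it is
# no longer filled in, making it a target area candidate alongside the driver's seat
regions = {"driver": (0, 0, 200, 300), "passenger": (440, 0, 200, 300),
           "rear_center": (220, 0, 200, 300)}
is_target = {"driver": True, "passenger": False, "rear_center": False}
toggle_on_touch(regions, is_target, (300, 100))  # is_target["rear_center"] -> True
```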
  • the reception unit 14 receives information regarding the target area (step ST12).
  • the reception unit 14 outputs the received information regarding the target area to the target image generation unit 12.
  • in step ST20, the target image generation unit 12 generates a target image based on the information regarding the target area output from the reception unit 14.
  • the occupant of the vehicle specifies a target area while actually looking at the captured image, but this is only one example.
  • the occupant specifies the target area by operating an input device such as a touch panel display provided in the vehicle or a touch panel display included in a mobile terminal carried by the occupant.
  • the passenger inputs information indicating the seat from the input screen of the target area.
  • the receiving unit 14 receives the area corresponding to the designated seat as the target area.
  • the passenger may specify information indicating the seat and the image size from the input screen of the target area.
  • the target image generation unit 12 uses the captured image as the target image.
  • the image processing unit 13 may determine, for each target area, whether to perform the first process or the second process. For example, if the area corresponding to the driver's seat and an area corresponding to a seat other than the driver's seat are both target areas in the target image, the image processing unit 13 determines, for each of those areas, whether the first process or the second process should be performed on it. In this case, for example, an administrator or the like generates information (hereinafter referred to as "monitoring content setting information") that associates each area corresponding to a seat with the process to be performed on it, either the first process or the second process, and stores it in a location that the vehicle interior monitoring devices 1 and 1a can reference. The image processing unit 13 determines whether to perform the first process or the second process for each target area based on the monitoring content setting information.
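The monitoring content setting information described here could be as simple as a per-seat lookup that routes each target region to the first or second process; the sketch below assumes such a mapping, and all names are hypothetical.

```python
from typing import Callable, Dict

# Hypothetical monitoring content setting information: which process runs per seat region
MONITORING_CONTENT: Dict[str, str] = {
    "driver":    "second",  # generate occupant state info from the driver's region
    "passenger": "first",   # store/forward the passenger's region image as-is
}

def process_target_region(seat: str, target_image,
                          first_process: Callable, second_process: Callable):
    """Dispatch one target region to the first or second process per the setting info."""
    chosen = MONITORING_CONTENT.get(seat, "first")  # default to the first process
    return second_process(target_image) if chosen == "second" else first_process(target_image)
```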
  • the vehicle interior monitoring device 1 also includes an input interface device 1002 and an output interface device 1003 that perform wired or wireless communication with devices such as the imaging device 2 or the output device 3.
  • an example of the hardware configuration of the vehicle interior monitoring device 1a shown in FIG. 9 is also as shown in FIGS. 12A and 12B.
  • the functions of the image acquisition unit 11, target image generation unit 12, image processing unit 13, and reception unit 14 are realized by the processing circuit 1001.
  • when the processing circuit is the processor 1004, the functions of the image acquisition unit 11, target image generation unit 12, image processing unit 13, and reception unit 14 are realized by software, firmware, or a combination of software and firmware.
  • Software or firmware is written as a program and stored in memory 1005.
  • the processor 1004 executes the functions of the image acquisition unit 11, target image generation unit 12, image processing unit 13, and reception unit 14 by reading and executing programs stored in the memory 1005. That is, the vehicle interior monitoring device 1a includes a memory 1005 for storing a program that, when executed by the processor 1004, results in the execution of steps ST10 to ST30-1 and ST30-2 in FIG. 7.
  • the vehicle interior monitoring device 1a also includes an input interface device 1002 and an output interface device 1003 that perform wired or wireless communication with devices such as the imaging device 2 or the output device 3.
  • as described above, the vehicle interior monitoring devices 1 and 1a monitor the interior of the vehicle based on captured images of the range of the vehicle interior where occupants of at least two seats can be present and, when outputting the monitoring results, can prevent the occupants' personal information from being unintentionally provided to a third party.
  • in the vehicle interior monitoring devices 1 and 1a, the target image generation unit 12 invalidates the areas other than the target area by overwriting their pixel values in the captured image with a different value, and sets the captured image after the overwriting as the target image.
  • therefore, the vehicle interior monitoring devices 1 and 1a, which monitor the interior of the vehicle based on captured images of the range where occupants of at least two seats can be present, can prevent the occupants' personal information from being unintentionally provided to a third party when outputting the monitoring results.
  • alternatively, the target image generation unit 12 invalidates the areas other than the target area by cutting the target area out of the captured image and sets the cut-out image of the target area as the target image; if there are a plurality of target areas, it invalidates the other areas by cutting each target area out of the captured image and sets each of the cut-out images as a target image.
  • if there are a plurality of target areas, the target image generation unit 12 may instead cut out from the captured image one cutout region including all of the target areas, invalidate the areas other than the target areas within the cutout region by overwriting their pixel values with a different value, and set the image of the cutout region after the overwriting as the target image.
  • with this configuration as well, the vehicle interior monitoring devices 1 and 1a can prevent the occupants' personal information from being unintentionally provided to a third party when outputting the results of monitoring based on captured images of the range where occupants of at least two seats can be present.
  • the image processing unit 13 includes a first processing unit 131 that performs a first process of storing the target image generated by the target image generation unit 12 in the storage device 31 as monitoring information, outputting it to an external server (server 32), or displaying it on the display device 33, and a second processing unit 132 that performs a second process of generating, based on the target image generated by the target image generation unit 12, at least one of occupant characteristic information indicating the occupant's bodily characteristics, the occupant's biological information, personal identification information, occupant attribute information, or occupant state information as monitoring information and storing it in the storage device 31, outputting it to the external server (server 32), or displaying it on the display device 33.
  • the vehicle interior monitoring devices 1 and 1a monitor the interior of the vehicle based on captured images of a range where occupants in at least two seats can exist in the vehicle interior, and when outputting the monitoring results, the vehicle interior monitoring devices 1 and 1a In addition to preventing personal information from being unintentionally provided to third parties, it is also possible to provide various services using monitoring information.
  • further, in the second process, the image processing unit 13 can change whether to generate occupant characteristic information, occupant biological information, personal identification information, occupant attribute information, or occupant state information as the monitoring information. Therefore, the vehicle interior monitoring devices 1 and 1a can generate monitoring information taking into account that the required monitoring information differs depending on which seat the occupant is sitting in.
  • the vehicle interior monitoring device 1a further includes a reception unit 14 that displays target image candidates on the display device 33 and receives the target area designated based on the displayed candidates, and its target image generation unit 12 generates the target image based on the target area received by the reception unit 14. Therefore, when monitoring the interior of the vehicle based on captured images of the range where occupants of at least two seats can be present and outputting the monitoring results, the vehicle interior monitoring device 1a can prevent the personal information of vehicle occupants from being unintentionally provided to a third party, in accordance with the occupants' own designations.
  • the vehicle interior monitoring systems 100 and 100a are configured to include the vehicle interior monitoring device 1 or 1a described above and the imaging device 2 that images the range of the vehicle interior where occupants of at least two seats can be present. Therefore, when monitoring the interior of the vehicle based on such captured images and outputting the monitoring results, the vehicle interior monitoring systems 100 and 100a can prevent the occupants' personal information from being unintentionally provided to a third party.
  • any component of the embodiments can be modified or any component of the embodiments can be omitted.
  • as described above, the vehicle interior monitoring device according to the present disclosure monitors the interior of the vehicle based on a captured image of the range where occupants of at least two seats can be present in the vehicle interior and, when outputting the monitoring result, can prevent personal information from being unintentionally provided to a third party.


Abstract

The present invention comprises: an image acquisition unit (11) that acquires a captured image; a target image generation unit (12) that generates a target image by invalidating, in the captured image acquired by the image acquisition unit (11), a region other than a target region among the regions on the captured image; and an image processing unit (13) that outputs, to an output device (3), monitoring information based on the target image generated by the target image generation unit (12).
PCT/JP2022/023561 2022-06-13 2022-06-13 Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method WO2023242888A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/023561 WO2023242888A1 (fr) 2022-06-13 2022-06-13 Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/023561 WO2023242888A1 (fr) 2022-06-13 2022-06-13 Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method

Publications (1)

Publication Number Publication Date
WO2023242888A1 (fr)

Family

ID: 89192467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/023561 WO2023242888A1 (fr) 2022-06-13 2022-06-13 Vehicle cabin monitoring device, vehicle cabin monitoring system, and vehicle cabin monitoring method

Country Status (1)

Country Link
WO (1) WO2023242888A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017212647A (ja) * 2016-05-26 2017-11-30 パナソニックIpマネジメント株式会社 座席モニタリング装置、座席モニタリングシステムおよび座席モニタリング方法
JP2019110518A (ja) * 2017-12-18 2019-07-04 パナソニックIpマネジメント株式会社 撮像装置および撮像システム
JP2021189778A (ja) * 2020-05-29 2021-12-13 株式会社Jvcケンウッド 車両用記録制御装置、記録制御方法およびプログラム


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946708

Country of ref document: EP

Kind code of ref document: A1