WO2022270379A1 - In-vehicle device, notification method for object, program, and system for vehicle - Google Patents

In-vehicle device, notification method for object, program, and system for vehicle

Info

Publication number
WO2022270379A1
WO2022270379A1 (PCT/JP2022/023959; JP2022023959W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
unit
image
vehicle device
selection
Prior art date
Application number
PCT/JP2022/023959
Other languages
French (fr)
Japanese (ja)
Inventor
浩一郎 竹内 (Koichiro Takeuchi)
正也 伊藤 (Masaya Ito)
Original Assignee
株式会社デンソー (DENSO CORPORATION)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO CORPORATION (株式会社デンソー)
Publication of WO2022270379A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis

Definitions

  • the present disclosure relates to an in-vehicle device, an object notification method, a program, and a vehicle system.
  • a vehicle external environment recognition device is disclosed in Patent Document 1.
  • a vehicle external environment recognition device photographs the surroundings of the vehicle and generates an image.
  • a vehicle external environment recognition device performs image recognition on an image to recognize an object.
  • the vehicle's external environment recognition device switches the control parameters of the camera according to the vehicle's external environment.
  • One aspect of the present disclosure preferably provides an in-vehicle device, an object notification method, a program, and a vehicle system capable of recognizing an object in a vehicle interior with high accuracy.
  • One aspect of the present disclosure is an in-vehicle device. The in-vehicle device is configured to communicate, via a communication unit, with a cloud that can communicate with a user's mobile terminal.
  • The in-vehicle device includes: an image acquisition unit configured to acquire an image of the interior of a vehicle captured by a camera; a selection factor acquisition unit configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; a model selection unit configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and an object recognition unit configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby performing a process of recognizing the object. When the object recognition unit recognizes an object that satisfies a predetermined condition, the in-vehicle device notifies the cloud, via the communication unit, of information on the recognition result of the object that leads to notification to the mobile terminal.
  • An in-vehicle device, which is one aspect of the present disclosure, can select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, the in-vehicle device can recognize objects in the vehicle interior with high accuracy. As a result, the in-vehicle device can accurately notify the mobile terminal.
  • Another aspect of the present disclosure is an object notification method: acquiring an image of the interior of a vehicle captured by a camera; acquiring a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; selecting an image recognition model according to the selection factor; performing image recognition on the image using the selected image recognition model, thereby performing a process of recognizing the object; and, when an object that satisfies a predetermined condition is recognized, notifying a cloud that can communicate with a user's mobile terminal, via a communication unit, of information on the recognition result of the object that leads to notification to the mobile terminal.
  • According to the object notification method, which is another aspect of the present disclosure, it is possible to select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, objects in the vehicle interior can be recognized with high accuracy. As a result, it is possible to accurately notify the mobile terminal.
  • Another aspect of the present disclosure is a vehicle system comprising a cloud capable of communicating with a user's mobile terminal, and an in-vehicle device configured to communicate with the cloud via a communication unit.
  • The in-vehicle device includes: an image acquisition unit configured to acquire an image of the interior of a vehicle captured by a camera; a selection factor acquisition unit configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; a model selection unit configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and an object recognition unit configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby performing a process of recognizing the object.
  • When the object recognition unit recognizes an object that satisfies a predetermined condition, the in-vehicle device notifies the cloud of information regarding the recognition result of the object via the communication unit.
  • the cloud is configured to notify the portable terminal based on the information about the recognition result of the object notified from the in-vehicle device.
  • A vehicle system, which is another aspect of the present disclosure, can select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, the vehicle system can recognize objects in the vehicle interior with high accuracy. As a result, the vehicle system can accurately notify the mobile terminal.
  • FIG. 1 is a block diagram showing the configuration of a mobility IoT system.
  • FIG. 2 is a block diagram showing the functional configuration of the control unit of the in-vehicle device.
  • FIG. 3 is an explanatory diagram showing the arrangement of the sensor, camera, and object in the vehicle interior.
  • FIG. 4 is a flowchart showing the processing executed by the in-vehicle device.
  • The configuration of the mobility IoT system 1 will be described based on FIG. 1. IoT is an abbreviation for Internet of Things.
  • The mobility IoT system 1 includes an in-vehicle device 3, a cloud 5, and a service providing server 7. Although only one in-vehicle device 3 is shown in FIG. 1 for convenience, the mobility IoT system 1 includes, for example, a plurality of in-vehicle devices 3. The multiple in-vehicle devices 3 are mounted in different vehicles 9. A combination of the in-vehicle device 3 and the cloud 5 corresponds to a vehicle system.
  • the in-vehicle device 3 can communicate with the cloud 5 via a communication device 19 mounted on the vehicle 9. Detailed configurations of the in-vehicle device 3 and the vehicle 9 will be described later.
  • the cloud 5 can communicate with the in-vehicle device 3, the service providing server 7, and the mobile terminal 23.
  • the mobile terminal 23 is, for example, a mobile terminal owned by the user of the vehicle 9 .
  • Examples of the mobile terminal 23 include a smart phone, a tablet terminal, a notebook PC, and the like.
  • the cloud 5 includes a control unit 25, a communication unit 27, and a storage unit 29.
  • the control unit 25 includes a CPU 31 and a semiconductor memory such as RAM or ROM (hereinafter referred to as memory 33).
  • the functions of the control unit 25 are implemented by the CPU 31 executing programs stored in the memory 33 . Also, by executing this program, a method corresponding to the program is executed.
  • the communication unit 27 can perform wireless communication with the communication device 19 and the mobile terminal 23 .
  • the storage unit 29 can record information.
  • the service providing server 7 can communicate with the cloud 5.
  • the service providing server 7 is, for example, a server installed to provide a service for managing operation of the vehicle 9 .
  • the mobility IoT system 1 may include a plurality of service providing servers 7 having different service contents.
  • The cloud 5 collects the data of the vehicles 9 transmitted from each of the multiple in-vehicle devices 3 via the communication devices 19.
  • the cloud 5 stores the collected data in the storage unit 29 for each vehicle 9 .
  • the cloud 5 creates a digital twin based on the data of the vehicle 9 stored in the storage unit 29.
  • a digital twin is normalized index data.
  • the service providing server 7 can acquire the data of the predetermined vehicle stored in the storage unit 29 using the index data acquired from the digital twin.
  • the service providing server 7 determines control details of the vehicle 9 and transmits instructions corresponding to the control details to the cloud 5 .
  • the cloud 5 transmits control contents to the vehicle 9 based on the instruction.
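  • As a rough illustration of this data flow, the sketch below models the cloud-side storage and the digital-twin index in Python. All names (VehicleRecord, DigitalTwin, CloudStorage) are hypothetical; the patent does not specify data formats.

```python
# Hypothetical sketch of the cloud-side data flow: the cloud 5 stores
# collected data per vehicle 9 (storage unit 29), and the service providing
# server 7 looks the data up through the digital twin's index data.
from dataclasses import dataclass, field


@dataclass
class VehicleRecord:
    vehicle_id: str
    data: dict  # raw data collected from an in-vehicle device 3


@dataclass
class DigitalTwin:
    # Normalized index data: maps a lookup key to the storage key under
    # which a vehicle's collected data is held.
    index: dict = field(default_factory=dict)


class CloudStorage:
    def __init__(self) -> None:
        self._records: dict[str, VehicleRecord] = {}
        self.twin = DigitalTwin()

    def store(self, record: VehicleRecord) -> None:
        # Collected data is stored per vehicle, and the digital twin's
        # index is updated so the service providing server can find it.
        self._records[record.vehicle_id] = record
        self.twin.index[record.vehicle_id] = record.vehicle_id

    def fetch(self, index_key: str) -> VehicleRecord:
        # The service providing server 7 retrieves a given vehicle's data
        # using index data acquired from the digital twin.
        return self._records[self.twin.index[index_key]]
```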
  • the in-vehicle device 3 includes a control unit 35, a storage unit 37, and a clock 38.
  • the control unit 35 includes a CPU 39 and a semiconductor memory such as RAM or ROM (hereinafter referred to as memory 41).
  • the functions of the control unit 35 are realized by the CPU 39 executing programs stored in the memory 41 . Also, by executing this program, a method corresponding to the program is executed.
  • the storage unit 37 can store information.
  • the storage unit 37 stores a plurality of image recognition models.
  • the image recognition model is used to recognize the object 57 by performing image recognition on an image generated using the camera 15 mounted on the vehicle 9 .
  • the clock 38 transmits time information representing time to the control unit 35 .
  • As shown in FIG. 2, the functional configuration of the control unit 35 includes an image acquisition unit 43, a communication unit 45, a selection factor acquisition unit 47, a model selection unit 49, an object recognition unit 51, and a vehicle state acquisition unit 53.
  • the image acquisition unit 43 uses the camera 15 to photograph the inside of the cabin 55 of the vehicle 9 and acquires the image.
  • the communication unit 45 communicates with the cloud 5 .
  • the selection factor acquisition unit 47 acquires selection factors.
  • The selection factor includes at least one of the vehicle environment, which is the environment of the vehicle 9, and the relative position of the object 57 in the cabin 55 of the vehicle 9 with respect to the camera 15 (hereinafter referred to as the relative position of the object).
  • the selection factor affects object recognition accuracy when image recognition is performed on an image captured by the camera 15 .
  • the vehicle environment is the vehicle environment that affects the recognition accuracy when image recognition is performed on images captured by the camera 15 .
  • the vehicle environment is, for example, the vehicle environment that affects the brightness in the vehicle interior 55, the brightness around the vehicle 9, the direction in which light enters the vehicle interior 55, and the like.
  • The vehicle environment includes, for example, the position of the vehicle 9, the orientation or traveling direction of the vehicle 9, the current time, the weather, and the state of the room lamps 17.
  • the position of the vehicle 9 and the traveling direction of the vehicle 9 affect the brightness in the vehicle interior 55 and the direction in which light enters the vehicle interior 55 .
  • the current time affects the brightness in the vehicle interior 55 and the direction in which light enters the vehicle interior 55 .
  • the weather affects the brightness inside the vehicle compartment 55 .
  • Weather includes, for example, fine weather, cloudy weather, rainy weather, fog, and the like.
  • the state of the room lamp 17 affects the brightness in the passenger compartment 55 .
  • the state of the room lamp 17 includes, for example, a lit state, an extinguished state, a state in which the amount of light is within a specific range, and the like. When there are a plurality of room lamps 17, the state of each room lamp 17 corresponds to the vehicle environment.
  • the relative position of the object consists of the distance from the camera 15 to the object 57 and the direction of the object 57 with respect to the camera 15.
  • The relative position of the object affects the size of the object 57 in the image generated by the camera 15; the farther the object 57 is from the camera 15, the smaller it appears in the image.
  • In addition to the relative position of the object, if the direction in which light enters the vehicle interior 55 is known as part of the vehicle environment, it is possible to tell, for example, whether the object 57 is directly illuminated or is backlit by light coming from behind it.
  • the relative position of the object affects the recognition accuracy when image recognition is performed on an image captured by the camera 15 .
  • a plurality of image recognition models stored in the storage unit 37 are each associated with a selection factor.
  • The image recognition model is, for example, a model obtained by training under the selection factor associated with it. Using an image recognition model under its associated selection factor allows the object 57 to be recognized with high accuracy.
  • For example, if an image recognition model is used in the vehicle environment included in its associated selection factor, the object 57 can be recognized with high accuracy. Likewise, if it is used at the relative position of the object included in its associated selection factor, the object 57 can be recognized with high accuracy.
  • the model selection unit 49 selects an image recognition model according to the selection factors acquired by the selection factor acquisition unit 47 .
  • For example, the model selection unit 49 selects an image recognition model according to the vehicle environment. Specifically, when the vehicle interior 55 is dark at night, the model selection unit 49 selects an image recognition model for dark camera images. The model selection unit 49 likewise selects an image recognition model suited to each case, such as when the vehicle interior 55 is bright during the day, when it is bright because of the setting sun, or when it is bright at night because the room lamp 17 is lit.
  • Also, for example, the model selection unit 49 selects an image recognition model according to the relative position of the object. Specifically, when the distance from the camera 15 to the object 57 is long, the model selection unit 49 selects an image recognition model for camera images in which the object 57 appears small. When the distance from the camera 15 to the object 57 is short, the model selection unit 49 selects an image recognition model for camera images in which the object 57 appears large. The model selection unit 49 likewise selects an image recognition model suited to each case, such as when the object 57 is in front of the camera 15 and when the object 57 is oblique to or nearly beside the camera 15, as illustrated in the sketch below. By selecting the image recognition model using the vehicle environment and the relative position of the object as selection factors, the object can be recognized with even higher accuracy.
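  • The following is a minimal sketch of this selection step, assuming a simple lookup table. The names (SelectionFactor, MODEL_TABLE, select_model), the bucketing threshold, and the file names are all hypothetical; the patent does not prescribe any particular data structure or mapping.

```python
# Hypothetical sketch of selecting an image recognition model from the
# acquired selection factor. The mapping and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class SelectionFactor:
    cabin_brightness: str  # e.g. "dark", "daylight", "sunset", "lamp_lit"
    distance_m: float      # distance from the camera 15 to the object 57
    direction: str         # e.g. "front", "oblique", "side"


# Each stored model is associated with the selection factor it was trained
# under (the storage unit 37 in the text). Only brightness and a coarse
# distance bucket are keyed here; a fuller table would also key direction.
MODEL_TABLE = {
    ("dark", "far"): "model_dark_small_object.onnx",
    ("dark", "near"): "model_dark_large_object.onnx",
    ("daylight", "far"): "model_bright_small_object.onnx",
    ("daylight", "near"): "model_bright_large_object.onnx",
    # ... one entry per (environment, relative-position) combination
}


def select_model(factor: SelectionFactor, far_threshold_m: float = 1.5) -> str:
    """Return the model associated with the acquired selection factor."""
    bucket = "far" if factor.distance_m >= far_threshold_m else "near"
    return MODEL_TABLE[(factor.cabin_brightness, bucket)]
```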
  • the object recognition unit 51 performs image recognition on the image acquired by the image acquisition unit 43 using the image recognition model selected by the model selection unit 49, and performs processing for recognizing the object 57.
  • the vehicle state acquisition unit 53 acquires a signal representing the state of the vehicle 9 from the vehicle ECU 11.
  • the state of the vehicle 9 includes, for example, engine on, start of running, stop, key lock, and the like.
  • the vehicle 9 includes a vehicle ECU 11, a sensor 13, a camera 15, a room lamp 17, a communication device 19, and a navigation system 21 in addition to the in-vehicle device 3.
  • the vehicle-mounted device 3 can communicate with each of the vehicle ECU 11 , the sensor 13 , the camera 15 , the room lamp 17 , the communication device 19 , and the navigation 21 .
  • the vehicle ECU 11 detects the state of the vehicle 9 and transmits a signal representing the detected state to the vehicle state acquisition unit 53 .
  • For example, the vehicle ECU 11 detects the vehicle speed of the vehicle 9. When the vehicle speed of the vehicle 9 changes from 0 km/h to a value at or above a threshold, the vehicle ECU 11 transmits a signal indicating the start of running to the vehicle state acquisition unit 53. When the vehicle speed changes from at or above the threshold to 0 km/h, the vehicle ECU 11 transmits a signal indicating that the vehicle is stopped to the vehicle state acquisition unit 53.
  • the vehicle ECU 11 detects, for example, the shift position. When the shift position changes to parking, the vehicle ECU 11 transmits a signal indicating that the vehicle is stopped to the vehicle state acquisition unit 53 .
  • the sensor 13 and the camera 15 are installed inside the vehicle compartment 55 of the vehicle 9, respectively.
  • the position of sensor 13 is close to the position of camera 15 .
  • the sensor 13 and camera 15 are installed above the windshield or near the rearview mirror.
  • the relative position of the camera 15 with respect to the position of the sensor 13 is constant.
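  • Because this offset is constant, an object position measured by the sensor 13 can be converted to the camera-relative position (distance and direction) by a fixed translation. The sketch below assumes hypothetical names, an arbitrary offset, and that the two devices share the same orientation; frames that are also rotated relative to each other would need a fixed rotation as well.

```python
# Hypothetical sketch: convert an object position measured in the sensor
# frame into the relative position of the object with respect to the camera,
# using the constant sensor-to-camera offset fixed by the installation.
import math

SENSOR_TO_CAMERA_OFFSET = (0.10, 0.00, -0.02)  # metres; illustrative values


def to_camera_frame(obj_in_sensor_frame: tuple[float, float, float]) -> tuple[float, float]:
    """Return (distance, bearing in degrees) of the object from the camera."""
    x = obj_in_sensor_frame[0] - SENSOR_TO_CAMERA_OFFSET[0]
    y = obj_in_sensor_frame[1] - SENSOR_TO_CAMERA_OFFSET[1]
    z = obj_in_sensor_frame[2] - SENSOR_TO_CAMERA_OFFSET[2]
    distance = math.sqrt(x * x + y * y + z * z)  # distance from the camera 15
    bearing = math.degrees(math.atan2(y, x))     # direction of the object 57
    return distance, bearing
```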
  • the sensor 13 is, for example, a millimeter wave radar.
  • the sensor 13 can detect the object 57 in the vehicle interior 55 by comparing the detection result of the sensor 13 with the reference data.
  • the reference data is the detection result of the sensor 13 when the object 57 does not exist inside the vehicle compartment 55 .
  • Objects 57 include, for example, things (excluding people and animals), people, animals, and the like.
  • the object is, for example, an object that can be carried by the user of the vehicle 9 . Humans include, for example, infants.
  • the sensor 13 can calculate the presence or absence of an object and the relative position of the object.
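  • A minimal sketch of this reference-data comparison follows. Summarizing a millimeter wave radar return as a one-dimensional intensity profile is a simplifying assumption, and detect_object and the threshold are hypothetical.

```python
# Hypothetical sketch: detect an object by comparing the current sensor
# output with reference data recorded when the cabin contained no object 57.
import numpy as np


def detect_object(scan: np.ndarray, reference: np.ndarray,
                  threshold: float = 0.2) -> bool:
    """Return True when the scan deviates from the empty-cabin reference."""
    deviation = np.abs(scan - reference)
    return bool(np.any(deviation > threshold))
```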
  • the camera 15 is installed inside the vehicle compartment 55 .
  • the photographing range of the camera 15 includes a range within the vehicle interior 55 where the object 57 is likely to be placed.
  • the photographing range of the camera 15 includes, for example, part or all of the driver's seat, front passenger's seat, rear seat, and dashboard.
  • One or more room lamps 17 are installed in the vehicle interior 55 .
  • the room lamp 17 can be turned on and off. Also, the room lamp 17 can be adjusted in light quantity.
  • the room lamp 17 transmits a signal representing the state of the room lamp 17 to the controller 35 .
  • the communication device 19 can communicate with the communication unit 27 of the cloud 5.
  • The navigation system 21 has the functions of a typical navigation system.
  • the navigation 21 can acquire the position of the vehicle 9 and the traveling direction of the vehicle 9 based on the data received from the positioning satellites.
  • Processing executed by the in-vehicle device 3 will be described with reference to FIG. 4.
  • the process shown in FIG. 4 is executed, for example, when the in-vehicle device 3 in sleep mode is activated.
  • the in-vehicle device 3 is activated, for example, as follows.
  • When the door of the vehicle 9 is unlocked, the vehicle ECU 11 is activated.
  • The activated vehicle ECU 11 activates the in-vehicle device 3.
  • the in-vehicle device 3 enters a sleep state when the processing shown in FIG. 4 is completed.
  • In step 1 of FIG. 4, the vehicle state acquisition unit 53 determines, based on the signal acquired from the vehicle ECU 11, whether the engine of the vehicle 9 has been turned on and whether the vehicle 9 has started running. If it is determined that the engine of the vehicle 9 has been turned on and that the vehicle 9 has started running, the process proceeds to step 2. If it is determined that the engine of the vehicle 9 is not on, or if it is determined that the vehicle 9 has not started running, the process returns to step 1.
  • In step 2, the selection factor acquisition unit 47 uses the sensor 13 to acquire the relative position of the object.
  • In step 3, the selection factor acquisition unit 47 acquires the vehicle environment. For example, the selection factor acquisition unit 47 acquires the position of the vehicle 9 and the traveling direction of the vehicle 9 using the navigation system 21. The selection factor acquisition unit 47 also acquires the current time using the clock 38. In addition, the selection factor acquisition unit 47 acquires the weather, for example the current weather at the location of the vehicle 9, by communicating with the cloud 5 via the communication device 19. Further, the selection factor acquisition unit 47 acquires the state of the room lamp 17 from the room lamp 17. The selection factor acquisition unit 47 may also, for example, photograph the interior of the vehicle interior 55 with the camera 15, analyze the photographed image, and detect the brightness within the vehicle interior 55. Note that the navigation system 21, the clock 38, and the room lamp 17 correspond to devices mounted on the vehicle 9, while the cloud 5 corresponds to a source outside the vehicle 9. The selection factor acquisition unit 47 may perform only one of the processes of steps 2 and 3. A sketch of these two steps follows.
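  • Every interface used in the sketch below (radar, nav, clock, cloud, lamp) is a hypothetical stand-in for the devices named in the text; the patent does not define these APIs.

```python
# Hypothetical sketch of steps 2 and 3: gathering the selection factors from
# the sensor 13, the navigation system 21, the clock 38, the cloud 5, and
# the room lamp 17.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class SelectionFactors:
    relative_position: tuple[float, float]  # (distance, bearing) from sensor 13
    position: tuple[float, float]           # vehicle position from navigation 21
    heading_deg: float                      # traveling direction from navigation 21
    now: datetime                           # current time from clock 38
    weather: str                            # from cloud 5 via communication device 19
    lamp_state: str                         # from room lamp 17


def acquire_selection_factors(radar, nav, clock, cloud, lamp) -> SelectionFactors:
    position = nav.position()
    return SelectionFactors(
        relative_position=radar.object_relative_position(),  # step 2
        position=position,                                   # step 3 from here on
        heading_deg=nav.heading(),
        now=clock.now(),
        weather=cloud.current_weather_at(position),
        lamp_state=lamp.state(),
    )
```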
  • In step 4, the model selection unit 49 selects an image recognition model according to the selection factors.
  • the selection factors include both the relative position of the object obtained in step 2 above and the vehicle environment obtained in step 3 above.
  • the plurality of image recognition models stored in the storage unit 37 are each associated with a selection factor.
  • The model selection unit 49 selects, from among the plurality of image recognition models stored in the storage unit 37, the image recognition model associated with the acquired selection factor.
  • In step 5, the vehicle state acquisition unit 53 determines whether or not the vehicle 9 has finished running, based on the signal acquired from the vehicle ECU 11. If it is determined that the vehicle 9 has finished running, the process proceeds to step 6. If it is determined that the vehicle 9 has not finished running, the process returns to step 2.
  • In step 6, the vehicle state acquisition unit 53 determines whether the vehicle 9 has stopped and whether the key lock has been turned on, based on the signal obtained from the vehicle ECU 11. If it is determined that the vehicle 9 has stopped and that the key lock has been turned on, the process proceeds to step 7. If it is determined that the vehicle 9 is not stopped, or if it is determined that the key lock is not turned on, the process returns to the start of step 6.
  • In step 7, the image acquisition unit 43 uses the camera 15 to photograph the inside of the vehicle interior 55 and acquires the image.
  • In step 8, the object recognition unit 51 performs image recognition on the image acquired in step 7 using the image recognition model selected in step 4, and performs processing to recognize the object 57.
  • In step 9, the object recognition unit 51 determines whether or not a specific object 57 has been recognized in the process of step 8. If it is determined that a specific object 57 has been recognized, the process proceeds to step 10; if not, this process ends.
  • Specific objects 57 are, for example, children, animals, and baggage.
  • A specific object 57 corresponds to an object that satisfies a predetermined condition.
  • In step 10, the communication unit 45 transmits information about the specific object 57 to the cloud 5 so that the mobile terminal 23 can be notified from the cloud 5.
  • The processing from step 1 up to this point corresponds to the object notification method.
  • Upon receiving the information, the cloud 5 notifies the mobile terminal 23 that a child, pet, or baggage remains in the vehicle interior 55.
  • the information transmitted by the communication unit 45 corresponds to information regarding the recognition result of the object 57 leading to notification to the mobile terminal 23 .
  • the mobile terminal 23 displays, for example, a notification image, and generates sound, vibration, or the like. A user of the vehicle 9 can know that there is an object 57 in the vehicle interior 55 from the notification image, sound, vibration, or the like.
  • the communication unit 45 transmits, for example, the camera image obtained in step 7 or an image obtained by processing it to the cloud 5 .
  • Cloud 5 transmits those images to mobile terminal 23 .
  • the mobile terminal 23 displays the camera image obtained in step 7 or an image obtained by processing it.
  • a user of the vehicle 9 can know that the object 57 is inside the vehicle compartment 55 and what the object 57 is by looking at the displayed image.
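  • Pulling steps 1 to 10 together, the sketch below shows one plausible shape of the flow in FIG. 4. Every interface (vehicle, camera, cloud, models, and so on) is a hypothetical stand-in, and acquire_selection_factors refers to the sketch shown earlier; the patent defines none of these APIs.

```python
# Hypothetical sketch of the overall flow of FIG. 4 (steps 1 to 10).
SPECIFIC_LABELS = {"child", "animal", "baggage"}  # objects that trigger step 10


def run_once(vehicle, radar, nav, clock, cloud, lamp, camera, models) -> None:
    vehicle.wait_until(engine_on=True, running=True)              # step 1
    while True:
        factors = acquire_selection_factors(radar, nav, clock,   # steps 2 and 3
                                            cloud, lamp)
        model = models.select(factors)                            # step 4
        if vehicle.finished_running():                            # step 5
            break
    vehicle.wait_until(stopped=True, key_locked=True)             # step 6
    image = camera.capture_cabin()                                # step 7
    detections = model.recognize(image)                           # step 8
    specific = [d for d in detections                             # step 9
                if d.label in SPECIFIC_LABELS]
    if specific:
        # Step 10: notify the cloud 5, which relays the report and the
        # captured image to the user's mobile terminal 23.
        cloud.notify(recognition_result=specific, snapshot=image)
```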
  • the in-vehicle device 3 uses the camera 15 to photograph the inside of the vehicle interior 55 to obtain an image.
  • the in-vehicle device 3 acquires the selection factor.
  • the in-vehicle device 3 selects an image recognition model according to selection factors.
  • the in-vehicle device 3 performs image recognition on the acquired image using the selected image recognition model, and performs processing for recognizing the object 57 .
  • When a specific object 57 is recognized, the in-vehicle device 3 notifies the portable terminal 23 via the cloud 5.
  • the in-vehicle device 3 can select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative positions of objects. Therefore, the in-vehicle device 3 can recognize the object 57 with high accuracy. As a result, the in-vehicle device 3 can accurately notify the mobile terminal 23 .
  • the in-vehicle device 3 acquires selection factors in steps 2 and 3 before acquiring an image in step 7 above. Also, the in-vehicle device 3 selects an image recognition model in step 4 before acquiring an image in step 7 .
  • Therefore, the time from acquiring an image to completing image recognition is shorter than it would be if the selection factors were acquired, or the image recognition model selected, only after the image was acquired.
  • Selection factors include the position of the vehicle 9, the traveling direction of the vehicle 9, the current time, and the weather. Therefore, the in-vehicle device 3 can select an appropriate image recognition model according to the position of the vehicle 9, the traveling direction of the vehicle 9, the time of day, and the weather.
  • Selection factors include the state of the room lamp 17 . Therefore, the in-vehicle device 3 can select an appropriate image recognition model according to the state of the room lamp 17 .
  • the in-vehicle device 3 acquires the vehicle environment from outside the vehicle 9 .
  • the outside of the vehicle 9 is the cloud 5, for example. Therefore, the in-vehicle device 3 can acquire a vehicle environment that is difficult to acquire in the vehicle 9 .
  • the in-vehicle device 3 acquires the vehicle environment from a device mounted in the vehicle 9.
  • The devices mounted on the vehicle 9 are, for example, the in-vehicle device 3, the room lamp 17, the navigation system 21, the clock 38, and the like. Therefore, the in-vehicle device 3 can acquire various vehicle environments.
  • the in-vehicle device 3 acquires the relative position of the object using the sensor 13 . Therefore, the in-vehicle device 3 can acquire the relative position of the object accurately and easily.
  • In the first embodiment, the selection factors included both the vehicle environment and the relative position of the object. The selection factors may include the vehicle environment but not the relative position of the object. In this case too, the in-vehicle device 3 can select an appropriate image recognition model according to the vehicle environment and perform image recognition.
  • Conversely, the selection factors may include the relative position of the object but not the vehicle environment. In this case too, the in-vehicle device 3 can select an appropriate image recognition model according to the relative position of the object and perform image recognition.
  • the vehicle environment does not have to include one or more of the position of the vehicle 9, the traveling direction of the vehicle 9, the time of day, and the weather.
  • the vehicle environment may further include other elements.
  • the in-vehicle device 3 acquires an image when the vehicle 9 stops and the key lock is turned on.
  • the timing of acquiring the image may be another timing.
  • the in-vehicle device 3 may acquire images before the engine is turned on, when the vehicle is idling, during driving, during a period from when the vehicle is stopped until the key lock is turned on, and the like. Also in this case, the same effects as in the first embodiment can be obtained.
  • In the first embodiment, the in-vehicle device 3 acquired the selection factors in steps 2 and 3, and selected the image recognition model in step 4, before acquiring an image in step 7.
  • Alternatively, the in-vehicle device 3 may acquire an image first and then acquire the selection factors and select an image recognition model. The in-vehicle device 3 may also perform the processing in the order of acquiring the selection factors, acquiring the image, and selecting the image recognition model. In these cases as well, the in-vehicle device 3 can achieve the effects (1A) and (1C) to (1G) of the first embodiment.
  • In the first embodiment, the sensor 13 used to obtain the relative position of the object was a millimeter wave radar.
  • The sensor 13 may instead be a sensor other than a millimeter wave radar.
  • The method of acquiring the relative position of the object may also be a different method. For example, the relative position of the object may be obtained based on the image of the camera 15.
  • the in-vehicle device 3 may perform other processing in addition to notifying the portable terminal 23.
  • Other processing includes, for example, processing for sounding the horn of the vehicle 9. If the object 57 is a human infant, then in step 10 the in-vehicle device 3 can, for example, operate the air conditioner of the vehicle 9 to lower the temperature of the vehicle interior 55, open the windows, or unlock the doors.
  • the in-vehicle device 3 may have one or more functions of the sensor 13, the camera 15, the communication device 19, and the navigation 21.
  • the selection factor acquisition unit 47 may acquire selection factors while the vehicle 9 is running.
  • the model selection unit 49 may select an image recognition model while the vehicle 9 is running.
  • the object recognition unit 51 may perform the process of recognizing the object while the vehicle 9 is parked. In this case, a notification can be given if an object 57 is left behind in the parked vehicle 9 .
  • The control unit 35 and the techniques thereof described in the present disclosure may be implemented by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits.
  • Alternatively, the control unit 35 and the techniques thereof described in the present disclosure may be implemented by one or more dedicated computers configured by a combination of a processor and memory programmed to execute one or more functions and a processor configured by one or more hardware logic circuits. The method of realizing the function of each unit included in the control unit 35 does not necessarily need to include software, and all of the functions may be realized using one or more pieces of hardware.
  • A plurality of functions possessed by one component in the above embodiment may be realized by a plurality of components, or a single function possessed by one component may be realized by a plurality of components. Likewise, a plurality of functions possessed by a plurality of components may be realized by a single component, or a single function realized by a plurality of components may be realized by a single component. Part of the configuration of the above embodiment may be omitted. At least part of the configuration of the above embodiment may also be added to, or substituted for, the configuration of another embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An in-vehicle device (3) comprises an image acquisition unit, a selection factor acquisition unit, a model selection unit, and an object recognition unit. The selection factor acquisition unit acquires a selection factor including at least one of a vehicle environment and a relative location of an object with respect to a camera (15). The model selection unit selects an image recognition model according to the selection factor. The object recognition unit performs a process for recognizing an object by using the selected image recognition model to perform image recognition. If an object is recognized, a notification is sent to a cloud (5) via a communication unit.

Description

In-vehicle device, object notification method, program, and vehicle system

Cross-reference to related applications

This international application claims priority based on Japanese Patent Application No. 2021-104882, filed with the Japan Patent Office on June 24, 2021, and the entire contents of Japanese Patent Application No. 2021-104882 are incorporated by reference into this international application.

The present disclosure relates to an in-vehicle device, an object notification method, a program, and a vehicle system.

A vehicle external environment recognition device is disclosed in Patent Document 1. The vehicle external environment recognition device photographs the surroundings of the vehicle and generates an image. The device performs image recognition on the image to recognize an object, and switches the control parameters of the camera according to the external environment of the vehicle.

Patent Document 1: JP 2014-178836 A
It is conceivable to perform image recognition on an image obtained by photographing the vehicle interior with a camera, and thereby recognize an object in the vehicle interior. As a result of detailed studies by the inventors, the following problem was found to arise in this case. The accuracy of image recognition is affected by the environment inside the vehicle, the distance from the camera to the object, and the like. Therefore, even if the control parameters of the camera are switched as in the technique described in Patent Document 1, the accuracy of image recognition may be low.

In one aspect of the present disclosure, it is preferable to provide an in-vehicle device, an object notification method, a program, and a vehicle system capable of recognizing an object in a vehicle interior with high accuracy.

One aspect of the present disclosure is an in-vehicle device. The in-vehicle device is configured to communicate, via a communication unit, with a cloud that can communicate with a user's mobile terminal. The in-vehicle device includes: an image acquisition unit configured to acquire an image of the interior of a vehicle captured by a camera; a selection factor acquisition unit configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; a model selection unit configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and an object recognition unit configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby performing a process of recognizing the object. When the object recognition unit recognizes an object that satisfies a predetermined condition, the in-vehicle device notifies the cloud, via the communication unit, of information on the recognition result of the object that leads to notification to the mobile terminal.

An in-vehicle device, which is one aspect of the present disclosure, can select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, the in-vehicle device can recognize objects in the vehicle interior with high accuracy. As a result, the in-vehicle device can accurately notify the mobile terminal.

Another aspect of the present disclosure is an object notification method: acquiring an image of the interior of a vehicle captured by a camera; acquiring a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; selecting an image recognition model according to the selection factor; performing image recognition on the image using the selected image recognition model, thereby performing a process of recognizing the object; and, when an object that satisfies a predetermined condition is recognized, notifying a cloud that can communicate with a user's mobile terminal, via a communication unit, of information on the recognition result of the object that leads to notification to the mobile terminal.

According to the object notification method, which is another aspect of the present disclosure, it is possible to select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, objects in the vehicle interior can be recognized with high accuracy. As a result, it is possible to accurately notify the mobile terminal.
Another aspect of the present disclosure is a vehicle system including a cloud capable of communicating with a user's mobile terminal and an in-vehicle device configured to communicate with the cloud via a communication unit.

The in-vehicle device includes: an image acquisition unit configured to acquire an image of the interior of a vehicle captured by a camera; a selection factor acquisition unit configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object in the vehicle interior; a model selection unit configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and an object recognition unit configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby performing a process of recognizing the object.

When the object recognition unit recognizes an object that satisfies a predetermined condition, the in-vehicle device notifies the cloud of information regarding the recognition result of the object via the communication unit.

The cloud is configured to notify the mobile terminal based on the information about the recognition result of the object notified from the in-vehicle device.

A vehicle system, which is another aspect of the present disclosure, can select an appropriate image recognition model and perform image recognition according to the vehicle environment and the relative position of the object. Therefore, the vehicle system can recognize objects in the vehicle interior with high accuracy. As a result, the vehicle system can accurately notify the mobile terminal.

FIG. 1 is a block diagram showing the configuration of a mobility IoT system. FIG. 2 is a block diagram showing the functional configuration of the control unit of the in-vehicle device. FIG. 3 is an explanatory diagram showing the arrangement of the sensor, camera, and object in the vehicle interior. FIG. 4 is a flowchart showing the processing executed by the in-vehicle device.
Exemplary embodiments of the present disclosure are described with reference to the drawings.

<First embodiment>

1. Configuration of Mobility IoT System 1

The configuration of the mobility IoT system 1 will be described based on FIG. 1. IoT is an abbreviation for Internet of Things. The mobility IoT system 1 includes an in-vehicle device 3, a cloud 5, and a service providing server 7. Although only one in-vehicle device 3 is shown in FIG. 1 for convenience, the mobility IoT system 1 includes, for example, a plurality of in-vehicle devices 3. The multiple in-vehicle devices 3 are mounted in different vehicles 9. A combination of the in-vehicle device 3 and the cloud 5 corresponds to a vehicle system.

The in-vehicle device 3 can communicate with the cloud 5 via a communication device 19 mounted on the vehicle 9. Detailed configurations of the in-vehicle device 3 and the vehicle 9 will be described later.

The cloud 5 can communicate with the in-vehicle device 3, the service providing server 7, and the mobile terminal 23. The mobile terminal 23 is, for example, a mobile terminal owned by the user of the vehicle 9. Examples of the mobile terminal 23 include a smartphone, a tablet terminal, and a notebook PC.

The cloud 5 includes a control unit 25, a communication unit 27, and a storage unit 29. The control unit 25 includes a CPU 31 and a semiconductor memory such as RAM or ROM (hereinafter referred to as memory 33). The functions of the control unit 25 are implemented by the CPU 31 executing programs stored in the memory 33. Also, by executing a program, a method corresponding to the program is executed. The communication unit 27 can perform wireless communication with the communication device 19 and the mobile terminal 23. The storage unit 29 can record information.

The service providing server 7 can communicate with the cloud 5. The service providing server 7 is, for example, a server installed to provide a service for managing the operation of the vehicles 9. The mobility IoT system 1 may include a plurality of service providing servers 7 that provide different services.

The cloud 5 collects the data of the vehicles 9 transmitted from each of the multiple in-vehicle devices 3 via the communication devices 19, and stores the collected data in the storage unit 29 for each vehicle 9.

The cloud 5 creates a digital twin based on the data of the vehicles 9 stored in the storage unit 29. A digital twin is normalized index data. The service providing server 7 can acquire the data of a given vehicle stored in the storage unit 29 using the index data acquired from the digital twin. The service providing server 7 determines the control details of the vehicle 9 and transmits instructions corresponding to the control details to the cloud 5. The cloud 5 transmits the control details to the vehicle 9 based on the instructions.
 2.車載装置3及び車両9の構成
 車載装置3及び車両9の構成を、図1~図3に基づき説明する。図1に示すように、車載装置3は、制御部35と、記憶部37と、時計38と、を備える。制御部35は、CPU39と、例えば、RAM又はROM等の半導体メモリ(以下、メモリ41とする)とを備える。制御部35の機能は、メモリ41に格納されたプログラムをCPU39が実行することにより実現される。また、このプログラムが実行されることで、プログラムに対応する方法が実行される。
2. Configurations of In-vehicle Device 3 and Vehicle 9 Configurations of the in-vehicle device 3 and vehicle 9 will be described with reference to FIGS. 1 to 3. FIG. As shown in FIG. 1, the in-vehicle device 3 includes a control unit 35, a storage unit 37, and a clock 38. The control unit 35 includes a CPU 39 and a semiconductor memory such as RAM or ROM (hereinafter referred to as memory 41). The functions of the control unit 35 are realized by the CPU 39 executing programs stored in the memory 41 . Also, by executing this program, a method corresponding to the program is executed.
 記憶部37は情報を記憶することができる。記憶部37には、複数の画像認識モデルが記憶されている。画像認識モデルは、車両9に搭載されたカメラ15を用いて生成した画像に対し画像認識を行い、物体57を識別するために使用される。時計38は、時刻を表す時刻情報を制御部35に送信する。 The storage unit 37 can store information. The storage unit 37 stores a plurality of image recognition models. The image recognition model is used to recognize the object 57 by performing image recognition on an image generated using the camera 15 mounted on the vehicle 9 . The clock 38 transmits time information representing time to the control unit 35 .
 図2に示すように、制御部35の機能的な構成は、画像取得ユニット43と、通信ユニット45と、選択要因取得ユニット47と、モデル選択ユニット49と、物体認識ユニット51と、車両状態取得ユニット53と、を含む。 As shown in FIG. 2, the functional configuration of the control unit 35 includes an image acquisition unit 43, a communication unit 45, a selection factor acquisition unit 47, a model selection unit 49, an object recognition unit 51, and a vehicle state acquisition unit. a unit 53;
 画像取得ユニット43は、カメラ15を用いて車両9の車室55内を撮影し、画像を取得する。通信ユニット45は、クラウド5との間で通信を行う。 The image acquisition unit 43 uses the camera 15 to photograph the inside of the cabin 55 of the vehicle 9 and acquires the image. The communication unit 45 communicates with the cloud 5 .
 選択要因取得ユニット47は、選択要因を取得する。選択要因は、車両9の環境である車両環境、及び、車両9の車室55内にある物体57の、カメラ15を基準とする相対位置(以下では物体の相対位置とする)の少なくとも一方を含む。選択要因は、カメラ15が撮影して生じる画像に対して画像認識を行った場合の物体認識精度に影響する。 The selection factor acquisition unit 47 acquires selection factors. The selection factor is at least one of the vehicle environment, which is the environment of the vehicle 9, and the relative position of the object 57 in the cabin 55 of the vehicle 9 with respect to the camera 15 (hereinafter referred to as the relative position of the object). include. The selection factor affects object recognition accuracy when image recognition is performed on an image captured by the camera 15 .
 車両環境は、カメラ15が撮影して生じる画像に対して画像認識を行った場合の認識精度に影響する車両環境である。車両環境は、例えば、車室55内の明るさ、車両9の周囲の明るさ、車室55に光が入射する方向等に影響する車両環境である。車両環境として、例えば、車両9の位置、車両9の向き又は進行方向、現在時刻、天候、及びルームランプ17の状態等が挙げられる。 The vehicle environment is the vehicle environment that affects the recognition accuracy when image recognition is performed on images captured by the camera 15 . The vehicle environment is, for example, the vehicle environment that affects the brightness in the vehicle interior 55, the brightness around the vehicle 9, the direction in which light enters the vehicle interior 55, and the like. The vehicle environment includes, for example, the position of the vehicle 9, the direction or direction of travel of the vehicle 9, the current time, the weather, and the state of the room lamps 17, and the like.
 車両9の位置、及び車両9の進行方向は、車室55内の明るさ、及び、車室55に光が入射する方向に影響する。現在時刻は、車室55内の明るさ、及び、車室55に光が入射する方向に影響する。天候は、車室55内の明るさに影響する。天候として、例えば、晴天、曇天、雨天、霧等が挙げられる。ルームランプ17の状態は、車室55内の明るさに影響する。ルームランプ17の状態として、例えば、点灯している状態、消灯している状態、光量が特定の範囲内である状態等が挙げられる。ルームランプ17が複数ある場合は、それぞれのルームランプ17の状態が車両環境に該当する。 The position of the vehicle 9 and the traveling direction of the vehicle 9 affect the brightness in the vehicle interior 55 and the direction in which light enters the vehicle interior 55 . The current time affects the brightness in the vehicle interior 55 and the direction in which light enters the vehicle interior 55 . The weather affects the brightness inside the vehicle compartment 55 . Weather includes, for example, fine weather, cloudy weather, rainy weather, fog, and the like. The state of the room lamp 17 affects the brightness in the passenger compartment 55 . The state of the room lamp 17 includes, for example, a lit state, an extinguished state, a state in which the amount of light is within a specific range, and the like. When there are a plurality of room lamps 17, the state of each room lamp 17 corresponds to the vehicle environment.
 物体の相対位置は、カメラ15から物体57までの距離と、カメラ15を基準とする物体57の方向とから構成される。物体の相対位置は、カメラ15が生成した画像における物体57の大きさに影響する。物体57がカメラ15から遠いほど、カメラ15が生成した画像において物体57は小さい。また、物体の相対位置に加え、上記車両環境として、例えば、車室55に光が入射する方向が分かると、物体57に光が照射している状態や、物体57の背後から光が照射し、逆光となっている状態等が分かる。物体の相対位置は、カメラ15が撮影して生じる画像に対して画像認識を行った場合の認識精度に影響する。 The relative position of the object consists of the distance from the camera 15 to the object 57 and the direction of the object 57 with respect to the camera 15. The relative position of the object affects the size of object 57 in the image produced by camera 15 . The farther the object 57 is from the camera 15, the smaller the object 57 is in the image produced by the camera 15. FIG. In addition to the relative positions of the objects, if the direction of light incident on the vehicle interior 55 is known as the vehicle environment, for example, the state in which the light is irradiated to the object 57 and the state in which the light is irradiated from behind the object 57 can be determined. , backlit conditions, etc. can be seen. The relative position of the object affects the recognition accuracy when image recognition is performed on an image captured by the camera 15 .
 記憶部37に記憶されている複数の画像認識モデルは、それぞれ、選択要因と対応付けられている。画像認識モデルは、例えば、それと対応付けられている選択要因において学習して得られた画像認識モデルである。画像認識モデルを、それと対応付けられている選択要因において使用すると、物体57を高精度に認識することができる。 A plurality of image recognition models stored in the storage unit 37 are each associated with a selection factor. The image recognition model is, for example, an image recognition model obtained by learning in the selection factor associated therewith. Using the image recognition model in its associated selection factors, the object 57 can be recognized with high accuracy.
 例えば、画像認識モデルを、それと対応付けられている選択要因に含まれる車両環境において使用すると、物体57を高精度に認識することができる。また、例えば、画像認識モデルを、それと対応付けられている選択要因に含まれる物体の相対位置において使用すると、物体57を高精度に認識することができる。 For example, if the image recognition model is used in the vehicle environment included in the selection factors associated with it, the object 57 can be recognized with high accuracy. Also, for example, if the image recognition model is used in the relative position of the object included in the selection factors associated with it, the object 57 can be recognized with high accuracy.
 モデル選択ユニット49は、選択要因取得ユニット47が取得した選択要因に応じて画像認識モデルを選択する。 The model selection unit 49 selects an image recognition model according to the selection factors acquired by the selection factor acquisition unit 47 .
 例えば、モデル選択ユニット49は、車両環境に応じて画像認識モデルを選択する。具体的には、モデル選択ユニット49は、夜間に車室55内が暗い場合は、暗いカメラ画像向けの画像認識モデルを選択する。また、モデル選択ユニット49は、昼間に車室55内が明るい場合、夕日によって車室55内が明るい場合、夜間であるがルームランプ17の点灯により車室55内が明るい場合等のそれぞれにおいて、場合に適した画像認識モデルを選択する。 For example, the model selection unit 49 selects an image recognition model according to the vehicle environment. Specifically, when the vehicle interior 55 is dark at night, the model selection unit 49 selects an image recognition model for dark camera images. In addition, the model selection unit 49 selects the following when the interior of the vehicle interior 55 is bright during the day, when the interior of the vehicle interior 55 is bright due to sunset, and when the interior of the vehicle interior 55 is bright due to lighting of the room lamp 17 at night. Select the appropriate image recognition model for each case.
 また、例えば、モデル選択ユニット49は、物体の相対位置に応じて画像認識モデルを選択する。具体的には、モデル選択ユニット49は、カメラ15から物体57までの距離が遠い場合は、物体57が小さく撮影されたカメラ画像向けの画像認識モデルを選択する。また、モデル選択ユニット49は、カメラ15から物体57までの距離が近い場合は、物体57が大きく撮影されたカメラ画像向けの画像認識モデルを選択する。また、モデル選択ユニット49は、物体57がカメラ15の正面にある場合、物体57が、カメラ15を基準として斜めや側方に近い場合等のそれぞれにおいて、場合に適した画像認識モデルを選択する。選択要因として、車両環境及び物体の相対位置を用いて画像認識モデルを選択することで、物体をさらに高精度に認識することができる。 Also, for example, the model selection unit 49 selects an image recognition model according to the relative position of the object. Specifically, when the distance from the camera 15 to the object 57 is long, the model selection unit 49 selects an image recognition model for camera images in which the object 57 is small. Also, when the distance from the camera 15 to the object 57 is short, the model selection unit 49 selects an image recognition model for camera images in which the object 57 is captured large. In addition, the model selection unit 49 selects an image recognition model suitable for each case, such as when the object 57 is in front of the camera 15 and when the object 57 is oblique or close to the side of the camera 15. . By selecting the image recognition model using the vehicle environment and the relative position of the object as selection factors, the object can be recognized with higher accuracy.
 The object recognition unit 51 performs image recognition on the image acquired by the image acquisition unit 43 using the image recognition model selected by the model selection unit 49, thereby recognizing the object 57.
 The vehicle state acquisition unit 53 acquires a signal representing the state of the vehicle 9 from the vehicle ECU 11. The state of the vehicle 9 includes, for example, engine on, start of running, stop, and key lock.
 As shown in FIG. 1, the vehicle 9 includes, in addition to the in-vehicle device 3, a vehicle ECU 11, a sensor 13, a camera 15, a room lamp 17, a communicator 19, and a navigation system 21. The in-vehicle device 3 can communicate with each of them.
 The vehicle ECU 11 detects the state of the vehicle 9 and transmits a signal representing the detected state to the vehicle state acquisition unit 53. For example, the vehicle ECU 11 detects the speed of the vehicle 9. When the speed changes from 0 km/h to a threshold or more, the vehicle ECU 11 transmits a signal indicating the start of running to the vehicle state acquisition unit 53; when the speed changes from the threshold or more to 0 km/h, it transmits a signal indicating a stop.
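 This transition logic can be sketched as a small edge detector over successive speed samples; the threshold value and the event names are assumptions, since the disclosure leaves them unspecified.

    RUN_START_THRESHOLD_KMH = 5.0   # assumed value for "threshold or more"

    class SpeedMonitor:
        # Emits "run_start" / "stop" events from speed samples, mirroring
        # the signals the vehicle ECU 11 sends (a sketch, not ECU code).
        def __init__(self) -> None:
            self.running = False

        def update(self, kmh: float):
            if not self.running and kmh >= RUN_START_THRESHOLD_KMH:
                self.running = True
                return "run_start"
            if self.running and kmh == 0.0:
                self.running = False
                return "stop"
            return None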
 The vehicle ECU 11 also detects, for example, the shift position. When the shift position changes to parking, the vehicle ECU 11 transmits a signal indicating a stop to the vehicle state acquisition unit 53.
 As shown in FIG. 3, the sensor 13 and the camera 15 are each installed inside the vehicle interior 55 of the vehicle 9. The sensor 13 is located close to the camera 15; for example, both are installed at the top of the windshield or near the rearview mirror. The relative position of the camera 15 with respect to the sensor 13 is fixed.
 The sensor 13 is, for example, a millimeter-wave radar. The sensor 13 can detect an object 57 in the vehicle interior 55 by comparing its detection result with reference data, i.e., the detection result obtained when no object 57 is present in the interior 55. Examples of the object 57 include things (excluding people and animals), people, and animals. A thing is, for example, an item that the user of the vehicle 9 can carry. A person is, for example, an infant. The sensor 13 can determine the presence or absence of an object and calculate its relative position.
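 A minimal sketch of the reference-data comparison, assuming the radar output can be treated as a one-dimensional array of range-bin intensities (the array interpretation and the threshold are assumptions):

    import numpy as np

    def detect_object(scan: np.ndarray, reference: np.ndarray,
                      threshold: float = 0.2):
        # Compare the current scan against the empty-cabin reference data.
        # Returns (object_present, bin_of_strongest_deviation); the deviating
        # bin corresponds to a range/angle cell, i.e. the relative position.
        diff = np.abs(scan - reference)
        idx = int(np.argmax(diff))
        return bool(diff[idx] > threshold), idx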
 The camera 15 is installed inside the vehicle interior 55. Its imaging range covers the parts of the interior 55 where an object 57 is likely to be placed, for example part or all of the driver's seat, front passenger seat, rear seats, and dashboard.
 One or more room lamps 17 are installed in the vehicle interior 55. A room lamp 17 can be turned on and off, and its light level can be adjusted. The room lamp 17 transmits a signal representing its state to the control unit 35.
 The communicator 19 can communicate with the communication unit 27 of the cloud 5. The navigation system 21 has the functions of an ordinary navigation system and can acquire the position and traveling direction of the vehicle 9 based on data received from positioning satellites.
 3. Processing Executed by the In-vehicle Device 3
 The processing executed by the in-vehicle device 3 will be described with reference to FIG. 4. The processing shown in FIG. 4 is executed, for example, when the in-vehicle device 3 wakes from a sleep state. The in-vehicle device 3 wakes, for example, as follows: when a door of the vehicle 9 is unlocked, the vehicle ECU 11 starts up, and the started vehicle ECU 11 activates the in-vehicle device 3. The in-vehicle device 3 returns to the sleep state when the processing shown in FIG. 4 is completed.
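 This wake/sleep lifecycle might be organized around a blocking event queue, as in the sketch below; the event name and the run_pass callback are hypothetical stand-ins for the ECU signal and the FIG. 4 processing.

    from queue import Queue

    def lifecycle(events: Queue, run_pass) -> None:
        # Block (sleep) until the ECU, woken by a door unlock, starts the
        # device; run one monitoring pass, then block again (sleep state).
        while True:
            if events.get() == "door_unlock":
                run_pass()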
 In step 1 of FIG. 4, the vehicle state acquisition unit 53 determines, based on the signals acquired from the vehicle ECU 11, whether the engine of the vehicle 9 has been turned on and whether the vehicle 9 has started running. If both are determined to be true, the process proceeds to step 2; otherwise, the process returns to before step 1.
 In step 2, the selection factor acquisition unit 47 acquires the relative position of the object using the sensor 13.
 In step 3, the selection factor acquisition unit 47 acquires the vehicle environment. For example, it acquires the position and traveling direction of the vehicle 9 using the navigation system 21, the current time using the clock 38, and the weather, for example the current weather at the position of the vehicle 9, by communicating with the cloud 5 via the communicator 19. It also acquires the state of the room lamp 17 from the room lamp 17. The selection factor acquisition unit 47 may further, for example, photograph the interior 55 with the camera 15, analyze the image, and detect the brightness inside the interior 55. The navigation system 21, the clock 38, and the room lamp 17 correspond to devices mounted on the vehicle 9; the cloud 5 corresponds to the exterior of the vehicle 9. The selection factor acquisition unit 47 may perform only one of steps 2 and 3.
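 Step 3 can be pictured as aggregating the environment from several sources into one record. The interfaces below (navi.position(), cloud.weather_at(), and so on) are hypothetical stand-ins, not APIs from the disclosure.

    def acquire_vehicle_environment(navi, clock, cloud, lamp) -> dict:
        # Gather the vehicle environment from on-board devices and the cloud.
        position = navi.position()              # from positioning satellites
        return {
            "position": position,
            "heading": navi.heading(),
            "time": clock.now(),                # from clock 38
            "weather": cloud.weather_at(position),  # via communicator 19
            "room_lamp": lamp.state(),          # lit/unlit and light level
        }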
 In step 4, the model selection unit 49 selects an image recognition model according to the selection factor, which here includes both the relative position of the object acquired in step 2 and the vehicle environment acquired in step 3.
 As described above, each of the image recognition models stored in the storage unit 37 is associated with a selection factor. The model selection unit 49 selects, from among the image recognition models stored in the storage unit 37, the one associated with the acquired selection factor.
 In step 5, the vehicle state acquisition unit 53 determines, based on the signal acquired from the vehicle ECU 11, whether the vehicle 9 has finished traveling. If so, the process proceeds to step 6; if not, the process returns to step 2.
 In step 6, the vehicle state acquisition unit 53 determines, based on the signals acquired from the vehicle ECU 11, whether the vehicle 9 has stopped and whether the key lock has been turned on. If both are determined to be true, the process proceeds to step 7; otherwise, the process returns to before step 6.
 In step 7, the image acquisition unit 43 photographs the interior 55 with the camera 15 and acquires an image.
 In step 8, the object recognition unit 51 performs image recognition on the image acquired in step 7 using the image recognition model selected in step 4, thereby recognizing the object 57.
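 As one concrete possibility, the selected model file could be applied with an off-the-shelf runtime. The sketch assumes ONNX-format models and an input tensor named "input"; both are illustrative choices, not specified by the disclosure.

    import numpy as np
    import onnxruntime as ort

    def recognize(model_path: str, image: np.ndarray):
        # Run the selected image recognition model on the cabin image.
        # Assumes an NCHW float32 input; returns the first output tensor,
        # e.g. class scores for child / pet / baggage / none.
        session = ort.InferenceSession(model_path)
        return session.run(None, {"input": image.astype(np.float32)})[0]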
 In step 9, the object recognition unit 51 determines whether a specific object 57, for example a child, an animal, or baggage, was recognized in step 8. If so, the process proceeds to step 10; if not, the process ends. The specific object 57 corresponds to an object satisfying the predetermined condition.
 In step 10, the communication unit 45 transmits information about the specific object 57 to the cloud 5 so that the cloud 5 notifies the portable terminal 23. The processing from step 1 up to this point corresponds to the notification method. Upon receiving the information, the cloud 5 notifies the portable terminal 23 that a child, pet, or baggage remains in the vehicle interior 55. The information transmitted by the communication unit 45 corresponds to the information on the recognition result of the object 57 that leads to notification to the portable terminal 23. The portable terminal 23, for example, displays a notification image or produces sound or vibration, from which the user of the vehicle 9 learns that an object 57 is in the interior 55. The communication unit 45 may also transmit, for example, the camera image acquired in step 7, or a processed version of it, to the cloud 5, which forwards it to the portable terminal 23. By viewing the displayed image, the user of the vehicle 9 can see that the object 57 is in the interior 55 and what the object 57 is.
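 The notification of step 10 might carry a small structured payload plus the camera image. The field names and the send() interface are assumptions made for the sketch.

    import json

    def notify_cloud(comm, detection: dict, camera_jpeg: bytes) -> None:
        # Send the recognition result to the cloud 5, which relays a
        # notification (and optionally the image) to the portable terminal 23.
        payload = {
            "event": "object_left_in_cabin",
            "object_class": detection["label"],   # e.g. "child", "pet", "baggage"
            "confidence": detection["score"],
        }
        comm.send(json.dumps(payload).encode("utf-8"), attachment=camera_jpeg)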
 4. Effects of the In-vehicle Device 3
 (1A) The in-vehicle device 3 photographs the interior 55 with the camera 15 and acquires an image. It acquires a selection factor, selects an image recognition model according to the selection factor, performs image recognition on the acquired image using the selected model, and thereby recognizes the object 57. When it recognizes the object 57, the in-vehicle device 3 notifies the portable terminal 23 via the cloud 5.
 The in-vehicle device 3 can select an appropriate image recognition model according to the vehicle environment and the relative position of the object and then perform image recognition. It can therefore recognize the object 57 with high accuracy and, as a result, notify the portable terminal 23 accurately.
 (1B) The in-vehicle device 3 acquires the selection factor in steps 2 and 3 and selects the image recognition model in step 4, both before acquiring the image in step 7.
 The time from acquiring the image to completing image recognition is therefore shorter than if the selection factor were acquired, or the image recognition model selected, only after the image had been acquired.
 (1C) The selection factor includes the position of the vehicle 9, its traveling direction, the current time, and the weather, so the in-vehicle device 3 can select an image recognition model appropriate to each of them.
 (1D) The selection factor includes the state of the room lamp 17, so the in-vehicle device 3 can select an image recognition model appropriate to that state.
 (1E) The in-vehicle device 3 acquires the vehicle environment from outside the vehicle 9, for example from the cloud 5. It can therefore acquire vehicle environment information that would be difficult to obtain within the vehicle 9.
 (1F) The in-vehicle device 3 acquires the vehicle environment from devices mounted on the vehicle 9, for example the in-vehicle device 3 itself, the room lamp 17, the navigation system 21, and the clock 38. It can therefore acquire a wide variety of vehicle environment information.
 (1G) The in-vehicle device 3 acquires the relative position of the object using the sensor 13, and can therefore acquire the relative position accurately and easily.
 <Other Embodiments>
 Although embodiments of the present disclosure have been described above, the present disclosure is not limited to these embodiments and can be implemented with various modifications.
 (1) In the first embodiment, the selection factor included both the vehicle environment and the relative position of the object. The selection factor may include the vehicle environment but not the relative position; in that case as well, the in-vehicle device 3 can select an appropriate image recognition model according to the vehicle environment and perform image recognition.
 Conversely, the selection factor may include the relative position of the object but not the vehicle environment; in that case as well, the in-vehicle device 3 can select an appropriate image recognition model according to the relative position and perform image recognition.
 The vehicle environment need not include one or more of the position of the vehicle 9, its traveling direction, the time, and the weather, and it may further include other elements.
 (2) In the first embodiment, the in-vehicle device 3 acquired the image when the vehicle 9 stopped and the key lock was turned on. The image may be acquired at other timings, for example before the engine is turned on, while idling, while traveling, or during the period from stopping until the key lock is turned on. The same effects as in the first embodiment can still be obtained.
 (3) In the first embodiment, the in-vehicle device 3 acquired the selection factor in steps 2 and 3 and selected the image recognition model in step 4 before acquiring the image in step 7.
 The in-vehicle device 3 may instead acquire the selection factor and select the image recognition model after acquiring the image, or perform the processing in the order of selection factor acquisition, image acquisition, and model selection. Even in these cases, the in-vehicle device 3 achieves effects (1A) and (1C) to (1G) of the first embodiment.
 (4) In the first embodiment, the relative position of the object was acquired using the sensor 13, a millimeter-wave radar. The sensor 13 may be a sensor other than a millimeter-wave radar, and the relative position may be acquired by another method, for example based on the image from the camera 15.
 (5) In step 10, the in-vehicle device 3 may perform other processing in addition to notifying the portable terminal 23, for example sounding the horn of the vehicle 9. If the object 57 is a human infant, the in-vehicle device 3 may, in step 10, operate the air conditioner of the vehicle 9 to lower the temperature of the interior 55, open the windows, unlock the doors, and so on.
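 A dispatch over the recognized class could look like the following sketch; every vehicle-control call here is a hypothetical stand-in, not an interface from the disclosure.

    def respond_to_detection(vehicle, detection: dict) -> None:
        # Local responses in addition to notifying the portable terminal 23.
        vehicle.sound_horn()
        if detection["label"] == "child":
            vehicle.start_air_conditioner()   # lower the cabin temperature
            vehicle.open_windows()
            vehicle.unlock_doors()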
 (6) The in-vehicle device 3 may incorporate one or more of the functions of the sensor 13, the camera 15, the communicator 19, and the navigation system 21.
 (7) The selection factor acquisition unit 47 may acquire the selection factor, and the model selection unit 49 may select the image recognition model, while the vehicle 9 is traveling. The object recognition unit 51 may then perform the object recognition processing after the vehicle 9 has traveled, while the vehicle 9 is parked. In this case, a notification can be issued when an object 57 has been left behind in the parked vehicle 9.
 (8) The control unit 35 and its method described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Alternatively, they may be realized by one or more dedicated computers configured by combining a processor and memory programmed to execute one or more functions with a processor configured by one or more hardware logic circuits. The method of realizing the functions of the units included in the control unit 35 need not involve software; all of the functions may be realized using one or more pieces of hardware.
 (9) A plurality of functions of one component in the above embodiment may be realized by a plurality of components, or one function of one component may be realized by a plurality of components. Conversely, a plurality of functions of a plurality of components may be realized by one component, or one function realized by a plurality of components may be realized by one component. Part of the configuration of the above embodiment may be omitted, and at least part of the configuration of one embodiment may be added to or substituted for the configuration of another embodiment.
 (10) Besides the in-vehicle device 3 described above, the present disclosure can also be realized in various forms, such as a system having the in-vehicle device 3 as a component, a program for causing a computer to function as the control unit 35 of the in-vehicle device 3, a non-transitory tangible recording medium such as a semiconductor memory storing the program, and an object recognition method.

Claims (11)

  1.  An in-vehicle device (3) configured to communicate, via a communication unit (45), with a cloud (5) capable of communicating with a user's portable terminal (23), the in-vehicle device comprising:
     an image acquisition unit (43) configured to acquire an image of the interior (55) of a vehicle (9) captured by a camera (15);
     a selection factor acquisition unit (47) configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object (57) in the vehicle interior;
     a model selection unit (49) configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and
     an object recognition unit (51) configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby recognizing the object,
     wherein, when the object recognition unit recognizes the object satisfying a predetermined condition, the in-vehicle device notifies the cloud, via the communication unit, of information on the recognition result of the object that leads to notification to the portable terminal.
  2.  The in-vehicle device according to claim 1, wherein
     the selection factor acquisition unit is configured to acquire the selection factor before the image acquisition unit acquires the image, and
     the model selection unit is configured to select the image recognition model before the image acquisition unit acquires the image.
  3.  The in-vehicle device according to claim 1, wherein
     the selection factor acquisition unit is configured to acquire the selection factor while the vehicle is traveling,
     the model selection unit is configured to select the image recognition model while the vehicle is traveling, and
     the object recognition unit is configured to perform the processing of recognizing the object after the vehicle has traveled, while the vehicle is parked.
  4.  The in-vehicle device according to any one of claims 1 to 3, wherein
     the selection factor includes the vehicle environment, which includes one or more selected from the group consisting of the position of the vehicle, the traveling direction of the vehicle, the current time, and the weather.
  5.  The in-vehicle device according to any one of claims 1 to 4, wherein
     the selection factor includes the vehicle environment, which includes the state of a room lamp (17) provided in the vehicle interior.
  6.  The in-vehicle device according to any one of claims 1 to 5, wherein
     the selection factor acquisition unit is configured to acquire the vehicle environment from the outside (5) of the vehicle.
  7.  The in-vehicle device according to any one of claims 1 to 6, wherein
     the selection factor acquisition unit is configured to acquire the vehicle environment from a device (3, 17, 21) mounted on the vehicle.
  8.  The in-vehicle device according to any one of claims 1 to 7, wherein
     the selection factor acquisition unit is configured to acquire the relative position using a sensor (13).
  9.  A notification method for an object, comprising:
     acquiring an image of the interior (55) of a vehicle (9) captured by a camera (15);
     acquiring a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object (57) in the vehicle interior;
     selecting an image recognition model according to the selection factor;
     performing image recognition on the image using the selected image recognition model, thereby recognizing the object; and
     when the object satisfying a predetermined condition is recognized, notifying a cloud (5) capable of communicating with a user's portable terminal (23), via a communication unit, of information on the recognition result of the object that leads to notification to the portable terminal.
  10.  A program causing a computer to function as the control unit of the in-vehicle device according to any one of claims 1 to 8.
  11.  A system for a vehicle, comprising a cloud (5) capable of communicating with a user's portable terminal (23) and an in-vehicle device (3) configured to communicate with the cloud via a communication unit (45), wherein
     the in-vehicle device comprises:
      an image acquisition unit (43) configured to acquire an image of the interior (55) of a vehicle (9) captured by a camera (15);
      a selection factor acquisition unit (47) configured to acquire a selection factor including at least one of a vehicle environment, which is the environment of the vehicle, and a relative position, with respect to the camera, of an object (57) in the vehicle interior;
      a model selection unit (49) configured to select an image recognition model according to the selection factor acquired by the selection factor acquisition unit; and
      an object recognition unit (51) configured to perform image recognition on the image acquired by the image acquisition unit using the image recognition model selected by the model selection unit, thereby recognizing the object,
      wherein, when the object recognition unit recognizes the object satisfying a predetermined condition, the in-vehicle device notifies the cloud, via the communication unit, of information on the recognition result of the object, and
     the cloud is configured to notify the portable terminal based on the information on the recognition result of the object notified from the in-vehicle device.
PCT/JP2022/023959 2021-06-24 2022-06-15 In-vehicle device, notification method for object, program, and system for vehicle WO2022270379A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-104882 2021-06-24
JP2021104882A JP7439797B2 (en) 2021-06-24 2021-06-24 In-vehicle device, object notification method, and program

Publications (1)

Publication Number Publication Date
WO2022270379A1 (en)

Family

ID=84544289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/023959 WO2022270379A1 (en) 2021-06-24 2022-06-15 In-vehicle device, notification method for object, program, and system for vehicle

Country Status (2)

Country Link
JP (1) JP7439797B2 (en)
WO (1) WO2022270379A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002347476A (en) * 2001-05-28 2002-12-04 Mitsubishi Motors Corp Car-mounted display glare-proof mechanism
JP2014215877A (en) * 2013-04-26 2014-11-17 株式会社デンソー Object detection device
JP2019123421A (en) * 2018-01-18 2019-07-25 株式会社デンソー Occupant detection system, lighting control system, and vehicle interior lighting method
JP2019172191A (en) * 2018-03-29 2019-10-10 矢崎総業株式会社 Vehicle interior monitoring module, and, monitoring system
JP2021018593A (en) * 2019-07-19 2021-02-15 株式会社日立製作所 Information processing device for vehicle

Also Published As

Publication number Publication date
JP2023003665A (en) 2023-01-17
JP7439797B2 (en) 2024-02-28

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22828296

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE