WO2021189641A1 - Leftover Object Detection - Google Patents
Leftover Object Detection
- Publication number
- WO2021189641A1 WO2021189641A1 PCT/CN2020/093003 CN2020093003W WO2021189641A1 WO 2021189641 A1 WO2021189641 A1 WO 2021189641A1 CN 2020093003 W CN2020093003 W CN 2020093003W WO 2021189641 A1 WO2021189641 A1 WO 2021189641A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- cabin
- vehicle
- legacy
- reference image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R21/00—Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
- B60R21/01—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
- B60R21/015—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
- B60R21/01512—Passenger detection systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Definitions
- the present disclosure relates to the field of computer vision technology, and in particular to methods and devices for detecting leftover objects, and to vehicles.
- the present disclosure provides a method and device for detecting a leftover object, and a vehicle.
- a method for detecting a leftover object, comprising: acquiring a reference image of the cabin of a vehicle when there is no leftover object in the cabin; collecting a first image of the cabin when a person leaves the vehicle; and detecting, according to the first image and the reference image, objects left in the cabin when the person leaves the vehicle.
- a device for detecting a leftover object comprising: a first acquisition module configured to obtain a reference image in the cabin of a vehicle when there is no leftover object in the cabin;
- the first collection module is used to collect the first image in the cabin when the person leaves the vehicle;
- the detection module is used to detect, according to the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
- a computer device including a memory and a processor.
- a computer program is stored on the memory, and the computer program can be executed by the processor to implement the method described in any embodiment.
- a vehicle in which an image acquisition device is provided in the cabin, and a leftover object detection device or computer device according to any embodiment of the present disclosure is communicatively connected to the image acquisition device.
- a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the method described in any of the embodiments is implemented.
- a computer program product which implements the method described in any embodiment when the computer program product is read and executed by a computer.
- a computer program including computer-readable code, wherein when the computer-readable code is executed in an electronic device, a processor in the electronic device executes the method for realizing any embodiment of the present disclosure.
- the embodiment of the present disclosure acquires the reference image of the cabin of the vehicle when there are no objects left in the cabin, collects the first image of the cabin when the person leaves the vehicle, and detects, according to the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
- the above method can detect not only living bodies but also inanimate objects; it is simple, widely applicable, and has high detection accuracy.
- Fig. 1 is a flowchart of a method for detecting a leftover object according to an embodiment of the present disclosure.
- Fig. 2(A) is a schematic diagram of a first image of an embodiment of the present disclosure.
- Fig. 2(B) is a schematic diagram of a reference image of an embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of the relationship between the machine learning model and the image acquisition device according to an embodiment of the present disclosure.
- Fig. 4 is a schematic diagram of a message notification interface of a communication terminal according to an embodiment of the present disclosure.
- Fig. 5 is a block diagram of a device for detecting a leftover object according to an embodiment of the present disclosure.
- Fig. 6 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
- Fig. 7 is a schematic diagram of a vehicle according to an embodiment of the present disclosure.
- FIG. 8(A) is a schematic diagram of the distribution of the image acquisition device of the embodiment of the present disclosure.
- FIG. 8(B) is a schematic diagram of another distribution of the image acquisition device of the embodiment of the present disclosure.
- although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
- for example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
- the word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
- the embodiment of the present disclosure provides a method for detecting objects left in a vehicle. As shown in FIG. 1, the method may include the following steps 101-103.
- Step 101 Obtain a reference image in the cabin of the vehicle when there are no objects left in the cabin.
- Step 102 Collect a first image in the cabin when a person leaves the vehicle.
- Step 103 Detect objects left in the cabin when the person leaves the vehicle according to the first image and the reference image.
- the vehicle may be a motor vehicle, for example, a private car, a bus, a school bus, a truck, or a train, and may also be a means of transport used to carry people or goods, such as a ship or an airplane.
- the cabin of the vehicle may accordingly be a car cabin, a ship cabin, an aircraft cabin, or the like.
- a reference image in the cabin can be acquired when there are no objects left in the cabin.
- the reference image in the vehicle cabin may only include inherent objects in the vehicle cabin (for example, seats, steering wheels, vehicle interior trim, etc.), but not legacy objects.
- the reference image can be collected and stored when there are no objects left in the cabin, and can be used once or repeatedly, for example, reused whenever leftover object detection is required. It can be collected by the image acquisition device on the vehicle before the vehicle leaves the factory and stored in the storage unit of the vehicle, or it can be collected through the image acquisition device on the vehicle when the user confirms that there are no objects left in the cabin, and stored in the storage unit of the vehicle or on the user terminal.
- further, image compression processing can be performed on the original image collected in the cabin, and the compressed image can then be stored as the background image, thereby reducing storage space and improving image processing efficiency.
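- to make the compress-and-store step above concrete, the following is a minimal sketch using OpenCV; the JPEG quality and file path are illustrative assumptions, not values prescribed by the disclosure.

```python
import cv2

def store_reference_image(frame, path="reference.jpg", jpeg_quality=80):
    """Compress an in-cabin frame and store it as the reference (background) image."""
    ok, encoded = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    with open(path, "wb") as f:
        f.write(encoded.tobytes())

def load_reference_image(path="reference.jpg"):
    """Decode the stored, compressed reference image for later comparisons."""
    return cv2.imread(path)
```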
- the reference image can be collected once, and the collected reference image can be used as the reference image for each remaining object detection.
- the reference image can also be updated in a specific manner.
- the updating of the reference image in a specific manner may be to update the reference image at regular intervals (for example, one day or a week, etc.), or to update the reference image under the trigger of a specific event.
- the specific event may be the detection of a legacy object, the receipt of a reference image update instruction, for example, detection of a change in the background in the cabin or detection of a person about to enter the cabin of the vehicle (for example, vehicle unlocking), etc.
- the background image may be the background image of the entire cabin, or the background image of one or more areas in the cabin (for example, at least one of the driver's area, the front passenger area, the rear seats, the child seat area, the courtesy seat area, the trunk area, the luggage area, etc.).
- the collected image can be directly used as the background image, or the image can be cropped as needed, and the cropped image can be used as the background image.
- the number of background images can be one or more.
- the background image may be an image captured by the image capture device.
- the number of the background images may be greater than one, and each background image is acquired by one of the image acquisition devices.
- an image in the cabin can be taken and stored, so as to be used as a reference image for detecting the remaining objects when the person leaves the vehicle next time.
- the target image in the cabin can also be collected before the person enters the vehicle as a reference image for the detection of the leftover object when the person leaves the vehicle.
- Leftover objects can include items, such as wallets, keys, mobile phones, umbrellas, briefcases, and suitcases, and can also include living bodies, such as people and pets. The people can be children, the elderly, people who have fallen asleep, people with limited mobility, and other kinds of people who may be left in the car.
- the person may include any person on the vehicle, such as a driver, a crew member, or a passenger. Whether the person leaves the vehicle may be determined based on at least one of the opening status of the cabin door, the operating status of the vehicle, the movement trajectory of the person, and a specific instruction; a combined-signal sketch follows below. For example, when it is detected that the door is opened or the person's seat belt is unfastened, it is determined that the person gets off the vehicle. For another example, when it is detected that the vehicle is turned off and the door is opened, it is determined that the person gets off the vehicle. For another example, when it is detected that the movement trajectory of the person goes from inside the cabin to outside the cabin, it is determined that the person gets off the vehicle. For another example, when a confirmation instruction for the end of the trip is detected, it is determined that the person gets off the vehicle. It is also possible to determine whether the person leaves the vehicle in other ways, which are not repeated here.
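- as a sketch only, the rules above could be combined as follows; the signal names and the exact rule are assumptions, since the disclosure leaves the combination open.

```python
from dataclasses import dataclass

@dataclass
class VehicleSignals:
    engine_off: bool
    door_opened: bool
    seatbelt_unfastened: bool
    trip_end_confirmed: bool  # e.g., a confirmation instruction that the trip ended

def person_left_vehicle(s: VehicleSignals) -> bool:
    """True when the available cues indicate the person is leaving the cabin."""
    if s.trip_end_confirmed:
        return True
    if s.engine_off and s.door_opened:  # vehicle turned off and door opened
        return True
    return s.door_opened and s.seatbelt_unfastened
```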
- the first image in the cabin may include one image or multiple images.
- Each first image can be collected by an image acquisition device in the cabin; for example, one or more images can be taken as the first image from a video stream collected by the image acquisition device.
- objects left in the cabin when the person leaves the vehicle may be detected according to the first image and the reference image.
- the leftover objects may include living bodies and/or objects that are carried by the person into the cabin and are forgotten in the cabin when the person leaves the vehicle.
- in some embodiments, the numbers of first images and reference images are both greater than one, with each first image and each reference image corresponding to one sub-area of the cabin.
- the objects left in the cabin when the person leaves the vehicle may be detected according to each first image and its corresponding reference image. For example, assuming that the i-th first image and the i-th reference image both correspond to the i-th sub-area of the cabin, the objects left in the i-th sub-area when the person leaves the vehicle can be detected based on the i-th first image and the i-th reference image.
- the leftover object detection can be performed on all the sub-areas, or only on a part of the sub-areas.
- the operation of detecting leftover objects can be executed continuously, or can be triggered under certain circumstances, for example, actively triggered by the vehicle, or triggered by a user terminal that has previously established a communication connection with the vehicle.
- the user terminal may send a detection trigger instruction, and after receiving the detection trigger instruction, it may start to perform the operation of detecting the leftover object.
- the detection trigger instruction may also include a target detection category, so as to determine whether the image to be processed includes a leftover object of a specific category. For example, the user may find that a key is missing after getting off the car.
- in this case, the user can send a detection trigger instruction including the "key" category through a mobile phone to trigger the detection of leftover objects of the "key" category.
- the objects left in the cabin when the person leaves the vehicle may be detected based on the difference between the first image and the reference image. For example, at least one target object included in the first image but not included in the reference image may be determined as the leftover object.
- for example, if the first image includes a mobile phone, a child, a seat, and a pillow, while the reference image includes only the seat and the pillow, then the mobile phone and the child are included in the first image but not in the reference image, and the mobile phone and/or the child are determined as the objects left in the cabin when the person leaves the vehicle. In this way, leftover objects can be detected intuitively, and the implementation is simple and low-cost; a sketch of this difference-based detection follows below.
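- a minimal sketch of this image-difference idea using OpenCV; the threshold and minimum region area are illustrative assumptions, and a deployed system would tune or learn them.

```python
import cv2

def candidate_leftover_regions(first_img, reference_img, thresh=35, min_area=400):
    """Return bounding boxes of regions present in the first image but not the reference."""
    a = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(reference_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(a, b)  # pixel-wise difference between first and reference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress sensor noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```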
- the target object in the reference image may be obtained by annotating the reference image, or may be obtained by detecting the reference image.
- the target object may be detected once from the reference image each time the difference between the first image and the reference image is determined, or the historically detected target object may be directly used as the target object.
- a second image in the cabin when the person enters the vehicle may be collected; the remaining object to be detected is determined according to the difference between the second image and the reference image .
- the situation of the person entering the vehicle may include the moment when the person enters the vehicle, or any time after the person has entered the vehicle and before the person leaves it.
- the second image may be collected when the vehicle is about to reach the destination.
- whether the vehicle is about to reach the destination can be determined through an application, for example, a map application or a car-hailing application.
- At least one target object included in the second image but not included in the reference image may be determined as the remaining object to be detected.
- for example, if the second image captured when person A gets in the car includes a mobile phone, and there was already a set of keys in the car before person A got in, then only the mobile phone is taken as the leftover object to be detected for person A.
- if the first image taken when person A gets off the car includes the mobile phone, it is determined that there is an object left in the car when person A gets off; if it does not include the mobile phone, it is determined that there is no object left in the car when person A gets off.
- when multiple persons enter the vehicle, a second image of the cabin can be collected separately for each person, and the leftover objects to be detected for each person are determined according to the difference between the second image corresponding to that person and the reference image.
- in some embodiments, a second image of the cabin when the person enters the vehicle may be collected; based on the reference image and the second image, the position of the leftover object to be detected in the first image is determined; and based on that position, the leftover object is detected from the leftover objects to be detected in the first image.
- the position of the left object to be detected can be roughly determined first, and then the left object detection can be performed based on the position, thereby improving the detection efficiency.
- the second image and the reference image may be input to a pre-trained first machine learning model; the leftover object to be detected (called the suspected leftover object) and its position in the second image can be determined according to the result of the first machine learning model, and the position of the suspected leftover object in the first image can then be determined according to its position in the second image.
- the machine learning model may use a neural network, or may combine a neural network with a traditional vision algorithm (for example, an optical flow method, an image sharpening method, an image difference algorithm, or a Carter tracking algorithm).
- the neural network in the embodiment of the present disclosure may include an input layer, at least one intermediate layer, and an output layer, and the input layer, at least one intermediate layer, and output layer each include one or more neurons.
- the intermediate layer usually refers to a layer located between the input layer and the output layer, such as a hidden layer.
- the intermediate layers of the neural network may include, but are not limited to, at least one of a convolutional layer, a ReLU (Rectified Linear Unit) layer, etc.; the more intermediate layers the neural network includes, the deeper the network.
- the neural network may specifically be a deep neural network or a convolutional neural network.
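- purely to illustrate the input/intermediate/output structure described above, a minimal PyTorch sketch follows; the framework, layer sizes, and five output categories are assumptions, as the disclosure does not prescribe an architecture.

```python
import torch
import torch.nn as nn

# Input layer -> convolutional/ReLU intermediate layers -> output layer.
suspected_object_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # input layer over RGB frames
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # a deeper intermediate layer
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 5),  # output layer: scores for five illustrative object categories
)

scores = suspected_object_net(torch.randn(1, 3, 224, 224))  # one dummy cabin frame
```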
- target tracking may be performed on the legacy object to be detected; and the position of the legacy object to be detected in the first image may be determined according to the result of target tracking.
- by performing target tracking on the objects that may be forgotten in the cabin (that is, the suspected leftover objects), the positions of the suspected leftover objects can be determined more accurately, and the leftover objects can then be detected based on those positions, thereby improving detection efficiency and accuracy.
- when there are multiple image acquisition devices, each image acquisition device can correspond to one machine learning model, and each machine learning model is used to detect the second images collected by its corresponding image acquisition device.
- for example, the second image and background image captured by image capture device 1 can be input to machine learning model 1 to detect suspected leftover objects in the second image captured by image capture device 1; the second image and background image collected by image acquisition device 2 can be input to machine learning model 2 to detect suspected leftover objects in the second image collected by image acquisition device 2; and so on.
- alternatively, one machine learning model can be shared to detect the images collected by multiple different image acquisition devices, which is not limited in the present disclosure.
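- the per-device model routing can be pictured as below; the camera identifiers and detector type are placeholders for whatever models an implementation actually trains.

```python
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]
Detector = Callable[[object, object], List[Box]]  # (second_image, background) -> boxes

# One detector per image acquisition device (or a single shared entry for all).
detectors: Dict[str, Detector] = {}

def detect_suspected_objects(camera_id: str, second_image, background_image) -> List[Box]:
    """Route a camera's second image and background image to its own model."""
    return detectors[camera_id](second_image, background_image)
```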
- non-legacy objects may be filtered out from the first image.
- the non-legacy objects may include items carried into the cabin by personnel and expected to remain in the cabin, for example, pillows or car accessories.
- a legacy object confirmation instruction may be received; according to the legacy object confirmation instruction, non-legacy objects are filtered out from the first image.
- a third image of the cabin may be taken before the person leaves the vehicle and sent to a display device (for example, the central control screen of the vehicle or the display interface of a user terminal) for display, and the person can then send the leftover object confirmation instruction through the user terminal or the central control of the vehicle.
- the historical processing results of leftover objects can also be obtained: if a certain object was determined as a leftover object in historical detections but has not been processed (for example, taken out of the cabin) for a long time or over many detections, the object is determined as a non-leftover object. In this way, the probability of misjudging non-leftover objects, such as the above-mentioned items carried into the cabin and expected to remain there, as leftover objects is reduced, reducing erroneous detections.
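- an illustrative sketch of that history-based filter; the object-id bookkeeping and the miss limit are assumptions.

```python
def filter_non_leftover(candidate_ids, history, miss_limit=10):
    """Drop candidates that were repeatedly flagged before but never taken out.

    `history` maps an object id to how many past detections flagged it while the
    object stayed in the cabin; such objects (e.g., a permanent pillow) are
    treated as non-leftover objects.
    """
    return [obj_id for obj_id in candidate_ids
            if history.get(obj_id, 0) < miss_limit]
```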
- the position of the legacy object in the cabin and/or the category of the legacy object may be determined.
- the position may be a rough position, such as the front passenger seat or the rear seat, or more precise position information, for example, the coordinates of the leftover object in the cabin.
- the categories of leftover objects can be simply divided into a living body category and an item category, and each category can be further divided into more detailed subcategories.
- for example, the living body category can be divided into a pet category and a child category, and the item category can be divided into a key category, a wallet category, a mobile phone category, etc.
- by determining the location and/or category of the leftover object, follow-up operations can conveniently be performed based on them, for example, sending notification messages, or controlling the environmental parameters in the cabin to reduce the probability of safety problems for the leftover object.
- the first image may be input to a pre-trained second machine learning model, and the position and/or category of the legacy object can be obtained according to the output result of the second machine learning model.
- the second machine learning model and the aforementioned first machine learning model may be the same model or different models.
- the second machine learning model may include a first sub-model and a second sub-model, wherein the first sub-model is used to detect living objects, and the second sub-model is used to detect object objects.
- the first sub-model may be pre-trained through sample images of living objects
- the second sub-model may be pre-trained through sample images of article objects.
- the sample images can include images taken under different light intensities and different scenes to improve the accuracy of the trained object recognition model.
- in the case that a leftover object is detected, a first notification message may be sent to the vehicle and/or a preset communication terminal. Sending the first notification message helps the person discover the leftover object and take it out in time.
- the first notification message may include prompt information used to indicate the existence of a leftover object. Further, the first notification message may also include the time when the object was left behind, and the category and/or location of the leftover object.
- the vehicle may output prompt information, including voice prompt information output through the car audio system or horn and/or light prompt information output through the vehicle lights. Further, by outputting different sound prompts, or outputting light prompts through light-emitting devices in different positions, different positions of the leftover object can be indicated.
- the communication terminal can establish a communication connection with the transportation means through any connection method such as mobile data connection, Bluetooth connection, WiFi connection, etc.
- the communication terminal can be a smart terminal such as a mobile phone, a tablet computer, a smart watch, a notebook computer, and the like.
- the communication terminal may output prompt information.
- the prompt information includes at least one of text prompt information and image prompt information.
- the text prompt information may be text content in forms such as "There are items left on the back seat of the car" or "There is a child in the car". It can also include the time when the leftover object was detected, for example: "time: February 13, 2020 18:35; location: car back seat; leftover object category: wallet".
- the image prompt information may include the first image, or may include only the image of the leftover object cropped from the first image. By sending only the cropped images of the leftover objects, the amount of data transmission can be reduced; a cropping sketch follows below.
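- a minimal sketch of cropping and re-encoding the leftover object region for the notification; the JPEG quality is an illustrative choice.

```python
import cv2

def leftover_thumbnail(first_img, box, jpeg_quality=70):
    """Crop the (x, y, w, h) leftover region from the first image and JPEG-encode
    it, so the notification carries a small payload instead of the full frame."""
    x, y, w, h = box
    crop = first_img[y:y + h, x:x + w]
    ok, payload = cv2.imencode(".jpg", crop, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return payload.tobytes() if ok else None
```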
- in some embodiments, first control information for adjusting the environmental parameters in the cabin may also be sent to the vehicle based on the category of the leftover object, so as to provide a more comfortable cabin environment for the leftover object. For example, when the category of the leftover object is a living body category, first control information such as window opening control information and/or air conditioning operation control information may be sent to the vehicle, to reduce the probability that a poor environment in the cabin (for example, an excessively high temperature or a low oxygen content) endangers a leftover object of the living body category.
- window opening control information may be sent to the vehicle to open the window of the vehicle.
- the window opening control information includes, but is not limited to, number information, position information, and/or opening degree information of windows to be opened.
- the window opening control information may be generated based on at least one of the location of the legacy object, the in-cabin environment parameter, and the outer-cabin environment parameter. For example, in the case where the remaining object is located in the back seat of the vehicle, a window in the rear of the vehicle can be controlled to open. For another example, when the oxygen content in the cabin is lower than a certain threshold, the two windows in the rear of the vehicle can be controlled to open.
- the opening degree can be set in advance.
- for example, the moving distance of the window opening can be fixed at 5 cm, so that the oxygen content in the cabin can be maintained within the required range while preventing people outside the cabin from harming the leftover object, and preventing the leftover object from getting out through the window, thereby ensuring its safety.
- the opening degree can also be set dynamically according to the environmental parameters outside the cabin. For example, when the ambient temperature outside the cabin is outside a preset range, the opening degree can be set smaller (for example, 5 cm), and when it is within the preset range, the opening degree can be set larger (for example, 8 cm). In this way, the impact of the outside environment on the leftover object is reduced; see the sketch after this passage.
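- the fixed/dynamic opening degrees above could be selected as in the sketch below; the comfort range mirrors the illustrative 5 cm / 8 cm values and is not prescriptive.

```python
def window_opening_cm(outside_temp_c, comfort_range=(10.0, 28.0),
                      small_gap=5.0, large_gap=8.0):
    """Choose a window gap from the outside temperature: a small gap when the
    outside environment is harsh, a larger one when it is mild."""
    lo, hi = comfort_range
    return large_gap if lo <= outside_temp_c <= hi else small_gap
```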
- in some embodiments, air conditioning operation control information can also be sent to the vehicle to turn on the air conditioner of the vehicle, and further to control the temperature and/or operation mode of the air conditioner (for example, cooling or heating).
- the first control information can be sent directly when a leftover object is detected.
- for example, air-conditioning operation control information can be sent to the vehicle to make the air conditioner operate in a temperature/humidity control mode suitable for the living body.
- for another example, window opening control information can be sent to the vehicle to control the degree of window opening, such as controlling the window to open only a gap instead of fully, so as to improve the air environment in the cabin while preventing living bodies in the cabin from leaving through the window or being threatened from outside the cabin.
- alternatively, the environmental parameters in the cabin may be detected first, and the first control information sent only when the environmental parameters exceed a preset range; a sketch of this rule follows below.
- for example, when the temperature in the cabin exceeds the preset range, the air-conditioning operation control information may be sent to the vehicle, and when the environmental parameters return to the preset range, the air conditioner can be turned off again.
- for another example, window opening control information may be sent to the vehicle to control the window to open a gap.
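- as an illustration of triggering control information only when parameters leave a preset range, a sketch follows; the thresholds and message names are assumptions.

```python
def environment_actions(temp_c, oxygen_pct, temp_range=(18.0, 30.0), min_oxygen=19.5):
    """Map cabin readings to control messages for the vehicle."""
    actions = []
    lo, hi = temp_range
    if not (lo <= temp_c <= hi):
        actions.append("air_conditioning_on")   # first control information
    if oxygen_pct < min_oxygen:
        actions.append("open_window_gap")
    if not actions:
        actions.append("air_conditioning_off")  # parameters back within range
    return actions
```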
- in some embodiments, in the case that the leftover object is detected, it can be determined whether the leftover object has been taken out of the cabin.
- in the case that it is determined that the leftover object has been taken out of the cabin, at least one of the following operations can be performed: recording the time when the leftover object was taken out of the cabin and/or the identity information of the person who took it out; sending second control information for adjusting the environmental parameters in the cabin to the vehicle; and sending a second notification message to a preset communication terminal. For example, it is possible to record "At 19:00:35 on March 22, 2020, the user with ID XXX took out the pet".
- the second control information for adjusting the environmental parameters in the cabin can be sent to the vehicle at this time
- window closing control information and/or air conditioning closing control information may be sent to the vehicle to close the windows and/or air conditioning of the vehicle.
- the second notification message may include at least one of the name and category of the leftover object, the time it was taken out, and the identity information of the person who took it out. In this way, a notification can be sent in time when the leftover object is taken out, and the probability of a mistaken take-out can be reduced.
- in some embodiments, a third image of the cabin within a preset time period after the passenger leaves the vehicle may be acquired, and whether the leftover object has been taken out of the cabin is determined according to the third image and the reference image. Specifically, the difference between the third image and the reference image can be used: if there is at least one target object included in the third image but not included in the reference image, it is determined that a leftover object has not been taken out; otherwise, it is determined that all the leftover objects have been taken out. Detecting whether leftover objects have been taken out by acquiring images is easy to implement and has a low detection cost.
- in other embodiments, a third image of the cabin within a preset time period after the passenger leaves the vehicle can be acquired, and whether the leftover object has been taken out of the cabin is determined according to the third image and the first image. Specifically, according to the difference between the third image and the first image, if at least one target object included in the first image is still included in the third image, it is determined that a leftover object has not been taken out; otherwise, it is determined that the leftover objects have been taken out. A matching sketch follows below.
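- a sketch of the take-out check by comparing the leftover regions of the first image against the third image; matching regions by IoU is an assumption, not a rule from the disclosure.

```python
def leftover_taken_out(first_boxes, third_boxes, iou_thresh=0.5):
    """True when none of the leftover regions from the first image still appear
    in the third image, i.e., everything seems to have been taken out."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x1, y1 = max(ax, bx), max(ay, by)
        x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0
    return not any(iou(f, t) >= iou_thresh for f in first_boxes for t in third_boxes)
```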
- the information of the legacy object may be stored in the database.
- the information of the legacy object may include at least one of the image, location, category, time of the legacy object, the person to which the legacy object belongs, and the person who took out the legacy object.
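- one possible shape for that record, sketched with SQLite; the schema, path, and sample row are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect("leftover_objects.db")
conn.execute("""CREATE TABLE IF NOT EXISTS leftover_objects (
    id INTEGER PRIMARY KEY,
    image_path TEXT, location TEXT, category TEXT,
    left_at TEXT, owner TEXT, taken_out_by TEXT)""")
conn.execute(
    "INSERT INTO leftover_objects "
    "(image_path, location, category, left_at, owner, taken_out_by) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("crops/wallet.jpg", "rear seat", "wallet", "2020-02-13 18:35", "passenger_A", None),
)
conn.commit()
```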
- the image acquisition device (for example, a camera) on the vehicle can be activated to collect images when image collection is required, and the image collection device can also be turned off when image collection is not required. In this way, the image acquisition device does not need to be in working condition all the time, and energy consumption is reduced. For example, when the vehicle is started, the image capture device on the vehicle is activated. Or when it is determined that a person enters the vehicle, the image acquisition device on the vehicle is activated. Or when it is determined that the person is about to leave the vehicle, the image capture device on the vehicle is activated.
- for example, when it is determined that the leftover object has been taken out of the cabin, the image acquisition device is turned off; if only leftover object detection is required, the image acquisition device can be turned off once the detection performed when the person leaves the vehicle is completed.
- the legacy object detection method of the embodiment of the present disclosure can detect not only living bodies but also static objects, and the detection accuracy is high.
- the embodiments of the present disclosure can be used in different application scenarios, such as private cars, online car-hailing, or school buses, and have a wide range of applications. Whether a person gets on or off the vehicle can be determined in different ways according to the actual scene. For example, in a private car scenario, whether the driver gets in or out of the car can be determined according to the signal strength of the communication connection between the driver's communication terminal and the vehicle.
- in the online car-hailing scenario, it is possible to determine whether the passenger gets on or off the car according to the operations of the driver in the online car-hailing application (for example, the operation of confirming pickup of the passenger or the operation of confirming arrival at the destination).
- it should be noted that the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- the present disclosure also provides a device for detecting leftover objects.
- the device includes: a first acquisition module 501, a first collection module 502, and a detection module 503.
- the first acquiring module 501 is configured to acquire a reference image in the cabin of the vehicle when there are no objects left in the cabin.
- the first collection module 502 is configured to collect a first image in the cabin when a person leaves the vehicle.
- the detection module 503 is configured to detect objects left in the cabin when the person leaves the vehicle according to the first image and the reference image.
- in some embodiments, the first acquisition module 501 is configured to acquire a target image of the cabin before the person enters the vehicle, the target image being the reference image; and the detection module is configured to detect, according to the difference between the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
- the device further includes: a second collection module, configured to collect a second image of the cabin when the person enters the vehicle; and a first determining module, configured to determine the leftover object to be detected according to the difference between the second image and the reference image.
- the first determining module includes: a first obtaining unit, configured to obtain target object information of the second image and the reference image respectively; and a first determining unit, configured to determine at least one target object that is included in the second image but not included in the reference image as the leftover object to be detected.
- the detection module 503 includes: a first collection unit, configured to collect a second image of the cabin when the person enters the vehicle; a second determination unit, configured to determine, based on the reference image and the second image, the position of the leftover object to be detected in the first image; and a detection unit, configured to detect, based on the position of the leftover object to be detected in the first image, the leftover object from the leftover objects to be detected in the first image.
- the second determining unit includes: a tracking subunit, configured to perform target tracking on the leftover object to be detected based on the reference image and multiple frames of the second image; and a determining subunit, configured to determine the position of the leftover object to be detected in the first image based on the result of the target tracking.
- the device further includes: a receiving module, configured to receive a leftover object confirmation instruction before the objects left in the cabin when the person leaves the vehicle are detected according to the first image and the reference image; and a filtering module, configured to filter out non-leftover objects from the first image according to the leftover object confirmation instruction.
- the device further includes: a second determining module, configured to determine, in the case that the leftover object is detected, the position of the leftover object in the cabin and/or the category of the leftover object.
- the device further includes: a first sending module, configured to send a first notification message to the vehicle and/or a preset communication terminal when the leftover object is detected.
- the device further includes: a second sending module, configured to send, in the case that the leftover object is detected, first control information for adjusting the environmental parameters in the cabin to the vehicle based on the category of the leftover object.
- the second sending module is configured to send first control information for adjusting environmental parameters in the cabin to the vehicle when the category of the legacy object is a living body category .
- the second sending module is used to send window opening control information and/or air conditioning operation control information to the vehicle.
- the device further includes: a third determination module, configured to determine, when the leftover object is detected, whether the leftover object has been taken out of the cabin; and an execution module, configured to perform, when it is determined that the leftover object has been taken out of the cabin, at least one of the following operations: recording the time when the leftover object was taken out of the cabin and/or the identity information of the person who took it out; sending second control information for adjusting the environmental parameters in the cabin to the vehicle; and sending a second notification message to a preset communication terminal.
- the third determining module includes: a second acquiring unit, configured to acquire a third image of the cabin within a preset time period after the passenger leaves the vehicle; and a third determining unit, configured to determine, according to the third image and the reference image, whether the leftover object has been taken out of the cabin.
- the device further includes an activation module, configured to activate the image acquisition device on the vehicle when the vehicle is started.
- the device further includes: a closing module, configured to turn off the image capture device when it is determined that the leftover object is taken out of the cabin.
- the embodiment of the present disclosure also includes a computer device including a memory and a processor.
- a computer program is stored on the memory, and the computer program can be executed by the processor to implement the method described in any embodiment.
- FIG. 6 shows a more specific hardware structure diagram of a computer device provided by an embodiment of this specification.
- the device may include a processor 601, a memory 602, an input/output interface 603, a communication interface 604, and a bus 605.
- the processor 601, the memory 602, the input/output interface 603, and the communication interface 604 realize the communication connection between each other in the device through the bus 605.
- the processor 601 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, for executing related programs to implement the technical solutions provided in the embodiments of this specification.
- the memory 602 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, etc.
- the memory 602 may store an operating system and other application programs.
- related program codes are stored in the memory 602 and called and executed by the processor 601.
- the input/output interface 603 is used to connect an input/output module to realize information input and output.
- the input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide the corresponding functions.
- the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and an output device may include a display, a speaker, a vibrator, an indicator light, and the like.
- the communication interface 604 is used to connect a communication module (not shown in the figure) to realize the communication interaction between the device and other devices.
- the communication module can realize communication through wired means (such as USB, network cable, etc.), or through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
- the bus 605 includes a path for transmitting information between various components of the device (for example, the processor 601, the memory 602, the input/output interface 603, and the communication interface 604).
- in a specific implementation process, the device may also include other components necessary for normal operation.
- in addition, the above device may include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figures.
- an embodiment of the present disclosure also provides a vehicle.
- the cabin of the vehicle is provided with an image capture device, and a leftover object detection device or a computer device according to any embodiment of the present disclosure is communicatively connected with the image capture device.
- the image acquisition device is used to acquire the first image.
- the image acquisition device may start to capture the images to be processed in the cabin from the time the person enters the cabin until the person leaves the cabin, or may start to capture the images to be processed a period of time after the person enters the cabin.
- the image capture device may be arranged on the top of the cabin.
- the number of image acquisition devices in the cabin can be one or more. When there is one image acquisition device, the images to be processed for the entire cabin are acquired by that device; when there is more than one, the images to be processed for each sub-area of the cabin are acquired by the corresponding device.
- the number, location and distribution of the image acquisition device in the cabin can be determined according to the shape and size of the cabin and the field of view of the image acquisition device.
- an image capture device can be installed in the center of the roof (referring to the top of the inner side of the cabin), as shown in Figure 8(A); alternatively, an image capture device can be installed above each row of seats, as shown in Figure 8(B).
- in this way, the captured area is more comprehensive.
- the first image captured by the image capture device can be detected frame by frame. In other embodiments, since the frame rate of the image capture device is often relatively high, for example, dozens of frames per second, frame skipping detection can also be performed on the first image, for example, detecting only the 1st, 3rd, and 5th frames of the captured first images.
- the frame skipping step (that is, the frame number interval between adjacent detected image frames) can be determined according to the actual scene: when the light is poor, there are many objects to be detected, or the captured first image has low definition, the frame skipping step can be set smaller; when the light is better, there are fewer objects to be detected, and the captured first image has higher definition, the frame skipping step can be set larger. A selection sketch follows below.
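- the adaptive step could be chosen as in the sketch below; the light, candidate-count, and sharpness thresholds are illustrative assumptions.

```python
def frame_skip_step(lux, num_candidates, sharpness, base=5):
    """Detect more densely under hard conditions (dim light, many candidate
    objects, blurry frames) and more sparsely under easy ones."""
    if lux < 50 or num_candidates > 5 or sharpness < 0.3:
        return 2
    if lux > 200 and num_candidates <= 1 and sharpness > 0.7:
        return 10
    return base

# e.g., indices of the frames actually fed to the detector
frames_to_detect = range(0, 100, frame_skip_step(lux=30, num_candidates=6, sharpness=0.2))
```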
- the field of view of the image acquisition device may be relatively large, covering both areas where leftover objects are likely to appear and areas where leftover objects generally do not appear. Therefore, when detecting potential leftover objects in the first image, a region of interest in the first image can first be determined, and the leftover objects are then detected within the region of interest. For example, leftover objects are likely to appear on the seats but generally do not appear on the center console, so the seats are the region of interest; see the crop sketch below.
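- an illustrative region-of-interest crop; the seat coordinates are assumptions that would in practice come from the camera calibration of a specific cabin.

```python
SEAT_ROIS = {
    "front_row": (80, 220, 480, 200),  # (x, y, w, h) in pixels, illustrative
    "rear_row":  (60, 430, 520, 210),
}

def detect_in_rois(first_img, detect_fn):
    """Run the leftover detector only inside the regions of interest."""
    results = {}
    for name, (x, y, w, h) in SEAT_ROIS.items():
        roi = first_img[y:y + h, x:x + w]
        results[name] = detect_fn(roi)
    return results
```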
- the embodiments of this specification also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method described in any of the foregoing embodiments is implemented.
- Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
- the information can be computer-readable instructions, data structures, program modules, or other data.
- Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices. According to the definition in this article, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
- a typical implementation device is a computer.
- the specific form of the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email receiving and sending device, and a game control A console, a tablet computer, a wearable device, or a combination of any of these devices.
- the various embodiments in this specification are described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other, and each embodiment focuses on the difference from other embodiments.
- the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
- the device embodiments described above are only illustrative, and the modules described as separate components may or may not be physically separated.
- when implementing the solutions of the embodiments of this specification, the functions of the modules can be implemented in one or more pieces of software and/or hardware. It is also possible to select some or all of the modules according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Mechanical Engineering (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Traffic Control Systems (AREA)
- Emergency Alarm Devices (AREA)
- Image Analysis (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Description
Claims (20)
- A leftover object detection method, characterized in that the method comprises: acquiring a reference image of the cabin of a vehicle when there is no leftover object in the cabin; collecting a first image of the cabin when a person leaves the vehicle; and detecting, according to the first image and the reference image, a leftover object in the cabin when the person leaves the vehicle.
- The method according to claim 1, characterized in that acquiring the reference image of the cabin of the vehicle when there is no leftover object in the cabin comprises: collecting a target image of the cabin before the person enters the vehicle, and determining the target image as the reference image; and detecting, according to the first image and the reference image, the leftover object in the cabin when the person leaves the vehicle comprises: detecting, according to the difference between the first image and the reference image, the leftover object in the cabin when the person leaves the vehicle.
- The method according to claim 1 or 2, characterized in that the method further comprises: before detecting, according to the first image and the reference image, the leftover object in the cabin when the person leaves the vehicle, collecting a second image of the cabin when the person enters the vehicle; and determining a leftover object to be detected according to the difference between the second image and the reference image.
- The method according to claim 3, characterized in that determining the leftover object to be detected according to the difference between the second image and the reference image comprises: acquiring target object information in the second image and in the reference image respectively; and determining at least one target object that is included in the second image but not included in the reference image as the leftover object to be detected.
- The method according to claim 1, characterized in that detecting, according to the first image and the reference image, the leftover object in the cabin when the person leaves the vehicle comprises: collecting a second image of the cabin when the person enters the vehicle; determining, based on the reference image and the second image, the position of the leftover object to be detected in the first image; and detecting, based on the position of the leftover object to be detected in the first image, the leftover object from the leftover object to be detected in the first image.
- The method according to claim 5, characterized in that determining, based on the reference image and the second image, the position of the leftover object to be detected in the first image comprises: performing target tracking on the leftover object to be detected based on the reference image and multiple frames of the second image; and determining the position of the leftover object to be detected in the first image according to the result of the target tracking.
- The method according to any one of claims 1 to 6, characterized in that the method further comprises: before detecting, according to the first image and the reference image, the leftover object in the cabin when the person leaves the vehicle, receiving a leftover object confirmation instruction; and filtering out non-leftover objects from the first image according to the leftover object confirmation instruction.
- The method according to any one of claims 1 to 7, characterized in that the method further comprises: in a case where the leftover object is detected, determining the position of the leftover object in the cabin and/or the category of the leftover object.
- The method according to any one of claims 1 to 8, characterized in that the method further comprises: in a case where the leftover object is detected, sending a first notification message to the vehicle and/or a preset communication terminal.
- The method according to any one of claims 1 to 9, characterized in that the method further comprises: in a case where the leftover object is detected, sending, based on the category of the leftover object, first control information for adjusting environmental parameters in the cabin to the vehicle.
- The method according to claim 10, characterized in that sending, based on the category of the leftover object, the first control information for adjusting the environmental parameters in the cabin to the vehicle comprises: in a case where the category of the leftover object is detected to be a living body, sending the first control information for adjusting the environmental parameters in the cabin to the vehicle.
- The method according to claim 10 or 11, characterized in that sending the first control information for adjusting the environmental parameters in the cabin to the vehicle comprises: sending window opening control information and/or air conditioning operation control information to the vehicle.
- The method according to any one of claims 1 to 12, characterized in that the method further comprises: in a case where the leftover object is detected, determining whether the leftover object is taken out of the cabin; and in a case where it is determined that the leftover object is taken out of the cabin, performing at least one of the following operations: recording the time when the leftover object was taken out of the cabin; recording the identity information of the person who took out the leftover object; sending second control information for adjusting the environmental parameters in the cabin to the vehicle; and sending a second notification message to a preset communication terminal.
- The method according to claim 13, characterized in that determining whether the leftover object is taken out of the cabin comprises: acquiring a third image of the cabin within a preset time period after the passenger leaves the vehicle; and determining, according to the third image and the reference image, whether the leftover object is taken out of the cabin; and/or, the method further comprises: activating the image acquisition device on the vehicle when the vehicle is started; and/or, the method further comprises: turning off the image acquisition device in a case where it is determined that the leftover object is taken out of the cabin.
- A leftover object detection device, characterized in that the device comprises: a first acquisition module, configured to acquire a reference image of the cabin of a vehicle when there is no leftover object in the cabin; a first collection module, configured to collect a first image of the cabin when a person leaves the vehicle; and a detection module, configured to detect, according to the first image and the reference image, a leftover object in the cabin when the person leaves the vehicle.
- A computer device, comprising a memory and a processor, a computer program being stored on the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 14 when executing the program.
- A vehicle, characterized in that an image acquisition device is provided in the cabin of the vehicle, together with the leftover object detection device according to claim 15 or the computer device according to claim 16 communicatively connected with the image acquisition device.
- A computer-readable storage medium on which a computer program is stored, characterized in that when the program is executed by a processor, the method according to any one of claims 1 to 14 is implemented.
- A computer program product, characterized in that when the computer program product is read and executed by a computer, the method according to any one of claims 1 to 14 is implemented.
- A computer program, comprising computer-readable code, characterized in that when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method for implementing any one of claims 1 to 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217022181A KR20210121015A (ko) | 2020-03-25 | 2020-05-28 | Detection of left-behind objects
JP2021540530A JP7403546B2 (ja) | 2020-03-25 | 2020-05-28 | Leftover object detection
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010217625.9 | 2020-03-25 | ||
CN202010217625.9A CN111415347B (zh) | 2020-03-25 | 2020-03-25 | Left-behind object detection method and apparatus, and vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021189641A1 true WO2021189641A1 (zh) | 2021-09-30 |
Family
ID=71493201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/093003 WO2021189641A1 (zh) | Left-behind object detection | 2020-03-25 | 2020-05-28 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP7403546B2 (zh) |
KR (1) | KR20210121015A (zh) |
CN (1) | CN111415347B (zh) |
WO (1) | WO2021189641A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931734A (zh) * | 2020-09-25 | 2020-11-13 | 深圳佑驾创新科技有限公司 | Method and apparatus for recognizing left-behind objects, vehicle-mounted terminal, and storage medium |
CN113792622A (zh) * | 2021-08-27 | 2021-12-14 | 深圳市商汤科技有限公司 | Frame rate adjustment method and apparatus, electronic device, and storage medium |
CN113763683A (zh) * | 2021-09-09 | 2021-12-07 | 南京奥拓电子科技有限公司 | Method and apparatus for reminding of a left-behind article, and storage medium |
WO2023039781A1 (zh) * | 2021-09-16 | 2023-03-23 | 华北电力大学扬中智能电气研究中心 | Left-behind object detection method and apparatus, electronic device, and storage medium |
CN116416192A (zh) * | 2021-12-30 | 2023-07-11 | 华为技术有限公司 | Detection method and apparatus |
CN117917586A (zh) * | 2022-10-21 | 2024-04-23 | 法雷奥汽车内部控制(深圳)有限公司 | In-cabin detection method, in-cabin detection apparatus, computer program product, and motor vehicle |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001285842A (ja) | 2000-03-29 | 2001-10-12 | Monitoring system |
JP2002063668A (ja) | 2000-06-07 | 2002-02-28 | In-vehicle person detection and notification device and dangerous-state avoidance device |
JP4419672B2 (ja) | 2003-09-16 | 2010-02-24 | 株式会社デンソー | In-vehicle forgotten-item prevention device |
JP4441887B2 (ja) | 2006-03-31 | 2010-03-31 | 株式会社デンソー | User hospitality system for automobiles |
CN101777183A (zh) * | 2009-01-13 | 2010-07-14 | 北京中星微电子有限公司 | Method and apparatus for detecting stationary objects, and method and apparatus for detecting left-behind objects |
JP6343769B2 (ja) | 2013-08-23 | 2018-06-20 | 中嶋 公栄 | Forgotten-item prevention system, information provision method for passenger vehicle crew, and computer program |
CN103605983B (zh) * | 2013-10-30 | 2017-01-25 | 天津大学 | Left-behind object detection and tracking method |
CN103714325B (zh) * | 2013-12-30 | 2017-01-25 | 中国科学院自动化研究所 | Real-time detection method for left-behind and lost objects based on an embedded system |
CN106921846A (zh) * | 2015-12-24 | 2017-07-04 | 北京计算机技术及应用研究所 | Left-behind object detection device for a video mobile terminal |
JP6909960B2 (ja) | 2017-03-31 | 2021-07-28 | パナソニックIpマネジメント株式会社 | Detection device, detection method, and detection program |
CN113038023A (zh) * | 2017-05-24 | 2021-06-25 | 深圳市大疆创新科技有限公司 | Photographing control method and apparatus |
TWI637323B (zh) * | 2017-11-20 | 2018-10-01 | 緯創資通股份有限公司 | Image-based object tracking method, system, and computer-readable storage medium |
JP2019168815A (ja) | 2018-03-22 | 2019-10-03 | 東芝メモリ株式会社 | Information processing device, information processing method, and information processing program |
2020
- 2020-03-25 CN CN202010217625.9A patent/CN111415347B/zh active Active
- 2020-05-28 KR KR1020217022181A patent/KR20210121015A/ko not_active Application Discontinuation
- 2020-05-28 WO PCT/CN2020/093003 patent/WO2021189641A1/zh active Application Filing
- 2020-05-28 JP JP2021540530A patent/JP7403546B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106560836A (zh) * | 2015-10-02 | 2017-04-12 | Lg电子株式会社 | Device, method, and mobile terminal for providing an article-loss-prevention service in a vehicle |
CN105427529A (zh) * | 2015-12-04 | 2016-03-23 | 北京奇虎科技有限公司 | Method and terminal for in-vehicle environment monitoring |
CN108973853A (zh) * | 2018-06-15 | 2018-12-11 | 威马智慧出行科技(上海)有限公司 | Vehicle warning device and vehicle warning method |
CN110857073A (zh) * | 2018-08-24 | 2020-03-03 | 通用汽车有限责任公司 | System and method for providing forgotten-item notifications |
CN109733315A (zh) * | 2019-01-15 | 2019-05-10 | 吉利汽车研究院(宁波)有限公司 | Management method and system for shared vehicles |
CN110758320A (zh) * | 2019-10-23 | 2020-02-07 | 上海能塔智能科技有限公司 | Anti-left-behind processing method and apparatus for self-service test drives, electronic device, and storage medium |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115848306A (zh) * | 2022-12-23 | 2023-03-28 | 阿维塔科技(重庆)有限公司 | Method and apparatus for detecting a person left behind in a vehicle, and vehicle |
CN115848306B (zh) | 2022-12-23 | 2024-05-17 | 阿维塔科技(重庆)有限公司 | Method and apparatus for detecting a person left behind in a vehicle, and vehicle |
CN117036482A (zh) * | 2023-08-22 | 2023-11-10 | 北京智芯微电子科技有限公司 | Target object positioning method and apparatus, photographing device, chip, device, and medium |
CN117036482B (zh) | 2023-08-22 | 2024-06-14 | 北京智芯微电子科技有限公司 | Target object positioning method and apparatus, photographing device, chip, device, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN111415347B (zh) | 2024-04-16 |
JP2022530299A (ja) | 2022-06-29 |
KR20210121015A (ko) | 2021-10-07 |
JP7403546B2 (ja) | 2023-12-22 |
CN111415347A (zh) | 2020-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021189641A1 (zh) | Left-behind object detection | |
US12084045B2 (en) | Systems and methods for operating a vehicle based on sensor data | |
CN111937050B (zh) | Passenger-related item loss reduction | |
CN108725357B (zh) | Parameter control method and system based on face recognition, and cloud server | |
US20170043783A1 (en) | Vehicle control system for improving occupant safety | |
US10249088B2 (en) | System and method for remote virtual reality control of movable vehicle partitions | |
US20160249191A1 (en) | Responding to in-vehicle environmental conditions | |
CN107357194A (zh) | Thermal monitoring in autonomous-driving vehicles | |
WO2019095887A1 (zh) | Implementation method and system for a universal in-vehicle passenger anti-forgetting sensing device | |
US20170154513A1 (en) | Systems And Methods For Automatic Detection Of An Occupant Condition In A Vehicle Based On Data Aggregation | |
US11148670B2 (en) | System and method for identifying a type of vehicle occupant based on locations of a portable device | |
US11572039B2 (en) | Confirmed automated access to portions of vehicles | |
US11783636B2 (en) | System and method for detecting abnormal passenger behavior in autonomous vehicles | |
US20230153424A1 (en) | Systems and methods for an automous security system | |
CN114809833B (zh) | Control method for opening a vehicle door, vehicle door control apparatus, and vehicle door control system | |
US20230188836A1 (en) | Computer vision system used in vehicles | |
CN116834691A (zh) | Reminder method and system for objects left in a vehicle, computer storage medium, and vehicle | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021540530 Country of ref document: JP Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20927363 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20927363 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/03/2023) |