WO2021189641A1 - Leftover Object Detection - Google Patents

Leftover Object Detection (遗留对象检测)

Info

Publication number
WO2021189641A1
WO2021189641A1 (Application No. PCT/CN2020/093003 / CN2020093003W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
cabin
vehicle
legacy
reference image
Prior art date
Application number
PCT/CN2020/093003
Other languages
English (en)
French (fr)
Inventor
何任东
刘卫龙
吴阳平
伍俊
范亦卿
Original Assignee
上海商汤临港智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤临港智能科技有限公司
Priority to JP2021540530A (published as JP7403546B2)
Priority to KR1020217022181A (published as KR20210121015A)
Publication of WO2021189641A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01: Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/015: Electrical circuits for triggering passive safety arrangements, including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R21/01512: Passenger detection systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning

Definitions

  • The present disclosure relates to the field of computer vision technology, and in particular to a leftover object detection method and device, and a vehicle.
  • the present disclosure provides a method and device for detecting a leftover object, and a vehicle.
  • A method for detecting leftover objects, comprising: acquiring a reference image of the cabin of a vehicle when there is no leftover object in the cabin; collecting a first image of the cabin when a person leaves the vehicle; and detecting, according to the first image and the reference image, objects left in the cabin when the person leaves the vehicle.
  • A device for detecting leftover objects, comprising: a first acquisition module, configured to acquire a reference image of the cabin of a vehicle when there is no leftover object in the cabin;
  • a first collection module, configured to collect a first image of the cabin when a person leaves the vehicle;
  • and a detection module, configured to detect, according to the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
  • a computer device including a memory and a processor.
  • a computer program is stored on the memory, and the computer program can be executed by the processor to implement the method described in any embodiment.
  • A vehicle in which an image acquisition device is provided in the cabin, and in which a leftover object detection device according to any embodiment of the present disclosure, or a computer device according to any embodiment of the present disclosure, is communicatively connected to the image acquisition device.
  • a computer-readable storage medium having a computer program stored thereon, and when the program is executed by a processor, the method described in any of the embodiments is implemented.
  • a computer program product which implements the method described in any embodiment when the computer program product is read and executed by a computer.
  • A computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method described in any embodiment of the present disclosure.
  • The embodiments of the present disclosure acquire a reference image of the cabin of a vehicle when there is no leftover object in the cabin, collect a first image of the cabin when a person leaves the vehicle, and detect, according to the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
  • The above method can detect not only animate living bodies but also inanimate items. The method is simple, has a wide range of applications, and has high detection accuracy.
  • Fig. 1 is a flowchart of a method for detecting a leftover object according to an embodiment of the present disclosure.
  • Fig. 2(A) is a schematic diagram of a first image of an embodiment of the present disclosure.
  • Fig. 2(B) is a schematic diagram of a reference image of an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of the relationship between the machine learning model and the image acquisition device according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of a message notification interface of a communication terminal according to an embodiment of the present disclosure.
  • Fig. 5 is a block diagram of a device for detecting a leftover object according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
  • Fig. 7 is a schematic diagram of a vehicle according to an embodiment of the present disclosure.
  • FIG. 8(A) is a schematic diagram of the distribution of the image acquisition device of the embodiment of the present disclosure.
  • FIG. 8(B) is a schematic diagram of another distribution of the image acquisition device of the embodiment of the present disclosure.
  • Although the terms first, second, third, etc. may be used in this disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • Depending on the context, the word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
  • the embodiment of the present disclosure provides a method for detecting objects left in a vehicle. As shown in FIG. 1, the method may include the following steps 101-103.
  • Step 101 Obtain a reference image in the cabin of the vehicle when there are no objects left in the cabin.
  • Step 102 Collect a first image in the cabin when a person leaves the vehicle.
  • Step 103 Detect objects left in the cabin when the person leaves the vehicle according to the first image and the reference image.
  • The vehicle may be a road vehicle, for example, a private car, a bus, a school bus, a truck or a train, and may also be a means of transport used to carry people or goods, such as a ship or an airplane.
  • Accordingly, the cabin of the vehicle may be a car cabin, a ship cabin, an aircraft cabin, or the like.
  • a reference image in the cabin can be acquired when there are no objects left in the cabin.
  • the reference image in the vehicle cabin may only include inherent objects in the vehicle cabin (for example, seats, steering wheels, vehicle interior trim, etc.), but not legacy objects.
  • The reference image can be collected and stored when there is no leftover object in the cabin, and can be used once or repeatedly, for example reused whenever leftover object detection is required. It can be collected by the image acquisition device on the vehicle before the vehicle leaves the factory and stored in the vehicle's storage unit, or collected, when the user confirms that there is no leftover object in the cabin, by the image acquisition device on the vehicle or by a user terminal (for example, a mobile phone, a tablet computer or a camera) and stored in the vehicle's storage unit. Further, the original image collected in the cabin can be compressed, and the compressed image stored as the background image, thereby reducing storage space and improving image processing efficiency.
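  • As a rough illustration of this storage step, the following Python sketch compresses a captured cabin image before storing it; the file names and the JPEG quality setting of 80 are assumptions for the example, not values from the disclosure.

```python
# Minimal sketch: compress the collected cabin image before storing it as the
# background/reference image. File names and quality level are illustrative.
import cv2

raw = cv2.imread("cabin_raw.png")  # original in-cabin capture
if raw is not None:
    ok, buf = cv2.imencode(".jpg", raw, [int(cv2.IMWRITE_JPEG_QUALITY), 80])
    if ok:
        with open("reference_image.jpg", "wb") as f:  # the vehicle's storage unit
            f.write(buf.tobytes())
```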
  • the reference image can be collected once, and the collected reference image can be used as the reference image for each remaining object detection.
  • the reference image can also be updated in a specific manner.
  • The updating of the reference image in a specific manner may be updating it at regular intervals (for example, every day or every week), or updating it when triggered by a specific event.
  • The specific event may be the detection of a leftover object or the receipt of a reference image update instruction; for example, detecting a change in the cabin background, or detecting that a person is about to enter the cabin of the vehicle (for example, the vehicle being unlocked), may trigger an update of the reference image.
  • The background image may be a background image of the entire cabin, or a background image of one or more areas in the cabin (for example, at least one of the driving area, the front passenger area, the rear seats, a child seat, a priority seating area, the trunk area, the luggage area, etc.).
  • the collected image can be directly used as the background image, or the image can be cropped as needed, and the cropped image can be used as the background image.
  • the number of background images can be one or more.
  • When the cabin contains a single image acquisition device, the background image may be a single image captured by that device.
  • When the cabin contains multiple image acquisition devices, the number of background images may be greater than one, each background image being acquired by one of the image acquisition devices.
  • In practice, after a person gets off, if there is no leftover object in the cabin, an image of the cabin can be taken and stored as the reference image for leftover object detection the next time a person leaves the vehicle.
  • Alternatively, a target image of the cabin can be collected before the person enters the vehicle and used as the reference image for leftover object detection when the person leaves the vehicle.
  • Leftover objects can include items, such as wallets, keys, mobile phones, umbrellas, briefcases and suitcases, and can also include living bodies, such as people and pets; the people may be children, the elderly, people who have fallen asleep, people with limited mobility, and all kinds of other people who might be left in the vehicle.
  • The person may include anyone on the vehicle, such as a driver, a crew member or a passenger. Whether the person leaves the vehicle may be determined based on at least one of the opening state of the cabin door, the operating state of the vehicle, the movement trajectory of the person, and a specific instruction. For example, when it is detected that the door is opened or the person's seat belt is unfastened, it is determined that the person gets off. As another example, when it is detected that the vehicle is turned off and the door is opened, it is determined that the person gets off. As another example, when the person's movement trajectory is detected to run from inside the cabin to outside, it is determined that the person gets off. As another example, when a confirmation instruction for the end of the trip is detected, it is determined that the person gets off. Whether the person leaves the vehicle may also be determined in other ways, which are not repeated here.
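  • As a minimal sketch of how such signals might be fused, the following Python example encodes the conditions listed above; all signal names are illustrative assumptions rather than an actual vehicle API.

```python
# Minimal sketch: fusing cabin signals to decide whether a person has left
# the vehicle. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CabinSignals:
    door_open: bool = False
    seatbelt_unfastened: bool = False
    engine_off: bool = False
    moved_inside_to_outside: bool = False
    trip_end_confirmed: bool = False

def person_left_vehicle(s: CabinSignals) -> bool:
    """True if any of the example conditions from the text holds."""
    return (s.door_open or s.seatbelt_unfastened   # door / seat belt signal
            or (s.engine_off and s.door_open)      # vehicle off and door opened
            or s.moved_inside_to_outside           # trajectory cabin -> outside
            or s.trip_end_confirmed)               # end-of-trip confirmation

print(person_left_vehicle(CabinSignals(engine_off=True, door_open=True)))  # True
```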
  • the first image in the cabin may include one image or multiple images.
  • Each first image can be collected by an image acquisition device in the cabin, for example, one image or multiple images can be acquired as the first image through a video stream collected by the image acquisition device.
  • objects left in the cabin when the person leaves the vehicle may be detected according to the first image and the reference image.
  • the leftover objects may include living bodies and/or objects that are carried by the person into the cabin and are forgotten in the cabin when the person leaves the vehicle.
  • When the numbers of the first images and the reference images are both greater than one, the objects left in the cabin when the person leaves the vehicle may be detected according to each first image and its corresponding reference image.
  • For example, assuming that the i-th first image and the i-th reference image both correspond to the i-th sub-area of the cabin, the leftover objects in the i-th sub-area when the person leaves the vehicle can be detected based on the i-th first image and the i-th reference image.
  • Each detection may be performed on all of the sub-areas, or on only some of them.
  • The operation of detecting leftover objects can be executed continuously, or be executed when triggered in certain circumstances, for example, triggered by the vehicle. As another example, it can be passively triggered by a user terminal that has previously established a communication connection with the vehicle.
  • The user terminal may send a detection trigger instruction, and after the detection trigger instruction is received, the leftover object detection operation may start.
  • The detection trigger instruction may also include a target detection category, so as to determine whether the image to be processed includes a leftover object of a specific category. For example, a user may find the keys missing after getting off the car.
  • In that case, the user can send a detection trigger instruction including the "key" category through a mobile phone, thereby triggering detection of leftover objects of the "key" category.
  • In some embodiments, the objects left in the cabin when the person leaves the vehicle may be detected based on the difference between the first image and the reference image. For example, at least one target object included in the first image but not included in the reference image may be determined as the leftover object.
  • As shown in Fig. 2(A), the first image includes a mobile phone, a child, a seat and a pillow;
  • as shown in Fig. 2(B), the reference image includes a seat and a pillow;
  • what the first image includes but the reference image does not are the mobile phone and the child, so the mobile phone and/or the child are determined as the objects left in the cabin when the person leaves the vehicle. In this way, leftover objects can be detected intuitively, with simple implementation and low cost.
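  • As a minimal sketch of this difference-based detection, the following Python example computes the label difference between the two images; the detector is passed in as a function, and the stub standing in for it is a hypothetical helper, not part of the disclosure.

```python
# Minimal sketch: objects detected in the first image but absent from the
# empty-cabin reference image are reported as leftover objects.
from collections import Counter
from typing import Callable, List

def leftover_objects(first_img, reference_img,
                     detect: Callable[[object], List[str]]) -> List[str]:
    """Labels present in the first image but not in the reference image."""
    diff = Counter(detect(first_img)) - Counter(detect(reference_img))
    return list(diff.elements())

# The Fig. 2 example with a stub detector returning category labels:
labels = {"first": ["mobile phone", "child", "seat", "pillow"],
          "reference": ["seat", "pillow"]}
print(leftover_objects("first", "reference", labels.__getitem__))
# -> ['mobile phone', 'child']
```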
  • the target object in the reference image may be obtained by annotating the reference image, or may be obtained by detecting the reference image.
  • the target object may be detected once from the reference image each time the difference between the first image and the reference image is determined, or the historically detected target object may be directly used as the target object.
  • In some embodiments, a second image of the cabin may be collected when the person enters the vehicle, and the leftover object to be detected is determined according to the difference between the second image and the reference image.
  • The situation of the person entering the vehicle may include the moment the person enters the vehicle, or any time after the person has entered and before the person leaves.
  • For example, the second image may be collected when the vehicle is about to reach the destination. In practice, whether the vehicle is about to reach the destination can be determined from an application (for example, a map application or a ride-hailing application) running on the smart terminal of the vehicle.
  • At least one target object included in the second image but not included in the reference image may be determined as the remaining object to be detected.
  • For example, if the second image captured when person A gets in includes a mobile phone, and a set of keys was already in the car before person A got in, only the mobile phone is taken as the to-be-detected leftover object corresponding to person A.
  • When person A gets off, if the captured first image includes the mobile phone, it is determined that person A left an object in the car; if the first image taken when person A gets off does not include the mobile phone, it is determined that person A left no object in the car.
  • When multiple persons enter the cabin, a second image of the cabin can be collected separately for each person entering the vehicle, and the to-be-detected leftover objects of each person determined according to the difference between that person's second image and the reference image.
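  • As a minimal sketch of this per-person bookkeeping, the following Python example binds to-be-detected objects to a person's identity on entry and checks them on exit; the label lists and helper names are illustrative assumptions.

```python
# Minimal sketch: bind to-be-detected leftover objects to each person's
# identity on entry, then check only those objects when the person leaves.
from collections import Counter

to_be_detected = {}  # person id -> objects that person brought into the cabin

def on_person_enter(person_id, second_labels, reference_labels):
    brought = Counter(second_labels) - Counter(reference_labels)
    to_be_detected[person_id] = set(brought)

def on_person_leave(person_id, first_labels):
    """Leftover objects of this person still present when they get off."""
    return sorted(to_be_detected.get(person_id, set()) & set(first_labels))

# The person-A example: keys were already in the car before A got in.
on_person_enter("A", ["mobile phone", "keys", "seat"], ["keys", "seat"])
print(on_person_leave("A", ["mobile phone", "seat"]))  # -> ['mobile phone']
```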
  • In other embodiments, a second image of the cabin may be collected when the person enters the vehicle; based on the reference image and the second image, the position of the leftover object to be detected in the first image is determined; and based on that position, the leftover object is detected from the to-be-detected leftover objects in the first image.
  • In this way, the approximate position of the to-be-detected leftover object can be determined first, and the leftover object detection then performed based on that position, thereby improving detection efficiency.
  • The second image and the reference image may be input into a pre-trained first machine learning model; the leftover object to be detected (called the suspected leftover object) and its position in the second image are determined from the model's result, and the position of the suspected leftover object in the first image is then determined from its position in the second image.
  • The machine learning model may be a neural network, or a model that combines a neural network with a traditional vision algorithm (for example, the optical flow method, image sharpening, an image differencing algorithm, or the Carter tracking algorithm).
  • the neural network in the embodiment of the present disclosure may include an input layer, at least one intermediate layer, and an output layer, and the input layer, at least one intermediate layer, and output layer each include one or more neurons.
  • the intermediate layer usually refers to a layer located between the input layer and the output layer, such as a hidden layer.
  • The intermediate layers of the neural network may include, but are not limited to, at least one of a convolutional layer, a ReLU (Rectified Linear Units) layer, and the like; the more intermediate layers the neural network contains, the deeper the network.
  • the neural network may specifically be a deep neural network or a convolutional neural network.
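  • As a rough illustration of such a network, the following PyTorch sketch stacks the reference image and the second image and passes them through convolutional and ReLU intermediate layers; the channel sizes and the two-class head are assumptions for the example, not the model actually used in the disclosure.

```python
# Minimal sketch of a convolutional network with ReLU intermediate layers
# taking the reference image and the second image as a pair.
import torch
import torch.nn as nn

class SuspectObjectNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1),   # 6 channels: reference + second image
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1),  # more intermediate layers -> deeper net
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, ref_img: torch.Tensor, second_img: torch.Tensor):
        x = torch.cat([ref_img, second_img], dim=1)  # pair the two inputs
        return self.head(self.features(x).flatten(1))

logits = SuspectObjectNet()(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```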
  • For example, target tracking may be performed on the leftover object to be detected based on the reference image and multiple frames of the second image, and the position of the to-be-detected leftover object in the first image determined according to the result of the target tracking.
  • In this way, the objects that may be forgotten in the cabin (that is, the suspected leftover objects) can be determined first and their positions located relatively accurately, and the leftover objects then detected based on those positions, thereby improving detection efficiency and accuracy.
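  • A minimal sketch of the position read-out step, assuming a hypothetical per-frame box detector; a production system would typically use optical flow or a dedicated tracking algorithm as mentioned above.

```python
# Minimal sketch: propagate a suspected leftover object's box across the
# second-image frames and read out its last known position for the first image.
def track_last_position(frames, detect_box):
    """detect_box(frame) -> (x, y, w, h) or None; returns the last known box."""
    last_box = None
    for frame in frames:
        box = detect_box(frame)
        if box is not None:
            last_box = box  # updated on every successful detection
    return last_box         # approximate position of the object in the first image

print(track_last_position([1, 2, 3], lambda f: (10 * f, 20, 50, 50)))  # (30, 20, 50, 50)
```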
  • As shown in Fig. 3, when the second images are collected by multiple image acquisition devices in the cabin, each image acquisition device can correspond to one machine learning model, and each machine learning model is used to detect the second images collected by its corresponding image acquisition device.
  • For example, when the cabin includes N image acquisition devices, the second image and background image captured by image acquisition device 1 can be input into machine learning model 1 to detect suspected leftover objects in the second image captured by image acquisition device 1; the second image and background image collected by image acquisition device 2 can be input into machine learning model 2 to detect suspected leftover objects in the second image collected by image acquisition device 2; and so on.
  • the machine learning model can also be shared to detect images collected by multiple different image collection devices, which is not limited in the present disclosure.
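  • As a minimal sketch of this pairing, the following Python example maps each device to its own model; the device identifiers, placeholder models and run_model() callable are all assumptions for the example.

```python
# Minimal sketch of the one-model-per-camera arrangement of Fig. 3.
models = {f"camera_{i}": f"model_{i}" for i in range(1, 4)}  # placeholder models

def detect_suspects(device_id, second_img, background_img, run_model):
    model = models[device_id]  # the model paired with this image acquisition device
    return run_model(model, second_img, background_img)

# A shared-model variant would simply map every device id to the same model.
print(detect_suspects("camera_1", "img", "bg", lambda m, s, b: (m, s, b)))
```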
  • In some embodiments, before the objects left in the cabin when the person leaves the vehicle are detected, non-leftover objects may first be filtered out of the first image.
  • The non-leftover objects may include items carried into the cabin by a person and intended to remain in the cabin, for example, a pillow or car ornaments.
  • For example, a leftover object confirmation instruction may be received, and non-leftover objects filtered out of the first image according to the leftover object confirmation instruction.
  • As one implementation, a third image of the cabin of the vehicle may be taken before the person leaves the vehicle and sent to a display device (for example, the central control screen of the vehicle or the display interface of a user terminal) for display, and the person can send the leftover object confirmation instruction through the user terminal or the vehicle's central control.
  • As another example, the historical processing results of leftover objects can be obtained. If a certain object was determined to be a leftover object in past detections but has not been processed (for example, taken out of the cabin) for a long time or over many detections, the object is determined to be a non-leftover object. In this way, the probability of misjudging non-leftover objects, such as the above items carried into the cabin and intended to stay there, as leftover objects is reduced, and erroneous detections decrease.
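  • A minimal sketch of this history-based filter, assuming a threshold of three unprocessed detections (the threshold is an assumption; the disclosure only says "a long time or many times"):

```python
# Minimal sketch: an object repeatedly flagged as leftover but never taken
# out of the cabin is eventually reclassified as non-leftover.
from collections import defaultdict

unprocessed_counts = defaultdict(int)
NON_LEFTOVER_THRESHOLD = 3  # assumed threshold of unprocessed detections

def filter_with_history(detected_objects):
    kept = []
    for obj in detected_objects:
        unprocessed_counts[obj] += 1
        if unprocessed_counts[obj] < NON_LEFTOVER_THRESHOLD:
            kept.append(obj)  # still treated as a candidate leftover object
    return kept

for day in range(3):
    print(filter_with_history(["pillow"]))
# ['pillow'], ['pillow'], [] : after 3 unprocessed detections it is filtered out
```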
  • In a case where the leftover object is detected, the position of the leftover object in the cabin and/or the category of the leftover object may be determined.
  • The position may be a rough position such as the front passenger seat or the rear seats, or more precise position information, for example, the coordinates of the leftover object in the cabin.
  • The categories of leftover objects can be simply divided into a living-body category and an item category, and each category can be further divided into finer subcategories.
  • For example, the living-body category can be divided into a pet category and a child category,
  • and the item category can be divided into a key category, a wallet category, a mobile phone category, and so on.
  • By determining the location and/or category of the leftover object, it is convenient to perform follow-up operations based on them, for example, sending notification messages, or controlling the environmental parameters in the cabin to reduce the probability of safety problems for the leftover object.
  • The first image may be input into a pre-trained second machine learning model, and the position and/or category of the leftover object obtained from the output of the second machine learning model.
  • The second machine learning model and the aforementioned first machine learning model may be the same model or different models.
  • The second machine learning model may include a first sub-model and a second sub-model, where the first sub-model is used to detect living objects and the second sub-model is used to detect item objects.
  • The first sub-model may be pre-trained on sample images of living objects, and the second sub-model may be pre-trained on sample images of item objects.
  • the sample images can include images taken under different light intensities and different scenes to improve the accuracy of the trained object recognition model.
  • When the leftover object is detected, a first notification message may be sent to the vehicle and/or a preset communication terminal. Sending the first notification message helps the person discover the leftover object and take it out in time.
  • The first notification message may include prompt information indicating the existence of a leftover object, and may further include the time when the object was left behind and the category and/or location of the leftover object.
  • the vehicle may output prompt information, including voice prompt information output through a car audio or horn and/or light prompt information output through vehicle lights. Further, by outputting different sound prompt information, or outputting light prompt information through light-emitting devices in different positions, different positions of the left object can be indicated.
  • the communication terminal can establish a communication connection with the transportation means through any connection method such as mobile data connection, Bluetooth connection, WiFi connection, etc.
  • the communication terminal can be a smart terminal such as a mobile phone, a tablet computer, a smart watch, a notebook computer, and the like.
  • the communication terminal may output prompt information.
  • the prompt information includes at least one of text prompt information and image prompt information.
  • The text prompt information may be content such as: "There are items left on the back seat of the car" or "There is a child in the car". It can also include the time when the leftover object was detected, for example: "Time: February 13, 2020 18:35; Location: car back seat; Leftover object category: wallet".
  • The image prompt information may include the first image, or may include only the image of the leftover object cropped from the first image. Sending the cropped images of the leftover objects reduces the amount of data transmitted.
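  • As a rough illustration, the following Python sketch assembles a text prompt with the fields mentioned above; the exact wording and field order are assumptions.

```python
# Minimal sketch: build the text prompt information of the first notification
# message from the detected time, location and category.
from datetime import datetime

def build_notification(category: str, location: str, when: datetime) -> str:
    return (f"Time: {when:%B %d, %Y %H:%M}; "
            f"Location: {location}; Leftover object category: {category}")

print(build_notification("wallet", "car back seat", datetime(2020, 2, 13, 18, 35)))
# Time: February 13, 2020 18:35; Location: car back seat; Leftover object category: wallet
```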
  • When the leftover object is detected, first control information for adjusting the environmental parameters in the cabin may also be sent to the vehicle based on the category of the leftover object, so as to provide a more comfortable cabin environment for the leftover object.
  • For example, when the category of the leftover object is a living-body category, first control information for adjusting the environmental parameters in the cabin, such as window opening control information and/or air conditioning operation control information, may be sent to the vehicle to reduce the probability that a poor in-cabin environment (for example, excessive temperature or low oxygen content) endangers a living leftover object.
  • For example, window opening control information may be sent to the vehicle to open a window of the vehicle.
  • The window opening control information includes, but is not limited to, information on the number, position and/or opening degree of the windows to be opened.
  • The window opening control information may be generated based on at least one of the location of the leftover object, the in-cabin environment parameters and the outside environment parameters. For example, when the leftover object is located on the back seat of the vehicle, a rear window can be controlled to open. As another example, when the oxygen content in the cabin is below a certain threshold, the two rear windows can be controlled to open.
  • The opening degree can be set in advance.
  • For example, the window opening distance can be fixed at 5 cm, which keeps the oxygen content in the cabin within the required range while preventing people outside the cabin from harming the leftover object, and preventing the leftover object from getting out through the window, thereby ensuring the safety of the leftover object.
  • The opening degree can also be set dynamically according to the environmental parameters outside the cabin. For example, when the ambient temperature outside the cabin is outside the preset range, the opening degree can be set smaller (for example, 5 cm);
  • conversely, the opening degree can be set larger (for example, 8 cm). In this way, the impact of the outside environment on the leftover object is reduced.
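  • As a minimal sketch of generating such control information, the following Python example combines the object's position and the environment readings described above; the field names and the 19.5% oxygen threshold are assumptions.

```python
# Minimal sketch: generate window opening control information from the
# leftover object's position and the cabin/outside environment parameters.
def window_control(object_location: str, oxygen_pct: float,
                   outside_temp_in_range: bool) -> dict:
    windows = ["rear_left"] if object_location == "back seat" else ["front_right"]
    if oxygen_pct < 19.5:  # low oxygen content: open both rear windows
        windows = ["rear_left", "rear_right"]
    return {"windows": windows,
            "opening_cm": 8 if outside_temp_in_range else 5}  # degree of opening

print(window_control("back seat", 18.9, outside_temp_in_range=True))
# {'windows': ['rear_left', 'rear_right'], 'opening_cm': 8}
```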
  • When the leftover object is detected, air conditioning operation control information can also be sent to the vehicle to turn on the air conditioner of the vehicle, and further to control the temperature and/or operating mode of the air conditioner (for example, cooling or heating).
  • The first control information can be sent directly when a leftover object is detected.
  • For example, air-conditioning operation control information can be sent to the vehicle so that the air conditioner operates in a temperature/humidity control mode suitable for the living body.
  • For another example, window opening control information can be sent to the vehicle to control the degree of window opening, such as opening the window by a gap instead of fully, so as to improve the air environment in the vehicle while preventing a living body in the cabin from leaving through the window or being threatened from outside the cabin.
  • Alternatively, the environmental parameters in the cabin may first be detected, and the first control information sent only when an environmental parameter exceeds a preset range.
  • For example, when the temperature in the cabin exceeds the preset range, air-conditioning operation control information may be sent to the vehicle.
  • Once the environmental parameter returns to the preset range, the air conditioner can be controlled to turn off again.
  • For another example, when the oxygen content in the cabin is below a preset threshold, window opening control information may be sent to the vehicle to control the window to open by a gap.
  • In a case where the leftover object is detected, it can be determined whether the leftover object has been taken out of the cabin.
  • When it is determined that the leftover object has been taken out of the cabin, at least one of the following operations can be performed: recording the time when the leftover object was taken out of the cabin and/or the identity information of the person who took it out; sending second control information for adjusting the environmental parameters in the cabin to the vehicle; and sending a second notification message to a preset communication terminal. For example, it is possible to record "At 19:00:35 on March 22, 2020, the user with ID XXX took out the pet".
  • the second control information for adjusting the environmental parameters in the cabin can be sent to the vehicle at this time
  • For example, window closing control information and/or air conditioning off control information may be sent to the vehicle to close the windows and/or turn off the air conditioner of the vehicle.
  • The second notification message may include at least one of the name and category of the leftover object, the time it was taken out, and the identity information of the person who took it out. In this way, a notification can be sent promptly when the leftover object is taken out, and the probability of a wrongful take-out is reduced.
  • In some embodiments, a third image of the cabin within a preset time period after the person leaves the vehicle may be acquired, and whether the leftover object has been taken out of the cabin determined according to the third image and the reference image. Specifically, the difference between the third image and the reference image can be used: if at least one target object is included in the third image but not in the reference image, it is determined that a leftover object has not been taken out; otherwise, it is determined that all leftover objects have been taken out. Detecting whether the leftover objects have been taken out by acquiring images is easy to implement and has low detection cost.
  • Alternatively, a third image of the cabin within a preset time period after the person leaves the vehicle can be acquired, and whether the leftover object has been taken out of the cabin determined according to the third image and the first image. Specifically, according to the difference between the third image and the first image, it can be determined whether the leftover object has been taken out: if at least one target object included in the first image is still included in the third image, it is determined that a leftover object remains to be taken out; otherwise, it is determined that the leftover objects have been taken out.
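  • A minimal sketch of this check, with the leftover objects represented by the labels detected when the person left; the helper name is an assumption for the example.

```python
# Minimal sketch: decide which leftover objects were taken out by checking
# whether each object detected on leaving is still present in the third image.
def still_in_cabin(leftover_objects, third_image_labels):
    present = set(third_image_labels)
    return [obj for obj in leftover_objects if obj in present]

print(still_in_cabin(["mobile phone"], ["seat", "pillow"]))        # [] : taken out
print(still_in_cabin(["mobile phone"], ["seat", "mobile phone"]))  # not taken out
```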
  • the information of the legacy object may be stored in the database.
  • the information of the legacy object may include at least one of the image, location, category, time of the legacy object, the person to which the legacy object belongs, and the person who took out the legacy object.
  • the image acquisition device (for example, a camera) on the vehicle can be activated to collect images when image collection is required, and the image collection device can also be turned off when image collection is not required. In this way, the image acquisition device does not need to be in working condition all the time, and energy consumption is reduced. For example, when the vehicle is started, the image capture device on the vehicle is activated. Or when it is determined that a person enters the vehicle, the image acquisition device on the vehicle is activated. Or when it is determined that the person is about to leave the vehicle, the image capture device on the vehicle is activated.
  • For example, when it is determined that the leftover object has been taken out of the cabin, the image acquisition device is turned off; if only leftover object detection is needed, the image acquisition device can be turned off after the person leaves the vehicle.
  • the legacy object detection method of the embodiment of the present disclosure can detect not only living bodies but also static objects, and the detection accuracy is high.
  • the embodiments of the present disclosure can be used in different application scenarios such as private cars, online car-hailing, or school buses, and have a wide range of applications. Among them, personnel getting on and off the vehicle can be determined in different ways according to the actual scene. For example, in a private car scenario, whether the driver gets in or out of the car can be determined according to the signal strength of the communication connection between the driver's communication terminal and the vehicle.
  • In the ride-hailing scenario, whether a passenger gets in or out of the car can be determined according to the driver's operations in the ride-hailing application (for example, the operation confirming pickup of the passenger or the operation confirming arrival at the destination).
  • The writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process.
  • The specific execution order of each step should be determined by its function and its possible inner logic.
  • the present disclosure also provides a device for detecting leftover objects.
  • the device includes: a first acquisition module 501, a first acquisition module 502, and a detection module 503.
  • the first acquiring module 501 is configured to acquire a reference image in the cabin of the vehicle when there are no objects left in the cabin.
  • the first collection module 502 is configured to collect a first image in the cabin when a person leaves the vehicle.
  • the detection module 503 is configured to detect objects left in the cabin when the person leaves the vehicle according to the first image and the reference image.
  • In some embodiments, the first acquisition module 501 is configured to acquire a target image of the cabin before the person enters the vehicle, the target image being the reference image; the detection module is configured to detect, according to the difference between the first image and the reference image, the objects left in the cabin when the person leaves the vehicle.
  • In some embodiments, the device further includes: a second collection module, configured to collect a second image of the cabin when the person enters the vehicle; and a first determining module, configured to determine, according to the difference between the second image and the reference image, the leftover object to be detected.
  • In some embodiments, the first determining module includes: a first obtaining unit, configured to obtain target object information of the second image and of the reference image, respectively; and a first determining unit, configured to determine at least one target object included in the second image but not in the reference image as the leftover object to be detected.
  • In some embodiments, the detection module 503 includes: a first collection unit, configured to collect a second image of the cabin when the person enters the vehicle; a second determining unit, configured to determine, based on the reference image and the second image, the position of the leftover object to be detected in the first image; and a detection unit, configured to detect the leftover object from the to-be-detected leftover objects in the first image based on that position.
  • In some embodiments, the second determining unit includes: a tracking subunit, configured to perform target tracking on the leftover object to be detected based on the reference image and multiple frames of the second image; and a determining subunit, configured to determine, based on the result of the target tracking, the position of the to-be-detected leftover object in the first image.
  • In some embodiments, the device further includes: a receiving module, configured to receive a leftover object confirmation instruction before the objects left in the cabin when the person leaves the vehicle are detected according to the first image and the reference image; and a filtering module, configured to filter non-leftover objects out of the first image according to the leftover object confirmation instruction.
  • In some embodiments, the device further includes: a second determining module, configured to determine, when the leftover object is detected, the position of the leftover object in the cabin and/or the category of the leftover object.
  • In some embodiments, the device further includes: a first sending module, configured to send a first notification message to the vehicle and/or a preset communication terminal when the leftover object is detected.
  • In some embodiments, the device further includes: a second sending module, configured to send, when the leftover object is detected, first control information for adjusting the environmental parameters in the cabin to the vehicle based on the category of the leftover object.
  • In some embodiments, the second sending module is configured to send first control information for adjusting the environmental parameters in the cabin to the vehicle when the category of the leftover object is a living-body category.
  • For example, the second sending module is used to send window opening control information and/or air conditioning operation control information to the vehicle.
  • In some embodiments, the device further includes: a third determining module, configured to determine, when the leftover object is detected, whether the leftover object has been taken out of the cabin; and an execution module, configured to perform, when it is determined that the leftover object has been taken out of the cabin, at least one of the following operations: recording the time when the leftover object was taken out of the cabin and/or the identity information of the person who took it out; sending second control information for adjusting the environmental parameters in the cabin to the vehicle; and sending a second notification message to a preset communication terminal.
  • In some embodiments, the third determining module includes: a second acquiring unit, configured to acquire a third image of the cabin within a preset time period after the person leaves the vehicle; and a third determining unit, configured to determine, according to the third image and the reference image, whether the leftover object has been taken out of the cabin.
  • the device further includes an activation module, configured to activate the image acquisition device on the vehicle when the vehicle is started.
  • the device further includes: a closing module, configured to turn off the image capture device when it is determined that the leftover object is taken out of the cabin.
  • the embodiment of the present disclosure also includes a computer device including a memory and a processor.
  • a computer program is stored on the memory, and the computer program can be executed by the processor to implement the method described in any embodiment.
  • FIG. 6 shows a more specific hardware structure diagram of a computer device provided by an embodiment of this specification.
  • the device may include a processor 601, a memory 602, an input/output interface 603, a communication interface 604, and a bus 605.
  • The processor 601, the memory 602, the input/output interface 603 and the communication interface 604 are communicatively connected to one another within the device through the bus 605.
  • The processor 601 can be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, for executing related programs to implement the technical solutions provided in the embodiments of this specification.
  • the memory 602 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory, random access memory), static storage device, dynamic storage device, etc.
  • the memory 602 may store an operating system and other application programs.
  • related program codes are stored in the memory 602 and called and executed by the processor 601.
  • the input/output interface 603 is used to connect an input/output module to realize information input and output.
  • The input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide the corresponding functions.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, etc., and an output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 604 is used to connect a communication module (not shown in the figure) to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), or through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • the bus 605 includes a path for transmitting information between various components of the device (for example, the processor 601, the memory 602, the input/output interface 603, and the communication interface 604).
  • During specific implementation, the device may also include other components necessary for its normal operation.
  • In addition, the above-mentioned device may include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figures.
  • an embodiment of the present disclosure also provides a vehicle.
  • The cabin of the vehicle is provided with an image capture device, and a leftover object detection device according to any embodiment of the present disclosure, or a computer device according to any embodiment of the present disclosure, is communicatively connected with the image capture device.
  • the image acquisition device is used to acquire the first image.
  • The image acquisition device may start capturing the to-be-processed images of the cabin from the moment the person enters the cabin until the person leaves it, or may start shooting the to-be-processed images after the person has been in the cabin for a period of time.
  • the image capture device may be arranged on the top of the cabin.
  • the number of image acquisition devices in the cabin can be one or more. When the number of the image acquisition device is 1, the image to be processed in the entire cabin is acquired by the image acquisition device. When the number of image acquisition devices is greater than one, the images to be processed in a sub-area in the cabin are respectively acquired by each image acquisition device.
  • the number, location and distribution of the image acquisition device in the cabin can be determined according to the shape and size of the cabin and the field of view of the image acquisition device.
  • For example, an image capture device can be installed in the center of the roof (referring to the inner side of the top), as shown in Figure 8(A); an image capture device can also be installed above each row of seats, as shown in Figure 8(B), so that the captured area is more comprehensive.
  • The first image captured by the image capture device can be detected frame by frame. In other embodiments, since the frame rate of the image capture device is often relatively high, for example dozens of frames per second, frame-skipping detection can also be performed on the first image, for example detecting only the 1st, 3rd and 5th captured frames.
  • The frame-skipping step (that is, the frame-number interval between adjacent detected frames) can be determined according to the actual scene: when the light is poor, there are more objects to be detected, or the captured first image is of low definition, the step can be set smaller; when the light is better, there are fewer objects to be detected, and the first image is of higher definition, the step can be set larger.
  • the field of view of the image acquisition device may be relatively large, including both areas where leftover objects are more likely to appear, and areas where there are generally no leftover objects. Therefore, when detecting potential leftover objects in the cabin in the first image, the region of interest in the first image can also be determined first, and then the leftover objects are detected in the region of interest. For example, leftover objects are more likely to appear on the seats of the cabin, but generally do not appear on the center console. Therefore, the seats are the area of interest.
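  • A minimal sketch combining the frame-skipping and region-of-interest ideas above; the step sizes and the seat-area ROI coordinates are assumptions for the example.

```python
# Minimal sketch: pick frames at a scene-dependent step, and restrict the
# leftover object detection to a region of interest such as the seat area.
def select_frames(frames, poor_conditions: bool):
    step = 2 if poor_conditions else 5  # smaller step in poor light, more objects
    return frames[::step]

def crop_roi(image, roi=(100, 200, 400, 300)):
    x, y, w, h = roi                    # e.g. the seat area of the cabin
    return image[y:y + h, x:x + w]      # numpy-style crop; detection runs here only

print(select_frames(list(range(12)), poor_conditions=True))  # [0, 2, 4, 6, 8, 10]
```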
  • the embodiments of this specification also provide a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method described in any of the foregoing embodiments is implemented.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, Magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices. According to the definition in this article, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
  • a typical implementation device is a computer.
  • the specific form of the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email receiving and sending device, and a game control A console, a tablet computer, a wearable device, or a combination of any of these devices.
  • The various embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the others.
  • In particular, the description of the device embodiments is relatively brief since they largely correspond to the method embodiments; for relevant parts, refer to the description of the method embodiments.
  • The device embodiments described above are only illustrative, and the modules described as separate components may or may not be physically separated.
  • The functions of the modules may be implemented in one or more pieces of software and/or hardware when implementing the solutions of the embodiments of this specification. Some or all of the modules may also be selected according to actual needs to achieve the objectives of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A leftover object detection method is provided: a reference image of the cabin of a vehicle is acquired when there is no leftover object in the cabin; a first image of the cabin is collected when a person leaves the vehicle; and objects left in the cabin when the person leaves the vehicle are detected according to the first image and the reference image.

Description

Leftover Object Detection
Cross-Reference to Related Application
The present disclosure claims priority to Chinese patent application No. 202010217625.9, filed on March 25, 2020, the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer vision technology, and in particular to a leftover object detection method and device, and a vehicle.
Background
When riding in a vehicle, people often leave items behind (for example, wallets, keys, etc.), or even living beings (for example, pets, children, etc.), causing property loss and even putting the life of a living being left in the vehicle at risk. It is therefore necessary to detect leftover objects in a vehicle, so that countermeasures can be taken according to the detection result to reduce loss and risk.
Summary
The present disclosure provides a leftover object detection method and device, and a vehicle.
According to a first aspect of the embodiments of the present disclosure, there is provided a leftover object detection method, the method including: acquiring a reference image of the cabin of a vehicle when there is no leftover object in the cabin; collecting a first image of the cabin when a person leaves the vehicle; and detecting, according to the first image and the reference image, objects left in the cabin when the person leaves the vehicle.
According to a second aspect of the embodiments of the present disclosure, there is provided a leftover object detection device, the device including: a first acquisition module, configured to acquire a reference image of the cabin of a vehicle when there is no leftover object in the cabin; a first collection module, configured to collect a first image of the cabin when a person leaves the vehicle; and a detection module, configured to detect, according to the first image and the reference image, objects left in the cabin when the person leaves the vehicle.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer device including a memory and a processor. A computer program is stored on the memory and can be executed by the processor to implement the method of any embodiment.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a vehicle, the cabin of which is provided with an image acquisition device, and which further includes, communicatively connected to the image acquisition device, a leftover object detection device according to any embodiment of the present disclosure or a computer device according to any embodiment of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method of any embodiment is implemented.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a computer program product which, when read and executed by a computer, implements the method of any embodiment.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method of any embodiment of the present disclosure.
In the embodiments of the present disclosure, a reference image of the cabin of a vehicle is acquired when there is no leftover object in the cabin, a first image of the cabin is collected when a person leaves the vehicle, and objects left in the cabin when the person leaves the vehicle are detected according to the first image and the reference image. This approach can detect not only animate living bodies but also inanimate items; the method is simple, widely applicable, and highly accurate.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
Fig. 1 is a flowchart of a leftover object detection method according to an embodiment of the present disclosure.
Fig. 2(A) is a schematic diagram of a first image according to an embodiment of the present disclosure.
Fig. 2(B) is a schematic diagram of a reference image according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of the relationship between machine learning models and image acquisition devices according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a message notification interface of a communication terminal according to an embodiment of the present disclosure.
Fig. 5 is a block diagram of a leftover object detection device according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a computer device according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a vehicle according to an embodiment of the present disclosure.
Fig. 8(A) is a schematic diagram of one distribution of image acquisition devices according to an embodiment of the present disclosure.
Fig. 8(B) is a schematic diagram of another distribution of image acquisition devices according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The singular forms "a", "said" and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information and, similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
In order to enable those skilled in the art to better understand the technical solutions in the embodiments of the present disclosure, and to make the above objects, features and advantages of the embodiments of the present disclosure more apparent and comprehensible, the technical solutions in the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
An embodiment of the present disclosure provides a method for detecting leftover objects in a vehicle. As shown in Fig. 1, the method may include the following steps 101 to 103.
Step 101: Acquire a reference image of the cabin of a vehicle when there is no leftover object in the cabin.
Step 102: Collect a first image of the cabin when a person leaves the vehicle.
Step 103: Detect, according to the first image and the reference image, objects left in the cabin when the person leaves the vehicle.
In the embodiments of the present disclosure, the vehicle may be a road vehicle, for example, a private car, a bus, a school bus, a truck or a train, and may also be a means of transport used to carry people or goods, such as a ship or an airplane. Accordingly, the cabin of the vehicle may be a car cabin, a ship cabin, an aircraft cabin, or the like. The solutions of the embodiments of the present disclosure are described below taking a car and its cabin as an example; the technical implementation of leftover object detection for other types of vehicles is similar and is not repeated here.
In step 101, a reference image of the cabin may be acquired when there is no leftover object in the cabin. The reference image may include only the inherent objects of the cabin (for example, seats, the steering wheel, interior trim, etc.), and no leftover objects. The reference image may be collected and stored when there is no leftover object in the cabin, and may be used once or repeatedly, for example reused whenever leftover object detection is required. For example, it may be collected by the image acquisition device on the vehicle before the vehicle leaves the factory and stored in the vehicle's storage unit, or it may be collected, when the user confirms that there is no leftover object in the cabin, by the image acquisition device on the vehicle or by a user terminal (for example, a mobile phone, a tablet computer or a camera) and stored in the vehicle's storage unit. Further, the original image collected in the cabin may be compressed, and the compressed image stored as the background image, thereby reducing storage space and improving image processing efficiency.
The reference image may be collected once and used as the reference image for every leftover object detection. Alternatively, the reference image may be updated in a specific manner: it may be updated at regular intervals (for example, every day or every week), or updated when triggered by a specific event. The specific event may be the detection of a leftover object or the receipt of a reference image update instruction; for example, detecting a change in the cabin background, or detecting that a person is about to enter the cabin of the vehicle (for example, the vehicle being unlocked), may trigger an update of the reference image.
The background image may be a background image of the entire cabin, or a background image of one or more areas in the cabin (for example, at least one of the driving area, the front passenger area, the rear seats, a child seat, a priority seating area, the trunk area, the luggage area, etc.). After an image of the entire cabin is collected, the collected image may be used directly as the background image, or the image may be cropped as needed and the cropped image used as the background image. The number of background images may be one or more. For example, when the cabin includes one image acquisition device, the background image may be a single image collected by that device; when the cabin includes multiple image acquisition devices, the number of background images may be greater than one, each collected by one of the devices.
In practice, after a person gets off, if there is no leftover object in the cabin, an image of the cabin may be taken and stored as the reference image for leftover object detection the next time a person leaves the vehicle. Alternatively, a target image of the cabin may be collected before the person enters the vehicle and used as the reference image for leftover object detection when the person leaves the vehicle. Leftover objects may include items such as wallets, keys, mobile phones, umbrellas, briefcases and suitcases, and may also include living beings such as people and pets; the people may be children, the elderly, people who have fallen asleep, people with limited mobility, and all kinds of other people who might be left in the vehicle.
In step 102, the person may include anyone on the vehicle, such as a driver, a crew member or a passenger. Whether the person leaves the vehicle may be determined based on at least one of the opening state of the cabin door, the operating state of the vehicle, the movement trajectory of the person, and a specific instruction. For example, when it is detected that a door is opened or the person's seat belt is unfastened, it is determined that the person gets off. As another example, when it is detected that the vehicle is turned off and a door is opened, it is determined that the person gets off. As another example, when the person's movement trajectory is detected to run from inside the cabin to outside, it is determined that the person gets off. As another example, when a confirmation instruction for the end of the trip is detected, it is determined that the person gets off. Whether the person leaves the vehicle may also be determined in other ways, which are not repeated here.
The first image of the cabin may comprise one image or several. Each first image may be captured by one image capture device in the cabin; for example, one or more images may be taken from the video stream captured by the image capture device and used as the first image.
In step 103, a left-behind object in the cabin at the time the person leaves the vehicle may be detected according to the first image and the reference image. A left-behind object may include a living being and/or an item carried into the cabin by the person and forgotten there when the person leaves the vehicle. Where there are multiple first images and multiple reference images, the left-behind object may be detected separately from each first image and its corresponding reference image. For example, if the i-th first image and the i-th reference image both correspond to the i-th sub-region of the cabin, a left-behind object in the i-th sub-region at the time the person leaves can be detected from the i-th first image and the i-th reference image. Each detection round may cover all sub-regions or only some of them.
The detection of left-behind objects may run continuously, or may be triggered in specific situations, for example by the vehicle itself, or passively by a user terminal that has established a communication connection with the vehicle in advance. The user terminal may send a detection trigger instruction, and upon receipt of that instruction the left-behind object detection is started. The detection trigger instruction may also include a target detection category, so as to determine whether the image to be processed contains a left-behind object of a specific category. For example, a user who discovers after getting out that their keys are missing can send, via a mobile phone, a detection trigger instruction containing the "keys" category, thereby triggering detection of left-behind objects of the "keys" category.
In some embodiments, the left-behind object in the cabin at the time the person leaves the vehicle may be detected according to the difference between the first image and the reference image. For example, at least one target object that the first image contains but the reference image does not may be determined as the left-behind object. As shown in FIG. 2(A), the first image contains a mobile phone, a child, a seat and a cushion; as shown in FIG. 2(B), the reference image contains the seat and the cushion. What the first image contains but the reference image does not are the mobile phone and the child, so the mobile phone and/or the child are determined as left-behind objects in the cabin at the time the person leaves. This approach detects left-behind objects quite intuitively and is simple and cheap to implement; a minimal sketch follows.
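A minimal sketch of this set-difference detection, assuming a detector function `detect_objects` that returns a set of labels for an image; the detector itself is not specified by the disclosure:

```python
def find_left_objects(first_image, reference_image, detect_objects):
    """Return objects present in the first image but absent from the reference.

    `detect_objects(image)` is an assumed detector returning a set of
    label strings, e.g. {"seat", "cushion", "phone", "child"}.
    """
    in_first = detect_objects(first_image)
    in_reference = detect_objects(reference_image)
    # Anything the first image contains that the empty-cabin reference
    # does not is treated as a left-behind object.
    return in_first - in_reference
```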
The target objects in the reference image may be obtained by annotating the reference image or by running detection on it. The target objects may be re-detected from the reference image each time the difference between the first image and the reference image is determined, or target objects detected in the past may be reused directly.
In some embodiments, a second image of the cabin may be captured when the person enters the vehicle, and the left-behind objects to be detected may be determined according to the difference between the second image and the reference image. "When the person enters the vehicle" may cover the moment the person enters as well as the period after entering and before leaving. For example, the second image may be captured when the vehicle is about to reach its destination; in practice, whether the vehicle is about to reach the destination can be determined from an application running on a smart terminal in the vehicle (for example, a map application or a ride-hailing application). In this way, the left-behind objects to be detected that relate to a specific person can be determined, so that left-behind object detection is performed only for those objects, improving detection accuracy and reducing the resources consumed by detection.
In some embodiments, at least one target object that the second image contains but the reference image does not may be determined as the left-behind object to be detected. In this way, an association can be established between left-behind objects and persons, and only the left-behind objects associated with a specific person are detected. Identity information may be assigned to each person entering the vehicle, and the left-behind objects to be detected that were determined when the person entered may be bound to that person's identity information, thereby establishing the association. On this basis, when the person leaves the vehicle, left-behind objects are determined only from among those associated with that person, reducing the probability that an object left in the vehicle by someone else is attributed to this person.
For example, if the second image taken when person A boards contains a mobile phone, and a bunch of keys was already in the car before person A boarded, only the mobile phone is taken as the left-behind object to be detected for person A. When person A gets out, if the first image taken at that time contains the mobile phone, it is determined that person A has left something in the car; if it does not, it is determined that person A has left nothing behind.
Where several people enter the cabin, a second image of the cabin may be captured for each person entering the vehicle, and each person's left-behind objects to be detected may be determined according to the difference between that person's second image and the reference image.
In other embodiments, a second image of the cabin may be captured when the person enters the vehicle; the position of the left-behind object to be detected in the first image is determined based on the reference image and the second image; and the left-behind object is then detected, based on that position, from among the candidates in the first image. In this way, the approximate position of the object to be detected is determined first and detection then proceeds from that position, improving detection efficiency.
Specifically, the second image and the reference image may be input into a pre-trained first machine learning model; the left-behind object to be detected (referred to as a suspected left-behind object) and its position in the second image are determined from the model's output, and the suspected object's position in the first image is then determined from its position in the second image. The machine learning model may be a neural network, or a model combining a neural network with a traditional vision algorithm (for example, optical flow, image sharpening, image differencing or a Kalman-type tracking algorithm). The neural network in the embodiments of the present disclosure may include an input layer, at least one intermediate layer and an output layer, each comprising one or more neurons. An intermediate layer generally refers to a layer between the input layer and the output layer, such as a hidden layer. In an optional example, the intermediate layers of the neural network may include, but are not limited to, at least one of a convolutional layer, a ReLU (Rectified Linear Unit) layer and the like; the more intermediate layers the network contains, the deeper it is. The neural network may specifically be a deep neural network or a convolutional neural network.
For example, the left-behind object to be detected may be target-tracked based on the reference image and multiple frames of the second image, and its position in the first image determined from the tracking result. In this way, the objects that might be forgotten in the cabin (i.e., suspected left-behind objects) are identified first and located fairly accurately, and the left-behind object is then detected from that position, improving detection efficiency and precision. A simple association-based tracker is sketched below.
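The following sketch illustrates one possible association-based tracker of the kind described, using intersection-over-union matching; the `detect_boxes` callable and the threshold are assumptions, not the disclosed method:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def track_last_position(candidate_box, frames, detect_boxes, iou_threshold=0.3):
    """Follow one suspected left-behind object across second-image frames.

    `detect_boxes(frame)` is an assumed detector returning candidate boxes;
    the box matched in the last frame approximates the object's position
    in the first image, as described above.
    """
    box = candidate_box
    for frame in frames:
        matches = [(iou(box, b), b) for b in detect_boxes(frame)]
        score, best = max(matches, default=(0.0, box))
        if score >= iou_threshold:
            box = best  # association by best spatial overlap
    return box
```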
As shown in FIG. 3, where the second images are captured by several image capture devices in the cabin, each image capture device may correspond to one machine learning model, each model being used to examine the second images captured by one device. For example, where the cabin contains N image capture devices, the second image captured by image capture device 1 and the background image may be input into machine learning model 1 to detect suspected left-behind objects in the second image captured by device 1; the second image captured by image capture device 2 and the background image may be input into machine learning model 2 to detect suspected left-behind objects in the second image captured by device 2; and so on. A single machine learning model may also be shared to examine the images captured by several different image capture devices; the present disclosure does not limit this.
In some embodiments, before the left-behind object in the cabin at the time the person leaves is detected according to the first image and the reference image, non-left-behind objects may first be filtered out of the first image. Non-left-behind objects may include items carried into the cabin by a person and intended to stay there, for example cushions or car ornaments. For example, a left-behind object confirmation instruction may be received, and non-left-behind objects filtered out of the first image according to that instruction. As one implementation, a third image of the cabin may be taken before the person leaves the vehicle and sent to a display device (for example, the vehicle's center-console screen or the display of a user terminal), and the person may send the left-behind object confirmation instruction through the user terminal or the vehicle's center console. As another example, the historical handling of left-behind objects may be consulted: if an object was determined to be a left-behind object in past detections but was not dealt with (for example, taken out of the cabin) for a long time or on several occasions, the object is determined to be a non-left-behind object. This reduces the probability of misjudging items intended to stay in the cabin as left-behind objects, reducing false detections.
When a left-behind object is detected, its position in the cabin and/or its category may be determined. The position may be coarse, such as the front passenger seat or the rear seats, or more precise, for example the object's coordinates within the cabin. The categories may simply be a living category and an item category, and each may be further divided into finer sub-categories: the living category into, for example, pets and children, and the item category into, for example, keys, wallets and mobile phones. Determining the position and/or category of the left-behind object makes it convenient to perform subsequent operations accordingly, for example sending a notification message or adjusting the cabin environment parameters to reduce the probability of the left-behind object coming to harm.
The first image may be input into a pre-trained second machine learning model, and the position and/or category of the left-behind object obtained from the model's output. The second machine learning model may be the same model as the aforementioned first machine learning model, or a different one. Further, the second machine learning model may include a first sub-model for detecting living objects and a second sub-model for detecting item objects. The first sub-model may be pre-trained on sample images of living objects, and the second sub-model on sample images of items. The sample images may include images taken under different lighting intensities and in different scenes, to improve the accuracy of the trained recognition model. A possible way to combine the two sub-models is sketched below.
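One purely illustrative way to combine the two sub-models; their interfaces and the precedence rule are assumptions, not specified by the disclosure:

```python
def classify_left_object(crop, living_submodel, item_submodel, living_threshold=0.5):
    """Run both sub-models on a cropped candidate and pick the category.

    `living_submodel(crop)` and `item_submodel(crop)` are assumed to
    return (label, confidence) pairs, e.g. ("child", 0.92) or ("wallet", 0.88).
    """
    living_label, living_conf = living_submodel(crop)
    item_label, item_conf = item_submodel(crop)
    # The living-body sub-model takes precedence when it is confident,
    # since a living left-behind object is the higher-risk case.
    if living_conf >= living_threshold and living_conf >= item_conf:
        return ("living", living_label, living_conf)
    return ("item", item_label, item_conf)
```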
In some embodiments, when the left-behind object is detected, a first notification message may be sent to the vehicle and/or a preset communication terminal. Sending the first notification message helps people notice the left-behind object and retrieve it promptly. The first notification message may contain prompt information indicating that a left-behind object exists, and may further contain the time the object was left, and/or its category and/or position. After the first notification message is sent to the vehicle, the vehicle may output prompt information, including audible prompts through the car audio system or horn and/or light prompts through the vehicle lamps. Further, by outputting different audible prompts, or outputting light prompts from lighting devices at different positions, different positions of the left-behind object can be indicated.
The communication terminal may establish a communication connection with the vehicle via mobile data, Bluetooth, WiFi or any other connection, and may be a smart terminal such as a mobile phone, a tablet, a smart watch or a laptop. After the first notification message is sent to the communication terminal, the terminal may output prompt information; as shown in FIG. 4, the prompt information includes at least one of text prompts and image prompts. A text prompt may read, for example, "There is an item left on the rear seat" or "There is a child in the car", and may also include the time the left-behind object was detected, for example "Time: 18:35, 13 February 2020; Location: rear seat; Category: wallet". The image prompt may include the first image, or only an image of the left-behind object cropped from the first image; sending the cropped image reduces the amount of data transmitted. A possible message layout is sketched below.
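A hypothetical layout for such a first notification message; all field names are assumptions, and the cropped image is transmitted base64-encoded:

```python
import base64
import json
from datetime import datetime

def build_first_notification(category, location, crop_jpeg_bytes):
    """Assemble the first notification message described above.

    Cropping the left-behind object from the first image, rather than
    sending the whole frame, reduces the amount of data transmitted.
    """
    return json.dumps({
        "type": "left_object_alert",
        "time": datetime.now().isoformat(timespec="seconds"),
        "category": category,          # e.g. "wallet" or "child"
        "location": location,          # e.g. "rear seat"
        "image": base64.b64encode(crop_jpeg_bytes).decode("ascii"),
    })
```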
When the left-behind object is detected, first control information for adjusting the cabin environment parameters may also be sent to the vehicle based on the object's category, so as to provide a relatively comfortable cabin environment for the left-behind object. For example, when the category of the left-behind object is the living category, first control information for adjusting the cabin environment parameters, such as window-opening control information and/or air-conditioning operation control information, may be sent to the vehicle to reduce the probability that a poor cabin environment (for example, excessive temperature or low oxygen) endangers the living left-behind object. Specifically, when the category of the left-behind object is the living category, window-opening control information may be sent to the vehicle to open the vehicle's windows.
The window-opening control information includes, but is not limited to, information on the number, position and/or degree of opening of the windows to be opened. Optionally, the window-opening control information may be generated based on at least one of the position of the left-behind object, the cabin environment parameters and the environment parameters outside the cabin. For example, if the left-behind object is on the rear seat, one rear window may be opened; if the cabin oxygen level falls below a certain threshold, two rear windows may be opened. The degree of opening may be preset; for example, the window travel may be fixed at 5 cm, which keeps the cabin oxygen level in the required range while preventing people outside from harming the left-behind object or the object from climbing out of the window, thereby keeping it safe. The degree of opening may also be set dynamically according to the environment outside the cabin: when the outside temperature is outside a preset range the opening may be set smaller (for example, 5 cm), and otherwise larger (for example, 8 cm), reducing the influence of the outside environment on the left-behind object. A possible decision rule is sketched below.
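A hypothetical decision rule reflecting the examples above; the oxygen threshold, the temperature band and the returned dictionary layout are assumptions:

```python
def window_open_command(object_location, oxygen_fraction, outside_temp_c):
    """Derive window-opening control information from the cues above."""
    windows = ["rear_left"] if object_location == "rear seat" else ["front_right"]
    if oxygen_fraction < 0.195:  # low cabin oxygen: open a second window
        windows = ["rear_left", "rear_right"]
    # A small gap keeps oxygen up without letting the occupant climb out
    # or outsiders reach in; widen it only in mild outside temperatures.
    opening_cm = 8 if 10 <= outside_temp_c <= 28 else 5
    return {"windows": windows, "opening_cm": opening_cm}
```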
When the category of the left-behind object is the living category, air-conditioning operation control information may also be sent to the vehicle to switch on the air conditioning, and the temperature and/or operating mode of the air conditioning (for example, cooling or heating) may further be controlled. Controlling the windows and/or air conditioning reduces the probability that an excessively high cabin temperature, insufficient oxygen or similar conditions endanger the living left-behind object.
In one example, the first control information may be sent directly once a left-behind object is detected. For example, air-conditioning operation control information may be sent to the vehicle to run the air conditioning in a temperature/humidity mode suitable for living beings, and window-opening control information may be sent to control how far the windows open, for example opening them a crack rather than fully, so as to improve the air in the vehicle while preventing the living being from leaving the cabin or being threatened from outside.
In another example, the cabin environment parameters may be monitored first, and the first control information sent only when a parameter goes outside a preset range. For example, when the cabin temperature is detected to be too high or too low, air-conditioning operation control information may be sent to the vehicle, and once the cabin temperature is suitable the air conditioning may be switched off again. As another example, when the cabin oxygen level is detected to be too low, window-opening control information may be sent to the vehicle to open a window a crack.
In some embodiments, when the left-behind object is detected, it may be determined whether the object has been taken out of the cabin. When it is determined that the object has been taken out, at least one of the following operations may be performed: recording the time the object was taken out of the cabin and/or the identity information of the person who took it out; sending to the vehicle second control information for adjusting the cabin environment parameters; and sending a second notification message to the preset communication terminal. For example, a record such as "At 19:00:35 on 22 March 2020, the user with ID XXX took out the pet" may be kept. If first control information for adjusting the cabin environment parameters was sent to the vehicle earlier, second control information may now be sent, for example window-closing control information and/or air-conditioning-off control information to close the vehicle's windows and/or switch off its air conditioning. This reduces the vehicle's energy consumption and the need for manual operation, lowering operational complexity. A second notification message may also be generated from the recorded information and sent to the communication terminal, including at least one of the name, category and removal time of the object taken out and the identity information of the person who took it out. Notifying promptly when a left-behind object is removed reduces the probability of it being taken by the wrong person.
Optionally, a third image of the cabin may be acquired within a preset period after the person leaves the vehicle, and whether the left-behind object has been taken out of the cabin determined from the third image and the reference image. Specifically, this can be determined from the difference between the third image and the reference image: if there is at least one target object that the third image contains but the reference image does not, it is determined that a left-behind object has not yet been taken out; otherwise, it is determined that all left-behind objects have been taken out. Checking removal by capturing images is simple to implement and cheap.
Optionally, a third image of the cabin may be acquired within a preset period after the person leaves the vehicle, and whether the left-behind object has been taken out determined from the third image and the first image. Specifically, this can be determined from the difference between the two: if at least one target object that the first image contains no longer appears in the third image, it is determined that that left-behind object has been taken out; otherwise, it is determined that the left-behind object has not been taken out. This comparison is sketched below.
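This first-image/third-image comparison could be sketched as follows, reusing the assumed `detect_objects` detector from the earlier sketch:

```python
def removed_objects(first_image, third_image, detect_objects):
    """Check which detected left-behind objects have been taken out.

    An object that appeared in the first image but no longer appears in
    the third image (captured after the person left) is considered taken
    out; objects still visible in both remain in the cabin.
    """
    left_before = detect_objects(first_image)
    still_there = detect_objects(third_image)
    taken_out = left_before - still_there
    remaining = left_before & still_there
    return taken_out, remaining
```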
In some embodiments, when a left-behind object is detected, its information may be stored in a database. The information may include at least one of the object's image, position, category, the time it was left, the person it belongs to and the person who took it out. Building such a database makes it convenient to look up information about left-behind objects.
In the above embodiments, the image capture device (for example, a camera) on the vehicle may be started when image capture is needed and switched off when it is not, so that the device need not stay in operation all the time, reducing energy consumption. For example, the image capture device on the vehicle may be enabled when the vehicle is started, or when it is determined that a person is entering the vehicle, or when it is determined that a person is about to leave the vehicle. As another example, if it is necessary to check whether the left-behind object has been taken out of the cabin, the image capture device may be switched off once it is determined that the object has been taken out; if only detection of left-behind objects is needed, the device may be switched off once the person has left the vehicle.
The left-behind object detection of the embodiments of the present disclosure can detect not only living beings but also static items, with high accuracy. The embodiments can be used in different application scenarios such as private cars, ride-hailing cars and school buses, and are widely applicable. Whether a person is boarding or alighting can be determined in different ways according to the actual scenario: in a private-car scenario, for example, from the signal strength of the communication connection between the driver's communication terminal and the vehicle; in a ride-hailing scenario, from the driver's operations in the ride-hailing application (for example, confirming pickup of a passenger or arrival at the destination).
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
As shown in FIG. 5, the present disclosure also provides a left-behind object detection apparatus. The apparatus includes a first acquisition module 501, a first capture module 502 and a detection module 503.
The first acquisition module 501 is configured to acquire a reference image of the cabin of a vehicle captured when there is no left-behind object in the cabin.
The first capture module 502 is configured to capture a first image of the cabin when a person leaves the vehicle.
The detection module 503 is configured to detect, according to the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle.
In some embodiments, the first acquisition module 501 is configured to capture a target image of the cabin before the person enters the vehicle, the target image being the reference image; the detection module is configured to detect, according to the difference between the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle.
In some embodiments, the apparatus further includes: a second capture module configured to capture a second image of the cabin when the person enters the vehicle; and a first determination module configured to determine, according to the difference between the second image and the reference image, the left-behind object to be detected.
In some embodiments, the first determination module includes: a first acquisition unit configured to acquire target object information of the second image and of the reference image respectively; and a first determination unit configured to determine at least one target object that the second image contains but the reference image does not as the left-behind object to be detected.
In some embodiments, the detection module 503 includes: a first capture unit configured to capture a second image of the cabin when the person enters the vehicle; a second determination unit configured to determine, based on the reference image and the second image, a position of the left-behind object to be detected in the first image; and a detection unit configured to detect, based on that position, the left-behind object from among the left-behind objects to be detected in the first image.
In some embodiments, the second determination unit includes: a tracking sub-unit configured to perform target tracking on the left-behind object to be detected based on the reference image and multiple frames of the second image; and a determination sub-unit configured to determine the position of the left-behind object to be detected in the first image from the result of the target tracking.
In some embodiments, the apparatus further includes: a receiving module configured to receive a left-behind object confirmation instruction before the left-behind object in the cabin at the time the person leaves is detected according to the first image and the reference image; and a filtering module configured to filter non-left-behind objects out of the first image according to the left-behind object confirmation instruction.
In some embodiments, the apparatus further includes: a second determination module configured to determine, when the left-behind object is detected, its position in the cabin and/or its category.
In some embodiments, the apparatus further includes: a first sending module configured to send, when the left-behind object is detected, a first notification message to the vehicle and/or a preset communication terminal.
In some embodiments, the apparatus further includes: a second sending module configured to send to the vehicle, when the left-behind object is detected and based on its category, first control information for adjusting the cabin environment parameters.
In some embodiments, the second sending module is configured to send the first control information for adjusting the cabin environment parameters to the vehicle when the category of the left-behind object is the living category.
In some embodiments, the second sending module is configured to send window-opening control information and/or air-conditioning operation control information to the vehicle.
In some embodiments, the apparatus further includes: a third determination module configured to determine, when the left-behind object is detected, whether it has been taken out of the cabin; and an execution module configured to perform, when it is determined that the object has been taken out, at least one of the following operations: recording the time the object was taken out of the cabin and/or the identity information of the person who took it out; sending to the vehicle second control information for adjusting the cabin environment parameters; and sending a second notification message to the preset communication terminal.
In some embodiments, the third determination module includes: a second acquisition unit configured to acquire a third image of the cabin within a preset period after the person leaves the vehicle; and a third determination unit configured to determine, from the third image and the reference image, whether the left-behind object has been taken out of the cabin.
In some embodiments, the apparatus further includes: an enabling module configured to enable the image capture device on the vehicle when the vehicle is started.
In some embodiments, the apparatus further includes: a switch-off module configured to switch off the image capture device when it is determined that the left-behind object has been taken out of the cabin.
The functions or modules of the apparatus provided in the embodiments of the present disclosure can be used to perform the methods described in the method embodiments above; for their specific implementation, reference may be made to the description of the method embodiments, which is not repeated here for brevity.
An embodiment of the present disclosure also provides a computer device including a memory and a processor. A computer program is stored on the memory and is executable by the processor to implement the method of any of the embodiments.
FIG. 6 shows a schematic diagram of a more specific hardware structure of a computer device provided by an embodiment of this specification. The device may include a processor 601, a memory 602, an input/output interface 603, a communication interface 604 and a bus 605, where the processor 601, memory 602, input/output interface 603 and communication interface 604 communicate with one another inside the device through the bus 605.
The processor 601 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and executes the relevant programs to implement the technical solutions provided by the embodiments of this specification.
The memory 602 may be implemented as a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 602 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented in software or firmware, the relevant program code is stored in the memory 602 and is called and executed by the processor 601.
The input/output interface 603 is used to connect input/output modules for information input and output. The input/output modules may be configured as components within the device (not shown in the figure) or externally attached to the device to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone and various sensors; output devices may include a display, a loudspeaker, a vibrator and indicator lights.
The communication interface 604 is used to connect a communication module (not shown in the figure) for communication between this device and other devices. The communication module may communicate by wire (for example, USB or network cable) or wirelessly (for example, mobile network, WiFi or Bluetooth).
The bus 605 includes a path that carries information between the components of the device (for example, the processor 601, memory 602, input/output interface 603 and communication interface 604).
It should be noted that although only the processor 601, memory 602, input/output interface 603, communication interface 604 and bus 605 are shown for the above device, in a specific implementation the device may also include other components necessary for normal operation. Moreover, those skilled in the art will understand that the device may also contain only the components necessary to implement the solutions of the embodiments of this specification, rather than all the components shown in the figure.
As shown in FIG. 7, an embodiment of the present disclosure also provides a vehicle. An image capture device is arranged in the cabin of the vehicle, together with the left-behind object detection apparatus of any embodiment of the present disclosure, or the computer device of any embodiment of the present disclosure, communicatively connected to the image capture device.
The image capture device is used to obtain the first image. It may start shooting the to-be-processed images of the cabin from the moment a person enters the cabin until the person leaves, or it may start shooting the to-be-processed images only some time after the person has entered the cabin.
In some embodiments, the image capture device may be mounted at the top of the cabin. There may be one image capture device in the cabin or several. With a single device, the to-be-processed images of the whole cabin are captured by that device; with more than one, each device captures the to-be-processed images of one sub-region of the cabin. The number, positions and distribution of the image capture devices may be determined from the shape and size of the cabin and the field of view of the devices. For example, for a narrow, long space such as a car cabin, a single image capture device may be placed at the centre of the roof (the inner top side), as shown in FIG. 8(A), or one device may be placed above each row of seats, as shown in FIG. 8(B). Installing several image capture devices makes the covered area more complete.
In some embodiments, the first images captured by the image capture device may be examined frame by frame. In other embodiments, since the frame rate of the capture is often high, for example several tens of frames per second, the first images may instead be examined with frame skipping, for example examining only the 1st, 3rd and 5th captured frames. The skip step (i.e., the number of frames between adjacent examined frames) may be chosen according to the actual scene: it may be set smaller under poor lighting, with many objects to detect, or when the captured first images are of low clarity, and larger under good lighting, with few objects to detect, or when the captured first images are sharp. A possible step-selection rule is sketched below.
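A possible step-selection rule following the trade-off above; the concrete step values 2 and 5 are assumptions:

```python
def choose_frame_step(low_light: bool, many_objects: bool, low_clarity: bool) -> int:
    """Pick a frame-skipping step for detection on the first-image stream.

    Harder conditions (poor light, many objects, blurry frames) favor a
    smaller step, i.e. denser detection.
    """
    return 2 if (low_light or many_objects or low_clarity) else 5

def frames_to_detect(frames, step):
    # e.g. step 2 over frames 0..5 selects frames 0, 2, 4 for detection
    return frames[::step]
```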
The field of view of the image capture device may be large, covering both areas where left-behind objects are likely to appear and areas where they generally do not. Therefore, when detecting potential left-behind objects in the first image, a region of interest in the first image may first be determined, and left-behind objects then detected within that region. For example, left-behind objects often appear on the cabin seats but generally not on the centre console, so the seats are the region of interest. A region-restricted detection sketch follows.
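A sketch of region-restricted detection, assuming numpy-style image arrays, an assumed (x1, y1, x2, y2) pixel layout for the regions, and the same assumed `detect_objects` detector:

```python
def detect_in_rois(first_image, rois, detect_objects):
    """Restrict detection to regions of interest such as the seats.

    `rois` maps a region name to an (x1, y1, x2, y2) crop in pixel
    coordinates; areas like the centre console, where objects are rarely
    left, are simply not listed.
    """
    found = {}
    for name, (x1, y1, x2, y2) in rois.items():
        crop = first_image[y1:y2, x1:x2]  # numpy-style crop of the region
        found[name] = detect_objects(crop)
    return found
```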
The embodiments of this specification also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method of any of the foregoing embodiments.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
From the description of the above implementations, those skilled in the art can clearly understand that the embodiments of this specification can be implemented by means of software plus the necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of this specification or in certain parts of the embodiments.
The systems, apparatuses, modules or units set forth in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may specifically take the form of a personal computer, a laptop, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet, a wearable device, or a combination of any of these devices.
The embodiments in this specification are described progressively; for the parts that the embodiments have in common, reference may be made between them, and each embodiment focuses on what differs from the other embodiments. In particular, the apparatus embodiment is described relatively simply because it is basically similar to the method embodiment; for the relevant points, refer to the description of the method embodiment. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and when implementing the solutions of the embodiments of this specification the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.

Claims (20)

  1. A left-behind object detection method, characterized in that the method comprises:
    acquiring a reference image of a cabin of a vehicle captured when there is no left-behind object in the cabin;
    capturing a first image of the cabin when a person leaves the vehicle;
    detecting, according to the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle.
  2. The method according to claim 1, characterized in that,
    said acquiring a reference image of the cabin of the vehicle captured when there is no left-behind object in the cabin comprises: capturing a target image of the cabin before the person enters the vehicle, and determining the target image as the reference image;
    said detecting, according to the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle comprises: detecting, according to a difference between the first image and the reference image, the left-behind object in the cabin at the time the person leaves the vehicle.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    before detecting, according to the first image and the reference image, the left-behind object in the cabin at the time the person leaves the vehicle,
    capturing a second image of the cabin when the person enters the vehicle;
    determining, according to a difference between the second image and the reference image, a left-behind object to be detected.
  4. The method according to claim 3, characterized in that said determining, according to the difference between the second image and the reference image, a left-behind object to be detected comprises: acquiring target object information of the second image and of the reference image respectively;
    determining at least one target object that the second image contains but the reference image does not as the left-behind object to be detected.
  5. The method according to claim 1, characterized in that said detecting, according to the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle comprises:
    capturing a second image of the cabin when the person enters the vehicle;
    determining, based on the reference image and the second image, a position of the left-behind object to be detected in the first image;
    detecting, based on the position of the left-behind object to be detected in the first image, the left-behind object from among the left-behind objects to be detected in the first image.
  6. The method according to claim 5, characterized in that said determining, based on the reference image and the second image, a position of the left-behind object to be detected in the first image comprises:
    performing target tracking on the left-behind object to be detected based on the reference image and multiple frames of the second image;
    determining the position of the left-behind object to be detected in the first image according to a result of the target tracking.
  7. The method according to any one of claims 1 to 6, characterized in that the method further comprises:
    receiving a left-behind object confirmation instruction before detecting, according to the first image and the reference image, the left-behind object in the cabin at the time the person leaves the vehicle;
    filtering a non-left-behind object out of the first image according to the left-behind object confirmation instruction.
  8. The method according to any one of claims 1 to 7, characterized in that the method further comprises:
    when the left-behind object is detected, determining a position of the left-behind object in the cabin and/or a category of the left-behind object.
  9. The method according to any one of claims 1 to 8, characterized in that the method further comprises:
    when the left-behind object is detected, sending a first notification message to the vehicle and/or a preset communication terminal.
  10. The method according to any one of claims 1 to 9, characterized in that the method further comprises:
    when the left-behind object is detected, sending to the vehicle, based on the category of the left-behind object, first control information for adjusting an environment parameter of the cabin.
  11. The method according to claim 10, characterized in that said sending to the vehicle, based on the category of the left-behind object, first control information for adjusting an environment parameter of the cabin comprises:
    when the category of the left-behind object is detected to be a living category, sending to the vehicle the first control information for adjusting the environment parameter of the cabin.
  12. The method according to claim 10 or 11, characterized in that said sending to the vehicle the first control information for adjusting the environment parameter of the cabin comprises:
    sending window-opening control information and/or air-conditioning operation control information to the vehicle.
  13. The method according to any one of claims 1 to 12, characterized in that the method further comprises:
    when the left-behind object is detected, determining whether the left-behind object has been taken out of the cabin;
    when it is determined that the left-behind object has been taken out of the cabin, performing at least one of the following operations:
    recording a time at which the left-behind object was taken out of the cabin;
    recording identity information of the person who took out the left-behind object;
    sending to the vehicle second control information for adjusting an environment parameter of the cabin;
    sending a second notification message to a preset communication terminal.
  14. The method according to claim 13, characterized in that,
    said determining whether the left-behind object has been taken out of the cabin comprises: acquiring a third image of the cabin within a preset period after the person leaves the vehicle; and determining, from the third image and the reference image, whether the left-behind object has been taken out of the cabin; and/or,
    the method further comprises: enabling an image capture device on the vehicle when the vehicle is started; and/or,
    the method further comprises: switching off the image capture device when it is determined that the left-behind object has been taken out of the cabin.
  15. A left-behind object detection apparatus, characterized in that the apparatus comprises:
    a first acquisition module configured to acquire a reference image of a cabin of a vehicle captured when there is no left-behind object in the cabin;
    a first capture module configured to capture a first image of the cabin when a person leaves the vehicle;
    a detection module configured to detect, according to the first image and the reference image, a left-behind object in the cabin at the time the person leaves the vehicle.
  16. A computer device comprising a memory and a processor, a computer program being stored on the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 14 when executing the program.
  17. A vehicle, characterized in that an image capture device is arranged in a cabin of the vehicle, together with the left-behind object detection apparatus according to claim 15, or the computer device according to claim 16, communicatively connected to the image capture device.
  18. A computer-readable storage medium on which a computer program is stored, characterized in that the program implements the method according to any one of claims 1 to 14 when executed by a processor.
  19. A computer program product, characterized in that when the computer program product is read and executed by a computer, the method according to any one of claims 1 to 14 is implemented.
  20. A computer program comprising computer-readable code, characterized in that when the computer-readable code runs in an electronic device, a processor in the electronic device implements the method according to any one of claims 1 to 14.
PCT/CN2020/093003 2020-03-25 2020-05-28 Left-behind object detection WO2021189641A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021540530A JP7403546B2 (ja) 2020-03-25 2020-05-28 Left-behind object detection
KR1020217022181A KR20210121015A (ko) 2020-03-25 2020-05-28 Detection of left-behind objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010217625.9 2020-03-25
CN202010217625.9A CN111415347B (zh) 2020-03-25 2020-03-25 Left-behind object detection method and apparatus, and vehicle

Publications (1)

Publication Number Publication Date
WO2021189641A1 true WO2021189641A1 (zh) 2021-09-30

Family

ID=71493201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093003 WO2021189641A1 (zh) 2020-03-25 2020-05-28 Left-behind object detection

Country Status (4)

Country Link
JP (1) JP7403546B2 (zh)
KR (1) KR20210121015A (zh)
CN (1) CN111415347B (zh)
WO (1) WO2021189641A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115848306A (zh) * 2022-12-23 2023-03-28 阿维塔科技(重庆)有限公司 Method and apparatus for detecting a person left behind in a vehicle, and vehicle
CN117036482A (zh) * 2023-08-22 2023-11-10 北京智芯微电子科技有限公司 Target object locating method and apparatus, photographing device, chip, device and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931734A (zh) * 2020-09-25 2020-11-13 深圳佑驾创新科技有限公司 Method and apparatus for recognizing left-behind objects, vehicle-mounted terminal and storage medium
CN113792622A (zh) * 2021-08-27 2021-12-14 深圳市商汤科技有限公司 Frame rate adjustment method and apparatus, electronic device and storage medium
CN113763683A (zh) * 2021-09-09 2021-12-07 南京奥拓电子科技有限公司 Method, apparatus and storage medium for left-behind item reminders
WO2023039781A1 (zh) * 2021-09-16 2023-03-23 华北电力大学扬中智能电气研究中心 Left-behind object detection method and apparatus, electronic device and storage medium
CN116416192A (zh) * 2021-12-30 2023-07-11 华为技术有限公司 Detection method and apparatus
CN117917586A (zh) * 2022-10-21 2024-04-23 法雷奥汽车内部控制(深圳)有限公司 In-cabin detection method, in-cabin detection apparatus, computer program product, and motor vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427529A (zh) * 2015-12-04 2016-03-23 北京奇虎科技有限公司 Method and terminal for monitoring the in-vehicle environment
CN106560836A (zh) * 2015-10-02 2017-04-12 Lg电子株式会社 Apparatus, method and mobile terminal for providing an item-loss prevention service in a vehicle
CN108973853A (zh) * 2018-06-15 2018-12-11 威马智慧出行科技(上海)有限公司 Vehicle warning apparatus and vehicle warning method
CN109733315A (zh) * 2019-01-15 2019-05-10 吉利汽车研究院(宁波)有限公司 Management method and system for shared cars
CN110758320A (zh) * 2019-10-23 2020-02-07 上海能塔智能科技有限公司 Anti-leaving handling method and apparatus for self-service test drives, electronic device and storage medium
CN110857073A (zh) * 2018-08-24 2020-03-03 通用汽车有限责任公司 System and method for providing forgotten-item notifications

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001285842A (ja) 2000-03-29 2001-10-12 Minolta Co Ltd Monitoring system
JP2002063668A (ja) 2000-06-07 2002-02-28 Toshiba Corp In-vehicle person detection and notification apparatus and dangerous-state avoidance apparatus
JP4419672B2 (ja) 2003-09-16 2010-02-24 株式会社デンソー In-vehicle forgotten-item prevention apparatus
JP4441887B2 (ja) 2006-03-31 2010-03-31 株式会社デンソー User hospitality system for automobiles
CN101777183A (zh) * 2009-01-13 2010-07-14 北京中星微电子有限公司 Method and apparatus for detecting stationary objects, and method and apparatus for detecting left-behind objects
JP6343769B2 (ja) 2013-08-23 2018-06-20 中嶋 公栄 Forgotten-item prevention system, method for providing information to passenger-vehicle crew, and computer program
CN103605983B (zh) 2013-10-30 2017-01-25 天津大学 Left-behind object detection and tracking method
CN103714325B (zh) 2013-12-30 2017-01-25 中国科学院自动化研究所 Real-time detection method for left-behind and lost objects based on an embedded system
CN106921846A (zh) 2015-12-24 2017-07-04 北京计算机技术及应用研究所 Left-behind object detection apparatus for a video mobile terminal
JP6909960B2 (ja) 2017-03-31 2021-07-28 パナソニックIpマネジメント株式会社 Detection apparatus, detection method and detection program
CN113163119A (zh) 2017-05-24 2021-07-23 深圳市大疆创新科技有限公司 Shooting control method and apparatus
TWI637323B (zh) 2017-11-20 2018-10-01 緯創資通股份有限公司 Image-based object tracking method, system therefor, and computer-readable storage medium
JP2019168815A (ja) 2018-03-22 2019-10-03 東芝メモリ株式会社 Information processing apparatus, information processing method, and information processing program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106560836A (zh) * 2015-10-02 2017-04-12 Lg电子株式会社 Apparatus, method and mobile terminal for providing an item-loss prevention service in a vehicle
CN105427529A (zh) * 2015-12-04 2016-03-23 北京奇虎科技有限公司 Method and terminal for monitoring the in-vehicle environment
CN108973853A (zh) * 2018-06-15 2018-12-11 威马智慧出行科技(上海)有限公司 Vehicle warning apparatus and vehicle warning method
CN110857073A (zh) * 2018-08-24 2020-03-03 通用汽车有限责任公司 System and method for providing forgotten-item notifications
CN109733315A (zh) * 2019-01-15 2019-05-10 吉利汽车研究院(宁波)有限公司 Management method and system for shared cars
CN110758320A (zh) * 2019-10-23 2020-02-07 上海能塔智能科技有限公司 Anti-leaving handling method and apparatus for self-service test drives, electronic device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115848306A (zh) * 2022-12-23 2023-03-28 阿维塔科技(重庆)有限公司 Method and apparatus for detecting a person left behind in a vehicle, and vehicle
CN115848306B (zh) * 2022-12-23 2024-05-17 阿维塔科技(重庆)有限公司 Method and apparatus for detecting a person left behind in a vehicle, and vehicle
CN117036482A (zh) * 2023-08-22 2023-11-10 北京智芯微电子科技有限公司 Target object locating method and apparatus, photographing device, chip, device and medium
CN117036482B (zh) * 2023-08-22 2024-06-14 北京智芯微电子科技有限公司 Target object locating method and apparatus, photographing device, chip, device and medium

Also Published As

Publication number Publication date
CN111415347B (zh) 2024-04-16
CN111415347A (zh) 2020-07-14
KR20210121015A (ko) 2021-10-07
JP2022530299A (ja) 2022-06-29
JP7403546B2 (ja) 2023-12-22

Similar Documents

Publication Publication Date Title
WO2021189641A1 (zh) 2021-09-30 Left-behind object detection
US20230294665A1 (en) Systems and methods for operating a vehicle based on sensor data
CN111937050B (zh) Reduction of loss of passenger-related items
CN108725357B (zh) Parameter control method and system based on face recognition, and cloud server
US20170043783A1 (en) Vehicle control system for improving occupant safety
US20200171977A1 (en) Vehicle occupancy management systems and methods
US10249088B2 (en) System and method for remote virtual reality control of movable vehicle partitions
US20180322413A1 (en) Network of autonomous machine learning vehicle sensors
US20160249191A1 (en) Responding to in-vehicle environmental conditions
CN107357194A (zh) 自主驾驶车辆中的热监测
US20170154513A1 (en) Systems And Methods For Automatic Detection Of An Occupant Condition In A Vehicle Based On Data Aggregation
US20180147986A1 (en) Method and system for vehicle-based image-capturing
WO2019095887A1 (zh) Implementation method and system for a universal in-vehicle sensing apparatus for preventing passengers from being left behind
US11577688B2 (en) Smart window apparatus, systems, and related methods for use with vehicles
US11783636B2 (en) System and method for detecting abnormal passenger behavior in autonomous vehicles
US20210081687A1 (en) System and method for providing rear seat monitoring within a vehicle
US11572039B2 (en) Confirmed automated access to portions of vehicles
US11845390B2 (en) Cabin monitoring system
US20230153424A1 (en) Systems and methods for an automous security system
CN114809833B (zh) Control method for opening a vehicle door, door control apparatus and door control system
CN116834691A (zh) Reminder method and system for objects left in a vehicle, computer storage medium and vehicle

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2021540530; Country of ref document: JP; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 20927363; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase: Ref country code: DE
122 Ep: pct application non-entry in european phase: Ref document number: 20927363; Country of ref document: EP; Kind code of ref document: A1
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established: Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/03/2023)
122 Ep: pct application non-entry in european phase: Ref document number: 20927363; Country of ref document: EP; Kind code of ref document: A1