CN111738158A - Control method and device for vehicle, electronic device and storage medium - Google Patents

Control method and device for vehicle, electronic device and storage medium

Info

Publication number
CN111738158A
Authority
CN
China
Prior art keywords
passenger
vehicle
seat
getting
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010582762.2A
Other languages
Chinese (zh)
Inventor
何任东
吴阳平
许亮
李轲
张连路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202010582762.2A
Publication of CN111738158A
Priority to JP2021564084A (publication JP2022541703A)
Priority to PCT/CN2020/135808 (publication WO2021258664A1)
Priority to KR1020217034158A (publication KR20220000902A)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/40: Business processes related to the transportation industry
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/593: Recognising seat occupancy
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • Primary Health Care (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Traffic Control Systems (AREA)
  • Operations Research (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to a control method and apparatus of a vehicle, an electronic device, and a storage medium. The method includes: acquiring a face image of a passenger of a vehicle, and determining identity information of the passenger based on the face image; acquiring an image of the interior of the vehicle, and determining seat information of the passenger based on the interior image; determining a getting-off place of the passenger according to the identity information of the passenger; and providing a getting-off prompt according to the getting-off place and the seat information of the passenger.

Description

Control method and device for vehicle, electronic device and storage medium
Technical Field
The present disclosure relates to the field of traffic technologies, and in particular, to a method and an apparatus for controlling a vehicle, an electronic device, and a storage medium.
Background
Primary and middle school students are supervised by teachers at school and by parents at home, but the commute between home and school is often neglected. At present, parents either take children to and from school themselves, or children travel alone or in groups. Having parents escort children is inefficient, and, in view of traffic conditions and public security, children travelling alone or in groups is less safe. Many schools therefore offer school bus services for the convenience of parents. However, if the school bus lacks reasonable management, a blind spot in supervision can arise.
Disclosure of Invention
The present disclosure provides a control solution for a vehicle.
According to an aspect of the present disclosure, there is provided a control method of a vehicle, including:
acquiring a face image of a passenger of a vehicle, and determining identity information of the passenger based on the face image;
acquiring an image of the interior of the vehicle and determining seating information of the occupant based on the image of the interior;
determining a getting-off point of the passenger according to the identity information of the passenger;
and providing a getting-off prompt according to the getting-off place and the seat information of the passenger.
In the embodiments of the present disclosure, a face image of a passenger of the vehicle is collected and the identity information of the passenger is determined based on the face image; an image of the vehicle interior is collected and the seat information of the passenger is determined based on the interior image; the getting-off place of the passenger is determined according to the identity information; and a getting-off prompt is provided according to the getting-off place and the seat information. In this way, getting-off prompts can be given to passengers in a targeted manner, and passengers can be effectively reminded to get off at the correct stop.
In one possible implementation manner, the determining the getting-off location of the passenger according to the identity information of the passenger includes:
and determining the getting-off place of the passenger according to the identity information of the passenger and the current time.
In this implementation, by determining the getting-off point of the passenger according to the identity information of the passenger and the current time, the getting-off point of the passenger can be accurately determined at different time periods.
In one possible implementation,
a display screen corresponding to the seat is arranged in the vehicle;
the getting-off prompt according to the getting-off place and seat information of the passenger comprises the following steps: determining a display screen corresponding to the seat of the passenger according to the seat information of the passenger; and prompting to get off according to the getting-off place of the passenger through a display screen corresponding to the seat of the passenger.
In this implementation, the display screen corresponding to the passenger's seat is determined according to the passenger's seat information, and the getting-off prompt is provided through that display screen according to the passenger's getting-off place. The prompt can thus be given to the passenger in a targeted manner through the display screen corresponding to the passenger's seat, effectively reminding the passenger to get off at the correct stop.
In one possible implementation,
the display screens in the vehicle correspond one-to-one to the seats;
alternatively,
the display screens in the vehicle correspond one-to-one to seating areas, wherein any seating area includes at least one seat.
In this implementation, when the display screens correspond one-to-one to the seats, the getting-off prompt can be given to each passenger in a targeted manner through the display screen of that passenger's seat. When the display screens correspond one-to-one to seating areas, the getting-off prompt remains targeted while the cost of installing display screens in the vehicle and the total space they occupy are reduced.
In one possible implementation,
the vehicle is provided with vibration modules which correspond to the seats one by one;
the getting-off prompt according to the getting-off place and seat information of the passenger comprises the following steps: determining a vibration module corresponding to the seat of the passenger according to the seat information of the passenger; and controlling the vibration module corresponding to the seat of the passenger to vibrate according to the getting-off place of the passenger.
In this implementation, the vibration module corresponding to the passenger's seat can prompt the passenger to get off in a targeted manner, effectively reminding the passenger to get off at the correct stop; for example, a sleeping passenger can be woken in time to get off.
In one possible implementation manner, the providing of the get-off prompt according to the get-off location and the seat information of the passenger includes:
determining, according to the getting-off places and seat information of the passengers, getting-off seat information corresponding to the stops of the vehicle, wherein the getting-off seat information corresponding to any stop comprises the seat information of the passengers whose getting-off place is that stop;
and providing a getting-off prompt according to the getting-off seat information corresponding to the stop, through a display screen corresponding to the driver of the vehicle and/or through the broadcast of the vehicle.
In this implementation, prompting the driver through the driver's display screen according to the getting-off seat information of each stop reminds the driver to check whether the passengers due to alight at a stop actually get off there, reducing the probability that passengers get off at the wrong stop or forget to get off. Providing the prompt through the vehicle broadcast lets everyone in the vehicle hear it; if a passenger due to alight at the current stop has not got off (for example, because the passenger is asleep and neither sees nor hears the prompt), other passengers nearby can remind that passenger, further reducing the probability of getting off at the wrong stop or forgetting to get off.
In one possible implementation, providing the getting-off prompt according to the getting-off seat information corresponding to the stop through a display screen corresponding to the driver of the vehicle includes:
displaying, through the display screen corresponding to the driver of the vehicle, a getting-off seat bitmap according to the getting-off seat information of the upcoming or currently arrived stop, wherein the seats of passengers whose getting-off place is the upcoming or currently arrived stop are highlighted in the getting-off seat bitmap.
In this implementation, the highlighted bitmap lets the driver see at a glance which seats belong to passengers due to alight at the upcoming or currently arrived stop. The driver can therefore confirm from the bitmap whether all of those passengers have got off and whether any passenger has got off at the wrong stop, reducing the probability of passengers getting off at the wrong stop or forgetting to get off.
In one possible implementation manner, the method further includes:
detecting whether all passengers whose getting-off place is the currently arrived stop have got off;
in response to detecting that all passengers whose getting-off place is the currently arrived stop have got off, prompting, through a display screen corresponding to the driver of the vehicle, that alighting at the currently arrived stop is complete; and/or, in response to detecting that a passenger whose getting-off place is the currently arrived stop has not got off, providing a getting-off prompt through at least one of a display screen corresponding to that passenger's seat, a vibration module corresponding to that passenger's seat, the display screen corresponding to the driver of the vehicle, and the broadcast of the vehicle.
In this implementation, prompting the driver that alighting at the currently arrived stop is complete, once all passengers due to alight there have got off, can shorten the stopping time and improve commuting efficiency. Providing a getting-off prompt when a passenger due to alight at the currently arrived stop has not yet got off, through at least one of the display screen corresponding to that passenger's seat, the vibration module corresponding to that passenger's seat, the driver's display screen, and the vehicle broadcast, further reduces the probability of a passenger riding past their stop.
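For illustration only, the Python sketch below shows one way the check described above could be expressed: comparing the set of seats expected to alight at the current stop against the seats still detected as occupied. The function and data shapes are hypothetical and not part of the disclosure.

```python
# Illustrative sketch only; the seat sets would come from the per-stop getting-off
# seat information and from occupancy detection on the latest interior image.
from typing import Set


def remaining_at_stop(expected_seats: Set[str], occupied_seats: Set[str]) -> Set[str]:
    """Seats whose occupants should alight at the current stop but still appear occupied."""
    return expected_seats & occupied_seats


# Example: seats 3 and 6 should alight at this stop; seat 6 is still occupied.
remaining = remaining_at_stop({"3", "6"}, {"1", "6", "9"})
if not remaining:
    print("Driver display: all passengers for this stop have alighted.")
else:
    for seat in sorted(remaining):
        print(f"Seat {seat}: trigger display / vibration / broadcast reminder.")
```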
In one possible implementation, after the determining the location of the passenger for getting off the vehicle, the method further includes:
in response to the current time belonging to the school-dismissal period and the vehicle being about to arrive at the getting-off place of the passenger, sending pick-up/drop-off information to a parent terminal corresponding to the passenger;
and/or,
in response to the current time belonging to the going-to-school period and the vehicle being about to arrive at the boarding place of any passenger, sending pick-up/drop-off information to the parent terminal corresponding to that passenger.
In this implementation, when the current time belongs to the school-dismissal period and the vehicle is about to arrive at the passenger's getting-off place, sending pick-up/drop-off information to the corresponding parent terminal helps the parent reach the stop before the school bus does, reducing situations such as the student going home alone or the parent and/or student waiting at the stop, and thereby improving the safety of school bus commuting. When the current time belongs to the going-to-school period and the vehicle is about to arrive at any passenger's boarding place, sending such information to the corresponding parent terminal helps the parent and student reach the stop before the school bus does, reducing situations such as the bus waiting for students or students missing the bus, and thereby improving commuting efficiency.
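As a purely illustrative sketch of this notification logic, the snippet below assumes a hypothetical notify_parent() transport (for example an app push or SMS gateway); the time windows and message texts are assumptions, not part of the disclosure.

```python
# Minimal sketch, assuming a hypothetical notify_parent() transport; the time
# windows and messages below are illustrative only.
from datetime import datetime, time


def notify_parent(terminal: str, message: str) -> None:
    print(f"[to {terminal}] {message}")  # placeholder for the real push/SMS call


def maybe_notify_parent(now: datetime, arriving_stop: str, occupant_stop: str,
                        parent_terminal: str) -> None:
    if arriving_stop != occupant_stop:
        return
    dismissal = time(15, 0) <= now.time() <= time(18, 0)        # assumed after-school window
    going_to_school = time(6, 30) <= now.time() <= time(8, 30)  # assumed morning window
    if dismissal:
        notify_parent(parent_terminal,
                      f"School bus is approaching {arriving_stop}; please meet your child there.")
    elif going_to_school:
        notify_parent(parent_terminal,
                      f"School bus is approaching {arriving_stop}; please be at the stop with your child.")
```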
In one possible implementation, after the acquiring the image of the vehicle interior, the method further comprises:
detecting whether the occupant wears a seat belt based on an image of the interior after the occupant is detected to be seated;
in response to detecting that the occupant is not wearing a seat belt, performing a seat belt reminder according to seat information of the occupant.
In this implementation, the seat belt reminder is issued according to the occupant's seat information, so the reminder can be targeted and the occupant can be effectively reminded to fasten the seat belt. For example, for students riding a school bus, issuing the seat belt reminder according to each student's seat information effectively reminds the student to fasten the seat belt, thereby improving the safety of school bus commuting.
In one possible implementation, the prompting a seat belt according to the seat information of the occupant includes:
and prompting a safety belt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle and the broadcast of the vehicle.
In this implementation, issuing the seat belt reminder through the display screen corresponding to the passenger's seat allows the reminder to be targeted at that passenger, effectively reminding the passenger to fasten the seat belt. Issuing the reminder through the display screen corresponding to the driver of the vehicle lets the driver remind the passenger to fasten the seat belt according to the passenger's seat. Issuing the reminder through the vehicle broadcast lets everyone in the vehicle hear it, which facilitates mutual supervision among passengers.
In one possible implementation, the detecting whether the occupant wears a seat belt based on the image of the interior includes:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In the implementation mode, the safety belt detection is performed through the first neural network, so that the safety belt detection precision can be improved, and the safety belt detection cost can be reduced.
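For illustration, a minimal sketch of such a detector is given below. The disclosure only specifies that a "first neural network" is trained in advance on seat belt image samples with a preset color and/or pattern, so the small CNN architecture, input cropping, and decision threshold here are assumptions rather than the disclosed design.

```python
# A minimal sketch, assuming a small CNN binary classifier stands in for the
# "first neural network"; architecture, input crop, and threshold are illustrative.
import torch
import torch.nn as nn


class SeatBeltNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: belt worn / not worn

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)


def belt_worn(model: SeatBeltNet, seat_crop: torch.Tensor, threshold: float = 0.5) -> bool:
    """seat_crop: (1, 3, H, W) crop of the interior image around one occupant's seat."""
    with torch.no_grad():
        prob = torch.sigmoid(model(seat_crop)).item()
    return prob >= threshold
```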
In one possible implementation, the method further includes:
performing a live body detection based on the image of the interior after the vehicle is turned off, or after the vehicle is turned off and the doors are closed;
in response to detecting the living body, prompting the existence of the left-behind passenger in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, a broadcast of the vehicle and a lamp of the vehicle, and/or sending a left-behind passenger prompting message to a terminal of a relevant person of the vehicle.
In this implementation, living body detection is performed based on the interior image after the vehicle is turned off, or after the vehicle is turned off and the doors are closed; when a living body is detected, an alert about a left-behind passenger is issued, which further improves the safety of vehicle use.
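A purely illustrative sketch of this post-shutdown check follows; a naive frame-difference heuristic stands in for the liveness detector, and the alert channels are hypothetical callables rather than the disclosed display, broadcast, light, or terminal interfaces.

```python
# Illustrative sketch only: a naive frame-difference presence check stands in for
# the liveness detector; alert channels are hypothetical callables.
import numpy as np


def living_body_present(frame_prev: np.ndarray, frame_curr: np.ndarray,
                        motion_threshold: float = 8.0) -> bool:
    """Very rough presence heuristic: mean absolute pixel change between two frames."""
    diff = np.abs(frame_curr.astype(np.float32) - frame_prev.astype(np.float32))
    return float(diff.mean()) > motion_threshold


def after_shutdown_check(frame_prev, frame_curr, alert_channels) -> None:
    if living_body_present(frame_prev, frame_curr):
        for channel in alert_channels:  # e.g. driver display, broadcast, lights, staff terminal
            channel("Left-behind passenger detected after the vehicle was turned off.")
```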
In one possible implementation, the method further includes:
when the passenger enters the vehicle, displaying welcome information aiming at the passenger through a display screen at the boarding position of the vehicle according to the identity information of the passenger;
and/or,
and after the passenger sits down, displaying welcome information aiming at the passenger according to the identity information of the passenger through a display screen corresponding to the seat of the passenger.
In this implementation, the passenger can obtain a personalized riding experience by displaying welcome information for the passenger based on the identity information of the passenger via a display screen at the boarding location of the vehicle and/or displaying welcome information for the passenger based on the identity information of the passenger via a display screen corresponding to the seat of the passenger after the passenger is seated.
In one possible implementation, the method further includes:
acquiring an image of a driver of the vehicle, and performing fatigue state detection and/or negative emotion detection on the driver based on the image of the driver to obtain a fatigue state detection result and/or a negative emotion detection result corresponding to the driver;
and storing and processing the fatigue state detection result and/or the negative emotion detection result corresponding to the driver.
In this implementation, the fatigue state detection result and/or the negative emotion detection result corresponding to the driver are/is stored, so that subsequent improvement can be facilitated, and the driving safety of the vehicle can be enhanced.
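As an illustrative sketch of the storage step only (the fatigue and emotion detectors themselves are out of scope here, and the CSV log format and field names are assumptions):

```python
# Minimal sketch, assuming results are appended to a local CSV log; the field
# names are illustrative and the detectors are not shown.
import csv
from datetime import datetime
from typing import Optional


def store_driver_state(log_path: str, fatigued: bool,
                       negative_emotion: Optional[str]) -> None:
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(),
                                int(fatigued),
                                negative_emotion or ""])
```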
According to an aspect of the present disclosure, there is provided a control method of a vehicle, including:
acquiring an image of the interior of the vehicle and determining seating information for occupants of the vehicle based on the image of the interior;
detecting whether the occupant wears a seat belt based on an image of the interior after the occupant is detected to be seated;
in response to detecting that the occupant is not wearing a seat belt, performing a seat belt reminder according to seat information of the occupant.
By issuing the seat belt reminder according to the occupant's seat information, the reminder can be targeted and the occupant can be effectively reminded to fasten the seat belt. For example, for students riding a school bus, issuing the seat belt reminder according to each student's seat information effectively reminds the student to fasten the seat belt, thereby improving the safety of school bus commuting.
In one possible implementation, the detecting whether the occupant wears a seat belt based on the image of the interior includes:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In the implementation mode, the safety belt detection is performed through the first neural network, so that the safety belt detection precision can be improved, and the safety belt detection cost can be reduced.
According to an aspect of the present disclosure, there is provided a control apparatus of a vehicle, including:
a first determination module, configured to acquire a face image of a passenger of the vehicle and determine the identity information of the passenger based on the face image;
a second determination module to acquire an image of the interior of the vehicle and determine seating information of the occupant based on the image of the interior;
the third determining module is used for determining the getting-off point of the passenger according to the identity information of the passenger;
and the first get-off prompt module is used for carrying out get-off prompt according to the information of the get-off place and the seat of the passenger.
In one possible implementation manner, the third determining module is configured to:
and determining the getting-off place of the passenger according to the identity information of the passenger and the current time.
In one possible implementation,
a display screen corresponding to the seat is arranged in the vehicle;
the first get-off prompt module is used for: determining a display screen corresponding to the seat of the passenger according to the seat information of the passenger; and prompting to get off according to the getting-off place of the passenger through a display screen corresponding to the seat of the passenger.
In one possible implementation,
the display screens in the vehicle correspond one-to-one to the seats;
alternatively,
the display screen in the vehicle corresponds one-to-one to the seating areas, wherein any seating area includes at least one seat.
In one possible implementation,
the vehicle is provided with vibration modules which correspond to the seats one by one;
the first get-off prompt module is used for: determining a vibration module corresponding to the seat of the passenger according to the seat information of the passenger; and controlling the vibration module corresponding to the seat of the passenger to vibrate according to the getting-off place of the passenger.
In one possible implementation manner, the first get-off prompt module is configured to:
determining, according to the getting-off places and seat information of the passengers, getting-off seat information corresponding to the stops of the vehicle, wherein the getting-off seat information corresponding to any stop comprises the seat information of the passengers whose getting-off place is that stop;
and providing a getting-off prompt according to the getting-off seat information corresponding to the stop, through a display screen corresponding to the driver of the vehicle and/or through the broadcast of the vehicle.
In one possible implementation manner, the first get-off prompt module is configured to:
and displaying, through the display screen corresponding to the driver of the vehicle, a getting-off seat bitmap according to the getting-off seat information of the upcoming or currently arrived stop, wherein the seats of passengers whose getting-off place is the upcoming or currently arrived stop are highlighted in the getting-off seat bitmap.
In one possible implementation, the apparatus further includes:
a first detection module, configured to detect whether all passengers whose getting-off place is the currently arrived stop have got off;
a second getting-off prompt module, configured to: in response to detecting that all passengers whose getting-off place is the currently arrived stop have got off, prompt, through a display screen corresponding to the driver of the vehicle, that alighting at the currently arrived stop is complete; and/or, in response to detecting that a passenger whose getting-off place is the currently arrived stop has not got off, provide a getting-off prompt through at least one of a display screen corresponding to that passenger's seat, a vibration module corresponding to that passenger's seat, the display screen corresponding to the driver of the vehicle, and the broadcast of the vehicle.
In a possible implementation manner, the apparatus further includes a sending prompting module, configured to:
in response to the current time belonging to the school-dismissal period and the vehicle being about to arrive at the getting-off place of the passenger, send pick-up/drop-off information to a parent terminal corresponding to the passenger;
and/or,
in response to the current time belonging to the going-to-school period and the vehicle being about to arrive at the boarding place of any passenger, send pick-up/drop-off information to the parent terminal corresponding to that passenger.
In one possible implementation, the apparatus further includes:
a second detection module configured to detect whether the occupant wears a seat belt based on an image of the inside after detecting that the occupant is seated;
and the safety belt prompting module is used for responding to the detection that the passenger does not wear the safety belt and prompting the safety belt according to the seat information of the passenger.
In one possible implementation, the seat belt reminder module is configured to:
and prompting a safety belt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle and the broadcast of the vehicle.
In one possible implementation manner, the second detection module is configured to:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In one possible implementation, the apparatus further includes:
the living body detection module is used for carrying out living body detection based on the image of the interior after the vehicle is switched off or after the vehicle is switched off and the vehicle door is closed;
and the left-behind passenger prompting module is used for responding to the detection of the living body, prompting the existence of the left-behind passenger in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, the broadcast of the vehicle and a lamp of the vehicle, and/or sending left-behind passenger prompting information to a terminal of a relevant person of the vehicle.
In one possible implementation, the apparatus further includes a display module configured to:
when the passenger enters the vehicle, displaying welcome information aiming at the passenger through a display screen at the boarding position of the vehicle according to the identity information of the passenger;
and/or,
and after the passenger sits down, displaying welcome information aiming at the passenger according to the identity information of the passenger through a display screen corresponding to the seat of the passenger.
In one possible implementation, the apparatus further includes:
the driver detection module is used for acquiring images of a driver of the vehicle, and detecting fatigue state and/or negative emotion of the driver based on the images of the driver to obtain a fatigue state detection result and/or a negative emotion detection result corresponding to the driver;
and the storage module is used for storing the fatigue state detection result and/or the negative emotion detection result corresponding to the driver.
According to an aspect of the present disclosure, there is provided a control apparatus of a vehicle, including:
a second determination module to acquire an image of an interior of the vehicle and determine seating information of an occupant of the vehicle based on the image of the interior;
a second detection module configured to detect whether the occupant wears a seat belt based on an image of the inside after detecting that the occupant is seated;
and the safety belt prompting module is used for responding to the detection that the passenger does not wear the safety belt and prompting the safety belt according to the seat information of the passenger.
In one possible implementation manner, the second detection module is configured to:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
According to an aspect of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a face image of a passenger of the vehicle is collected and the identity information of the passenger is determined based on the face image; an image of the vehicle interior is collected and the seat information of the passenger is determined based on the interior image; the getting-off place of the passenger is determined according to the identity information; and a getting-off prompt is provided according to the getting-off place and the seat information. In this way, getting-off prompts can be given to passengers in a targeted manner, and passengers can be effectively reminded to get off at the correct stop. For example, during school bus commuting, young children sometimes fall asleep or become distracted and forget to get off at their designated stop. With the embodiments of the present disclosure, students riding a school bus can be effectively reminded to get off at the correct stop, greatly reducing the probability of getting off at the wrong stop or forgetting to get off. Compared with school bus management that relies on staff to remind passengers to get off, the embodiments of the present disclosure can reduce cost and improve efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of a control method of a vehicle according to an embodiment of the present disclosure.
Fig. 2 shows a schematic diagram of a getting-off seat bitmap in an embodiment of the present disclosure.
Fig. 3 shows a block diagram of a control device of a vehicle provided by an embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a control method of a vehicle according to an embodiment of the present disclosure. The execution subject of the control method of the vehicle may be a control device of the vehicle. For example, the control device of the vehicle may be mounted on the vehicle. For example, the control method of the vehicle may be performed by a terminal device or other processing device. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a vehicle-mounted device. In some possible implementations, the vehicle control method may be implemented by a processor calling computer readable instructions stored in a memory. As shown in fig. 1, the control method of a vehicle includes steps S11 through S14.
In step S11, a face image of an occupant of a vehicle is acquired, and based on the face image, identity information of the occupant is determined.
In the embodiments of the present disclosure, the vehicle may be a commuter vehicle such as a school bus or a shuttle bus, or another public or private vehicle such as a ride-hailing car or a city bus. An occupant of the vehicle may be any person riding in the vehicle.
In an embodiment of the present disclosure, a camera is provided on a vehicle. The camera may include one or more of an RGB (Red, Green, Blue) camera, an RGBD (Red, Green, Blue, Depth) camera, a binocular infrared camera, and an infrared thermal imaging camera module. By adopting a binocular infrared camera or an RGBD camera, the function of living body detection can be realized.
In one possible implementation, a face image of a passenger of the vehicle may be collected via the camera upon detecting that a human body is close to the camera. For example, a distance sensor or a pyroelectric infrared sensor may be provided in the vicinity of the camera to detect whether a human body is approaching the camera.
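A minimal sketch of this trigger loop follows, with the sensor and camera drivers passed in as callables; the distance threshold and polling interval are assumptions for illustration.

```python
# Illustrative polling loop only; the sensor/camera callables and thresholds are
# assumptions, not the disclosed hardware interfaces.
import time


def wait_and_capture(read_distance_cm, capture_frame,
                     trigger_distance_cm: float = 80.0,
                     poll_interval_s: float = 0.1):
    """Poll a distance (or pyroelectric infrared) sensor and capture a face image
    once someone is close enough to the boarding camera."""
    while True:
        if read_distance_cm() <= trigger_distance_cm:
            return capture_frame()
        time.sleep(poll_interval_s)
```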
In the embodiment of the present disclosure, the correspondence between the face information and the identity information of the occupant of the vehicle may be stored in advance. The face information of the passenger can include a face photo of the passenger and/or face features extracted from the face photo of the passenger. The identity information of the occupant may represent information that can be used to uniquely determine the identity of the occupant. For example, the identity information of the occupant may include at least one of the occupant's name, identification number, serial number, and the like.
In the embodiment of the present disclosure, the facial image of the passenger of the vehicle may be acquired by a camera at a boarding location of the vehicle (for example, on or near a door of a boarding door, a position of a card reader for boarding and disembarking the vehicle, and the like), or the facial image of the passenger of the vehicle may be acquired by a camera inside the vehicle. The identity information of the passenger can be determined by comparing the acquired face image of the passenger with face information stored in advance. In a possible implementation manner, the face features in the face image may be extracted through a second neural network, the face features are compared with the face features stored in advance, and the identity information corresponding to the face image (i.e., the identity information of the passenger) is determined according to the comparison result. The second neural network can be trained in advance based on a face image set with face key point labeling information, so that the second neural network learns the capabilities of face positioning and face feature extraction. The key points of the face can include key points of forehead, eyes, eyebrows, mouth, nose, face contour and other parts. The number of face key points may be 21, 106, 240, etc. The structure of the second neural network is not limited in the embodiments of the present disclosure, and may include, for example and without limitation, a Back Propagation (BP) neural network, a convolutional neural network, a Radial Basis Function (RBF) neural network, a perceptron neural network, a linear neural network, a feedback neural network, and the like.
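For illustration, a minimal sketch of the comparison step is shown below, assuming face embeddings have already been extracted (for example by the second neural network mentioned above); the cosine-similarity measure and threshold are assumptions rather than the disclosed matching procedure.

```python
# A minimal sketch of the matching step; the similarity threshold is illustrative.
import numpy as np


def identify(query: np.ndarray, gallery: dict, threshold: float = 0.6):
    """gallery maps identity -> stored embedding; returns the best-matching identity
    whose cosine similarity exceeds the threshold, or None if no match is good enough."""
    best_id, best_sim = None, threshold
    for identity, ref in gallery.items():
        sim = float(np.dot(query, ref) /
                    (np.linalg.norm(query) * np.linalg.norm(ref) + 1e-8))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```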
In one possible implementation, identity information of occupants of the vehicle and ride times may be stored for subsequent analysis of ride conditions of occupants of the vehicle. Wherein the ride time may include a time of entry into the vehicle and/or a time of exit from the vehicle. For example, identity information of an occupant of the vehicle and a ride time may be stored in a memory of the vehicle. For another example, the identity information of the occupant of the vehicle and the ride time may be transmitted to a server to be stored by the server.
In one possible implementation, after it is detected that the passenger has entered the vehicle, a boarding reminder may be sent to the parent terminal corresponding to the passenger, to inform the parent that the passenger has boarded. For example, if the vehicle is a school bus and the passenger is a student, after the student is detected entering the school bus, a boarding reminder can be sent to the parent terminal corresponding to that student, informing the parent that the student has boarded, which makes it convenient for the parent to remotely monitor the child's school bus ride.
In one possible implementation, after the passenger is detected entering the vehicle, the boarding information corresponding to the passenger can be sent to a server, and a parent of the passenger can view the passenger's boarding record through an app. The boarding information corresponding to the passenger may include the passenger's identity information and boarding time.
In step S12, an image of the interior of the vehicle is captured, and based on the image of the interior, the seat information of the occupant is determined.
In the disclosed embodiment, at least one camera may be installed inside the vehicle, for example, a plurality of cameras may be installed inside the vehicle to capture images of the inside of the vehicle. For example, the vehicle is a school bus, and cameras can be mounted around the inside of the school bus to acquire images of students at various positions in the school bus.
In the disclosed embodiment, the seat information of any one occupant may be information that can uniquely determine the seat of that occupant. For example, the seat information may be a seat number or the like.
In one possible implementation, the movement track of the passenger in the vehicle can be tracked according to the image and/or video stream in the vehicle, and the seat information of the passenger can be determined according to the movement track of the passenger in the vehicle.
In another possible implementation, after the occupant is detected to be seated, the position of the occupant may be analyzed according to an image inside the vehicle to determine seat information of the occupant.
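One illustrative way to realise the seat determination described above is to match the detected occupant's bounding box against pre-calibrated seat regions in the in-cabin image; the sketch below is an assumption about one such matching scheme, not the disclosed method.

```python
# Illustrative sketch: assign a detected occupant to a seat by overlap between the
# person's bounding box and pre-calibrated seat regions; threshold is an assumption.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0


def assign_seat(person_box, seat_regions: dict, min_iou: float = 0.3):
    """seat_regions: {seat_number: (x1, y1, x2, y2)} calibrated per in-cabin camera."""
    best_seat, best_score = None, min_iou
    for seat, region in seat_regions.items():
        score = iou(person_box, region)
        if score > best_score:
            best_seat, best_score = seat, score
    return best_seat
```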
In step S13, a drop-off location of the occupant is determined based on the identity information of the occupant.
In the embodiments of the present disclosure, the correspondence between the identity information of the passenger and the getting-off place may be stored in advance, and any passenger may correspond to at least one getting-off place. For example, the getting-off places of a passenger may include the passenger's home address and school address, or the stop closest to the home address and the stop closest to the school address. As another example, the getting-off places of a passenger may include the passenger's home address and workplace address, or the stop closest to the home address and the stop closest to the workplace address. The getting-off place of the passenger may then be determined according to this pre-stored correspondence between the passenger's identity information and the getting-off place.
In one possible implementation, determining the getting-off place of the passenger according to the identity information of the passenger includes: determining the getting-off place of the passenger according to the identity information of the passenger and the current time. In this implementation, the same passenger may have different getting-off places at different times. For example, if the vehicle is a school bus and the passenger is a student, the getting-off place can be determined to be the school address when the current time belongs to the going-to-school period, and the home address when the current time belongs to the after-school period. By determining the getting-off place according to both the identity information and the current time, the getting-off place can be determined accurately for different time periods.
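A purely illustrative sketch of this lookup follows; the table contents, identity keys, and time windows are assumptions used only to show the shape of the logic.

```python
# Minimal sketch: a pre-stored table of per-passenger stops and two assumed time
# windows (morning -> school stop, otherwise -> home stop); values are illustrative.
from datetime import datetime, time

DROP_OFF_TABLE = {  # identity -> {"school": stop, "home": stop}
    "student_001": {"school": "School Gate", "home": "Elm Street"},
}


def drop_off_location(identity: str, now: datetime) -> str:
    entry = DROP_OFF_TABLE[identity]
    if time(6, 30) <= now.time() <= time(9, 0):  # assumed going-to-school window
        return entry["school"]
    return entry["home"]
```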
In step S14, a get-off prompt is provided based on the information on the get-off point and the seat of the occupant.
In the embodiment of the present disclosure, the getting-off prompt may be performed when the vehicle is about to arrive at a stop station corresponding to the getting-off point of the passenger, and/or when the vehicle has arrived at a stop station corresponding to the getting-off point of the passenger, so as to prompt the passenger to pay attention to getting-off.
In the embodiments of the present disclosure, the getting-off prompt may be given through one or more of a sound module, a display module, a vibration module, and a light-emitting module provided in the vehicle. For example, the sound module may include a speaker, the display module may include a display screen, the vibration module may include a vibrator, and the light-emitting module may include an LED (Light Emitting Diode). For example, the sound module may play a voice message such as "The next stop is stop XX; passengers in seat XX, please prepare to get off." For another example, the display module may show a text message such as "The next stop is stop XX; passengers in seat XX, please prepare to get off." For another example, when the vehicle is about to arrive at, or has arrived at, the stop corresponding to the passenger's getting-off place, the light-emitting module arranged in front of or near the passenger's seat may be controlled to light up to remind the passenger to get off.
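The following sketch illustrates how such a prompt might be dispatched to the modules associated with one seat; the module handles are hypothetical callables standing in for the on-board hardware.

```python
# Illustrative dispatcher; displays, vibrators, leds and speaker are hypothetical
# callables, not the disclosed hardware interfaces.
def prompt_get_off(seat: str, stop: str, displays: dict, vibrators: dict,
                   leds: dict, speaker) -> None:
    message = f"The next stop is {stop}; passenger in seat {seat}, please prepare to get off."
    if seat in displays:
        displays[seat](message)  # per-seat (or per seating-area) display
    if seat in vibrators:
        vibrators[seat]()        # vibrate the occupant's seat
    if seat in leds:
        leds[seat]()             # light the LED in front of or near the seat
    speaker(message)             # cabin broadcast
```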
In the embodiments of the present disclosure, a face image of a passenger of the vehicle is collected and the identity information of the passenger is determined based on the face image; an image of the vehicle interior is collected and the seat information of the passenger is determined based on the interior image; the getting-off place of the passenger is determined according to the identity information; and a getting-off prompt is provided according to the getting-off place and the seat information. In this way, getting-off prompts can be given to passengers in a targeted manner, and passengers can be effectively reminded to get off at the correct stop. For example, during school bus commuting, young children sometimes fall asleep or become distracted and forget to get off at their designated stop. With the embodiments of the present disclosure, students riding a school bus can be effectively reminded to get off at the correct stop, greatly reducing the probability of getting off at the wrong stop or forgetting to get off. Compared with school bus management that relies on staff to remind passengers to get off, the embodiments of the present disclosure can reduce cost and improve efficiency.
In one possible implementation, a display screen corresponding to a seat is installed in the vehicle; the getting-off prompt according to the getting-off place and seat information of the passenger comprises the following steps: determining a display screen corresponding to the seat of the passenger according to the seat information of the passenger; and prompting to get off according to the getting-off place of the passenger through a display screen corresponding to the seat of the passenger.
In this implementation, the display screen corresponding to the seat of the occupant is determined based on the seat information of the occupant, and may be, for example, a display screen disposed in front of or near the seat of the occupant. In this implementation manner, when the vehicle is about to arrive at a stop station corresponding to the get-off point of the passenger, and/or when the vehicle has arrived at a stop station corresponding to the get-off point of the passenger, the get-off prompt may be performed through a display screen corresponding to the seat of the passenger.
In this implementation, the display screen corresponding to the passenger's seat is determined according to the passenger's seat information, and the getting-off prompt is provided through that display screen according to the passenger's getting-off place. The prompt can thus be given to the passenger in a targeted manner through the display screen corresponding to the passenger's seat, effectively reminding the passenger to get off at the correct stop.
As an example of this implementation, the display screen in the vehicle corresponds one-to-one to the seat. In this example, the display screen corresponding to any seat may be positioned in front of or near that seat. For example, a display screen may be provided in front of each seat. The get-off prompt is carried out according to the get-off position of the passenger through the display screens corresponding to the seats of the passenger one by one, so that the get-off prompt can be carried out on the passenger through the display screens corresponding to the seats of the passenger in a targeted mode.
As another example of this implementation, the display screen in the vehicle has a one-to-one correspondence with seating regions, where any seating region includes at least one seat. In this example, the seats and the display screen may be in a many-to-one relationship, for example, the display screen may be disposed in front of a plurality of adjacent seats. According to the example, the cost of arranging the display screen in the vehicle can be reduced on the premise of carrying out the get-off prompt on the passengers in a targeted mode, and the total space occupied by the display screen in the vehicle can be reduced.
In one possible implementation mode, the vehicle is provided with vibration modules which correspond to seats one by one; the getting-off prompt according to the getting-off place and seat information of the passenger comprises the following steps: determining a vibration module corresponding to the seat of the passenger according to the seat information of the passenger; and controlling the vibration module corresponding to the seat of the passenger to vibrate according to the getting-off place of the passenger.
In this implementation, the vibration module corresponding to any seat may be mounted on the seat or on the backrest of that seat, or the like. The vibration module corresponding to the passenger's seat may be controlled to vibrate when the vehicle is about to arrive at, and/or has arrived at, the stop corresponding to the passenger's getting-off place. The vibration module corresponding to the passenger's seat can thus prompt the passenger to get off in a targeted manner, effectively reminding the passenger to get off at the correct stop; for example, a sleeping passenger can be woken in time to get off.
In one possible implementation, providing the getting-off prompt according to the getting-off place and seat information of the passenger includes: determining, according to the getting-off places and seat information of the passengers, getting-off seat information corresponding to the stops of the vehicle, wherein the getting-off seat information corresponding to any stop comprises the seat information of the passengers whose getting-off place is that stop; and providing a getting-off prompt according to the getting-off seat information corresponding to the stop, through a display screen corresponding to the driver of the vehicle and/or through the broadcast of the vehicle.
In this implementation, according to the correspondence between the seat information of the occupant and the get-off point and the correspondence between the stop point and the get-off point, the correspondence between the stop point and the seat information can be determined, thereby obtaining the get-off seat information corresponding to different stop points. In this implementation, the display screen corresponding to the driver may be disposed near the driver seat, for example, may be disposed on the center console.
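A minimal sketch of building that correspondence is shown below (the record layout is an assumption; the disclosure does not prescribe a data structure):

```python
from collections import defaultdict

def alighting_seats_by_stop(passengers):
    """Group seat numbers by the stop at which their passengers get off.

    passengers: iterable of {"seat": ..., "alight_stop": ...} records.
    Returns {stop_name: [seat, ...]}, which can then drive the driver display
    and the in-vehicle broadcast for each stop.
    """
    mapping = defaultdict(list)
    for p in passengers:
        mapping[p["alight_stop"]].append(p["seat"])
    return dict(mapping)

print(alighting_seats_by_stop([{"seat": "3", "alight_stop": "Station A"},
                               {"seat": "6", "alight_stop": "Station A"},
                               {"seat": "7", "alight_stop": "Station B"}]))
# {'Station A': ['3', '6'], 'Station B': ['7']}
```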
For example, when the vehicle is about to arrive at a stop, a text message such as "The next stop is XX station; please check whether the passengers in seat XX, seat XX and seat XX get off" may be displayed on the display screen corresponding to the driver of the vehicle. For another example, when the vehicle has arrived at a stop, a text message such as "XX station has arrived; please check whether the passengers in seat XX, seat XX and seat XX get off" may be displayed on the display screen corresponding to the driver of the vehicle.
For example, when the vehicle is about to arrive at a stop, a voice message such as "The next stop is XX station; passengers in seat XX, seat XX and seat XX, please prepare to get off" may be played through the broadcast of the vehicle. As another example, when the vehicle has arrived at a stop, a voice message such as "XX station has arrived; passengers in seat XX, seat XX and seat XX, please get off" may be played through the broadcast of the vehicle.
In this implementation, the getting-off prompt is given through the display screen corresponding to the driver of the vehicle according to the getting-off seat information corresponding to the stop, so that the driver can be reminded which seats hold passengers whose getting-off place is that stop and can check whether those passengers actually get off there, which reduces the probability that a passenger gets off at the wrong stop or forgets to get off. The getting-off prompt is also given through the broadcast of the vehicle according to the getting-off seat information corresponding to the stop, so that everyone in the vehicle can hear it; if a passenger whose getting-off place is the currently arrived stop does not get off (for example, because the passenger is asleep and neither sees nor hears the prompt), other passengers (for example, passengers nearby) can remind the passenger to get off, which further reduces the probability that a passenger gets off at the wrong stop or forgets to get off.
As an example of this implementation manner, giving the getting-off prompt according to the getting-off seat information corresponding to the stop through the display screen corresponding to the driver of the vehicle includes: displaying a getting-off seat map according to the getting-off seat information of the upcoming or currently arrived stop through the display screen corresponding to the driver of the vehicle, wherein, in the getting-off seat map, the seats of passengers whose getting-off place is the upcoming or currently arrived stop are highlighted.
In this example, the seats of passengers whose getting-off place is the upcoming or currently arrived stop are displayed differently from the other seats, so as to highlight them. For example, such seats may be highlighted in the getting-off seat map by enlarging their seat numbers, adding icons to them, adding dynamic frames around them, and the like. Fig. 2 shows a schematic view of a getting-off seat map in an embodiment of the present disclosure. As shown in fig. 2, the seats of the passengers whose getting-off place is the upcoming or currently arrived stop are seats 3 and 6, and seats 3 and 6 may be displayed in a manner different from the other seats in the getting-off seat map so as to highlight them.
In this example, a getting-off seat map is displayed through the display screen corresponding to the driver of the vehicle according to the getting-off seat information of the upcoming or currently arrived stop, and the seats of passengers whose getting-off place is that stop are highlighted in the map. This allows the driver to see at a glance which seats hold passengers who should get off at that stop, and to check from the map whether all of those passengers have got off and whether any passenger has got off by mistake, thereby reducing the probability that a passenger gets off at the wrong stop or forgets to get off.
In one possible implementation, the method further includes: detecting whether all passengers whose getting-off place is the currently arrived stop have got off; in response to detecting that all passengers whose getting-off place is the currently arrived stop have got off, prompting, through the display screen corresponding to the driver of the vehicle, that alighting at the currently arrived stop is complete; and/or, in response to detecting that a passenger whose getting-off place is the currently arrived stop has not got off, giving a getting-off prompt to that passenger through at least one of the display screen corresponding to the seat of the passenger, the vibration module corresponding to the seat of the passenger, the display screen corresponding to the driver of the vehicle, and the broadcast of the vehicle.
As an example of this implementation, a video stream inside the vehicle may be collected and the movement tracks of the passengers may be tracked, so as to determine whether all passengers whose getting-off place is the currently arrived stop have got off. As another example of this implementation, whether all such passengers have got off may be determined from images acquired by a camera at the alighting position of the vehicle.
In this implementation, in response to detecting that all passengers whose getting-off place is the currently arrived stop have got off, the display screen corresponding to the driver of the vehicle prompts that alighting at that stop is complete, which shortens the stopping time and improves transit efficiency. In response to detecting that a passenger whose getting-off place is the currently arrived stop has not got off, a getting-off prompt is given through at least one of the display screen corresponding to the seat of the passenger, the vibration module corresponding to the seat of the passenger, the display screen corresponding to the driver of the vehicle, and the broadcast of the vehicle, which further reduces the probability that the passenger rides past the stop.
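One way to sketch this check, assuming that an upstream occupancy or tracking step already yields the set of seats still occupied (that step itself is not modelled here and the seat identifiers are illustrative):

```python
def remaining_alighting_seats(expected_seats, occupied_seats):
    """Seats whose passengers should alight at the current stop but still
    appear occupied in the latest cabin image; an empty result means
    alighting at this stop is complete."""
    return sorted(set(expected_seats) & set(occupied_seats))


still_on_board = remaining_alighting_seats(expected_seats={"3", "6"},
                                           occupied_seats={"6", "7"})
if not still_on_board:
    print("Driver display: alighting at this stop is complete.")
else:
    print(f"Prompt seats {still_on_board} via screen/vibration/broadcast.")
```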
In a possible implementation manner, after the passenger is detected to have left the vehicle, getting-off reminding information may be sent to a parent terminal corresponding to the passenger, to notify the passenger's parent that the passenger has got off. For example, where the vehicle is a school bus and the passenger is a student, after the student is detected to have left the school bus, getting-off reminding information can be sent to the parent terminal corresponding to the student to notify the parent that the student has got off, which makes it convenient for the parent to remotely monitor the child's school-bus rides.
In a possible implementation manner, after the passenger is detected to have left the vehicle, the getting-off information corresponding to the passenger may be sent to a server, and a parent of the passenger may view the passenger's getting-off records through an APP (application). The getting-off information corresponding to the passenger may include the identity information of the passenger and the time at which the passenger got off.
In one possible implementation, after determining the getting-off place of the passenger, the method further includes: in response to the current time falling within the end-of-school time and the vehicle being about to arrive at the getting-off place of the passenger, sending pick-up and drop-off information to the parent terminal corresponding to the passenger; and/or, in response to the current time falling within the start-of-school time and the vehicle being about to arrive at the boarding place of any passenger, sending pick-up and drop-off information to the parent terminal corresponding to that passenger.
In this implementation, when the current time falls within the end-of-school time and the vehicle is about to arrive at the getting-off place of the passenger, pick-up and drop-off information is sent to the parent terminal corresponding to the passenger, which helps the parent arrive at the stop before the school bus does, reduces situations such as a student going home alone, and improves the safety of the school-bus commute. When the current time falls within the start-of-school time and the vehicle is about to arrive at the boarding place of any passenger, pick-up and drop-off information is sent to the parent terminal corresponding to that passenger, which helps the parent and student arrive at the stop before the school bus does, reduces situations such as a student missing the school bus, and improves the efficiency of the school-bus commute.
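A minimal sketch of this notification rule follows; the time windows, record fields and contact handling are illustrative assumptions, since the disclosure does not specify the school periods or the message channel:

```python
import datetime

# Assumed, illustrative time windows.
START_OF_SCHOOL = (datetime.time(7, 0), datetime.time(9, 0))
END_OF_SCHOOL = (datetime.time(15, 0), datetime.time(18, 0))


def in_window(t, window):
    return window[0] <= t <= window[1]


def parents_to_notify(now, next_stop, passengers):
    """Return (passenger_id, parent_contact) pairs for pick-up/drop-off notifications.

    passengers: {pid: {"board_stop": ..., "alight_stop": ..., "parent_contact": ...}}
    """
    targets = []
    for pid, p in passengers.items():
        if in_window(now.time(), END_OF_SCHOOL) and p["alight_stop"] == next_stop:
            targets.append((pid, p["parent_contact"]))
        elif in_window(now.time(), START_OF_SCHOOL) and p["board_stop"] == next_stop:
            targets.append((pid, p["parent_contact"]))
    return targets
```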
In a possible implementation mode, while students are travelling to and from school, the position information of the school bus can be sent to the server, and parents can check the approximate arrival time of the school bus through a terminal APP (application), which makes it convenient for parents to drop off and pick up their children.
In one possible implementation, after the acquiring the image of the vehicle interior, the method further comprises: detecting whether the occupant wears a seat belt based on an image of the interior after the occupant is detected to be seated; in response to detecting that the occupant is not wearing a seat belt, performing a seat belt reminder according to seat information of the occupant.
In this implementation, the seat belt prompt is given according to the seat information of the occupant, so that the prompt can be given in a targeted manner and the occupant can be effectively reminded to wear the seat belt. For example, for a student taking the school bus, giving the seat belt prompt according to the student's seat information can effectively remind the student to wear the seat belt, thereby improving the safety of the school-bus commute.
As an example of this implementation, the prompting for a seat belt based on the seat information of the occupant includes: and prompting a safety belt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle and the broadcast of the vehicle.
For example, a text message "please wear the seat belt as soon as possible" may be displayed on the display screen corresponding to the seat of the occupant. For another example, a text message "please remind the occupant in seat XX to wear the seat belt" may be displayed on the display screen corresponding to the driver of the vehicle. As another example, a voice message "occupant in seat XX, please wear the seat belt as soon as possible" can be played through the broadcast of the vehicle.
In this example, the seat belt presentation is performed on the basis of the seat information of the passenger on the display screen corresponding to the seat of the passenger, so that the passenger can be presented with the seat belt in a targeted manner on the display screen corresponding to the seat of the passenger, and the passenger can be effectively reminded to wear the seat belt. And carrying out safety belt prompt according to the seat information of the passenger through a display screen corresponding to the driver of the vehicle, so that the driver can remind the passenger to wear the safety belt according to the seat of the passenger. And carrying out safety belt prompting according to the seat information of the passengers through the broadcast of the vehicle, so that all people in the vehicle can hear the safety belt prompting and the mutual supervision among the passengers is facilitated.
As an example of this implementation, the detecting whether the occupant wears a seat belt based on the image of the interior includes: inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In this example, the image samples used to train the first neural network may include image samples with the seat belt worn and image samples without the seat belt worn, where the seat belts in the former samples may have a preset color and/or a preset pattern. For example, the preset color may be a relatively distinctive color and the preset pattern may be a relatively distinctive pattern. Seat belts of the preset color and/or preset pattern can also be used in the vehicle, so as to improve the accuracy of seat belt recognition.
In this example, the first neural network performs the seat belt detection, which can improve the accuracy of the seat belt detection and reduce the cost of the seat belt detection.
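As a rough sketch of how the detection result could drive the prompt, the belt_detector callable below stands in for the first neural network; its per-seat signature, the seat list and the prompt text are assumptions for illustration only:

```python
def seat_belt_prompts(cabin_image, seated_passenger_seats, belt_detector):
    """Return the seats whose passengers are detected as not wearing a belt.

    belt_detector(cabin_image, seat) -> bool is a placeholder for the first
    neural network's per-seat wearing result.
    """
    return [seat for seat in seated_passenger_seats
            if not belt_detector(cabin_image, seat)]


def always_unbelted(_image, _seat):  # dummy stand-in for demonstration only
    return False

for seat in seat_belt_prompts(cabin_image=None,
                              seated_passenger_seats=["3", "6"],
                              belt_detector=always_unbelted):
    print(f"Seat {seat}: please fasten your seat belt (screen/vibration/broadcast).")
```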
In one possible implementation, the method further includes: performing a live body detection based on the image of the interior after the vehicle is turned off, or after the vehicle is turned off and the doors are closed; in response to detecting the living body, prompting the existence of the left-behind passenger in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, a broadcast of the vehicle and a lamp of the vehicle, and/or sending a left-behind passenger prompting message to a terminal of a relevant person of the vehicle.
In this implementation, the vehicle flameout information may be obtained through a vehicle-mounted communication module, for example, the vehicle-mounted communication module may include a CAN (Controller Area Network) bus module.
As an example of this implementation, the image of the interior may be live-tested by a third neural network. Wherein the third neural network may be trained in advance using the image sample with the living body and the image sample without the living body.
For example, in response to the detection of the living body, a text message "please check whether there is a passenger not to get off the vehicle" may be displayed through a display screen corresponding to the driver of the vehicle. As another example, a voice of "please check whether there is a passenger not to get off" may be played through a broadcast of the vehicle in response to the detection of the living body. As another example, lights of the vehicle may be controlled to blink in response to detecting a living body to indicate the presence of a left-behind occupant in the vehicle. As another example, a message "please return to the vehicle to check whether any passenger is not alighting" may be sent to a relevant person (e.g., a driver, a person in charge, etc.) of the vehicle in response to the detection of the living body.
In this implementation, if the vehicle is detected to have been shut off, this largely indicates that the vehicle has reached the terminus. By performing living-body detection based on the image of the interior after the vehicle is shut off, or after the vehicle is shut off and the doors are closed, and prompting the presence of a left-behind occupant when a living body is detected, the safety of the vehicle can be further improved. For example, after the school bus arrives at the terminus, living-body detection is performed based on the image inside the school bus; if a living body is detected, the presence of a left-behind occupant can be prompted through at least one of a display screen corresponding to the driver of the school bus, the broadcast of the school bus and the lights of the school bus, and/or left-behind occupant prompting information can be sent to a terminal of a person related to the school bus (for example, the driver of the school bus, a person in charge at the school, and the like), so that the safety of the school-bus commute can be further improved.
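A schematic sketch of this post-shut-off flow is given below; the live_detector callable stands in for the third neural network and the notification callbacks are hypothetical, not APIs defined by the disclosure:

```python
def handle_vehicle_shut_off(cabin_image, live_detector, notify_driver, notify_staff):
    """After the vehicle is shut off (and, optionally, the doors are closed),
    run living-body detection and raise left-behind-occupant prompts."""
    if live_detector(cabin_image):
        notify_driver("Please check whether any passenger has not got off.")
        notify_staff("A passenger may be left behind in the vehicle; please return and check.")


handle_vehicle_shut_off(cabin_image=None,
                        live_detector=lambda img: True,   # dummy detector
                        notify_driver=print,
                        notify_staff=print)
```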
In another possible implementation manner, the method further includes: after the vehicle is shut down, or after the vehicle is shut down and a door of the vehicle is closed, if it is determined that a left-behind passenger exists in the vehicle based on an entering/leaving record of the passenger of the vehicle, prompting that the left-behind passenger exists in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, a broadcast of the vehicle and a lamp of the vehicle, and/or sending a left-behind passenger prompting message to a terminal of a relevant person of the vehicle.
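A minimal sketch of this record-based variant, assuming per-passenger boarding and alighting events are logged (the identifiers and event format are assumptions):

```python
def possibly_left_behind(boarded_ids, alighted_ids):
    """Passengers with a boarding record but no alighting record when the
    vehicle is shut off are treated as possibly left behind."""
    return sorted(set(boarded_ids) - set(alighted_ids))

print(possibly_left_behind({"s01", "s02", "s03"}, {"s01", "s03"}))  # ['s02']
```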
In one possible implementation, the method further includes: when the passenger enters the vehicle, displaying welcome information for the passenger through a display screen at the boarding position of the vehicle according to the identity information of the passenger; and/or, after the passenger sits down, displaying welcome information for the passenger through the display screen corresponding to the passenger's seat according to the identity information of the passenger. For example, the welcome information may include "Good morning, XX", and the like.
As an example of this implementation, information on the occupant's aspiration may be acquired from the identity information of the occupant, and welcome information for the occupant may be displayed according to that aspiration. For example, if an occupant hopes to become a scientist, the welcome information for that occupant may include "Good morning, future scientist".
As an example of this implementation, information on a person the occupant likes may be acquired from the identity information of the occupant, and the welcome information for the occupant may be presented through the image of that person. For example, if the person a certain passenger likes is person A, person A can be shown on the display screen and used to greet the passenger.
In this implementation, the passenger can obtain a personalized riding experience by displaying welcome information for the passenger based on the identity information of the passenger via a display screen at the boarding location of the vehicle and/or displaying welcome information for the passenger based on the identity information of the passenger via a display screen corresponding to the seat of the passenger after the passenger is seated.
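A tiny sketch of composing the greeting from a per-passenger profile; the "name" and "aspiration" fields are illustrative assumptions looked up from the recognised identity:

```python
def welcome_message(profile):
    """Build a personalised greeting from an assumed passenger profile."""
    if profile.get("aspiration"):
        return f"Good morning, future {profile['aspiration']}!"
    return f"Good morning, {profile.get('name', 'passenger')}!"

print(welcome_message({"name": "XX", "aspiration": "scientist"}))
# Good morning, future scientist!
```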
In one possible implementation, the method further includes: when the passenger leaves the vehicle, displaying getting-off greeting information for the passenger through a display screen at the alighting position of the vehicle according to the identity information of the passenger. For example, the getting-off greeting information may be "XX, thank you for riding; see you next time" or "Future scientist, thank you for riding; see you next time", and the like.
In one possible implementation, the method further includes: acquiring an image of a driver of the vehicle, and performing fatigue state detection and/or negative emotion detection on the driver based on the image of the driver to obtain a fatigue state detection result and/or a negative emotion detection result corresponding to the driver; and storing and processing the fatigue state detection result and/or the negative emotion detection result corresponding to the driver.
In this implementation, the negative emotions may include emotions of anger, sadness, and the like.
In this implementation, the fatigue state detection result and/or the negative emotion detection result corresponding to the driver are/is stored, so that subsequent improvement can be facilitated, and the driving safety of the vehicle can be enhanced.
As an example of this implementation, the method further comprises: in response to detecting that the driver is in a fatigue state, prompting the driver to pull over and rest. For example, the driver may be prompted to pull over and rest through the display screen corresponding to the driver and/or the broadcast of the vehicle.
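A schematic sketch of recording and acting on these detection results is shown below; the detector callables and the storage callback are placeholders for illustration, not APIs defined by the disclosure:

```python
import datetime
import json

def record_driver_state(driver_image, fatigue_detector, emotion_detector, store):
    """Store one detection record and prompt the driver if fatigue is detected."""
    record = {
        "time": datetime.datetime.now().isoformat(),
        "fatigued": bool(fatigue_detector(driver_image)),
        "negative_emotion": emotion_detector(driver_image),  # e.g. "anger" or None
    }
    store(json.dumps(record))  # vehicle memory or upload to a server
    if record["fatigued"]:
        print("Prompt: please pull over and rest.")  # driver screen / broadcast
    return record


record_driver_state(driver_image=None,
                    fatigue_detector=lambda img: True,     # dummy stand-ins
                    emotion_detector=lambda img: None,
                    store=print)
```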
In a possible implementation manner, the body temperature data of the occupant may be measured through a body temperature measurement module disposed on the vehicle, and a correspondence between the measurement time of the body temperature data, the related information of the vehicle, the identity information, and the body temperature data is established, and the correspondence is stored and processed.
As an example of this implementation, the body temperature measurement module may be a non-contact body temperature measurement module, so that cross-infection can be reduced and passenger passing efficiency can be improved.
In one example, the body temperature measurement module may include a thermal imaging body temperature detector. In nature, electromagnetic waves are radiated when the temperature of an object is higher than absolute zero (-273 ℃), and infrared rays are the most widespread form of electromagnetic waves. The thermal imaging body temperature detector converts infrared signals into electric signals by collecting the infrared signals sent by an object, converts the electric signals into temperature through the signal processing system, and can output thermal imaging images which are convenient for visual identification. Thus, the thermal imaging body temperature detector can realize non-contact temperature measurement.
Since the temperature measurement error of the thermal imaging body temperature detector in the actual environment is about ± 1 ℃ or even higher, in order to improve the temperature measurement accuracy, in one example, the body temperature measurement module may further include a temperature calibration device, such as a black body. The black body can be arranged opposite to the thermal imaging body temperature detector, so that the radiation target surface of the black body can be ensured to appear in an imaging picture of the thermal imaging body temperature detector. The thermal imaging body temperature detector can measure and calibrate the temperature in real time by taking the temperature value set by the black body as a reference, so that the temperature measurement precision reaches the requirement of +/-0.3 ℃. The blackbody can be used only by being connected with a power supply, and networking is not needed.
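A minimal sketch of a single-point calibration against the blackbody reference follows; a pure offset correction is an assumption, and real devices may also apply gain correction:

```python
def calibrated_temperature(raw_reading_c, blackbody_measured_c, blackbody_setpoint_c):
    """Correct a raw thermal-imaging reading using the blackbody in the same frame.

    The offset between the blackbody's known setpoint and its measured value is
    applied to the raw reading of the occupant.
    """
    offset = blackbody_setpoint_c - blackbody_measured_c
    return raw_reading_c + offset

# If the blackbody is set to 35.0 C but measured as 35.8 C, an occupant read
# as 37.4 C is corrected to about 36.6 C.
print(calibrated_temperature(37.4, 35.8, 35.0))
```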
In another example, the body temperature measurement module may further include an infrared thermometer for measuring forehead temperature or wrist temperature.
In this implementation, the body temperature measurement module may be mounted outside or inside the cabin of the vehicle. For example, the body temperature measurement module may be mounted adjacent to a vehicle door, such as the left side of the vehicle door. As another example, the body temperature measurement module may be mounted on a vehicle door. As another example, the body temperature measurement module may be mounted in a location of an on-off card reader of the vehicle.
As one example of this implementation, a body temperature prompt message is issued in response to an abnormality in the occupant's body temperature data. For example, the body temperature prompting message can be used for prompting the passenger to go to a hospital for examination, and the like.
As an example of this implementation, the related information of the vehicle includes one or more of positioning information, route information, driver information, and identification information of the vehicle. For example, if the vehicle is a car, the identification information of the vehicle may include a license plate number of the vehicle. The driver information may be information that the driver's identification number, photograph, name, face image, etc. can be used to uniquely determine the driver.
As an example of this implementation, the storing the correspondence includes: storing the correspondence in a memory of the vehicle.
As another example of this implementation, the storing the correspondence includes: and sending the corresponding relation to a server so as to store the corresponding relation by the server. The realization mode stores the corresponding relation through the server, and can facilitate the follow-up tracing search according to the data stored by the server.
As an example of this implementation manner, after the correspondence is stored, the method further includes: acquiring a query request, wherein the query request includes the identity information of an occupant to be queried; determining, according to the identity information of the occupant to be queried, the related information of the vehicles taken by the occupant to be queried and the times at which the occupant to be queried took those vehicles; and determining the identity information of the fellow passengers of the occupant to be queried according to the related information of the vehicles taken by the occupant to be queried, wherein the fellow passengers of the occupant to be queried are passengers who rode the same vehicle at the same time as the occupant to be queried. In this example, the query request may also include a time range for the query; for example, the time range may be determined according to the onset time of the occupant to be queried and the incubation period of the disease. In this implementation, the occupant to be queried may be a passenger who has been diagnosed with an infectious disease; after the diagnosis is confirmed, the stored data can be used to trace the passenger's own trips and to identify the fellow passengers and their whereabouts.
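A rough sketch of this fellow-passenger query over the stored correspondence is given below; the record fields and the per-day granularity are illustrative assumptions:

```python
def fellow_passengers(query_id, time_range, ride_records):
    """Passengers who rode the same vehicle on the same day as the queried
    passenger within the given time range.

    ride_records: iterable of {"passenger_id", "vehicle_id", "date"} records,
    one per passenger per ride; time_range: (start_date, end_date), inclusive.
    """
    start, end = time_range
    shared_rides = {(r["vehicle_id"], r["date"])
                    for r in ride_records
                    if r["passenger_id"] == query_id and start <= r["date"] <= end}
    return sorted({r["passenger_id"] for r in ride_records
                   if (r["vehicle_id"], r["date"]) in shared_rides
                   and r["passenger_id"] != query_id})
```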
In one possible implementation, the method further comprises: controlling the vehicle door not to be unlocked or opened in response to the recognition result of the face image indicating that the person is not a passenger allowed to take the vehicle. For example, in application scenarios where passenger information is registered in advance, such as a school bus, an online ride-hailing car or a shared shuttle bus, if the person in the face image acquired by the camera does not belong to the pre-registered passengers, the door may be controlled not to unlock or not to open. By controlling the door not to unlock or open in response to the recognition result of the face image indicating that the person is not an allowed passenger, the possibility of a passenger boarding the wrong vehicle can be reduced.
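A minimal sketch of this door-control decision; the command strings and the registration lookup are illustrative assumptions:

```python
def door_command(recognized_passenger_id, registered_passenger_ids):
    """Keep the door locked when the recognised person is not a registered
    passenger; otherwise allow unlocking."""
    if recognized_passenger_id in registered_passenger_ids:
        return "unlock"
    return "keep_locked"

print(door_command("p42", {"p1", "p2"}))  # keep_locked
```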
The embodiment of the present disclosure also provides a control method for a vehicle, including: acquiring an image of the interior of the vehicle and determining seating information for occupants of the vehicle based on the image of the interior; detecting whether the occupant wears a seat belt based on an image of the interior after the occupant is detected to be seated; in response to detecting that the occupant is not wearing a seat belt, performing a seat belt reminder according to seat information of the occupant.
In the embodiment of the present disclosure, the seat belt prompt is given according to the seat information of the occupant, so that the prompt can be given in a targeted manner and the occupant can be effectively reminded to wear the seat belt. For example, for a student taking the school bus, giving the seat belt prompt according to the student's seat information can effectively remind the student to wear the seat belt, thereby improving the safety of the school-bus commute.
In one possible implementation, the prompting a seat belt according to the seat information of the occupant includes: and prompting a safety belt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle and the broadcast of the vehicle.
In this embodiment, the seat belt presentation is performed on the basis of the seat information of the passenger on the display screen corresponding to the seat of the passenger, so that the passenger can be presented with the seat belt in a targeted manner on the display screen corresponding to the seat of the passenger, and the passenger can be effectively reminded to wear the seat belt. And carrying out safety belt prompt according to the seat information of the passenger through a display screen corresponding to the driver of the vehicle, so that the driver can remind the passenger to wear the safety belt according to the seat of the passenger. And carrying out safety belt prompting according to the seat information of the passengers through the broadcast of the vehicle, so that all people in the vehicle can hear the safety belt prompting and the mutual supervision among the passengers is facilitated.
In one possible implementation, the detecting whether the occupant wears a seat belt based on the image of the interior includes: inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In the implementation mode, the safety belt detection is performed through the first neural network, so that the safety belt detection precision can be improved, and the safety belt detection cost can be reduced.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic described herein; for reasons of space, detailed description of such combinations is omitted in the present disclosure.
It will be understood by those skilled in the art that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation process; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a control device of a vehicle, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any one of the control methods of a vehicle provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are omitted for brevity.
Fig. 3 shows a block diagram of a control device of a vehicle provided by an embodiment of the present disclosure. As shown in fig. 3, the control device of the vehicle includes:
a first determining module 31, configured to acquire a face image of an occupant of a vehicle, and determine identity information of the occupant based on the face image;
a second determination module 32 for acquiring an image of the interior of the vehicle and determining the seating information of the occupant based on the image of the interior;
a third determining module 33, configured to determine a getting-off point of the passenger according to the identity information of the passenger;
and the first getting-off prompting module 34 is used for prompting getting-off according to the getting-off place and seat information of the passengers.
In a possible implementation manner, the third determining module 33 is configured to:
and determining the getting-off place of the passenger according to the identity information of the passenger and the current time.
In one possible implementation form of the method,
a display screen corresponding to the seat is arranged in the vehicle;
the first get-off prompt module 34 is configured to: determining a display screen corresponding to the seat of the passenger according to the seat information of the passenger; and prompting to get off according to the getting-off place of the passenger through a display screen corresponding to the seat of the passenger.
In one possible implementation form of the method,
the display screens in the vehicles correspond to the seats one by one;
alternatively,
the display screen in the vehicle corresponds one-to-one to the seating areas, wherein any seating area includes at least one seat.
In one possible implementation form of the method,
the vehicle is provided with vibration modules which correspond to the seats one by one;
the first get-off prompt module 34 is configured to: determining a vibration module corresponding to the seat of the passenger according to the seat information of the passenger; and controlling the vibration module corresponding to the seat of the passenger to vibrate according to the getting-off place of the passenger.
In one possible implementation, the first get-off prompt module 34 is configured to:
determining, according to the getting-off place and the seat information of the passenger, getting-off seat information corresponding to the stops of the vehicle, wherein the getting-off seat information corresponding to any stop includes the seat information of passengers whose getting-off place is that stop;
the getting-off prompt is carried out according to the getting-off seat information corresponding to the stop station through a display screen corresponding to a driver of the vehicle; and/or carrying out get-off prompt according to the information of the get-off seat corresponding to the stop station through the broadcast of the vehicle.
In one possible implementation, the first get-off prompt module 34 is configured to:
and displaying a getting-off seat map according to the getting-off seat information of the upcoming or currently arrived stop through the display screen corresponding to the driver of the vehicle, wherein, in the getting-off seat map, the seats of passengers whose getting-off place is the upcoming or currently arrived stop are highlighted.
In one possible implementation manner, the method further includes:
the first detection module is used for detecting whether all passengers whose getting-off place is the currently arrived stop have got off;
the second getting-off prompting module is used for, in response to detecting that all passengers whose getting-off place is the currently arrived stop have got off, prompting, through the display screen corresponding to the driver of the vehicle, that alighting at the currently arrived stop is complete; and/or, in response to detecting that a passenger whose getting-off place is the currently arrived stop has not got off, giving a getting-off prompt to that passenger through at least one of the display screen corresponding to the seat of the passenger, the vibration module corresponding to the seat of the passenger, the display screen corresponding to the driver of the vehicle, and the broadcast of the vehicle.
In a possible implementation manner, the apparatus further includes a sending prompting module, configured to:
in response to the current time falling within the end-of-school time and the vehicle being about to arrive at the getting-off place of the passenger, sending pick-up and drop-off information to the parent terminal corresponding to the passenger;
and/or,
in response to the current time falling within the start-of-school time and the vehicle being about to arrive at the boarding place of any passenger, sending pick-up and drop-off information to the parent terminal corresponding to that passenger.
In one possible implementation, the apparatus further includes:
a second detection module configured to detect whether the occupant wears a seat belt based on an image of the inside after detecting that the occupant is seated;
and the safety belt prompting module is used for responding to the detection that the passenger does not wear the safety belt and prompting the safety belt according to the seat information of the passenger.
In one possible implementation, the seat belt reminder module is configured to:
and prompting a safety belt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle and the broadcast of the vehicle.
In one possible implementation manner, the second detection module is configured to:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In one possible implementation, the apparatus further includes:
the living body detection module is used for carrying out living body detection based on the image of the interior after the vehicle is switched off or after the vehicle is switched off and the vehicle door is closed;
and the left-behind passenger prompting module is used for responding to the detection of the living body, prompting the existence of the left-behind passenger in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, the broadcast of the vehicle and a lamp of the vehicle, and/or sending left-behind passenger prompting information to a terminal of a relevant person of the vehicle.
In one possible implementation, the apparatus further includes a display module configured to:
when the passenger enters the vehicle, displaying welcome information aiming at the passenger through a display screen at the boarding position of the vehicle according to the identity information of the passenger;
and/or,
and after the passenger sits down, displaying welcome information aiming at the passenger according to the identity information of the passenger through a display screen corresponding to the seat of the passenger.
In one possible implementation, the apparatus further includes:
the driver detection module is used for acquiring images of a driver of the vehicle, and detecting fatigue state and/or negative emotion of the driver based on the images of the driver to obtain a fatigue state detection result and/or a negative emotion detection result corresponding to the driver;
and the storage module is used for storing the fatigue state detection result and/or the negative emotion detection result corresponding to the driver.
In the embodiment of the disclosure, the face image of the passenger of the vehicle is collected, the identity information of the passenger is determined based on the face image, the image of the interior of the vehicle is collected, the seat information of the passenger is determined based on the image of the interior, the getting-off place of the passenger is determined according to the identity information of the passenger, and the getting-off prompt is given according to the getting-off place and the seat information of the passenger, so that the getting-off prompt can be given to the passenger in a targeted manner and the passenger can be effectively reminded to get off at the correct stop. For example, during a school-bus commute, young children sometimes fall asleep or become inattentive and forget to get off at their designated stops. With the embodiment of the present disclosure, students taking the school bus can be effectively reminded to get off at the correct stop, which greatly reduces the probability of getting off at the wrong stop or forgetting to get off. Compared with school-bus management approaches in the related art that rely on manual reminders, the embodiment of the present disclosure can reduce cost and improve efficiency.
According to an aspect of the present disclosure, there is provided a control apparatus of a vehicle, including:
a second determination module to acquire an image of an interior of the vehicle and determine seating information of an occupant of the vehicle based on the image of the interior;
a second detection module configured to detect whether the occupant wears a seat belt based on an image of the inside after detecting that the occupant is seated;
and the safety belt prompting module is used for responding to the detection that the passenger does not wear the safety belt and prompting the safety belt according to the seat information of the passenger.
In one possible implementation manner, the second detection module is configured to:
inputting the image of the interior into a first neural network, and outputting a belt wearing result of the occupant via the first neural network, wherein the first neural network is trained in advance using an image sample of a belt containing a preset color and/or a preset pattern.
In the embodiment of the present disclosure, the seat belt prompt is given according to the seat information of the occupant, so that the prompt can be given in a targeted manner and the occupant can be effectively reminded to wear the seat belt. For example, for a student taking the school bus, giving the seat belt prompt according to the student's seat information can effectively remind the student to wear the seat belt, thereby improving the safety of the school-bus commute.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-described method. The computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
The disclosed embodiments also provide a computer program product comprising computer readable code, which when run on a device, a processor in the device executes instructions for implementing the control method of a vehicle as provided in any of the above embodiments.
The disclosed embodiments also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the control method of a vehicle provided in any of the above embodiments.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the above-described method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800 provided by an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 4, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (Wi-Fi), a second generation mobile communication technology (2G), a third generation mobile communication technology (3G), a fourth generation mobile communication technology (4G)/long term evolution of universal mobile communication technology (LTE), a fifth generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry that can execute the computer-readable program instructions implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK) or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (20)

1. A control method for a vehicle, characterized by comprising:
acquiring a face image of a passenger of a vehicle, and determining identity information of the passenger based on the face image;
acquiring an image of the interior of the vehicle, and determining seat information of the passenger based on the image of the interior;
determining a getting-off place of the passenger according to the identity information of the passenger;
and providing a getting-off prompt according to the getting-off place and seat information of the passenger.
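For illustration only (this sketch is not part of the claims), the flow of claim 1 might be wired together as follows; the helper callables, data shapes and Python framing are assumptions introduced for this example, not names from the application:

```python
"""Minimal, non-normative sketch of the claim 1 flow; all names are hypothetical."""
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Passenger:
    identity: str        # identity information recognized from the face image
    seat: str            # seat information recognized from the interior image
    drop_off: str        # getting-off place looked up from the identity

def control_flow(face_image,
                 interior_image,
                 recognize_identity: Callable[[object], str],
                 locate_seat: Callable[[object, str], str],
                 drop_off_schedule: Dict[str, str],
                 prompt: Callable[[str, str], None]) -> Passenger:
    identity = recognize_identity(face_image)         # step 1: identity from the face image
    seat = locate_seat(interior_image, identity)      # step 2: seat from the interior image
    drop_off = drop_off_schedule[identity]            # step 3: getting-off place from the identity
    prompt(seat, f"Your stop is {drop_off}")          # step 4: getting-off prompt tied to the seat
    return Passenger(identity, seat, drop_off)

# Usage with trivial stand-ins for the perception models:
if __name__ == "__main__":
    control_flow(face_image=None,
                 interior_image=None,
                 recognize_identity=lambda img: "student_001",
                 locate_seat=lambda img, who: "row 3, seat B",
                 drop_off_schedule={"student_001": "Maple Street"},
                 prompt=lambda seat, msg: print(f"[{seat}] {msg}"))
```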
2. The method of claim 1, wherein the determining a getting-off place of the passenger according to the identity information of the passenger comprises:
determining the getting-off place of the passenger according to the identity information of the passenger and the current time.
3. The method according to claim 1 or 2,
wherein display screens corresponding to seats are arranged in the vehicle;
the providing a getting-off prompt according to the getting-off place and seat information of the passenger comprises: determining the display screen corresponding to the seat of the passenger according to the seat information of the passenger; and providing, through the display screen corresponding to the seat of the passenger, a getting-off prompt according to the getting-off place of the passenger.
4. The method of claim 3,
wherein the display screens in the vehicle are in one-to-one correspondence with the seats;
or,
the display screens in the vehicle are in one-to-one correspondence with seating areas, wherein any seating area comprises at least one seat.
5. The method according to any one of claims 1 to 4,
wherein the vehicle is provided with vibration modules in one-to-one correspondence with the seats;
the providing a getting-off prompt according to the getting-off place and seat information of the passenger comprises: determining the vibration module corresponding to the seat of the passenger according to the seat information of the passenger; and controlling the vibration module corresponding to the seat of the passenger to vibrate according to the getting-off place of the passenger.
6. The method of any one of claims 1 to 5, wherein the providing a getting-off prompt according to the getting-off place and seat information of the passenger comprises:
determining, according to the getting-off place and seat information of the passenger, getting-off seat information corresponding to the stop stations of the vehicle, wherein the getting-off seat information corresponding to any stop station comprises: seat information of passengers whose getting-off place is the stop station;
and providing a getting-off prompt according to the getting-off seat information corresponding to the stop station through a display screen corresponding to a driver of the vehicle, and/or providing a getting-off prompt according to the getting-off seat information corresponding to the stop station through a broadcast of the vehicle.
7. The method of claim 6, wherein the providing a getting-off prompt according to the getting-off seat information corresponding to the stop station through the display screen corresponding to the driver of the vehicle comprises:
displaying, through the display screen corresponding to the driver of the vehicle, a getting-off seat map according to the getting-off seat information of the stop station that has currently been reached or is about to be reached, wherein seats of passengers whose getting-off place is that stop station are highlighted in the getting-off seat map.
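As a purely illustrative sketch (not part of the claims), the getting-off seat map of claim 7 could be assembled as follows; the row/column seat layout, the highlighting characters and the data structures are assumptions made for this example:

```python
"""Non-normative sketch: build a getting-off seat map for the driver's display."""
from typing import Dict, List

def seats_for_stop(passengers: Dict[str, Dict[str, str]], stop: str) -> List[str]:
    # Getting-off seat information for a stop is the seats of passengers
    # whose getting-off place is that stop.
    return [info["seat"] for info in passengers.values() if info["drop_off"] == stop]

def render_seat_map(rows: int, cols: int, highlighted: List[str]) -> str:
    # '[X]' marks a seat whose passenger gets off at the upcoming stop, '[ ]' the rest.
    lines = []
    for r in range(1, rows + 1):
        cells = []
        for c in range(1, cols + 1):
            seat_id = f"{r}{chr(ord('A') + c - 1)}"
            cells.append("[X]" if seat_id in highlighted else "[ ]")
        lines.append(" ".join(cells))
    return "\n".join(lines)

if __name__ == "__main__":
    passengers = {
        "p1": {"seat": "1A", "drop_off": "Maple Street"},
        "p2": {"seat": "2C", "drop_off": "Oak Avenue"},
    }
    print(render_seat_map(3, 4, seats_for_stop(passengers, "Maple Street")))
```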
8. The method of any one of claims 1 to 7, further comprising:
detecting whether all passengers whose getting-off place is the currently arriving stop station have got off;
and in response to detecting that all passengers whose getting-off place is the currently arriving stop station have got off, prompting, through a display screen corresponding to a driver of the vehicle, that getting-off at the currently arriving stop station is completed; and/or in response to detecting that a passenger whose getting-off place is the currently arriving stop station has not got off, providing a getting-off prompt for the passenger who has not got off through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to the driver of the vehicle, and a broadcast of the vehicle.
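A minimal, non-normative sketch of the claim 8 check, assuming hypothetical per-passenger records of getting-off places and of seats still detected as occupied:

```python
"""Illustrative only: which passengers due at the current stop have not got off."""
from typing import Dict, Set

def remaining_passengers(drop_offs: Dict[str, str],       # passenger id -> getting-off place
                         occupied_seats: Dict[str, str],  # passenger id -> seat still occupied
                         current_stop: str) -> Set[str]:
    due = {pid for pid, stop in drop_offs.items() if stop == current_stop}
    return due & set(occupied_seats)   # due to get off but still detected in a seat

if __name__ == "__main__":
    left = remaining_passengers({"p1": "Maple Street", "p2": "Oak Avenue"},
                                {"p1": "2B", "p2": "3C"}, "Maple Street")
    print("still on board:", left)     # {'p1'} would trigger a further getting-off prompt
```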
9. The method of any one of claims 1 to 8, wherein after the determining a getting-off place of the passenger, the method further comprises:
in response to the current time falling within a school dismissal period and the vehicle being about to arrive at the getting-off place of the passenger, sending pick-up and drop-off information to a parent terminal corresponding to the passenger;
and/or,
in response to the current time falling within a school-going period and the vehicle being about to arrive at the boarding place of any passenger, sending pick-up and drop-off information to a parent terminal corresponding to that passenger.
10. The method of any one of claims 1 to 9, wherein after the acquiring an image of the interior of the vehicle, the method further comprises:
after the passenger is detected to be seated, detecting whether the passenger wears a seat belt based on the image of the interior;
and in response to detecting that the passenger is not wearing a seat belt, providing a seat belt prompt according to the seat information of the passenger.
11. The method of claim 10, wherein the providing a seat belt prompt according to the seat information of the passenger comprises:
providing a seat belt prompt according to the seat information of the passenger through at least one of a display screen corresponding to the seat of the passenger, a vibration module corresponding to the seat of the passenger, a display screen corresponding to a driver of the vehicle, and a broadcast of the vehicle.
12. The method according to claim 10 or 11, wherein the detecting whether the passenger wears a seat belt based on the image of the interior comprises:
inputting the image of the interior into a first neural network, and outputting a seat belt wearing result of the passenger via the first neural network, wherein the first neural network is trained in advance using image samples of seat belts having a preset color and/or a preset pattern.
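One way such a "first neural network" could be prototyped is sketched below; the PyTorch framing, the architecture, the 64x64 input crop and the label convention are all assumptions made for this example, and a real network would be trained beforehand on seat belt image samples with the preset color and/or pattern:

```python
"""Non-normative sketch of a seat-belt classifier; architecture and labels are assumed."""
import torch
import torch.nn as nn

class SeatBeltNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # 2 classes: belt not worn / belt worn

    def forward(self, x):                 # x: (N, 3, 64, 64) crop around a seat
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

def is_belt_worn(net: SeatBeltNet, seat_crop: torch.Tensor) -> bool:
    with torch.no_grad():
        logits = net(seat_crop.unsqueeze(0))
    return bool(logits.argmax(dim=1).item() == 1)      # index 1 taken as "belt worn" here

if __name__ == "__main__":
    net = SeatBeltNet().eval()
    print(is_belt_worn(net, torch.rand(3, 64, 64)))    # untrained net: output is arbitrary
```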
13. The method according to any one of claims 1 to 12, further comprising:
performing living body detection based on the image of the interior after the vehicle is turned off, or after the vehicle is turned off and the doors are closed;
and in response to a living body being detected, prompting that a passenger is left behind in the vehicle through at least one of a display screen corresponding to a driver of the vehicle, a broadcast of the vehicle, and a lamp of the vehicle, and/or sending a left-behind passenger prompt message to a terminal of a person associated with the vehicle.
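A minimal sketch of the claim 13 check, provided for illustration only; the detector, the notifier and the trigger flags are hypothetical stand-ins rather than anything specified in the application:

```python
"""Illustrative only: left-behind passenger check after shutdown."""
from typing import Callable, Iterable

def check_left_behind(engine_off: bool,
                      doors_closed: bool,
                      require_doors_closed: bool,
                      interior_images: Iterable[object],
                      detect_living_body: Callable[[object], bool],
                      notify: Callable[[str], None]) -> bool:
    # Claim 13 allows either trigger: engine off, or engine off and doors closed.
    if not engine_off or (require_doors_closed and not doors_closed):
        return False
    for image in interior_images:
        if detect_living_body(image):
            notify("A living body was detected: a passenger may be left behind in the vehicle")
            return True
    return False

if __name__ == "__main__":
    check_left_behind(engine_off=True, doors_closed=True, require_doors_closed=True,
                      interior_images=[object()],
                      detect_living_body=lambda img: True,
                      notify=print)
```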
14. The method according to any one of claims 1 to 13, further comprising:
when the passenger enters the vehicle, displaying welcome information for the passenger through a display screen at a boarding position of the vehicle according to the identity information of the passenger;
and/or,
after the passenger sits down, displaying welcome information for the passenger through a display screen corresponding to the seat of the passenger according to the identity information of the passenger.
15. The method according to any one of claims 1 to 14, further comprising:
acquiring an image of a driver of the vehicle, and performing fatigue state detection and/or negative emotion detection on the driver based on the image of the driver to obtain a fatigue state detection result and/or a negative emotion detection result corresponding to the driver;
and storing and processing the fatigue state detection result and/or the negative emotion detection result corresponding to the driver.
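A minimal sketch of how the claim 15 detection results might be stored, assuming a hypothetical append-only JSON-lines log and stand-in detectors; the field names and storage layout are not taken from the application:

```python
"""Illustrative only: record fatigue / negative-emotion detection results for the driver."""
import json
import time
from typing import Callable, Optional

def log_driver_state(driver_image,
                     detect_fatigue: Optional[Callable[[object], bool]],
                     detect_negative_emotion: Optional[Callable[[object], bool]],
                     log_path: str) -> dict:
    record = {"timestamp": time.time()}
    if detect_fatigue is not None:
        record["fatigued"] = detect_fatigue(driver_image)
    if detect_negative_emotion is not None:
        record["negative_emotion"] = detect_negative_emotion(driver_image)
    with open(log_path, "a", encoding="utf-8") as f:    # append one JSON record per detection
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(log_driver_state(None, lambda img: False, lambda img: False, "driver_state.jsonl"))
```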
16. A control method for a vehicle, characterized by comprising:
acquiring an image of the interior of the vehicle, and determining seat information of a passenger of the vehicle based on the image of the interior;
after the passenger is detected to be seated, detecting whether the passenger wears a seat belt based on the image of the interior;
and in response to detecting that the passenger is not wearing a seat belt, providing a seat belt prompt according to the seat information of the passenger.
17. The method of claim 16, wherein the detecting whether the passenger wears a seat belt based on the image of the interior comprises:
inputting the image of the interior into a first neural network, and outputting a seat belt wearing result of the passenger via the first neural network, wherein the first neural network is trained in advance using image samples of seat belts having a preset color and/or a preset pattern.
18. A control apparatus for a vehicle, characterized by comprising:
a first determination module, configured to acquire a face image of a passenger of a vehicle and determine identity information of the passenger based on the face image;
a second determination module, configured to acquire an image of the interior of the vehicle and determine seat information of the passenger based on the image of the interior;
a third determination module, configured to determine a getting-off place of the passenger according to the identity information of the passenger;
and a first getting-off prompt module, configured to provide a getting-off prompt according to the getting-off place and seat information of the passenger.
19. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the method of any one of claims 1 to 17.
20. A computer readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the method of any one of claims 1 to 17.
CN202010582762.2A 2020-06-23 2020-06-23 Control method and device for vehicle, electronic device and storage medium Pending CN111738158A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010582762.2A CN111738158A (en) 2020-06-23 2020-06-23 Control method and device for vehicle, electronic device and storage medium
JP2021564084A JP2022541703A (en) 2020-06-23 2020-12-11 Transportation control method and device, electronic device and storage medium
PCT/CN2020/135808 WO2021258664A1 (en) 2020-06-23 2020-12-11 Method and apparatus for controlling vehicle, and electronic device and storage medium
KR1020217034158A KR20220000902A (en) 2020-06-23 2020-12-11 Control methods and devices for transportation organizations, electronic devices and storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010582762.2A CN111738158A (en) 2020-06-23 2020-06-23 Control method and device for vehicle, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN111738158A true CN111738158A (en) 2020-10-02

Family

ID=72650794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010582762.2A Pending CN111738158A (en) 2020-06-23 2020-06-23 Control method and device for vehicle, electronic device and storage medium

Country Status (4)

Country Link
JP (1) JP2022541703A (en)
KR (1) KR20220000902A (en)
CN (1) CN111738158A (en)
WO (1) WO2021258664A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022202229A1 (en) * 2022-03-04 2023-09-07 Siemens Mobility GmbH Computer-implemented method for recognizing a new object in the interior of a train
JP7339636B1 (en) 2022-09-30 2023-09-06 株式会社IoZ Left-in-vehicle detection system and server

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150243165A1 (en) * 2014-09-20 2015-08-27 Mohamed Roshdy Elsheemy Comprehensive traffic control system
CN107599971A (en) * 2017-09-04 2018-01-19 驭势(上海)汽车科技有限公司 Specific aim is got off based reminding method and device
CN108369645A (en) * 2018-02-08 2018-08-03 深圳前海达闼云端智能科技有限公司 Taxi operation monitoring method, device, storage medium and system
CN110533826A (en) * 2019-09-02 2019-12-03 阿里巴巴集团控股有限公司 A kind of information identifying method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016043781A1 (en) * 2014-09-20 2016-03-24 Elsheemy Mohamed Comprehensive traffic control system
JP7118529B2 (en) * 2018-03-29 2022-08-16 矢崎総業株式会社 In-vehicle monitoring module and monitoring system
JP7114407B2 (en) * 2018-08-30 2022-08-08 株式会社東芝 Matching system
JP2020052478A (en) * 2018-09-24 2020-04-02 日本精機株式会社 Passenger management system, method, and computer program
CN109558795A (en) * 2018-10-17 2019-04-02 秦羽新 A kind of school bus passenger safety intelligent monitor system
KR102122263B1 (en) * 2018-10-19 2020-06-26 엘지전자 주식회사 Vehicle Indoor Person Monitoring Device and method for operating the same
CN111738158A (en) * 2020-06-23 2020-10-02 上海商汤临港智能科技有限公司 Control method and device for vehicle, electronic device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021258664A1 (en) * 2020-06-23 2021-12-30 上海商汤临港智能科技有限公司 Method and apparatus for controlling vehicle, and electronic device and storage medium
KR102422817B1 (en) * 2021-10-01 2022-07-19 (주) 원앤아이 Apparatus and method for management for getting on and off in a vehicle using plurality of sensors
CN114312580A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Method and device for determining seats of passengers in vehicle and vehicle control method and device
CN114312580B (en) * 2021-12-31 2024-03-22 上海商汤临港智能科技有限公司 Method and device for determining seats of passengers in vehicle and vehicle control method and device

Also Published As

Publication number Publication date
JP2022541703A (en) 2022-09-27
WO2021258664A1 (en) 2021-12-30
KR20220000902A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN111738158A (en) Control method and device for vehicle, electronic device and storage medium
WO2021159630A1 (en) Vehicle commuting control method and apparatus, electronic device, medium, and vehicle
WO2022041670A1 (en) Occupant detection method and apparatus in vehicle cabin, electronic device, and storage medium
CN112037380B (en) Vehicle control method and device, electronic equipment, storage medium and vehicle
WO2021164380A1 (en) Vehicle door unlocking method, apparatus and system, electronic device, and storage medium
CN112026790B (en) Control method and device for vehicle-mounted robot, vehicle, electronic device and medium
CN111739201A (en) Vehicle interaction method and device, electronic equipment, storage medium and vehicle
JP2018169942A (en) Vehicle management server, in-vehicle terminal, watching method, and program in pickup-with-watching-service system
CN112124073B (en) Intelligent driving control method and device based on alcohol detection
CN108615140B (en) Travel reminding method and device and storage medium
US10666901B1 (en) System for soothing an occupant in a vehicle
WO2022041669A1 (en) Method and apparatus for providing reminder of item which is left behind, and device and storage medium
WO2023029406A1 (en) Method and apparatus for vehicle to send passenger information to rescue call center
CN113486760A (en) Object speaking detection method and device, electronic equipment and storage medium
CN105261209B (en) The volume of the flow of passengers determines method and device
CN113920492A (en) Method and device for detecting people in vehicle, electronic equipment and storage medium
CN106202193A (en) The method of road image acquisition of information, Apparatus and system
CN105916119A (en) Information reminding method and information reminding device
CN114332941A (en) Alarm prompting method and device based on riding object detection and electronic equipment
CN114407630A (en) Vehicle door control method and device, electronic equipment and storage medium
JP2020091574A (en) Vehicle and notification method
CN111368701A (en) Character management and control method and device, electronic equipment and storage medium
CN112937479A (en) Vehicle control method and device, electronic device and storage medium
CN115072509A (en) Method, device, equipment, storage medium and program product for displaying elevator information
CN114495074A (en) Control method and device of vehicle, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination