CN114475623A - Vehicle control method and device, electronic equipment and storage medium


Info

Publication number
CN114475623A
CN114475623A
Authority
CN
China
Prior art keywords
vehicle
risk
video data
riding
video
Prior art date
Legal status
Pending
Application number
CN202111629584.5A
Other languages
Chinese (zh)
Inventor
赵腾飞
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202111629584.5A
Publication of CN114475623A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W40/09: Driving style or behaviour
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09: Taking automatic action to avoid collision, e.g. braking and steering
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143: Alarm means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a vehicle control method and apparatus, an electronic device, and a storage medium, relating to intelligent driving and in particular to artificial intelligence technologies such as deep learning and computer vision. The scheme is as follows: video data collected by a vehicle is acquired; image recognition is performed on the video data to obtain the in-vehicle personnel information of the vehicle; and in response to determining, according to the in-vehicle personnel information, that a risk riding event exists in the vehicle, a vehicle control strategy is generated according to the event and sent to the vehicle for execution. The vehicle can thus be controlled intelligently, raising its level of intelligence and providing passengers with a comfortable and safe riding experience.

Description

Vehicle control method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of intelligent driving, in particular to artificial intelligence technologies such as deep learning and computer vision, and more specifically to a vehicle control method and apparatus, an electronic device, and a storage medium.
Background
In daily life, public transport meets the travel needs of most people and makes travel more convenient. With the rapid development of autonomous driving, self-driving public transport has also become an option for travelers, offering even greater convenience. While self-driving public transit vehicles bring convenience, riding comfort and safety remain important considerations.
Disclosure of Invention
The disclosure provides a vehicle control method, a control apparatus, a control device, and a storage medium.
According to an aspect of the present disclosure, there is provided a control method of a vehicle, including:
acquiring video data acquired by a vehicle;
performing image recognition on the video data to acquire the in-vehicle personnel information of the vehicle;
and in response to determining, according to the in-vehicle personnel information, that a risk riding event exists in the vehicle, generating a vehicle control strategy of the vehicle according to the risk riding event, and sending the vehicle control strategy to the vehicle for execution.
According to another aspect of the present disclosure, there is provided a control apparatus of a vehicle, including:
the first acquisition module is used for acquiring video data acquired by a vehicle;
the first identification module is used for carrying out image identification on the video data to acquire the in-vehicle personnel information of the vehicle;
and a first generation module configured, in response to determining according to the in-vehicle personnel information that a risk riding event exists in the vehicle, to generate a vehicle control strategy of the vehicle according to the risk riding event and send the vehicle control strategy to the vehicle for execution.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a control method of a vehicle.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium stores computer instructions for causing a computer to execute a control method of a vehicle.
According to another aspect of the disclosure, a computer program product comprises a computer program which, when executed by a processor, implements the control method of a vehicle.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a control method of a vehicle according to one embodiment of the present disclosure;
FIG. 2 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 3 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 4 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 5 is an interaction diagram of a vehicle, a vehicle monitoring server, and a cloud server according to one embodiment of the present disclosure;
FIG. 6 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 7 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 8 is a flow chart of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an application framework of a control method of a vehicle according to one embodiment of the present disclosure;
FIG. 10 is a schematic diagram of an application framework of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 11 is a schematic diagram of an application framework of a control method of a vehicle according to another embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an application framework of a control method of a vehicle according to another embodiment of the present disclosure;
fig. 13 is a schematic configuration diagram of a control apparatus of a vehicle according to an embodiment of the present disclosure; and
fig. 14 is a block diagram of an electronic device for implementing a control method of a vehicle according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A control method, a device, an electronic apparatus, and a storage medium of a vehicle of the embodiments of the present disclosure are described below with reference to the drawings.
Intelligent driving has two aspects: "intelligence" and "capability". "Intelligence" means that the vehicle can perceive, synthesize, judge, reason, decide, and remember like a person; "capability" means that the vehicle can effectively execute that intelligence, implement active control, and carry out human-computer interaction and cooperation.
Artificial intelligence is the discipline that studies how to use computers to simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it spans both hardware and software. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies include computer vision, speech recognition, natural language processing, deep learning, big data processing, and knowledge graph technologies.
Deep learning is a newer research direction in the field of machine learning. Deep learning learns the intrinsic patterns and representation levels of sample data, and the information obtained in the process is of great help in interpreting data such as text, images, and sounds. Its ultimate goal is to give machines human-like analytical and learning abilities, enabling them to recognize data such as text, images, and sounds. Deep learning is a complex class of machine learning algorithms, and it has achieved results in speech and image recognition far surpassing earlier related techniques.
Computer vision is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and to further process images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can acquire "information" from images or multidimensional data. The information referred to here is information in Shannon's sense, which can be used to help make a "decision". Since perception can be viewed as extracting information from sensory signals, computer vision can also be viewed as the science of making artificial systems "perceive" from images or multidimensional data.
Fig. 1 is a schematic flowchart of a control method of a vehicle according to an embodiment of the present disclosure.
The control method of the vehicle in the embodiments of the disclosure can be executed by the vehicle control apparatus of the embodiments of the disclosure. The apparatus can be configured in an electronic device to acquire video data collected by the vehicle, perform image recognition on the video data to obtain the in-vehicle personnel information of the vehicle, determine from that information that a risk riding event exists in the vehicle, generate a vehicle control strategy according to the event, and send the strategy to the vehicle for execution. The vehicle can thus be controlled intelligently, raising its level of intelligence and providing passengers with a comfortable and safe riding experience.
As shown in fig. 1, the control method of the vehicle may include:
step 101, video data collected by a vehicle are obtained.
In the embodiment of the present disclosure, an image capturing device (e.g., a camera or the like) on a vehicle may capture video data of the interior and the door of the vehicle in real time, and send the captured video data to a vehicle monitoring server, where the vehicle monitoring server may be a vehicle monitoring engineering platform for controlling the vehicle.
And 102, carrying out image identification on the video data to acquire the in-vehicle personnel information of the vehicle.
The in-vehicle personnel information can include state information, behavior information, pedestrian volume information and the like of in-vehicle personnel.
Optionally, when performing image recognition on the video data, human-body and/or face features can be extracted from the image and then detected and recognized by an image recognition algorithm or model based on those features to obtain the in-vehicle personnel information. For example, the extracted human-body and/or face features may be processed by a Local Binary Patterns (LBP) algorithm, an Eigenface algorithm, or a model based on unified real-time object detection (You Only Look Once: Unified, Real-Time Object Detection, YOLO), among others.
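As an illustration only (the patent contains no code), the recognition step can be sketched in Python with the detector stubbed out. The names `recognize_occupants`, `OccupantInfo`, and the label strings are hypothetical; a real system would substitute an LBP, Eigenface, or YOLO-based detector for the stub.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OccupantInfo:
    """In-vehicle personnel information recovered from one video frame."""
    person_count: int
    states: List[str] = field(default_factory=list)      # e.g. "seated", "fallen"
    behaviors: List[str] = field(default_factory=list)   # e.g. "normal", "abnormal"

def detect_people(frame) -> List[dict]:
    """Stub for a detector such as YOLO: a real system would run the
    model on the frame and return one box plus labels per person.
    Here the 'frame' is already a list of fake detections."""
    return frame

def recognize_occupants(frame) -> OccupantInfo:
    detections = detect_people(frame)
    return OccupantInfo(
        person_count=len(detections),
        states=[d["state"] for d in detections],
        behaviors=[d["behavior"] for d in detections],
    )

# Example: two fake detections standing in for model output.
frame = [{"state": "seated", "behavior": "normal"},
         {"state": "fallen", "behavior": "abnormal"}]
info = recognize_occupants(frame)
```

The downstream risk-event logic only needs the structured `OccupantInfo`, so the detector can be swapped without touching the rest of the pipeline.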
And 103, responding to the situation that the risk riding event exists in the vehicle according to the in-vehicle personnel information, generating a vehicle control strategy of the vehicle according to the risk riding event, and sending the vehicle control strategy to the vehicle for execution.
The risk riding event may be a risk event affecting riding comfort or driving safety, for example abnormal behavior by in-vehicle personnel, or the vehicle being overloaded with passengers.
In some implementations, different risk ride events often require different vehicle control strategies. The vehicle control strategy may include control of certain components of the vehicle, such as a display screen, steering wheel, player, etc. The vehicle control strategy may also include adjustment of driving parameters of the vehicle, such as vehicle speed, route, etc.
In the embodiment of the disclosure, after determining that a risk riding event exists based on the in-vehicle personnel information, the vehicle control strategy matched with the event can be queried based on its risk level and/or event type, and the vehicle can then be controlled through that strategy to deal with different risk conditions, ensuring the riding comfort of the in-vehicle personnel and the driving safety of the vehicle.
Optionally, after the vehicle monitoring server obtains the vehicle control strategy matched with the risk riding event, it sends the strategy to the vehicle to realize control over the vehicle. Accordingly, after receiving the strategy, the vehicle can execute it to reduce the probability that the risk riding event causes a traffic accident or endangers riding safety.
It should be noted that the different vehicle control strategies described in this embodiment may be formulated according to actual situations.
In the embodiment of the disclosure, video data collected by a vehicle is first acquired; image recognition is then performed on the video data to obtain the in-vehicle personnel information of the vehicle; finally, upon determining from that information that a risk riding event exists in the vehicle, a vehicle control strategy is generated according to the event and sent to the vehicle for execution. The vehicle can thus be controlled intelligently, raising its level of intelligence and providing passengers with a comfortable and safe riding experience.
After determining from the in-vehicle personnel information that a risk riding event exists in the vehicle, the vehicle monitoring server may adopt a corresponding vehicle control strategy according to the risk level of the event. In one embodiment of the present disclosure, as shown in fig. 2, generating the vehicle control strategy according to the risk riding event may include:
step 201, identifying the type of the risk riding event, and determining the controlled object on the vehicle and the control information of the controlled object according to the type of the risk riding event.
The type of the risk riding event may include uncivil behavior, behavior violating traffic rules, sudden illness, sudden accidents, and the like; risk riding events can be classified in advance according to actual conditions. The controlled object may include the vehicle speed, the steering wheel, the vehicle doors, the player, and so on, and the control information of the controlled object describes how the controlled object is to be controlled.
And 202, generating a vehicle control strategy based on the controlled object and the control information.
In the embodiment of the present disclosure, after the vehicle monitoring server obtains the controlled object and the control information, a corresponding vehicle control strategy may be generated based on the controlled object and the control information.
For example, when the vehicle monitoring server identifies that a person has fallen down in the vehicle, it can acquire the current vehicle speed and, if the speed is high, control the vehicle to decelerate. When the server identifies uncivil behavior by passengers, it can control the on-board player to play a prompt voice reminding passengers to ride civilly. When the server identifies illegal behavior by in-vehicle personnel, it can control the player to sound an alarm, control the vehicle to decelerate, and open the doors after the vehicle stops.
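The mapping from a recognized event to a controlled object and its control information (steps 201 and 202) can be sketched as a lookup table. The event names, levels, and actions below are illustrative assumptions, not part of the patent.

```python
# Hypothetical lookup table:
#   (event type, risk level) -> list of (controlled object, control information)
STRATEGY_TABLE = {
    ("fall", 1): [("speed", "decelerate")],
    ("uncivil_behavior", 1): [("player", "play courtesy reminder")],
    ("illegal_behavior", 3): [("player", "sound alarm"),
                              ("speed", "decelerate"),
                              ("door", "open after stop")],
}

def generate_control_strategy(event_type: str, risk_level: int):
    """Return the control actions for a recognized risk riding event;
    unknown events fall back to a notification action (an assumption)."""
    return STRATEGY_TABLE.get((event_type, risk_level),
                              [("player", "notify operator")])

actions = generate_control_strategy("illegal_behavior", 3)
```

A table keeps the event-to-strategy mapping editable "according to actual situations", as the embodiment notes, without changing the control code itself.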
According to the embodiment of the disclosure, an appropriate vehicle control strategy can be worked out based on the nature of the risk event, realizing intelligent control of the vehicle and providing passengers with a comfortable and safe riding experience.
To clearly illustrate the above embodiment, in an embodiment of the present disclosure, as shown in fig. 3, determining the controlled object on the vehicle and the control information of the controlled object according to the type of the risk riding event may include:
Step 301, in response to the type of the risk riding event indicating that the event is abnormal riding behavior by in-vehicle personnel, acquiring the risk level of the abnormal riding behavior.
In the disclosed embodiment, abnormal riding behavior may be divided into a plurality of risk levels based on a risk level evaluation rule; for example, uncivil behavior, behavior violating traffic rules, sudden illness, and sudden accidents may be assigned to different levels such as first, second, third, and fourth. The risk level evaluation rule can be formulated according to actual conditions.
Optionally, after determining from the in-vehicle personnel information that a risk riding event exists in the vehicle, the vehicle monitoring server may identify the type of the event and determine from that type whether the event is abnormal riding behavior by in-vehicle personnel; if so, it may acquire the risk level of the abnormal riding behavior.
And step 302, determining the controlled object and the control information of the controlled object according to the risk level of the abnormal riding behavior.
In the disclosed embodiments, different risk levels often require controlling different components of the vehicle (e.g., the brake, throttle, steering wheel, doors, and player) to address the different risks. After obtaining the risk level of the abnormal riding behavior, the vehicle monitoring server can determine the corresponding controlled object and its control information according to that level.
The embodiment of the disclosure controls the vehicle differently for events of different risk levels, so that risk events of each level are handled appropriately.
Further, in one embodiment of the present disclosure, as shown in fig. 4, the control method of the vehicle may further include:
step 401, in response to the fact that the risk level of the abnormal riding behavior is larger than or equal to a preset risk level, acquiring candidate vehicles around the vehicle according to the position information of the vehicle. Wherein, the preset risk level can be determined according to the actual situation.
And 402, generating prompt information according to the risk level of the abnormal riding behavior, and sending the prompt information to the candidate vehicle.
In the embodiment of the disclosure, after acquiring the risk level of the abnormal riding behavior of the vehicle occupant, the vehicle monitoring server may determine whether that level is greater than or equal to a preset risk level. If so, it acquires the position of the vehicle through the Global Positioning System (GPS) installed on the vehicle, obtains the candidate vehicles located around the vehicle according to that position, generates prompt information according to the risk level of the abnormal riding behavior, and sends the prompt information to the candidate vehicles to warn them.
For example, if a person in the vehicle attempts to grab the steering wheel and the risk level of that behavior is greater than the preset risk level, the vehicle monitoring server may obtain the candidate vehicles around the vehicle according to its position, generate a prompt message such as "a vehicle nearby is at risk, please keep your distance", and send the message to the candidate vehicles to reduce traffic risk and avoid accidents.
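The threshold check and nearby-vehicle warning of steps 401 and 402 can be sketched as follows. The planar distance model, the 200-metre radius, and all identifiers are illustrative assumptions; a real system would convert GPS coordinates into such a local frame first.

```python
import math

def nearby_vehicles(ego_pos, fleet, radius_m=200.0):
    """Select candidate vehicles within radius_m of the ego vehicle.
    Positions are (x, y) in metres in an assumed local frame."""
    ex, ey = ego_pos
    return [vid for vid, (x, y) in fleet.items()
            if math.hypot(x - ex, y - ey) <= radius_m]

def warn_if_high_risk(risk_level, threshold, ego_pos, fleet):
    """Generate (vehicle id, message) pairs only when the abnormal riding
    behavior's risk level reaches the preset threshold."""
    if risk_level < threshold:
        return []
    msg = f"Vehicle at risk (level {risk_level}), please keep your distance"
    return [(vid, msg) for vid in nearby_vehicles(ego_pos, fleet)]

fleet = {"bus_2": (50.0, 0.0), "bus_3": (500.0, 10.0)}
warnings = warn_if_high_risk(risk_level=3, threshold=2,
                             ego_pos=(0.0, 0.0), fleet=fleet)
```

Here only `bus_2` falls inside the assumed radius, so only it receives the prompt message.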
According to the embodiment of the disclosure, when the risk level of abnormal riding behavior in the vehicle is judged to be high, prompt information is sent to the vehicles around it, which helps avoid traffic accidents and improves vehicle safety.
In other implementations, the safety control area closest to the vehicle may be determined based on the vehicle's position, a new route planned, the steering wheel fully taken over, and the vehicle driven to the safety control area along the planned route, improving driving safety and providing safety protection for the occupants.
In an embodiment of the disclosure, performing image recognition on the video data to obtain the in-vehicle personnel information may include: extracting video frames from the video data, sending them to a cloud server for image recognition, and receiving the in-vehicle personnel information fed back by the cloud server; or extracting video frames from the video data, performing image recognition on the frames locally, and determining the in-vehicle personnel information.
Alternatively, referring to fig. 5, the vehicle may send the video data collected by an image collection device (e.g., a camera) to the vehicle monitoring server. After receiving the video data, the server may extract one or more video frames based on a video frame extraction policy and send them to the cloud server. After receiving the frames, the cloud server performs image recognition on them to obtain the in-vehicle personnel information and feeds that information back to the vehicle monitoring server.
Alternatively, after the vehicle sends video data collected by an image collection device (e.g., a camera or the like) to the vehicle monitoring server, the vehicle monitoring server may extract a video frame (which may be multiple frames) from the video data based on a video frame extraction policy, and then perform image recognition on the video frame to obtain the in-vehicle personnel information.
It should be noted that the video frame extraction strategy described in this embodiment may be determined according to actual situations; for example, one video frame may be extracted at a fixed time interval (e.g., 1 second, 2 seconds, or 3 seconds).
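The "one frame every N seconds" extraction policy just described can be sketched as a simple filter over frame capture times; the function name and interval are illustrative assumptions.

```python
def extract_frames(timestamps, interval_s=1.0):
    """Keep the first frame of each interval_s window, mimicking the
    'one frame every N seconds' extraction policy. timestamps are the
    capture times (in seconds) of consecutive frames."""
    kept, next_cut = [], None
    for i, t in enumerate(timestamps):
        if next_cut is None or t >= next_cut:
            kept.append(i)          # index of the frame to send for recognition
            next_cut = t + interval_s
    return kept

# 30 fps video for 3 seconds: capture times 0.0, 0.033..., ..., 2.966...
times = [i / 30 for i in range(90)]
indices = extract_frames(times, interval_s=1.0)
```

With a 1-second interval this keeps three of the ninety frames, which is the kind of reduction that makes cloud-side recognition affordable.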
The embodiment of the disclosure can perform image recognition on the video frames through either the vehicle monitoring server or the cloud server, improving both the flexibility and the speed of image recognition.
In one embodiment of the present disclosure, the vehicle control method may further include, in response to acquiring video data of a plurality of vehicles, caching the video frames extracted from the plurality of video data in a database according to the vehicle identifiers corresponding to the frames. The database described in this embodiment may be a database of the vehicle monitoring server.
Optionally, the video data collected by each vehicle corresponds to that vehicle's identifier. When a plurality of vehicles each send their collected video data to the vehicle monitoring server, the server may extract one or more video frames from each, with each frame carrying the corresponding vehicle identifier, and cache the extracted frames in the database under those identifiers. When a cached frame is later submitted for image recognition, the vehicle it came from can be recognized from its identifier, avoiding data confusion.
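A minimal in-memory stand-in for the per-vehicle frame cache described above; the patent does not specify the database, so the class, the bounded buffer, and its size are assumptions.

```python
from collections import defaultdict, deque

class FrameCache:
    """In-memory stand-in for the monitoring server's database: frames
    are buffered per vehicle identifier so later recognition can tell
    which vehicle each frame came from."""
    def __init__(self, max_per_vehicle=100):
        # Bounded per-vehicle buffers so an overloaded feed cannot grow unboundedly.
        self._buf = defaultdict(lambda: deque(maxlen=max_per_vehicle))

    def put(self, vehicle_id, frame):
        self._buf[vehicle_id].append(frame)

    def take_all(self, vehicle_id):
        """Drain and return all cached frames for one vehicle."""
        frames = list(self._buf[vehicle_id])
        self._buf[vehicle_id].clear()
        return frames

cache = FrameCache()
cache.put("bus_1", "frame_a")
cache.put("bus_2", "frame_b")
cache.put("bus_1", "frame_c")
frames_bus1 = cache.take_all("bus_1")
```

Keying the buffer on the vehicle identifier is what prevents the "data confusion" the embodiment warns about when many vehicles stream frames concurrently.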
According to the embodiment of the disclosure, caching the plurality of video data in the database relieves the performance pressure on the vehicle monitoring server and avoids losing video frames due to an excessive data load.
Further, in one embodiment of the present disclosure, as shown in fig. 6, the control method of the vehicle may further include:
step 601, in response to acquiring video data of a plurality of vehicles, generating identification tasks corresponding to the plurality of vehicles, wherein the identification tasks comprise vehicle identifications, and the vehicle identifications are used for acquiring video frames corresponding to the identification tasks from a database. The identification task is an image identification task of the video frame.
In the embodiment of the disclosure, when the vehicle monitoring server receives a plurality of video data sent by a plurality of vehicles, the vehicle monitoring server may generate an identification task corresponding to each of the plurality of vehicles.
Step 602, selecting at least one target recognition task from a plurality of recognition tasks.
In the embodiment of the present disclosure, the creation times of the identification tasks often differ, so a target identification task may be selected based on creation time. Optionally, at least one task with the earliest creation time may be selected as the target identification task, so that tasks created earlier are scheduled as early as possible.
And 603, sending the target identification task to a cloud server for execution or local execution so as to acquire the information of people in the vehicle.
In the embodiment of the disclosure, both the cloud server and the local server can execute the target identification task; the task may be sent to the cloud server for execution or executed locally based on factors such as the computing power, recognition efficiency, and load of each.
The embodiment of the disclosure executes earlier-created recognition tasks as early as possible, which prevents a recognition task from remaining unexecuted for a long time, and also allows execution on the cloud server or locally, thereby improving the execution efficiency of the recognition tasks.
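Steps 601 to 603 above can be sketched, purely for illustration, as a min-heap of tasks ordered by creation time, from which the earliest-created tasks are popped as targets; every name here is a hypothetical choice, not part of the disclosure:

```python
import heapq
import itertools
import time

_tie_breaker = itertools.count()  # keeps heap entries comparable on time ties

def create_task(vehicle_id, queue, created_at=None):
    """Step 601: create an identification task carrying the vehicle
    identification; the task enters a min-heap keyed by creation time."""
    task = {"vehicle_id": vehicle_id,
            "created_at": created_at if created_at is not None else time.time()}
    heapq.heappush(queue, (task["created_at"], next(_tie_breaker), task))
    return task

def select_target_tasks(queue, n=1):
    """Step 602: pop the n tasks with the earliest creation times."""
    return [heapq.heappop(queue)[2] for _ in range(min(n, len(queue)))]
```

Step 603 would then hand each selected task to the cloud server or to the local executor, using the `vehicle_id` to fetch that task's video frames from the database.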
To clearly illustrate the above embodiment, in an embodiment of the present disclosure, as shown in fig. 7, selecting at least one target recognition task from a plurality of recognition tasks may include:
in step 701, the current image recognition load of the cloud server or of the local server is acquired.
In the embodiment of the present disclosure, when the image recognition load of the cloud server or of the local server is large, significant performance pressure arises, and at this time the image recognition tasks need to be scheduled to reduce that load.
Step 702, according to the load, scheduling the target recognition task from the image recognition tasks.
Optionally, when the current image recognition load of the cloud server is large, at least one target recognition task may be selected from the recognition tasks of the cloud server and sent to the vehicle monitoring server (i.e., locally) for execution; when the local image recognition load is large, at least one target recognition task may be selected from the recognition tasks of the vehicle monitoring server and sent to the cloud server for execution.
Adaptively scheduling the image recognition tasks based on the cloud and local loads reduces the pressure on either side and prevents the cloud server or the local server from crashing under a heavy load.
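The load-based routing of steps 701 and 702 can be illustrated by a small dispatch function; the utilisation-ratio representation, the threshold value, and the tie-breaking rule are all assumptions made for this sketch:

```python
def dispatch(task, cloud_load, local_load, threshold=0.8):
    """Route a recognition task to whichever side is less loaded.

    cloud_load / local_load are assumed to be utilisation ratios
    in [0, 1]; a side above the threshold offloads to the other
    side when that side still has headroom."""
    if cloud_load >= threshold and local_load < threshold:
        return "local"   # cloud overloaded -> execute on the monitoring server
    if local_load >= threshold and cloud_load < threshold:
        return "cloud"   # local overloaded -> offload to the cloud server
    return "cloud" if cloud_load <= local_load else "local"
```

In practice the load figures would come from monitoring the two executors, and `dispatch` would be applied to each target recognition task selected in step 602.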
Further, in one embodiment of the present disclosure, the vehicle control method may further include monitoring the execution state of each recognition task so as to update it. The execution state of a recognition task may include executing, not executed, and execution completed.
In the embodiment of the disclosure, recognition tasks that have not yet been executed can be scheduled, while tasks that are executing or have completed cannot. Therefore, while the vehicle monitoring server or the cloud server executes a recognition task, its execution state needs to be monitored and updated in real time to determine whether the task is being executed, and thus whether it can be scheduled.
The embodiment of the disclosure updates the execution state of each recognition task by monitoring it, so that the scheduling strategy can be adjusted as the execution state changes, improving the execution efficiency of the recognition tasks.
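The three execution states named above, and the rule that only not-yet-executed tasks remain schedulable, might be tracked as follows; this is an illustrative sketch whose class and state names are assumptions:

```python
from enum import Enum

class TaskState(Enum):
    NOT_EXECUTED = "not_executed"
    EXECUTING = "executing"
    COMPLETED = "completed"

class TaskMonitor:
    """Tracks the execution state of each recognition task; only
    tasks that have not started yet may be (re)scheduled."""

    def __init__(self):
        self._states = {}

    def update(self, task_id, state):
        """Called by the executor (local or cloud) as a task progresses."""
        self._states[task_id] = state

    def schedulable(self):
        """Return the ids of tasks the scheduler is still allowed to move."""
        return [tid for tid, s in self._states.items()
                if s is TaskState.NOT_EXECUTED]
```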
In one embodiment of the present disclosure, the vehicle control method may further include receiving a registration request of the vehicle, determining a video transmission address for the image capturing device on the vehicle according to the registration request, and transmitting the video transmission address to the vehicle.
Optionally, a relevant person may create a registration request for the vehicle through a corresponding client (e.g., an on-board computer) and send it to the vehicle monitoring server. After receiving the registration request, the vehicle monitoring server may determine (configure) a corresponding video transmission address for the image capture device on the vehicle and send that address to the vehicle, so that once the image capture device receives the address, it can send its captured video data to the vehicle monitoring server accordingly.
According to the embodiment of the disclosure, sending the video transmission address to a vehicle requesting registration lets the image acquisition device on the vehicle transmit the acquired video to that address, which avoids video transmission errors and facilitates the management and maintenance of videos on the vehicle monitoring server.
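As a sketch only, the registration handler could allocate a per-vehicle streaming address; the RTMP-style URL scheme, the host name, and the port are hypothetical choices, since the disclosure does not specify a transport:

```python
import uuid

def handle_registration(request, host="monitor.example.com", port=1935):
    """On a vehicle registration request, allocate a unique video
    transmission address for the vehicle's image capture device.
    The rtmp:// scheme, host, and port are illustrative assumptions."""
    vehicle_id = request["vehicle_id"]
    stream_key = uuid.uuid4().hex  # unique per registration
    address = f"rtmp://{host}:{port}/live/{vehicle_id}/{stream_key}"
    return {"vehicle_id": vehicle_id, "video_transmission_address": address}
```

The returned address would be sent back to the vehicle, and the image capture device would then push its video stream to that address.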
When a large number of vehicles are registered on the vehicle monitoring server, it may come under considerable pressure. In an embodiment of the present disclosure, as shown in fig. 8, before acquiring the video data collected by the vehicle, the method may further include:
step 801, acquiring operation data of a vehicle, and determining the current state of the vehicle according to the operation data.
Step 802, identify the current state of the vehicle as a target state.
In the embodiment of the disclosure, before the vehicle monitoring server obtains the video data acquired by the vehicle, the vehicle may upload its operation data, such as vehicle speed, acceleration, and door opening and closing, to the vehicle monitoring server. After receiving the operation data, the vehicle monitoring server may identify the current state of the vehicle from it, take that current state as a target state, and determine whether to start an image recognition task according to the target state.
Optionally, referring to fig. 9, if the target state is that the vehicle has started, the hand brake is released, and the vehicle speed is greater than 0 (doors closed), the (image recognition) task is started; if the target state is that the vehicle has entered a station, the vehicle speed is 0, and the hand brake is applied (doors open), the (image recognition) task is stopped.
According to the embodiment of the disclosure, the vehicle monitoring server is triggered by the corresponding vehicle state to acquire the vehicle's video data and control the vehicle accordingly, so that the vehicle to be controlled is controlled accurately, the server is spared from continuously receiving video data from all vehicles (registered with it), and its pressure is relieved.
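The start/stop rules sketched around fig. 9 can be written as a small decision function over the operation data; the field names below are illustrative, not taken from the disclosure:

```python
def decide_recognition(op):
    """Map vehicle operation data to starting or stopping the image
    recognition task, following the rules sketched around fig. 9:
    started + hand brake released + speed > 0 (doors closed) -> start;
    in station + speed == 0 + hand brake applied (doors open) -> stop.
    Dictionary keys are assumptions for this sketch."""
    if (op["started"] and not op["handbrake"]
            and op["speed"] > 0 and not op["door_open"]):
        return "start"
    if (op["in_station"] and op["speed"] == 0
            and op["handbrake"] and op["door_open"]):
        return "stop"
    return "no_change"  # any other combination leaves the task as-is
```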
In order to make the control method of the vehicle provided by the present disclosure clearer to a person skilled in the art, fig. 10 to 12 are schematic framework diagrams of the method in practical application scenarios. As shown in fig. 10, an autonomously driven vehicle collects in-vehicle video through its camera so that the vehicle monitoring server can provide video monitoring services, and the collected video can be uploaded to the cloud server for analysis, where the ACE platform (Autonomous Driving, Connected Road, Efficient Mobility), the local platform, and a third-party platform can provide video interfaces for the cloud server. As shown in fig. 11, the cloud server may provide a Function Computation (CFC) service, an object storage service, and an Artificial Intelligence (AI) analysis service, and may cache video data into a database when the volume of video data is large. As shown in fig. 12, the vehicle monitoring server may obtain, through the AI monitoring background, a video stream sent by the vehicle monitoring host, intercept video frames of the stream, and upload them to the cloud server; the cloud server identifies and analyzes the frames, marks events (i.e., abnormal behavior events) according to the analysis results, and feeds the marked events back to the vehicle monitoring server. The vehicle monitoring server may send a marked event to the vehicle platform to issue a notification reminder based on its risk level, or send it through Mobility-as-a-Service (MaaS) to the alarm platform to raise an alarm, and the Autonomous Driving (AD) side may generate an automatic driving policy, i.e., a control policy, based on the marked event.
Fig. 13 is a schematic structural diagram of a control device of a vehicle according to an embodiment of the present disclosure.
The control device of the vehicle in the embodiment of the disclosure can be configured in an electronic device to acquire video data collected by a vehicle, perform image recognition on the video data to acquire the in-vehicle personnel information of the vehicle, and, in response to determining from the in-vehicle personnel information that a risk riding event exists in the vehicle, generate a vehicle control strategy according to the risk riding event and send it to the vehicle for execution, so that the vehicle can be controlled intelligently, its intelligence level is improved, and passengers are provided with a comfortable and safe riding experience.
As shown in fig. 13, the control device 1300 for a vehicle may include: a first obtaining module 1301, a first identifying module 1302 and a first generating module 1303.
The first obtaining module 1301 is configured to obtain video data collected by a vehicle.
In the embodiment of the present disclosure, an image capturing device (e.g., a camera or the like) on a vehicle may capture video data of the interior and the door of the vehicle in real time, and send the captured video data to the first obtaining module 1301 of the vehicle monitoring server, where the vehicle monitoring server may be a vehicle monitoring engineering platform for controlling the vehicle.
The first identification module 1302 is configured to perform image identification on the video data to obtain the in-vehicle personnel information of the vehicle.
The in-vehicle personnel information can include state information, behavior information, pedestrian volume information and the like of in-vehicle personnel.
Optionally, when performing image recognition on the video data, the first recognition module 1302 may extract human body and/or human face features from the images, and then perform detection and recognition through an image recognition algorithm or model based on those features to obtain the in-vehicle personnel information. For example, the extracted human body and/or human face features may be detected by a Local Binary Patterns (LBP) algorithm, an Eigenface algorithm, a You Only Look Once (YOLO) unified real-time object detection model, and the like.
And the first generating module 1303 is configured to generate a vehicle control strategy of the vehicle according to the risk riding event in response to determining that the risk riding event exists in the vehicle according to the in-vehicle personnel information, and send the vehicle control strategy to the vehicle for execution.
The risk riding event may be a risk event affecting riding comfort or driving safety, for example, abnormal behavior of people in the vehicle, overload of people in the vehicle, and the like.
In some implementations, different risk ride events often require different vehicle control strategies. The vehicle control strategy may include control of certain components of the vehicle, such as a display screen, steering wheel, player, etc. The vehicle control strategy may also include adjustment of driving parameters of the vehicle, such as vehicle speed, route, etc.
In the embodiment of the disclosure, after it is determined that a risk riding event exists based on the in-vehicle personnel information, the first generation module 1303 may query a vehicle control strategy matched with the risk riding event based on the risk level and/or the event type of the risk riding event, and control the vehicle through the vehicle control strategy to deal with different risk situations, so as to ensure riding comfort of the in-vehicle personnel and driving safety of the vehicle.
Optionally, after the server obtains the vehicle control strategy matched with the risk riding event, it sends the strategy to the vehicle in order to control it; correspondingly, after the vehicle receives the strategy, it can execute it, so as to reduce the probability that the risk riding event causes a traffic accident or endangers riding safety.
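The lookup of a matching strategy by event type and risk level could be sketched as a table; the event types, levels, controlled objects, and actions below are invented for illustration and are not part of the disclosure, which leaves the concrete strategies to be formulated according to actual situations:

```python
# Illustrative mapping from (event type, risk level) to a vehicle
# control strategy; all entries are hypothetical examples.
STRATEGY_TABLE = {
    ("abnormal_behavior", "high"): {"controlled_object": "vehicle",
                                    "control_info": {"action": "pull_over",
                                                     "alert": True}},
    ("abnormal_behavior", "low"):  {"controlled_object": "display_screen",
                                    "control_info": {"action": "show_warning"}},
    ("overload", "high"):          {"controlled_object": "player",
                                    "control_info": {"action": "announce_overload"}},
}

def generate_control_strategy(event_type, risk_level):
    """Look up the control strategy matching the risk riding event;
    returns None when no strategy has been configured for the pair."""
    return STRATEGY_TABLE.get((event_type, risk_level))
```

The returned strategy, combining a controlled object with its control information, would then be sent to the vehicle for execution.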
It should be noted that the different vehicle control strategies described in this embodiment may be formulated according to actual situations.
According to the control device of the vehicle in the embodiment of the disclosure, the first acquisition module acquires the video data collected by the vehicle, the first identification module performs image recognition on the video data to acquire the in-vehicle personnel information, and the first generation module, in response to determining from the in-vehicle personnel information that a risk riding event exists in the vehicle, generates a vehicle control strategy according to the risk riding event and sends it to the vehicle for execution, so that the vehicle can be controlled intelligently, its intelligence level is improved, and passengers are provided with a comfortable and safe riding experience.
In an embodiment of the present disclosure, the first generating module 1303 includes: the identification unit 10 is used for identifying the type of the risk riding event and determining a controlled object on the vehicle and control information of the controlled object according to the type of the risk riding event; and the generating unit 20 is used for generating the vehicle control strategy based on the controlled object and the control information.
In an embodiment of the present disclosure, the identification unit 10 is further configured to: in response to the type of the risk riding event indicating that an abnormal riding behavior of in-vehicle personnel has occurred, acquire the risk level of the abnormal riding behavior; and determine the controlled object and its control information according to that risk level.
In one embodiment of the present disclosure, the control device 1300 of the vehicle further includes: a second obtaining module 1304, configured to, in response to the risk level of the abnormal riding behavior being greater than or equal to a preset risk level, obtain candidate vehicles located around the vehicle according to the position information of the vehicle; and a second generating module 1305, configured to generate prompt information according to the risk level of the abnormal riding behavior and send it to the candidate vehicles.
In an embodiment of the present disclosure, the first identifying module 1302 is further configured to: extracting a video frame from the video data, sending the video frame to a cloud server for image recognition, and receiving in-vehicle personnel information fed back by the cloud server; or extracting a video frame from the video data, performing image recognition based on the video frame, and determining the information of people in the vehicle.
In one embodiment of the present disclosure, the control device 1300 of a vehicle further includes: the caching module 1306 is configured to, in response to obtaining the video data of the multiple vehicles, cache video frames extracted from the multiple video data in the database according to vehicle identifiers corresponding to the video frames.
In one embodiment of the present disclosure, the control device 1300 of a vehicle further includes: a third generating module 1307, configured to generate, in response to acquiring the video data of the multiple vehicles, an identification task corresponding to each of the multiple vehicles, where the identification task includes a vehicle identifier, and the vehicle identifier is used to acquire a video frame corresponding to the identification task from the database; a selecting module 1308, configured to select at least one target recognition task from multiple recognition tasks; the third obtaining module 1309 is configured to send the target identification task to the cloud server for execution or local execution, so as to obtain the vehicle interior personnel information.
In one embodiment of the present disclosure, the control device 1300 of a vehicle further includes: an update module 1310 configured to monitor the execution status of the identification task to update the execution status of the identification task.
In an embodiment of the present disclosure, the selecting module 1308 is further configured to: acquiring the load of the cloud server or the local current image recognition; and scheduling the target recognition task from the image recognition tasks according to the load amount.
In one embodiment of the present disclosure, the control device 1300 of the vehicle further includes: a receiving module 1311, configured to receive a registration request of the vehicle, determine a video transmission address for the image capture device on the vehicle according to the registration request, and send the video transmission address to the vehicle.
In one embodiment of the present disclosure, the control device 1300 of the vehicle further includes: a fourth obtaining module 1312, configured to obtain operation data of the vehicle before obtaining the video data collected by the vehicle, and determine the current state of the vehicle according to the operation data; and a second identifying module 1313, configured to identify that the current state of the vehicle is the target state.
It should be noted that the foregoing explanation of the embodiment of the control method for the vehicle is also applicable to the control device for the vehicle in this embodiment, and the details are not repeated here.
According to the device of this embodiment, the first acquisition module acquires the video data collected by the vehicle, the first identification module performs image recognition on the video data to obtain the in-vehicle personnel information, and the first generation module, in response to determining from the in-vehicle personnel information that a risk riding event exists in the vehicle, generates a vehicle control strategy according to the risk riding event and sends it to the vehicle for execution. The vehicle can thus be controlled intelligently, improving its intelligence level and providing passengers with a comfortable and safe riding experience.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 14 shows a schematic block diagram of an example electronic device 1400 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 14, the device 1400 includes a computing unit 1401 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1402 or a computer program loaded from a storage unit 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data required for the operation of the device 1400 can also be stored. The computing unit 1401, the ROM 1402, and the RAM 1403 are connected to each other via a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
Various components in device 1400 connect to I/O interface 1405, including: an input unit 1406 such as a keyboard, a mouse, or the like; an output unit 1407 such as various types of displays, speakers, and the like; a storage unit 1408 such as a magnetic disk, optical disk, or the like; and a communication unit 1409, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 1409 allows the device 1400 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 1401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 1401 executes the respective methods and processes described above, such as the control method of the vehicle. For example, in some embodiments, the control method of the vehicle may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1408. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 1400 via ROM 1402 and/or communication unit 1409. When the computer program is loaded into the RAM 1403 and executed by the computing unit 1401, one or more steps of the control method of the vehicle described above may be performed. Alternatively, in other embodiments, the computing unit 1401 may be configured by any other suitable means (e.g. by means of firmware) to perform a control method of the vehicle.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (25)

1. A control method of a vehicle, comprising:
acquiring video data acquired by a vehicle;
performing image recognition on the video data to acquire the in-vehicle personnel information of the vehicle;
and responding to the situation that a risk riding event exists in the vehicle according to the in-vehicle personnel information, generating a vehicle control strategy of the vehicle according to the risk riding event, and sending the vehicle control strategy to the vehicle for execution.
2. The method of claim 1, wherein the generating a vehicle control strategy for the vehicle according to the risk riding event comprises:
identifying the type of the risk riding event, and determining a controlled object on the vehicle and control information of the controlled object according to the type of the risk riding event;
and generating the vehicle control strategy based on the controlled object and the control information.
3. The method of claim 2, wherein the determining a controlled object on the vehicle and control information for the controlled object according to the type of the risk riding event comprises:
in response to the type of the risk riding event indicating that an abnormal riding behavior of in-vehicle personnel has occurred, acquiring the risk level of the abnormal riding behavior;
and determining the controlled object and the control information of the controlled object according to the risk level of the abnormal riding behavior.
4. The method of claim 3, wherein the method further comprises:
responding to the situation that the risk level of the abnormal riding behavior is larger than or equal to a preset risk level, and acquiring candidate vehicles around the vehicle according to the position information of the vehicle;
and generating prompt information according to the risk level of the abnormal riding behavior, and sending the prompt information to the candidate vehicle.
5. The method according to any one of claims 1 to 4, wherein the performing image recognition on the video data to obtain the in-vehicle occupant information of the vehicle comprises:
extracting a video frame from the video data, sending the video frame to a cloud server for image recognition, and receiving the in-vehicle personnel information fed back by the cloud server; or,
and extracting a video frame from the video data, performing image recognition based on the video frame, and determining the in-vehicle personnel information.
6. The method of claim 5, wherein the method further comprises:
in response to the video data of the plurality of vehicles being acquired, caching video frames extracted from the plurality of video data into a database according to vehicle identifications corresponding to the video frames.
7. The method of claim 5 or 6, wherein the method further comprises:
in response to the video data of a plurality of vehicles being acquired, generating identification tasks corresponding to the vehicles respectively, wherein the identification tasks comprise vehicle identifications, and the vehicle identifications are used for acquiring video frames corresponding to the identification tasks from a database;
selecting at least one target recognition task from a plurality of the recognition tasks;
and sending the target identification task to a cloud server for execution or local execution to acquire the in-vehicle personnel information.
8. The method of claim 7, wherein the method further comprises:
and monitoring the execution state of the identification task so as to update the execution state of the identification task.
9. The method of claim 7, wherein said selecting at least one target recognition task from a plurality of said recognition tasks comprises:
acquiring the current image recognition load of the cloud server or of the local device;
and scheduling the target recognition task from the recognition tasks according to the load.
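Claims 7-9 describe dispatching recognition tasks either to a cloud server or to local execution based on current load. The sketch below is one illustrative reading: each task is sent to the less-loaded executor, and the chosen executor's load estimate is bumped by a fixed per-task cost. The field names and cost model are assumptions, not taken from the patent.

```python
def schedule_tasks(tasks, cloud_load, local_load, task_cost=0.1):
    """Assign each recognition task to the less-loaded executor.

    tasks: list of dicts, each carrying a 'vehicle_id' (claim 7).
    Returns a mapping vehicle_id -> 'cloud' or 'local'.
    """
    assignments = {}
    for task in tasks:
        if cloud_load <= local_load:
            assignments[task["vehicle_id"]] = "cloud"
            cloud_load += task_cost  # account for the work just assigned
        else:
            assignments[task["vehicle_id"]] = "local"
            local_load += task_cost
    return assignments

out = schedule_tasks(
    [{"vehicle_id": "v1"}, {"vehicle_id": "v2"}],
    cloud_load=0.5, local_load=0.5,
)
```

With equal starting loads, the first task goes to the cloud and the second falls back to local execution, which matches the load-balancing intent of claim 9.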
10. The method of any of claims 1-4, wherein the method further comprises:
receiving a registration request of the vehicle, determining a video transmission address for an image acquisition device on the vehicle according to the registration request, and sending the video transmission address to the vehicle.
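Claim 10 has the server answer a vehicle's registration request with a video transmission address for its on-board image acquisition device. A minimal sketch of that allocation is below; the RTMP URL scheme, the hostname, and the per-stream key are all assumptions for illustration.

```python
import uuid

def handle_registration(vehicle_id, base_url="rtmp://media.example.com/live"):
    """Allocate a video transmission address for a registering vehicle.

    The address embeds the vehicle id and a random stream key so each
    image acquisition device pushes to its own endpoint. Hypothetical
    names; the patent does not specify a transport or URL format.
    """
    stream_key = uuid.uuid4().hex
    return f"{base_url}/{vehicle_id}/{stream_key}"

addr = handle_registration("veh-001")
```

The vehicle would then push its camera stream to `addr`, and the server side would ingest that stream as the "video data acquired by the vehicle" of claim 1.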
11. The method of any of claims 1-4 or 10, wherein the method further comprises, prior to acquiring the video data collected by the vehicle:
acquiring running data of the vehicle, and determining the current state of the vehicle according to the running data;
and identifying that the current state of the vehicle is a target state.
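Claim 11 gates video acquisition on the vehicle being in a target state derived from its running data. A toy version of that gate is sketched here; the state names, field names, and the choice of "in_trip" as the target state are hypothetical.

```python
def vehicle_state(running_data):
    """Derive a coarse vehicle state from running data (illustrative)."""
    if running_data.get("speed_kmh", 0) > 0 or running_data.get("order_active", False):
        return "in_trip"
    return "idle"

def should_acquire_video(running_data, target_state="in_trip"):
    # Video is only acquired when the vehicle is in the target state.
    return vehicle_state(running_data) == target_state
```

Gating on state this way avoids streaming and analysing video from parked or out-of-service vehicles.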
12. A control device of a vehicle, comprising:
the first acquisition module is used for acquiring video data acquired by a vehicle;
the first identification module is used for carrying out image identification on the video data to acquire the in-vehicle personnel information of the vehicle;
and the first generation module is used for, in response to determining according to the in-vehicle personnel information that a risk riding event exists in the vehicle, generating a vehicle control strategy of the vehicle according to the risk riding event and sending the vehicle control strategy to the vehicle for execution.
13. The apparatus of claim 12, wherein the first generating means comprises:
the identification unit is used for identifying the type of the risk riding event and determining a controlled object on the vehicle and control information of the controlled object according to the type of the risk riding event;
and the generating unit is used for generating the vehicle control strategy based on the controlled object and the control information.
14. The apparatus of claim 13, wherein the identifying unit is further configured to:
in response to the type of the risk riding event indicating that the risk riding event is an abnormal riding behavior of a person in the vehicle, acquiring the risk level of the abnormal riding behavior;
and determining the controlled object and the control information of the controlled object according to the risk level of the abnormal riding behavior.
15. The control device of the vehicle according to claim 14, further comprising:
the second obtaining module is used for responding that the risk level of the abnormal riding behavior is larger than or equal to a preset risk level, and obtaining candidate vehicles around the vehicle according to the position information of the vehicle;
and the second generation module is used for generating prompt information according to the risk level of the abnormal riding behavior and sending the prompt information to the candidate vehicle.
16. The apparatus of any of claims 12-15, wherein the first identifying module is further configured to:
extracting a video frame from the video data, sending the video frame to a cloud server for image recognition, and receiving the in-vehicle personnel information fed back by the cloud server; or,
extracting a video frame from the video data, performing image recognition based on the video frame, and determining the in-vehicle personnel information.
17. The control device of the vehicle according to claim 16, further comprising:
the caching module is used for, in response to acquiring the video data of a plurality of vehicles, caching the video frames extracted from the plurality of video data into a database according to the vehicle identifications corresponding to the video frames.
18. The control device of the vehicle according to claim 17, further comprising:
the third generation module is used for, in response to acquiring the video data of a plurality of vehicles, generating recognition tasks respectively corresponding to the vehicles, wherein each recognition task comprises a vehicle identification, and the vehicle identification is used for acquiring the video frame corresponding to the recognition task from a database;
the selection module is used for selecting at least one target recognition task from the plurality of recognition tasks;
and the third acquisition module is used for sending the target recognition task to a cloud server for execution, or executing the target recognition task locally, so as to acquire the in-vehicle personnel information.
19. The control device of the vehicle according to claim 18, further comprising:
and the updating module is used for monitoring the execution state of the recognition task so as to update the execution state of the recognition task.
20. The apparatus of claim 19, wherein the selecting module is further configured to:
acquiring the current image recognition load of the cloud server or of the local device;
and scheduling the target recognition task from the recognition tasks according to the load.
21. The control device of the vehicle according to any one of claims 12 to 15, further comprising:
the receiving module is used for receiving a registration request of the vehicle, determining a video transmission address for an image acquisition device on the vehicle according to the registration request, and sending the video transmission address to the vehicle.
22. The control device of the vehicle according to any one of claims 12 to 15 or 21, further comprising:
the fourth acquisition module is used for acquiring the running data of the vehicle before acquiring the video data acquired by the vehicle and determining the current state of the vehicle according to the running data;
and the second identification module is used for identifying that the current state of the vehicle is a target state.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of controlling the vehicle of any one of claims 1-11.
24. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the control method of a vehicle according to any one of claims 1 to 11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the control method of the vehicle according to any one of claims 1-11.
CN202111629584.5A 2021-12-28 2021-12-28 Vehicle control method and device, electronic equipment and storage medium Pending CN114475623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111629584.5A CN114475623A (en) 2021-12-28 2021-12-28 Vehicle control method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114475623A true CN114475623A (en) 2022-05-13

Family

ID=81497018


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019232972A1 (en) * 2018-06-04 2019-12-12 上海商汤智能科技有限公司 Driving management method and system, vehicle-mounted intelligent system, electronic device and medium
WO2020151339A1 (en) * 2019-01-24 2020-07-30 平安科技(深圳)有限公司 Abnormality processing method and apparatus based on unmanned vehicle, and related devices
CN111860111A (en) * 2020-06-01 2020-10-30 北京嘀嘀无限科技发展有限公司 Safety monitoring method and device in vehicle journey and storage medium
CN112633057A (en) * 2020-11-04 2021-04-09 北方工业大学 Intelligent monitoring method for abnormal behaviors in bus
CN112686090A (en) * 2020-11-04 2021-04-20 北方工业大学 Intelligent monitoring system for abnormal behaviors in bus
CN113386786A (en) * 2021-07-29 2021-09-14 阿波罗智联(北京)科技有限公司 Information prompting method, device, equipment, medium, cloud control platform and vehicle


Similar Documents

Publication Publication Date Title
US11380193B2 (en) Method and system for vehicular-related communications
US10346888B2 (en) Systems and methods to obtain passenger feedback in response to autonomous vehicle driving events
US11568689B2 (en) Systems and methods to obtain feedback in response to autonomous vehicle failure events
CN108860165B (en) Vehicle driving assisting method and system
EP4036886A2 (en) Method and apparatus for monitoring vehicle, cloud control platform and system for vehicle-road collaboration
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
CN109523652B (en) Insurance processing method, device and equipment based on driving behaviors and storage medium
US20220242448A1 (en) Method, apparatus and device for determining behavioral driving habit and controlling vehicle driving
CN106448267B (en) Road traffic accident chain based on car networking blocks system
US20210256257A1 (en) Systems and methods for utilizing models to identify a vehicle accident based on vehicle sensor data and video data captured by a vehicle device
CN114030475A (en) Vehicle driving assisting method and device, vehicle and storage medium
El Masri et al. Toward self-policing: Detecting drunk driving behaviors through sampling CAN bus data
CN111541751B (en) Track monitoring method and device
CN117644880B (en) Fusion safety protection system and control method for intelligent network-connected automobile
CN113391627A (en) Unmanned vehicle driving mode switching method and device, vehicle and cloud server
CN113792106A (en) Road state updating method and device, electronic equipment and storage medium
CN109308802A (en) Abnormal vehicles management method and device
CN114475623A (en) Vehicle control method and device, electronic equipment and storage medium
CN111427037B (en) Obstacle detection method and device, electronic equipment and vehicle-end equipment
CN113256981B (en) Alarm analysis method, device, equipment and medium based on vehicle driving data
KR102314864B1 (en) safe driving system of a vehicle by use of edge deep learning of driving status information
CN115027488A (en) Vehicle control method and device and intelligent vehicle
CN115320626B (en) Danger perception capability prediction method and device based on human-vehicle state and electronic equipment
CN115384541A (en) Method and system for driving risk detection
CN117609946A (en) Smart city big data fusion analysis cloud system, method, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination