CN116533232A - Control method and device of vehicle-mounted robot and vehicle-mounted robot - Google Patents
- Publication number
- CN116533232A (application CN202310326894.2A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- robot
- camera
- mounted robot
- driver
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J11/00—Manipulators not otherwise provided for
Abstract
The invention discloses a control method and device for a vehicle-mounted robot, and the vehicle-mounted robot itself, relating to the field of vehicle safety control. The head of the vehicle-mounted robot is rotated according to the current state of the vehicle, cameras mounted on the robot collect images of the vehicle interior, and different tasks and strategies are executed according to the image information and the vehicle state. By integrating the robot's own functions with the DMS and OMS functions and pairing them with two cameras on the robot, the vehicle-mounted robot can realize multiple functions; because the robot's head rotates, its cameras cover a wider field of view than cameras at fixed positions. Consolidating what previously required multiple devices into a single vehicle-mounted robot raises the robot's importance in the vehicle, greatly reduces the economic cost of arranging other equipment, reduces the space occupied inside the vehicle, and improves the aesthetics of the vehicle interior.
Description
Technical Field
The present invention relates to the field of vehicle safety control, and in particular, to a control method and apparatus for a vehicle-mounted robot, and a vehicle-mounted robot.
Background
With the development of the vehicle field, functions such as the OMS (Occupant Monitoring System), the DMS (Driver Monitoring System), intelligent interaction and entertainment have gradually been enriched and refined in automobiles, but the current functional devices are independent of one another. Taking the vehicle-mounted robot as an example: current vehicle-mounted robots can generally interact with occupants by processing natural body language such as gaze, gestures and head movements, satisfying the demand for in-car entertainment, but they cannot realize functions related to vehicle safety. As a result, the vehicle-mounted robot is of low importance to the vehicle.
In the prior art, functions such as OMS and DMS are integrated into the vehicle system: cameras are arranged at positions such as the steering wheel, frame, instrument panel, rearview mirror, A-pillar and sunroof of the vehicle, and the vehicle system processes the images they collect to realize the various functions. Although this approach integrates multiple functions, each camera's field of view is fixed, so a single camera cannot cover the entire interior; arranging many cameras not only raises the economic cost but also occupies more interior space and detracts from the aesthetics of the vehicle interior.
Disclosure of Invention
The invention aims to provide a control method and device for a vehicle-mounted robot, and the vehicle-mounted robot itself, which can cover a wide area with a small number of cameras, improve the importance of the vehicle-mounted robot in the vehicle, greatly reduce the economic cost of arranging other equipment, reduce the space occupied inside the vehicle, and improve the aesthetics of the vehicle interior.
In order to solve the technical problem, the present invention provides a control method of a vehicle-mounted robot, which is applied to a processor in the vehicle-mounted robot in a vehicle, the vehicle-mounted robot further includes a first camera and a second camera disposed at a head, and the control method of the vehicle-mounted robot includes:
determining a current state of the vehicle according to the received state signal;
when the state of the vehicle is determined to be a running state, controlling the head of the vehicle-mounted robot to turn to a first preset position;
acquiring an image of a driver seat of the vehicle through the first camera;
executing a DMS strategy and a preset entertainment strategy by using the image acquired by the first camera;
when the state of the vehicle is determined to be a parking state, controlling the head of the vehicle-mounted robot to turn to a second preset position;
acquiring images of each seat of the vehicle by the second camera;
and executing an OMS strategy by using the image acquired by the second camera.
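The claimed control flow can be sketched as a simple state dispatch. A minimal sketch follows; all class, method and position names (`RobotController`, `turn_to`, the preset-position strings) are illustrative assumptions, since the patent does not specify an API.

```python
# Illustrative sketch of steps S1-S7; names and signal values are hypothetical.
DRIVING, PARKED = "driving", "parked"

class RobotController:
    def __init__(self, head, first_camera, second_camera):
        self.head = head                    # head rotation actuator
        self.first_camera = first_camera    # faces the driver seat
        self.second_camera = second_camera  # overlooks all seats

    def on_state_signal(self, signal):
        """Decode the vehicle state from a head-unit signal and dispatch."""
        state = DRIVING if signal == "engine_running" else PARKED
        if state == DRIVING:
            # S2-S4: face the driver, capture, run DMS + entertainment.
            self.head.turn_to("first_preset_position")
            return ("dms_and_entertainment", self.first_camera.capture())
        # S5-S7: face the cabin, capture, run OMS.
        self.head.turn_to("second_preset_position")
        return ("oms", self.second_camera.capture())
```

The same dispatch structure extends naturally to further states (boarding, leaving) described later in the specification.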
Preferably, executing a DMS policy by using the image acquired by the first camera includes:
acquiring a first image acquired by the first camera;
determining facial expression, hand motion and head motion of the driver according to the plurality of first images which are continuously acquired;
judging whether the driver is tired according to the facial expression;
if the driver is fatigued, a first prompt signal is generated and sent to a prompt module, so that the prompt module issues a prompt;
judging whether the driver has distraction according to the hand motion and the head motion;
if the distraction behavior exists, a second prompt signal is generated and sent to the prompt module, so that the prompt module sends out prompts.
Preferably, after determining the facial expression, the hand motion, and the head motion of the driver from the plurality of the first images acquired in succession, the method further includes:
determining the emotion of the driver according to the facial expression;
and controlling a lighting system and a music system in the vehicle according to the emotion of the driver.
Preferably, the performing an OMS policy using the image acquired by the second camera includes:
acquiring a second image acquired by the second camera;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images;
if the living things exist, a third prompt signal is generated and sent to the prompt module, so that the prompt module sends out prompts.
Preferably, the vehicle-mounted robot further includes a sound collector, and before judging whether a living object exists in the vehicle according to the plurality of continuously acquired second images, the method further includes:
acquiring the internal sound of the vehicle acquired by the sound acquisition device;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images, wherein the method comprises the following steps:
and judging whether a living object exists in the vehicle according to the plurality of second images and the internal sound which are continuously acquired.
Preferably, when the state of the vehicle is determined to be a driving state, executing a preset entertainment strategy by using the image acquired by the first camera, including:
acquiring control voice uttered by a passenger in the vehicle and collected by the sound collector;
determining a preset entertainment function according to the image acquired by the first camera and the control voice;
and controlling the pose of the vehicle-mounted robot and the preset entertainment equipment in the vehicle according to the preset entertainment function.
Preferably, the method further comprises:
when detecting that the vehicle has a door opened, determining the opened door;
judging whether the opened vehicle door was opened in a normal manner;
if it was opened normally, controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the opened vehicle door;
controlling a loudspeaker in the vehicle-mounted robot to emit sound;
if it was not opened normally, generating a fourth prompt signal and sending it to the alarm module, so that the alarm module issues a warning.
Preferably, the method further comprises:
when detecting that a seat on the vehicle is occupied by a person, controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the seat occupied by the person;
acquiring a third image containing face information at a preset position corresponding to the seat where the person sits by using the second camera;
determining identity information of a seated person according to the face information in the third image;
determining riding preferences of the seated person according to the identity information;
and adjusting the pose of the seat where the seated person is located according to the riding preference.
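The seat-personalization steps above reduce to a lookup from identity to riding preference, then a seat-pose command. The sketch below assumes a simple dictionary store and a `set_pose` seat interface; all names and values are hypothetical, as the patent does not define the preference format.

```python
# Hypothetical preference store: identity -> memorized seat pose.
RIDING_PREFERENCES = {
    "passenger_001": {"backrest_deg": 105, "rail_mm": 40},
}
DEFAULT_PREFERENCE = {"backrest_deg": 100, "rail_mm": 0}

def adjust_seat_for(identity, seat, prefs=RIDING_PREFERENCES):
    """Look up the seated person's riding preference and apply it to the seat."""
    pref = prefs.get(identity, DEFAULT_PREFERENCE)
    seat.set_pose(**pref)
    return pref
```

An unknown identity falls back to a neutral default rather than leaving the seat unadjusted, which is one reasonable design choice among several.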
The application also provides a control device of the vehicle-mounted robot, which comprises:
a memory for storing a computer program;
and the processor is used for realizing the steps of the control method of the vehicle-mounted robot when executing the computer program.
The application also provides a vehicle-mounted robot, which comprises a vehicle-mounted robot body, a first camera, a second camera and a control device of the vehicle-mounted robot;
the first camera and the second camera are arranged at the head of the vehicle-mounted robot body;
the vehicle-mounted robot body is connected with a control device of the vehicle-mounted robot.
The application provides a control method and device for a vehicle-mounted robot, and the vehicle-mounted robot itself, relating to the field of vehicle safety control. The current state of the vehicle is determined according to received state signals; when the vehicle is in a driving state, the head of the vehicle-mounted robot is turned to a first preset position, an image of the driver seat is collected by the first camera, and a DMS strategy and a preset entertainment strategy are executed using that image; when the vehicle is in a parking state, the head of the vehicle-mounted robot is turned to a second preset position, images of all seats are collected by the second camera, and an OMS strategy is executed using those images. Integrating the robot's own functions with the DMS and OMS functions, pairing them with the two cameras arranged on the robot, and executing different functions according to the different states of the vehicle lets the vehicle-mounted robot realize multiple functions and raises its importance in the vehicle. Because the robot's head rotates, the cameras provide a wider field of view than cameras at fixed positions, and only two cameras are needed to see all areas inside the vehicle, so the various functions are realized more effectively. Consolidating what previously required multiple devices into a single device greatly reduces the economic cost of arranging other equipment, reduces the space occupied inside the vehicle, and improves the aesthetics of the vehicle interior.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the prior art and the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a control method of a vehicle-mounted robot provided in the present application;
fig. 2 is a flowchart of another control method of the in-vehicle robot provided in the present application;
fig. 3 is a schematic structural diagram of a control device of an in-vehicle robot provided by the present application;
fig. 4 is a schematic structural diagram of a vehicle-mounted robot provided in the present application.
Detailed Description
The invention provides a control method and device of a vehicle-mounted robot and the vehicle-mounted robot, wherein a wider area can be shot through a small number of cameras, the importance of the vehicle-mounted robot in a vehicle can be improved, the economic cost for arranging other equipment is greatly saved, the occupied space in the vehicle is reduced, and the aesthetic degree of the interior of the vehicle can be improved.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The vehicle-mounted intelligent robot is a system for human-vehicle interaction with strong learning and voice-command-recognition capabilities; according to the voice commands of users in the vehicle, it can raise and lower windows, open and close the sunroof, play music, run navigation, tell jokes, and so on. However, because the vehicle-mounted robot is still positioned as a vehicle entertainment device, it contributes little to the safe operation of the vehicle and cannot become a core component of it. That is, the functions realized by prior-art vehicle-mounted robots all belong to the entertainment and comfort class, and the vehicle safety control function is vacant.
On the other hand, in the current domestic market, the proportion of vehicles equipped with fatigue warning or other DMS functions is only about 15.16%, and the proportion of vehicles actually carrying these functions is only about 10%; but given current market trends, DMS and OMS functions will become standard in the vehicle-cabin field in the future. Because both DMS and OMS rely on camera images, cameras are usually arranged at the steering wheel, frame, instrument panel, rearview mirror, A-pillar, sunroof and similar positions to capture images of the occupants. At present these functions are relatively discrete, and different functions require cameras at different positions, which is not only costly but also hurts the aesthetics of the vehicle interior.
In order to solve the above technical problems, please refer to fig. 1, fig. 1 is a flowchart of a control method of an in-vehicle robot provided in the present application, which is applied to a processor in an in-vehicle robot in a vehicle, the in-vehicle robot further includes a first camera and a second camera disposed at a head, and the control method of the in-vehicle robot includes:
s1: determining a current state of the vehicle according to the received state signal;
the vehicle-mounted robot is connected with a vehicle-mounted system in the vehicle, and various signals are sent to the vehicle-mounted robot through the vehicle-mounted system so that the vehicle-mounted robot can determine the current state of the vehicle. Specifically, when the vehicle is originally in a parking state, and when a user opens a vehicle door, the vehicle machine system can send a signal for indicating that the user is about to get on the vehicle to the vehicle-mounted robot; when a user starts the vehicle, the vehicle machine system sends a signal for indicating that the vehicle starts to run to the vehicle-mounted robot; when a user stops the vehicle, the vehicle machine system sends a signal for indicating that the vehicle stops running to the vehicle-mounted robot; when a user opens the door and gets off the car, the car machine system sends a signal for indicating the user to leave to the car-mounted robot; when a user operates on the vehicle device, the vehicle-mounted robot system can also send a signal corresponding to the user operation to the vehicle-mounted robot. Based on this, the in-vehicle robot determines the current state of the vehicle and the state of the passenger from various state signals transmitted from the vehicle system, and executes corresponding functions based on these state signals.
S2: when the state of the vehicle is determined to be a running state, controlling the head of the vehicle-mounted robot to turn to a first preset position;
s3: acquiring an image of a driver seat of a vehicle through a first camera;
s4: executing a DMS strategy and a preset entertainment strategy by using the image acquired by the first camera;
when the vehicle is running, the state of the driver is particularly important to the running safety of the vehicle, so that the vehicle-mounted robot can always look at the driver by using the first camera in the running process of the vehicle, namely, the first preset position points to the position of the driving position, and the state of the driver is determined by collecting various biological information of the driver through the camera, namely, the DMS strategy is executed. The DMS strategy mainly realizes the functions of identity recognition of a driver, fatigue driving of the driver and detection of dangerous behaviors, belongs to an important component part in an ADAS (advanced driving assistance system) system, and mainly comprises the functions of faceID detection, fatigue detection, distraction detection, expression recognition, gesture recognition, dangerous action recognition, sight tracking and the like.
During driving, passengers usually have entertainment demands, such as playing music, so the preset entertainment strategy is executed alongside the DMS strategy. Referring to fig. 2, fig. 2 is a flowchart of another control method of the vehicle-mounted robot provided in the present application: the robot automatically recognizes the current driving or parking state from the signals sent by the vehicle system, recognizes from voice whether passengers have entertainment demands, and triggers the corresponding entertainment functions, such as photographing, dancing, singing and broadcasting, according to voice instructions. In the driving state, the robot keeps its camera on the driver and recognizes from the captured images whether the driver shows non-compliant behaviors such as distraction or fatigue; in the parking state, the robot observes the entire interior, including the driver seat, and issues a warning if living beings such as children or pets are found. Further, when an adult driver leaves the vehicle briefly, that is, when the engine is still running but the vehicle is stopped, the robot remains focused on the driver seat, and if living beings such as children or pets are found there, the robot also issues a warning, avoiding the danger of a child or pet mishandling the driver-seat controls.
Because the robot must keep watching the driver while executing the DMS strategy, the entertainment functions are realized mainly through the sound system: a sound collector provided in the vehicle-mounted robot collects the passengers' voice information. Depending on the voice information, the vehicle-mounted robot can realize the entertainment functions in the following table:
table 1: entertainment function display table contained in preset entertainment strategy of vehicle-mounted robot
It should be noted that, because the DMS policy's judgment is based mainly on the image information acquired by the first camera while the entertainment policy's judgment is based mainly on the user's voice, the two functions can be executed simultaneously without conflict. Furthermore, any other strategy with a distinct judgment basis can likewise run at the same time; for example, if the vehicle also provides an overspeed-detection policy, its judgment basis is the vehicle speed, which differs from the bases of the two policies above, so the DMS policy, the preset entertainment policy and the overspeed-detection policy can all be executed simultaneously.
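The no-conflict argument above amounts to checking that no two concurrently running strategies share a judgment basis. A minimal sketch of that check, with assumed strategy and input names:

```python
# Hypothetical registry of each strategy's judgment basis (input modality).
STRATEGY_JUDGMENT_BASIS = {
    "dms": "first_camera_image",
    "preset_entertainment": "passenger_voice",
    "overspeed_detection": "vehicle_speed",
}

def can_run_concurrently(strategies, basis=STRATEGY_JUDGMENT_BASIS):
    """Strategies may run simultaneously when no two share a judgment basis."""
    bases = [basis[s] for s in strategies]
    return len(bases) == len(set(bases))
```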
S5: when the state of the vehicle is determined to be a parking state, controlling the head of the vehicle-mounted robot to turn to a second preset position;
s6: acquiring images of each seat of the vehicle through a second camera;
s7: and executing an OMS strategy by using the image acquired by the second camera.
The parking state described in the present application mainly refers to the state after the vehicle engine is turned off and the driver and passengers have gotten out; whether a temporary stop with the engine running (such as waiting at a traffic light or a crosswalk) is also counted may be decided either way, and the present application does not limit this.
The OMS strategy, i.e. the occupant monitoring system, is an extension of the DMS, mainly aimed at detecting the behavior of occupants other than the driver to further improve vehicle safety. When the driver and passengers have just boarded and the vehicle has not yet started, face recognition is performed on the front and rear seats through the second camera (that is, the second preset position is one from which every seat on the vehicle is visible). After a passenger sits down, face recognition is performed unobtrusively, and according to the recognized identity each passenger's seat is automatically, synchronously and individually adjusted to that passenger's last memorized position to meet each passenger's comfort requirements. If the passenger's eye or hand movements can be recognized, gaze and gestures can also control the sunroof, air conditioner, side windows and seats; for example, when a passenger looks at the sunroof and points a finger at it, the sunroof is opened. After the passengers leave and close the doors, face recognition is performed on the front-passenger and rear seats through the second camera; if a living being (usually a child or pet) is found there, the driver is prompted, for example by a mobile-phone text message or an in-app push notification to the vehicle owner.
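The boarding part of this flow, recognize a face, then restore that passenger's memorized seat position, can be sketched as follows. The memory format, identity labels and seat interface are all assumptions for illustration.

```python
# Hypothetical per-identity seat memory: identity -> last seat and position.
SEAT_MEMORY = {
    "passenger_002": {"seat": "rear_left", "position_step": 3},
}

def on_passenger_seated(face_id, seat, memory=SEAT_MEMORY):
    """Restore the recognized passenger's memorized position for this seat.

    Returns True if a memorized position was applied, False otherwise.
    """
    record = memory.get(face_id)
    if record and record["seat"] == seat.name:
        seat.move_to(record["position_step"])
        return True
    return False
```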
In addition, the first camera and the second camera can be conventional cameras or infrared cameras, and the application is not limited to this.
In summary, the current state of the vehicle is determined according to the received state signals; when the vehicle is in a driving state, the head of the vehicle-mounted robot is turned to the first preset position, an image of the driver seat is collected by the first camera, and the DMS strategy and the preset entertainment strategy are executed using that image; when the vehicle is in a parking state, the head is turned to the second preset position, images of all seats are collected by the second camera, and the OMS strategy is executed using those images. Integrating the robot's own functions with the DMS and OMS functions, pairing them with the two cameras arranged on the robot, and executing different functions according to the different states of the vehicle lets the vehicle-mounted robot realize multiple functions and raises its importance in the vehicle. Because the robot's head rotates, the cameras provide a wider field of view than cameras at fixed positions, and only two cameras are needed to see all areas inside the vehicle, so the various functions are realized more effectively. Consolidating what previously required multiple devices into a single device greatly reduces the economic cost of arranging other equipment, reduces the space occupied inside the vehicle, and improves the aesthetics of the vehicle interior.
Based on the above embodiments:
as a preferred embodiment, executing a DMS policy using the image acquired by the first camera includes:
acquiring a first image acquired by a first camera;
determining facial expression, hand motion and head motion of the driver according to the plurality of first images acquired continuously;
judging whether the driver is tired according to the facial expression;
if the user is tired, a first prompt signal is generated and sent to the prompt module, so that the prompt module sends out prompts;
judging whether the driver has distraction according to the hand motion and the head motion;
if the distraction behavior exists, a second prompt signal is generated and sent to the prompt module, so that the prompt module sends out prompts.
To realize the DMS strategy effectively, in the present application the robot continuously watches the driver while the vehicle is driving and collects the driver's action information from the first images to judge whether the driver is attentive, whether the driver has lost the capacity to act, whether the driver's emotional state is good, and so on. When combining several first images reveals an obvious change in the position of the driver's head or hands on the image, this indicates an action possibly caused by fatigue or distraction; by further combining the expression changes across the images, the driver's current emotion and degree of fatigue are determined, and action and expression together determine whether fatigued or distracted driving is occurring. When fatigue or distraction is detected, a voice actively reminds the driver to pay attention to driving safety, and dangerous driving behaviors are warned against or stopped in time according to the driver's degree of fatigue, reducing accidents. Distraction behaviors include, for example: making a phone call, chatting, not looking ahead of the vehicle, or not holding the steering wheel with both hands.
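The consecutive-frame judgment described above can be sketched as a threshold check over per-frame features. The feature extraction (eye closure, hands on the wheel) is assumed to happen upstream, and the threshold ratios are made-up placeholders, not values from the patent.

```python
def assess_driver(frames, fatigue_ratio=0.5, distraction_ratio=0.5):
    """Return prompt signals from per-frame features over consecutive frames.

    Each frame is a dict of boolean features assumed to be extracted by an
    upstream vision stage; the signal names mirror the claims' first and
    second prompt signals.
    """
    eyes_closed = sum(f["eyes_closed"] for f in frames) / len(frames)
    hands_off = sum(not f["hands_on_wheel"] for f in frames) / len(frames)
    alerts = []
    if eyes_closed >= fatigue_ratio:          # fatigue -> first prompt
        alerts.append("first_prompt_fatigue")
    if hands_off >= distraction_ratio:        # distraction -> second prompt
        alerts.append("second_prompt_distraction")
    return alerts
```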
As a preferred embodiment, after determining the facial expression, the hand motion, and the head motion of the driver from the plurality of first images acquired in succession, further comprising:
determining the emotion of the driver according to the facial expression;
the lighting system and the music system in the vehicle are controlled according to the emotion of the driver.
To satisfy passenger entertainment, the present application also supports emotion recognition during driving. After the robot captures the driver's facial expression through the camera, it determines the driver's current emotion from that expression, adjusts the color and brightness of the interior lights, and plays music matching the emotion, thereby triggering the entertainment function. For example: when the driver's emotion is detected to be good, the music system and the lighting system in the vehicle are linked and switched on, music is played and the light color and brightness are adjusted; when the driver's emotion is detected to be poor, the music can be turned off and the ambient lights set to warm tones. The specific control applied for each emotion is not limited by the present application. On this basis, controlling the music and lighting systems on the vehicle according to the driver's different emotions can satisfy the passengers' entertainment during driving.
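The emotion-linked ambience control can be sketched as a simple mapping from a recognized emotion label to light and music commands; the labels and command strings below are assumed for illustration, since the application explicitly leaves the specific control content open.

```python
def ambience_for(emotion):
    """Map a recognized driver emotion to hypothetical light/music commands."""
    if emotion == "happy":
        return {"music": "play_upbeat", "lights": ("bright", "colorful")}
    if emotion in ("sad", "angry"):
        return {"music": "off", "lights": ("dim", "warm")}
    return {"music": "keep", "lights": ("keep", "keep")}  # unknown: no change
```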
As a preferred embodiment, performing the OMS strategy using the image acquired by the second camera includes:
acquiring a second image acquired by a second camera;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images;
if a living object is present, a third prompt signal is generated and sent to the prompt module, so that the prompt module issues a prompt.
To improve the safety of the vehicle, in the present application the vehicle-mounted robot further checks the condition inside the vehicle after the vehicle has parked and the occupants have left, specifically whether a living object remains inside. This accounts for owners who may leave small living beings, such as children or pets, in the vehicle; a living being left inside cannot open the door by itself, which poses a safety risk. Therefore, after the vehicle has parked and the occupants have left, the vehicle-mounted robot continuously collects images of the interior. If obvious changes appear across several images, a moving object, that is, a living object, is present in the vehicle, and the owner must be notified in time to ensure its safety.
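The "obvious changes across several images" test can be sketched as simple frame differencing. This is an illustrative sketch under assumed inputs (frames as small grayscale pixel grids); the pixel and ratio thresholds are invented for the example, and a real system would use a vision library's background subtraction instead.

```python
def has_living_object(frames, diff_threshold=0.05):
    """frames: list of equally sized grayscale frames (nested lists, 0-255).
    Flags a living object when two consecutive frames differ noticeably."""
    for prev, cur in zip(frames, frames[1:]):
        pixels = sum(len(row) for row in prev)
        changed = sum(
            1
            for r_prev, r_cur in zip(prev, cur)
            for a, b in zip(r_prev, r_cur)
            if abs(a - b) > 25          # per-pixel change tolerance
        )
        if changed / pixels > diff_threshold:
            return True                  # movement -> third prompt signal
    return False
```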
As a preferred embodiment, the in-vehicle robot further includes a sound collector, and before judging whether a living object is present in the vehicle according to the plurality of continuously acquired second images, the method further includes:
acquiring the internal sound of the vehicle collected by the sound collector;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images, wherein the method comprises the following steps:
and judging whether a living object exists in the vehicle according to the plurality of second images and the internal sound which are continuously acquired.
To detect accurately whether a living object is present in the vehicle, the vehicle-mounted robot further relies on its internal sound collection system. Specifically, after the vehicle has parked, the vehicle-mounted robot continuously collects images of each seat and sound inside the vehicle and applies a dual judgment of image and sound: when an image change exists between two adjacent or consecutive images, an object in the vehicle has moved; when the sound in the vehicle exceeds a preset decibel level or rises suddenly, a living being inside is making sound or knocking. Either condition indicates that a living object is present in the vehicle. Further, after detecting a living object, the vehicle-mounted robot can also send the images captured by the second camera and the collected sound to the owner, so that the owner learns the details in time. On this basis, the dual judgment of sound and image makes it possible to detect accurately whether a living object is present in the vehicle.
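The dual judgment reduces to a simple disjunction of the three conditions named above. A hedged sketch; the decibel limit and the "sudden rise" margin are assumed values, not from the patent.

```python
def living_object_present(image_moved, sound_db, baseline_db,
                          level_limit=60, jump=20):
    """Dual judgment: an obvious image change, a sound above the preset
    decibel level, or a sudden rise over the baseline level each indicate
    a living object in the cabin."""
    loud = sound_db > level_limit          # living being making sound
    sudden = sound_db - baseline_db > jump # e.g. knocking inside the cabin
    return image_moved or loud or sudden
```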
In addition, besides judging whether a living object is present through sound, a millimeter-wave radar can be arranged in the vehicle-mounted robot. Based on the precise positioning of the radar sensor, all areas in the vehicle are covered, and radar waves detect whether a living object is present, avoiding the case where a living object hidden in a blind spot of the second camera (for example, directly below the camera or behind a seat) cannot be observed by the second camera.
As a preferred embodiment, when it is determined that the state of the vehicle is a driving state, a preset entertainment strategy is performed using the image acquired by the first camera, including:
acquiring control voice, collected by the sound collector, sent by a passenger in the vehicle;
a preset entertainment function is determined according to the image collected by the first camera and the control voice;
and controlling, according to the preset entertainment function, the pose of the vehicle-mounted robot and the preset entertainment device in the vehicle.
To entertain the occupants during driving, in the present application the DMS strategy and the preset entertainment strategy are executed simultaneously while the vehicle is running. Because the vehicle-mounted robot must keep watching the driver throughout driving, the entertainment function is realized mainly through the sound system: the sound collector built into the vehicle-mounted robot captures the occupants' voice, with the first camera assisting by capturing some of the driver's facial expressions and hand motions. As shown in table 1 above, when the corresponding voice information is detected inside the vehicle, the vehicle-mounted robot triggers the corresponding entertainment function. The preset entertainment devices are devices on the vehicle that can interact with the occupants without affecting driving, such as the music devices, interior lights, air conditioner, and windows. Considering that the vehicle-mounted robot is also provided with anthropomorphic components such as hands and feet, it can additionally be controlled to perform actions such as high-fiving and clapping in combination with the entertainment functions.
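A voice-triggered function table of this kind can be sketched as a simple dispatch mapping. The commands, device names, and gestures below are hypothetical stand-ins (the actual table 1 lies outside this passage).

```python
# Hypothetical command table: recognized voice -> (device to activate,
# optional anthropomorphic gesture for the robot's hands).
ENTERTAINMENT_COMMANDS = {
    "play music": ("music_system", "nod"),
    "open window": ("window", None),
    "high five": (None, "raise_hand"),
}

def handle_voice(command):
    """Return the action list triggered by a recognized voice command."""
    device, gesture = ENTERTAINMENT_COMMANDS.get(command, (None, None))
    actions = []
    if device:
        actions.append(f"activate:{device}")
    if gesture:
        actions.append(f"pose:{gesture}")
    return actions or ["ignore"]
```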
Further, if the actual structure of the vehicle-mounted robot allows the first camera to watch the first preset position while the second camera watches the second preset position, the DMS, OMS, and entertainment strategies can all be executed during driving. The entertainment strategy during driving can then incorporate the actions of the other occupants; for example, the second camera collects the gaze and hand motions of the front and rear passengers to realize entertainment functions, such as opening the sunroof when a passenger looks at it. For the OMS strategy during driving, the occupants' behavior can be detected and responded to accordingly; for example, when an occupant is detected to be sleeping, the volume of the music currently playing on the vehicle's music system is lowered, or the music system is switched off, and when an occupant is detected leaning out of a window or performing another dangerous action, a prompt is issued in time. On this basis, the occupants can be entertained during driving.
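The driving-time responses to detected passenger behavior can be sketched as below. The behavior labels and the volume step are illustrative assumptions, not values from the patent.

```python
def respond_to_passenger(behavior, volume):
    """Map a detected passenger behavior to a (response, new_volume) pair,
    following the examples in the text."""
    if behavior == "sleeping":
        # Lower the currently playing music (or it could be switched off)
        return ("music", max(0, volume - 10))
    if behavior == "leaning_out_window":
        return ("warning", volume)       # issue a prompt in time
    if behavior == "looking_at_sunroof":
        return ("open_sunroof", volume)  # gaze-triggered entertainment
    return ("none", volume)
```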
As a preferred embodiment, further comprising:
when the fact that the vehicle is opened is detected, determining the opened vehicle door;
controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the opened vehicle door;
judging whether the opening mode of the opened vehicle door is a normal opening mode or not;
if the opening mode is the normal opening mode, controlling a loudspeaker in the vehicle-mounted robot to emit a sound;
if the opening mode is not the normal opening mode, acquiring an image of the opened vehicle door, generating an alarm signal containing the image of the vehicle door, and sending the alarm signal to the user terminal.
To further increase the importance of the robot, in the present application an anti-theft function is also provided in the vehicle-mounted robot. Specifically, the existing anti-theft function of a vehicle generally sounds the horn and flashes the lights when it detects a strong physical impact on the vehicle or a door being pried open abnormally; this mode cannot alert the owner when the owner is far from the vehicle. Building on the existing anti-theft function, when the vehicle detects that a door is being pried open abnormally, it generates an anti-theft signal and a signal indicating the position of the pried door and sends both to the vehicle-mounted robot. The vehicle-mounted robot promptly turns its head toward the pried door and captures an image of it to obtain information on the person prying the door, and finally sends the person information together with the anti-theft information to the owner's personal terminal, so that the owner learns of the attempted theft in time.
In addition, when a door is opened normally, the head of the vehicle-mounted robot can also turn toward the opened door and make different sounds to greet or see off the occupant according to the change in the vehicle's running state. Specifically, when the vehicle changes from a running state to a parked state, an opening door indicates that the occupant is about to leave the vehicle; the vehicle-mounted robot then turns to the opened door and controls its internal loudspeaker to emit a send-off sound. When the vehicle is already parked, an opening door indicates that the occupant is about to enter the vehicle; the vehicle-mounted robot then turns to the opened door and controls the loudspeaker to emit a greeting. Further, if a new door is opened while the vehicle-mounted robot is performing one of the send-off or greeting actions, the robot may perform the actions in the order in which the doors were opened, or it may ignore the newly opened door; the present application does not limit this. On this basis, the vehicle-mounted robot realizes both the anti-theft function and the send-off/greeting function, further increasing its importance.
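The door-event logic above can be summarized in a short handler. The door identifiers, state names, and action strings are illustrative assumptions for the sketch.

```python
def on_door_event(door_id, opened_normally, vehicle_state):
    """Abnormal opening: anti-theft flow (turn head, capture an image,
    alert the owner's terminal). Normal opening: send-off or greeting
    depending on whether the vehicle just parked or was already parked."""
    if not opened_normally:
        return ["turn_head:" + door_id, "capture_image", "alert_owner"]
    sound = "send_off" if vehicle_state == "just_parked" else "greeting"
    return ["turn_head:" + door_id, "speaker:" + sound]
```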
As a preferred embodiment, further comprising:
when detecting that a seat on the vehicle is occupied by a person, controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the occupied seat;
acquiring, by the second camera, a third image containing face information at the preset position corresponding to the occupied seat;
determining identity information of the seated person according to the face information in the third image;
determining riding preference of the seated person according to the identity information;
and adjusting the pose of the seat where the sitting person is located according to the riding preference.
To meet the occupants' personalized riding requirements, the state of each seat in the vehicle can be adjusted according to the sitting habits of different occupants. Specifically, when the vehicle's seat sensing system detects a person sitting in the driver's seat or another seat (in the prior art, the seat sensing system is generally applied to seat-belt detection: when a person sits on a seat without fastening the seat belt, the vehicle issues a reminder to fasten it), the vehicle sends a position signal to the vehicle-mounted robot so that the robot knows which position is occupied. The head of the vehicle-mounted robot then turns toward the occupied seat, that is, to the preset position, and collects image information of the seat to determine the identity of the seated person. According to that identity, the height, fore-aft position, and pitch angle of the seat are automatically adjusted to the positions memorized from that person's last ride; if the person's identity information is acquired for the first time, it is stored in the memory of the vehicle-mounted robot for later use. If an occupant adjusts the pose of the seat while the vehicle is running, the vehicle-mounted robot updates that occupant's riding preference in real time, so that on the next ride the occupied seat is automatically adjusted to the pose last set by that occupant. On this basis, by collecting the identity information and riding preferences of different occupants, the pose of each seat can be adjusted automatically to suit the person sitting in it, meeting the occupants' personalized riding requirements.
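The store-on-first-sight, recall-afterwards, update-on-adjustment memory described above can be sketched as a small class. The pose fields (height, fore-aft, recline) mirror the three adjustments named in the text; the interface itself is an assumption.

```python
class SeatPreferenceStore:
    """Per-identity seat pose memory: stored on first encounter,
    recalled on later rides, updated when the occupant adjusts the seat."""

    def __init__(self):
        self._prefs = {}

    def on_seated(self, identity, current_pose):
        # First encounter: remember the current pose for later use.
        if identity not in self._prefs:
            self._prefs[identity] = dict(current_pose)
        # Otherwise return the memorized pose so the seat can be adjusted.
        return self._prefs[identity]

    def on_adjusted(self, identity, new_pose):
        # Real-time update while the vehicle is running.
        self._prefs[identity] = dict(new_pose)
```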
Referring to fig. 3, fig. 3 is a schematic structural diagram of a control device of an in-vehicle robot provided in the present application, including:
a memory 21 for storing a computer program;
the processor 22 is configured to execute the computer program to implement the steps of the control method of the in-vehicle robot described above.
For a detailed description of the control device of the vehicle-mounted robot provided in the present application, refer to the embodiments of the control method of the vehicle-mounted robot described above; the details are not repeated here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle-mounted robot provided in the present application, including a vehicle-mounted robot body, a first camera 31, a second camera 32, and a control device of the vehicle-mounted robot as described above;
the first camera 31 and the second camera 32 are arranged on the head of the vehicle-mounted robot body;
the in-vehicle robot body is connected with a control device of the in-vehicle robot.
For a detailed description of the vehicle-mounted robot provided in the present application, refer to the embodiments of the control method of the vehicle-mounted robot described above; the details are not repeated here.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
It should also be noted that, in this specification, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Claims (10)
1. A control method of an in-vehicle robot, characterized by being applied to a processor in an in-vehicle robot in a vehicle, the in-vehicle robot further including a first camera and a second camera provided at a head, the control method of the in-vehicle robot comprising:
determining a current state of the vehicle according to the received state signal;
when the state of the vehicle is determined to be a running state, controlling the head of the vehicle-mounted robot to turn to a first preset position;
acquiring an image of a driver seat of the vehicle through the first camera;
executing a DMS strategy and a preset entertainment strategy by using the image acquired by the first camera;
when the state of the vehicle is determined to be a parking state, controlling the head of the vehicle-mounted robot to turn to a second preset position;
acquiring images of each seat of the vehicle by the second camera;
and executing an OMS strategy by using the image acquired by the second camera.
2. The control method of the in-vehicle robot according to claim 1, wherein executing the DMS policy using the image acquired by the first camera includes:
acquiring a first image acquired by the first camera;
determining facial expression, hand motion and head motion of the driver according to the plurality of first images which are continuously acquired;
judging whether the driver is tired according to the facial expression;
if the driver is tired, generating a first prompt signal and sending the first prompt signal to a prompt module, so that the prompt module issues a prompt;
judging whether the driver has distraction according to the hand motion and the head motion;
if the distraction behavior exists, generating a second prompt signal and sending the second prompt signal to the prompt module, so that the prompt module issues a prompt.
3. The control method of the in-vehicle robot according to claim 2, further comprising, after determining the facial expression, the hand motion, and the head motion of the driver from the plurality of the first images acquired in succession:
determining the emotion of the driver according to the facial expression;
and controlling a lighting system and a music system in the vehicle according to the emotion of the driver.
4. The control method of the in-vehicle robot according to claim 1, wherein executing the OMS strategy using the image acquired by the second camera includes:
acquiring a second image acquired by the second camera;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images;
if a living object is present, generating a third prompt signal and sending the third prompt signal to the prompt module, so that the prompt module issues a prompt.
5. The control method of the in-vehicle robot according to claim 4, wherein the in-vehicle robot further comprises a sound collector, and before judging whether a living object is present in the vehicle according to the plurality of continuously acquired second images, the method further comprises:
acquiring the internal sound of the vehicle collected by the sound collector;
judging whether a living object exists in the vehicle according to a plurality of continuously acquired second images, wherein the method comprises the following steps:
and judging whether a living object exists in the vehicle according to the plurality of second images and the internal sound which are continuously acquired.
6. The control method of the in-vehicle robot according to claim 5, wherein when it is determined that the state of the vehicle is a traveling state, executing a preset entertainment strategy using the image acquired by the first camera, comprising:
acquiring control voice, collected by the sound collector, sent by a passenger in the vehicle;
a preset entertainment function is determined according to the image acquired by the first camera and the control voice;
and controlling the pose of the vehicle-mounted robot and the preset entertainment equipment in the vehicle according to the preset entertainment function.
7. The control method of the in-vehicle robot according to claim 1, further comprising:
when detecting that the vehicle has a door opened, determining the opened door;
judging whether the opening mode of the opened vehicle door is a normal opening mode;
if the opening mode is the normal opening mode, controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the opened vehicle door;
controlling a loudspeaker in the vehicle-mounted robot to emit sound;
if the opening mode is not the normal opening mode, generating a fourth prompt signal and sending the fourth prompt signal to an alarm module, so that the alarm module issues a warning.
8. The control method of the in-vehicle robot according to any one of claims 1 to 7, further comprising:
when detecting that a seat on the vehicle is occupied by a person, controlling the head of the vehicle-mounted robot to turn to a preset position corresponding to the seat occupied by the person;
acquiring a third image containing face information at a preset position corresponding to the seat where the person sits by using the second camera;
determining identity information of a seated person according to the face information in the third image;
determining riding preferences of the seated person according to the identity information;
and adjusting the pose of the seat where the seated person is located according to the riding preference.
9. A control device for an in-vehicle robot, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the control method of the in-vehicle robot according to any one of claims 1 to 8 when executing the computer program.
10. An in-vehicle robot comprising an in-vehicle robot body, a first camera, a second camera, and the control device of the in-vehicle robot according to claim 9;
the first camera and the second camera are arranged at the head of the vehicle-mounted robot body;
the vehicle-mounted robot body is connected with a control device of the vehicle-mounted robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310326894.2A CN116533232A (en) | 2023-03-29 | 2023-03-29 | Control method and device of vehicle-mounted robot and vehicle-mounted robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116533232A true CN116533232A (en) | 2023-08-04 |
Family
ID=87456703
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310326894.2A Pending CN116533232A (en) | 2023-03-29 | 2023-03-29 | Control method and device of vehicle-mounted robot and vehicle-mounted robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116533232A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4305289B2 (en) | VEHICLE CONTROL DEVICE AND VEHICLE CONTROL SYSTEM HAVING THE DEVICE | |
JP4380541B2 (en) | Vehicle agent device | |
CN106575478B (en) | Driver's monitoring arrangement | |
CN111439271A (en) | Auxiliary driving method and auxiliary driving equipment based on voice control | |
CN110654389B (en) | Vehicle control method and device and vehicle | |
CN110481419B (en) | Human-vehicle interaction method, system, vehicle and storage medium | |
CN110654345A (en) | Vehicle control method and device and vehicle | |
CN107878465B (en) | Mobile body control device and mobile body | |
CN111086480A (en) | Vehicle awakening method, vehicle machine and vehicle | |
CN110015308A (en) | A kind of people-car interaction method, system and vehicle | |
CN112041201B (en) | Method, system, and medium for controlling access to vehicle features | |
CN114228647A (en) | Vehicle control method, vehicle terminal and vehicle | |
CN115268334A (en) | Vehicle window control method, device, equipment and storage medium | |
JP2016207001A (en) | Driving support device | |
CN115447515A (en) | Control method and device for car nap mode, car and storage medium | |
CN108099785A (en) | A kind of traffic control method applied in intelligent vehicle | |
CN110001510A (en) | A kind of interactive approach and system, the vehicle of vehicle and pedestrian | |
CN116533232A (en) | Control method and device of vehicle-mounted robot and vehicle-mounted robot | |
US10592758B2 (en) | Occupant monitoring device for vehicle | |
CN116252711A (en) | Method for adjusting a vehicle mirror and mirror adjustment system | |
CN114194122B (en) | Safety prompt system and automobile | |
CN114537259B (en) | Control system and control method for multi-mode greeting interaction of automobile | |
CN115107674A (en) | Volume adjusting method and device and automobile | |
CN114872542A (en) | Automobile external signal interaction method and system, electronic equipment and automobile | |
CN207059776U (en) | A kind of motor vehicle driving approval apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||