WO2022048118A1 - Control method and device for vehicle-mounted robot, vehicle, electronic device and medium - Google Patents

Control method and device for vehicle-mounted robot, vehicle, electronic device and medium

Info

Publication number
WO2022048118A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
occupant
information
cabin
mounted robot
Prior art date
Application number
PCT/CN2021/078671
Other languages
English (en)
French (fr)
Inventor
黎建平
李激光
王俊越
孙牵宇
许亮
Original Assignee
上海商汤临港智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤临港智能科技有限公司
Publication of WO2022048118A1 publication Critical patent/WO2022048118A1/zh

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60W — CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 — Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 — Interaction between the driver and the control system
    • B60W50/14 — Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 — Display means

Definitions

  • The present disclosure relates to the technical field of vehicles, and in particular to a control method and device of a vehicle-mounted robot, a vehicle, an electronic device and a medium.
  • Human-computer interaction refers to the process by which people and computers exchange information, using a certain dialogue language and a certain interactive mode, to complete specific tasks.
  • The human-computer interaction of a vehicle aims to realize the interaction between the occupants of the vehicle and the vehicle, and is of great significance in the field of vehicles.
  • The present disclosure provides a technical solution for controlling a vehicle-mounted robot.
  • According to an aspect of the present disclosure, there is provided a method for controlling a vehicle-mounted robot, comprising: acquiring a video stream of the vehicle cabin; determining the position information of an occupant of the vehicle cabin based on the video stream; generating, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin; and controlling the vehicle-mounted robot to rotate according to the rotation control information.
  • According to an aspect of the present disclosure, there is provided a control device of a vehicle-mounted robot, comprising:
  • a first acquisition module, configured to acquire the video stream of the vehicle cabin;
  • a first determining module, configured to determine the position information of the occupant of the vehicle cabin based on the video stream;
  • a first generating module, configured to generate, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin; and
  • a rotation control module, configured to perform rotation control on the vehicle-mounted robot according to the rotation control information.
  • According to an aspect of the present disclosure, there is provided a vehicle, comprising:
  • a camera disposed in the vehicle cabin, configured to capture the video stream of the vehicle cabin;
  • a controller connected to the camera, configured to acquire the video stream of the vehicle cabin from the camera, determine the position information of the occupant of the vehicle cabin based on the video stream, generate, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin, and perform rotation control on the vehicle-mounted robot according to the rotation control information; and
  • the vehicle-mounted robot, connected to the controller and disposed in the vehicle cabin, configured to rotate according to the rotation control information.
  • According to an aspect of the present disclosure, there is provided an electronic device, comprising: one or more processors; and a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored in the memory to perform the above method.
  • According to an aspect of the present disclosure, there is provided a computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above method.
  • According to an aspect of the present disclosure, there is provided a computer program comprising computer-readable code which, when executed in an electronic device, is executed by a processor in the electronic device to implement the above method.
  • In the embodiments of the present disclosure, the video stream of the vehicle cabin is acquired, the position information of the occupant of the vehicle cabin is determined based on the video stream, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin is generated according to the position information of the occupant, and rotation control is performed on the vehicle-mounted robot according to the rotation control information. The vehicle-mounted robot can thus be controlled, based on the video stream of the vehicle cabin, to turn toward the occupant and interact with the occupant while facing the occupant, which makes the interaction between the vehicle-mounted robot and the occupant better conform to interpersonal interaction habits, makes the interaction more natural, and improves the pertinence and fluency of the interaction.
  • FIG. 1 shows a flowchart of a control method of a vehicle-mounted robot provided by an embodiment of the present disclosure.
  • FIG. 2 shows a schematic diagram of a vehicle provided by an embodiment of the present disclosure.
  • FIG. 3 shows another schematic diagram of a vehicle provided by an embodiment of the present disclosure.
  • FIG. 4 shows a block diagram of a control apparatus of a vehicle-mounted robot provided by an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a control method and device for a vehicle-mounted robot, a vehicle, an electronic device, and a storage medium.
  • A control method and device for a vehicle-mounted robot, a vehicle, an electronic device, and a storage medium: by acquiring a video stream of the vehicle cabin, determining the position information of the occupant of the vehicle cabin based on the video stream, generating, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot provided in the vehicle cabin, and performing rotation control on the vehicle-mounted robot according to the rotation control information, the vehicle-mounted robot can be controlled, based on the video stream of the vehicle cabin, to turn toward the occupant and interact while facing the occupant. This makes the interaction between the vehicle-mounted robot and the occupant better conform to interpersonal interaction habits and more natural, and improves the pertinence and fluency of the interaction between the vehicle-mounted robot and the occupants.
  • FIG. 1 shows a flowchart of a control method of a vehicle-mounted robot provided by an embodiment of the present disclosure.
  • the execution body of the control method of the vehicle-mounted robot may be an interaction device of the vehicle-mounted robot or a control device of the vehicle-mounted robot.
  • the control method of the vehicle-mounted robot may be executed by a terminal device or other processing device.
  • The terminal device may be a vehicle-mounted device, a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device, etc.
  • The in-vehicle device may be a controller, a domain controller, a processor or a vehicle head unit installed in the vehicle cabin and connected to the vehicle-mounted robot, or a device host of an OMS (Occupant Monitoring System) or DMS (Driver Monitoring System) used to perform data processing operations on images and the like.
  • the control method of the vehicle-mounted robot may be implemented by the processor calling computer-readable instructions stored in the memory.
  • the control method of the vehicle-mounted robot may be applied to a drivable machine device, such as a smart vehicle, a smart cabin for simulating the driving of a vehicle, and the like. As shown in FIG. 1 , the control method of the vehicle-mounted robot includes steps S11 to S14.
  • In step S11, the video stream of the vehicle cabin is acquired.
  • the video stream of the vehicle cabin may include a video stream in the vehicle cabin.
  • the video stream in the vehicle cabin may be collected by a first camera, and the video stream in the vehicle cabin may be acquired from the first camera.
  • the first camera may include an OMS camera, a DMS camera, and the like.
  • the number of the first cameras may be one or more.
  • the first camera can be set at any position in the cabin.
  • the first camera may be installed in at least one of the following positions: a dashboard, a dome light, an interior rearview mirror, an A-pillar, a center console, and a front windshield.
  • the video stream of the vehicle cabin may include a video stream outside the vehicle cabin.
  • a video stream outside the vehicle cabin may be collected by a second camera, and the video stream outside the vehicle cabin may be acquired from the second camera.
  • the number of the second cameras may be one or more.
  • the second camera may be installed on at least one of the following positions: at least one B-pillar, at least one vehicle door, at least one exterior rearview mirror, and a cross member.
  • the second camera may be mounted on the B-pillar on the main driver's seat side of the vehicle.
  • the second camera may be installed on the B-pillar on the left side of the vehicle.
  • the second camera may be installed on the two B-pillars and the trunk door.
  • the second camera may use a ToF (Time of Flight, time of flight) camera, a binocular camera, or the like.
  • In step S12, the position information of the occupant of the vehicle cabin is determined based on the video stream.
  • the occupant of the vehicle cabin may be any person who rides in the vehicle to which the vehicle cabin belongs.
  • the occupants of the vehicle cabin may include at least one of a driver, a non-driver, a passenger, an adult, an elderly person, a child, a front-seat occupant, a rear-seat occupant, and the like in the vehicle.
  • the position information of the occupant may represent the position information of the occupant appearing in the vehicle cabin.
  • For example, the position information of the occupant may include at least one of the occupant's stop position information in the vehicle cabin, boarding position information, alighting position information, and the like.
  • The stop position information of the occupant in the cabin may indicate where the occupant is located in the cabin; the boarding position information may indicate the position of the door through which the occupant gets on the vehicle; and the alighting position information may indicate the position of the door through which the occupant gets off the vehicle.
  • the position information of the occupant may include at least one of seat information, direction information, angle information, coordinate information and the like of the occupant.
  • the seat information of the occupant may indicate the seat that the occupant takes in the cabin.
  • the seat information of the occupant may be the main driver's seat, the passenger's seat, the rear left seat, the rear middle seat, the rear right seat, and the like.
  • The direction information of the occupant may be the direction of the occupant relative to the installation position of the vehicle-mounted robot or another fixed position in the vehicle cabin (e.g., the position of the steering wheel).
  • For example, if the occupant is at the front right of the vehicle-mounted robot, the direction information of the occupant may be front right; if the occupant is at the front left of the vehicle-mounted robot, the direction information of the occupant may be front left; and if the occupant is directly in front of the vehicle-mounted robot, the direction information of the occupant may be directly ahead.
  • the angle information of the occupant may represent the included angle between the direction of the occupant relative to the installation position of the vehicle-mounted robot or other fixed positions in the vehicle cabin (eg, the position of the steering wheel) and the preset direction.
  • The preset direction may be, for example, the direction pointing from the vehicle-mounted robot to the rear middle seat.
  • The coordinate information of the occupant may represent the occupant's coordinates in the space coordinate system of the vehicle cabin, or the occupant's coordinates in the image coordinate system corresponding to the video stream of the vehicle cabin.
  • In one possible implementation, human body detection and/or face detection may be performed on at least one frame of image in the video stream to obtain a human body detection result and/or a face detection result, and the position information of the occupant of the vehicle cabin may be obtained from the position information of the human body bounding box and/or the face bounding box in the detection result.
  • the position information of the human body bounding box and/or the human face bounding box may be used as the position information of the occupant of the vehicle cabin.
  • Alternatively, a correspondence between the position information of the human body bounding box and/or the face bounding box and the position information of the occupant may be established in advance, and the position information of the occupant of the vehicle cabin may be determined according to that correspondence and the position information of the human body bounding box and/or the face bounding box in the human body detection result and/or the face detection result.
  • the human body detection may be used to detect the position information of the human body in at least one frame of image in the video stream
  • the face detection may be used to detect the position information of the human face in at least one frame of image in the video stream.
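  • To make the bounding-box-to-position step above concrete, the following is a minimal illustrative sketch in Python (not part of the patent disclosure): a pre-established correspondence between image regions and seats is assumed, and the center of a detected face bounding box is looked up in it. The seat regions, image size and detector output are hypothetical assumptions.

    # Hypothetical sketch: map a detected face bounding box to an occupant
    # position via a pre-established region-to-seat correspondence.
    from typing import Dict, Optional, Tuple

    Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

    # Illustrative (uncalibrated) correspondence for a 640x720 cabin image.
    SEAT_REGIONS: Dict[str, Box] = {
        "main_driver_seat": (0.0, 0.0, 320.0, 360.0),
        "front_passenger_seat": (320.0, 0.0, 640.0, 360.0),
        "rear_left_seat": (0.0, 360.0, 213.0, 720.0),
        "rear_middle_seat": (213.0, 360.0, 426.0, 720.0),
        "rear_right_seat": (426.0, 360.0, 640.0, 720.0),
    }

    def occupant_position_from_box(box: Box) -> Optional[str]:
        """Return the seat whose pre-established region contains the box center."""
        cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        for seat, (x0, y0, x1, y1) in SEAT_REGIONS.items():
            if x0 <= cx < x1 and y0 <= cy < y1:
                return seat
        return None  # center falls outside every pre-established region

    print(occupant_position_from_box((350.0, 80.0, 450.0, 200.0)))  # front_passenger_seat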
  • In one possible implementation, determining, based on the video stream, the position information of the occupant of the vehicle cabin includes: determining, in an image coordinate system corresponding to at least one frame of image in the video stream, an image coordinate area where at least one body part of the occupant is located; and determining the position information of the occupant according to the image coordinate area.
  • the image coordinate system may be a two-dimensional coordinate system corresponding to an image in the video stream.
  • In one example, a pre-trained Region Proposal Network (RPN) may be used to perform region detection on at least one body part of the occupant in at least one frame of image in the video stream, wherein the at least one body part may include, but is not limited to, the face, hands, torso, and the like.
  • The image coordinate area may be used directly as the position information of the occupant, or the position information of the occupant may be determined from the image coordinate area according to a pre-established mapping relationship between image coordinate areas and occupant position information.
  • In this way, the position information of the occupant can be detected quickly.
  • In one possible implementation, the position information of the occupant includes first relative position information of the occupant in the image, and determining the position information of the occupant according to the image coordinate area includes: using the image coordinate area as the first relative position information of the occupant in the image.
  • In this way, the rotation control parameters of the vehicle-mounted robot can be determined directly from the first relative position information in the image coordinate system, improving the rotation control efficiency of the vehicle-mounted robot.
  • In one possible implementation, the position information of the occupant includes second relative position information of the occupant in the vehicle cabin, and determining the position information of the occupant according to the image coordinate area includes: determining, according to the mapping relationship between the image coordinate system and the space coordinate system in the vehicle cabin, the space coordinate area in the vehicle cabin corresponding to the image coordinate area, and using the space coordinate area as the second relative position information of the occupant in the vehicle cabin.
  • the space coordinate system may be a three-dimensional world coordinate system
  • The determined space coordinate area in the vehicle cabin may represent the area in the space coordinate system corresponding to the image coordinate area where at least one body part of the occupant is located, that is, the spatial coordinate area where at least one body part of the occupant is located.
  • The mapping relationship between the image coordinate system and the space coordinate system can be established in advance; for example, the internal and external parameters of the camera that captures the video stream may be pre-calibrated, and the mapping relationship determined according to those parameters.
  • The space coordinate area in the vehicle cabin corresponding to the image coordinate area can thus be determined according to the pre-established mapping relationship between the image coordinate system and the space coordinate system in the cabin.
  • By determining the space coordinate area in the vehicle cabin corresponding to the image coordinate area and using it as the second relative position information of the occupant in the cabin, the three-dimensional position information of the occupant in the cabin can be obtained accurately, so that the vehicle-mounted robot can subsequently be rotated accurately according to this three-dimensional position information.
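  • As an illustrative sketch of the image-to-cabin mapping described above (not part of the patent disclosure), the following back-projects a pixel into the cabin's space coordinate system using pre-calibrated camera intrinsics and extrinsics; the parameter values and the availability of a depth estimate (e.g., from a ToF or binocular camera) are assumptions.

    # Hypothetical sketch: pixel (u, v) + depth -> 3D point in the cabin frame,
    # using pre-calibrated internal (K) and external (R, t) camera parameters.
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],   # illustrative intrinsics
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R = np.eye(3)                # illustrative rotation, cabin frame -> camera frame
    t = np.zeros(3)              # illustrative translation, cabin frame -> camera frame

    def image_to_cabin(u: float, v: float, depth_m: float) -> np.ndarray:
        """Back-project a pixel at the given optical-axis depth (meters)."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalized camera ray (z = 1)
        p_cam = ray * depth_m                           # 3D point in the camera frame
        return R.T @ (p_cam - t)                        # camera frame -> cabin frame

    print(image_to_cabin(400.0, 300.0, 1.2))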
  • In step S13, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin is generated according to the position information of the occupant.
  • the vehicle-mounted robot may be a physical robot, for example, the vehicle-mounted robot may be installed on a dashboard, a center console, or the like.
  • the rotating part of the vehicle-mounted robot may represent a rotatable part of the vehicle-mounted robot, and the orientation of the vehicle-mounted robot changes with the rotation of the rotating part.
  • Rotation control information for controlling the vehicle-mounted robot to turn toward the interacting occupant can be generated, so that the vehicle-mounted robot turns to that occupant under the control of the rotation control information.
  • In one possible implementation, the rotation control parameters and/or rotation stop conditions of the rotating part of the vehicle-mounted robot may be determined according to the position information of the occupant, and rotation control information including the rotation control parameters and/or the rotation stop conditions may be generated.
  • the rotation control parameters may include, but are not limited to, at least one of a rotation direction, a rotation angular velocity, a rotation time, and the like.
  • the rotation stop condition refers to a condition for the rotation of the rotating component to stop, which may include, but is not limited to, a condition with a rotation angle as a constraint and/or a condition with a target orientation as a constraint.
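  • A minimal illustrative container for such rotation control information (an assumption of this sketch, not a structure defined by the patent) might group the rotation control parameters and the rotation stop conditions as follows.

    # Hypothetical structure holding rotation control parameters and/or
    # rotation stop conditions for a rotating part.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RotationControlInfo:
        direction: str                                   # "clockwise" / "counterclockwise"
        angular_velocity_deg_s: Optional[float] = None   # rotation angular velocity
        duration_s: Optional[float] = None               # rotation time
        stop_angle_deg: Optional[float] = None           # stop condition: rotation angle
        target_orientation_deg: Optional[float] = None   # stop condition: target orientation

    info = RotationControlInfo(direction="clockwise",
                               angular_velocity_deg_s=30.0,
                               duration_s=1.5)
    print(info)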
  • the vehicle-mounted robot may include one or more rotating parts, and the rotation control information may be used to control the rotation of the one or more rotating parts of the vehicle-mounted robot.
  • In one example, the rotation control information of a first rotating part of the vehicle-mounted robot may be generated according to the position information of the occupant, wherein the first rotating part may refer to the rotating part used to drive the body of the vehicle-mounted robot to rotate, and the rotation control information of the first rotating part may be used to control the first rotating part to drive the body of the vehicle-mounted robot to turn toward the occupant.
  • In one example, the rotation control information of the first rotating part of the vehicle-mounted robot may be generated according to the position information of the occupant, and the rotation control information of a second rotating part and/or a third rotating part of the vehicle-mounted robot may also be generated. The second rotating part may refer to the rotating part used to drive the left arm of the vehicle-mounted robot to rotate, and its rotation control information may be used to control the second rotating part to drive the left arm of the vehicle-mounted robot to rotate; the third rotating part may refer to the rotating part used to drive the right arm of the vehicle-mounted robot to rotate, and its rotation control information may be used to control the third rotating part to drive the right arm of the vehicle-mounted robot to rotate. In this way, the vehicle-mounted robot can perform actions such as a welcoming gesture of clapping hands or a goodbye gesture of waving an arm.
  • The vehicle-mounted robot including a second rotating part for controlling the rotation of the left arm and a third rotating part for controlling the rotation of the right arm is only an example.
  • The second rotating part and the third rotating part may also control other body parts of the vehicle-mounted robot, such as the head or legs, and the vehicle-mounted robot may also include three or more rotating parts that respectively control different body parts; no specific limitation is made here.
  • In one possible implementation, generating, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin includes at least one of the following: determining, according to a pre-established mapping relationship between position information and orientations of the vehicle-mounted robot, the target orientation corresponding to the position information of the occupant, and generating rotation control information for controlling the rotating part of the vehicle-mounted robot disposed in the vehicle cabin to rotate so that the vehicle-mounted robot turns to the target orientation; determining, according to the current orientation of the vehicle-mounted robot and the target orientation, the rotation direction and rotation angle of the rotating part, and generating rotation control information for controlling the rotating part of the vehicle-mounted robot disposed in the vehicle cabin to rotate according to the rotation direction and rotation angle; and determining, according to the current orientation of the vehicle-mounted robot, the target orientation and a preset angular velocity, the rotation direction and rotation time of the rotating part, and generating rotation control information for controlling the rotating part of the vehicle-mounted robot disposed in the vehicle cabin to rotate according to the rotation direction and rotation time.
  • the target orientation may represent the orientation of the vehicle-mounted robot corresponding to the position information of the occupant.
  • By turning to the target orientation, the vehicle-mounted robot can be controlled to turn to the occupant; for example, the front of the vehicle-mounted robot can be controlled to face the direction of the occupant.
  • In one example, the target orientation corresponding to the position information of the occupant may be determined according to the pre-established mapping relationship between position information and orientations of the vehicle-mounted robot, and rotation control information including the target orientation may be generated; that is, the rotation control information may be used to control the vehicle-mounted robot disposed in the vehicle cabin to turn to the target orientation.
  • the rotating part of the vehicle-mounted robot can be rotated under the control of the rotation control information until it rotates to the target orientation, so that the vehicle-mounted robot can be turned to the target orientation.
  • By generating the rotation control information of the rotating part of the vehicle-mounted robot according to the target orientation, the vehicle-mounted robot can be controlled to rotate accurately to the target orientation according to the rotation stop condition.
  • In one example, the target orientation corresponding to the position information of the occupant may be determined according to the pre-established mapping relationship between position information and orientations of the vehicle-mounted robot; the rotation direction and rotation angle of the rotating part may then be determined according to the current orientation of the vehicle-mounted robot and the target orientation, and rotation control information generated for controlling the rotating part of the vehicle-mounted robot disposed in the vehicle cabin to rotate according to the rotation direction and rotation angle.
  • the rotation angle may represent the angle at which the vehicle-mounted robot needs to rotate from the current orientation to the target orientation, that is, the angle between the current orientation of the vehicle-mounted robot and the target orientation.
  • the angle of rotation may be 20°, 30°, 45°, or the like.
  • In this example, rotation control information including the rotation direction and the rotation angle may be generated; that is, the rotation control information may be used to control the rotating part of the vehicle-mounted robot to rotate according to the rotation direction and the rotation angle.
  • The rotating part of the vehicle-mounted robot may rotate according to the rotation direction and the rotation angle, so that the vehicle-mounted robot turns to the target orientation.
  • By generating the rotation control information of the rotating part according to the rotation direction and rotation angle, the vehicle-mounted robot can be controlled to rotate accurately to the target orientation through the rotation control parameters and rotation stop condition.
  • In one example, the target orientation corresponding to the position information of the occupant may be determined according to the pre-established mapping relationship between position information and orientations of the vehicle-mounted robot; the rotation direction and rotation time of the rotating part may then be determined according to the current orientation of the vehicle-mounted robot, the target orientation and the preset angular velocity of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin, and rotation control information generated for controlling the rotating part to rotate according to the rotation direction and rotation time.
  • The preset angular velocity is the rotational angular velocity of the vehicle-mounted robot, which may be a system default or set by the user; this is not limited herein.
  • In this example, rotation control information including the rotation direction and the rotation time may be generated; that is, the rotation control information may be used to control the rotating part of the vehicle-mounted robot to rotate according to the rotation direction and the rotation time.
  • The rotating part of the vehicle-mounted robot may rotate according to the preset angular velocity, the rotation direction and the rotation time, so that the vehicle-mounted robot turns to the target orientation.
  • In this way, the vehicle-mounted robot can be controlled accurately to rotate to the target orientation through the rotation control parameters.
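  • The variants above reduce to simple geometry. The sketch below (illustrative only; the angle conventions and preset angular velocity are assumptions) computes the rotation direction, rotation angle, and rotation time from the current orientation, the target orientation, and a preset angular velocity.

    # Hypothetical sketch: derive (direction, angle, time) for the rotating part.
    def plan_rotation(current_deg: float, target_deg: float,
                      preset_angular_velocity_deg_s: float = 30.0):
        # Wrap the difference into (-180, 180] so the robot takes the short way round.
        delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
        direction = "counterclockwise" if delta > 0 else "clockwise"
        angle = abs(delta)                                    # rotation angle
        duration = angle / preset_angular_velocity_deg_s      # rotation time
        return direction, angle, duration

    print(plan_rotation(0.0, 45.0))    # ('counterclockwise', 45.0, 1.5)
    print(plan_rotation(30.0, -60.0))  # ('clockwise', 90.0, 3.0)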
  • In one possible implementation, the rotation control information of the rotating part of the vehicle-mounted robot corresponding to the position information of the occupant may also be determined according to the position information of the occupant and a pre-established mapping relationship between position information and rotation control information.
  • the rotation control information may include a target orientation corresponding to the position information of the occupant, that is, the rotation control information may be used to control the vehicle-mounted robot to turn to the target orientation.
  • In step S14, rotation control is performed on the vehicle-mounted robot according to the rotation control information.
  • The rotation control information can be sent to the rotating part of the vehicle-mounted robot to control the rotation of the rotating part, and the whole and/or part of the vehicle-mounted robot can thereby be controlled to rotate, realizing rotation control of the vehicle-mounted robot according to the rotation control information.
  • In this way, anthropomorphism is realized in the human-computer interaction of the vehicle, so that the interaction better conforms to human interaction habits and is more natural, letting the occupants feel the warmth of human-computer interaction and enhancing the fun, comfort and sense of companionship of the ride. By enhancing ride pleasure and the sense of accompaniment, it also helps keep the driver's attention and reduce driving safety risks.
  • In the embodiments of the present disclosure, the video stream of the vehicle cabin is acquired, the position information of the occupant of the vehicle cabin is determined based on the video stream, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin is generated according to the position information of the occupant, and rotation control is performed on the vehicle-mounted robot according to the rotation control information. The vehicle-mounted robot can thus be controlled, based on the video stream of the vehicle cabin, to turn toward the occupant and interact with the occupant while facing the occupant, which makes the interaction between the vehicle-mounted robot and the occupant better conform to interpersonal interaction habits, makes the interaction more natural, and improves the pertinence and fluency of the interaction.
  • In one possible implementation, the vehicle-mounted robot includes a body and the rotating part; performing rotation control on the vehicle-mounted robot according to the rotation control information includes: driving, according to the rotation control information, the rotating part of the vehicle-mounted robot to drive the body of the vehicle-mounted robot to rotate.
  • the body may include a torso and a head.
  • the body may include a torso and a head, and may also include at least one of a left arm, a right arm, a left leg, and a right leg.
  • the body may include a torso, head, and arms.
  • In this implementation, the rotating part of the vehicle-mounted robot is driven according to the rotation control information to drive the body of the vehicle-mounted robot to rotate, thereby enabling the vehicle-mounted robot to interact with the occupant while its body faces the occupant.
  • the method further includes: generating display control information for controlling the display component of the vehicle-mounted robot to display content to the occupant.
  • the display component of the vehicle-mounted robot may represent a component with a display function in the vehicle-mounted robot.
  • the display part of the vehicle-mounted robot may include a display screen of the vehicle-mounted robot.
  • the display component can be used to display expressions (such as smiley faces), text, animation, and the like.
  • By displaying content with different expressions, the interaction process can be made more emotional and interesting.
  • In one possible implementation, generating, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin includes: in response to detecting, according to the video stream, the occupant's intention to get on or off the vehicle, generating, based on the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin.
  • the occupant's intention to get into the vehicle may be detected based on the video stream outside the vehicle cabin and/or the video stream inside the vehicle cabin.
  • For example, based on the video stream outside the vehicle cabin and/or the video stream inside the vehicle cabin, it can be detected whether the occupant enters the vehicle cabin from outside; if so, it can be determined that the occupant's intention to get on the vehicle is detected.
  • the occupant's intention to get off the vehicle may be detected based on the video stream outside the vehicle cabin and/or the video stream inside the vehicle cabin.
  • For example, based on the video stream outside the vehicle cabin and/or the video stream inside the vehicle cabin, it can be detected whether the occupant moves from the inside of the vehicle cabin to the outside; if so, it can be determined that the occupant's intention to get off the vehicle is detected.
  • Alternatively, based on the video stream outside the vehicle cabin and/or the video stream inside the vehicle cabin, it can be detected whether an occupant opens a door from inside the vehicle; if so, it can be determined that the occupant's intention to get off the vehicle is detected, as in the toy sketch below.
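  • As a toy illustration of the intent detection described above (not the patented detection method), boarding intent can be modeled as an outside-to-inside transition across per-frame observations of one person; the observation labels are assumptions of this sketch.

    # Hypothetical sketch: boarding intent as an outside -> inside transition.
    from typing import Iterable

    def detect_boarding_intent(observations: Iterable[str]) -> bool:
        """observations: per-frame labels "outside" / "inside" for one person."""
        was_outside = False
        for where in observations:
            if where == "outside":
                was_outside = True
            elif where == "inside" and was_outside:
                return True  # the person moved from outside the cabin to inside
        return False

    print(detect_boarding_intent(["outside", "outside", "inside"]))  # True
    print(detect_boarding_intent(["inside", "inside"]))              # False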
  • In one possible implementation, the identity information corresponding to the face recognition result outside the vehicle cabin may be obtained, wherein the face recognition result outside the vehicle cabin is obtained by performing face recognition based on the video stream outside the vehicle cabin; face recognition may then be performed on the video stream in the vehicle cabin to determine the position of the face area corresponding to the identity information; and the position information of the occupant may be determined according to the position of the face area.
  • In one example, face recognition may be performed based on the video stream outside the vehicle cabin to obtain the face recognition result outside the vehicle cabin, wherein the video stream outside the vehicle cabin may be collected by the second camera.
  • For example, if the identity information corresponding to the face recognition result outside the vehicle cabin is that of occupant B, face recognition may be performed on the video stream in the vehicle cabin to determine the position of occupant B's face area, so that occupant B's position information can be determined according to the position of the face area and the pre-established mapping relationship between face area positions and occupant position information.
  • the second camera used for swiping the face to open the door may be linked with the first camera used for occupant monitoring to obtain the position information of the occupant.
  • the boarding position of the passenger may also be determined according to the position of the second camera corresponding to the face recognition result outside the vehicle cabin.
  • the second camera corresponding to the face recognition result outside the vehicle cabin may represent a second camera that collects the video stream outside the vehicle cabin corresponding to the face recognition result outside the vehicle cabin. For example, if the second camera corresponding to the face recognition result outside the cabin is installed outside the left front door, it can be determined that the boarding position of the passenger is the driver's seat position.
  • In this implementation, in response to detecting the occupant's intention to get on or off the vehicle, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin is generated according to the position information of the occupant, so that the vehicle-mounted robot is controlled to interact with occupants getting on and off the vehicle in a targeted manner, and personalized services in more scenarios can be realized through the vehicle-mounted robot.
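  • For the camera-position variant above, the boarding position can be read from which exterior (second) camera produced the successful face recognition result; the camera identifiers and the mapping below are illustrative assumptions, not values from the patent.

    # Hypothetical sketch: exterior camera id -> boarding position.
    CAMERA_TO_BOARDING_POSITION = {
        "outside_front_left_door": "main_driver_seat",
        "outside_front_right_door": "front_passenger_seat",
        "outside_rear_left_door": "rear_left_seat",
        "outside_rear_right_door": "rear_right_seat",
    }

    def boarding_position(camera_id: str) -> str:
        return CAMERA_TO_BOARDING_POSITION.get(camera_id, "unknown")

    print(boarding_position("outside_front_left_door"))  # main_driver_seat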
  • In one possible implementation, the method further includes: performing face recognition on the occupant according to the video stream of the vehicle cabin; determining the attribute information of the occupant according to the face recognition result corresponding to the occupant; and generating, according to the attribute information of the occupant, interaction control information for controlling the vehicle-mounted robot to interact with the occupant.
  • face recognition may be performed on the occupant according to a video stream outside the vehicle cabin to obtain a face recognition result corresponding to the occupant.
  • For example, the video stream outside the vehicle cabin can be acquired in the face-swipe door-opening scenario.
  • face recognition may be performed based on at least one frame of image in the video stream outside the vehicle cabin to obtain a face recognition result corresponding to the occupant.
  • For example, the facial features in at least one frame of image of the video stream outside the vehicle cabin can be extracted and compared with pre-registered facial features to determine whether they belong to the same person, so as to obtain the face recognition result corresponding to the occupant.
  • The pre-registered facial features may include, but are not limited to, at least one of the following: the facial features of the owner of the vehicle, the facial features of frequent users of the vehicle (for example, family members of the owner), the facial features of a borrower of the vehicle (for example, the borrower of a shared car), and the facial features of passengers of the vehicle (for example, passengers of an online ride-hailing service).
  • face recognition may be performed on the occupant according to the video stream in the cabin to obtain a face recognition result corresponding to the occupant.
  • face recognition may be performed based on at least one frame of image in the video stream in the vehicle cabin to obtain a face recognition result corresponding to the occupant.
  • For example, the facial features in at least one frame of image of the video stream in the vehicle cabin may be extracted and compared with pre-registered facial features to determine whether they belong to the same person, so as to obtain the face recognition result corresponding to the occupant.
  • attribute information corresponding to the face recognition result may be acquired according to the identity information in the face recognition result.
  • pre-stored attribute information such as gender information and age information of the occupant may be acquired from the memory or server according to the identity information in the face recognition result.
  • In this implementation, the identity information of the occupant can be determined according to the face recognition result corresponding to the occupant, and the attribute information of the occupant can be determined according to the identity information, so that the interaction mode information corresponding to the occupant can be obtained in combination with the occupant's identity information. Interaction mode information better suited to the occupant can thus be obtained based on richer occupant information, better satisfying the occupant's individual needs.
  • For example, the title of the occupant may be determined according to the identity information in the attribute information, and the interaction control information may include voice information such as "XX, hello, I am your smart assistant" or "XX, hello, welcome aboard".
  • In this way, in response to detecting the occupant's intention to get on the vehicle, the vehicle-mounted robot is controlled, according to the occupant's position information and attribute information, to perform boarding interaction with the occupant, so that a personalized welcome service can be realized through the vehicle-mounted robot.
  • Similarly, in response to detecting the occupant's intention to get off the vehicle, the vehicle-mounted robot can realize a personalized farewell service.
  • In one possible implementation, the boarding position information of the occupant may be determined through a door sensor. For example, if it is detected through the door sensor of the front left door that an occupant gets on the vehicle (that is, the door sensor detects that the front left door is pulled open from outside the vehicle), it can be determined that the boarding position information of the occupant is the main driver's seat; if it is detected through the door sensor of the front right door that an occupant gets on the vehicle, it can be determined that the boarding position information of the occupant is the front passenger seat; and if it is detected through the door sensor of a rear door that an occupant gets on the vehicle, it can be determined that the boarding position information of the occupant is a rear seat.
  • the boarding position information of the occupant may be determined through a seat sensor. For example, if it is detected by the seat sensor of the main driver's seat that an occupant is seated, it can be determined that the boarding position information of the occupant is the main driver's seat.
  • In one possible implementation, the method further includes: performing attribute identification on the occupant based on the video stream to obtain the attribute information of the occupant; and generating interaction control information for controlling the vehicle-mounted robot to interact according to the position information and attribute information of the occupant.
  • attribute identification of the occupant may be performed based on at least one frame of image in the video stream to obtain attribute information of the occupant.
  • In one example, the image coordinate area corresponding to the position information in the image coordinate system corresponding to the video stream may be determined according to the mapping relationship between occupant position information and image coordinates, and attribute identification may be performed on the image portion included in that image coordinate area to obtain the attribute information of the occupant.
  • In one possible implementation, the position information includes seat information, and performing attribute identification on the occupant based on the video stream to obtain the attribute information of the occupant includes: determining, according to a pre-established mapping relationship between seats and image coordinates, the image coordinate area corresponding to the seat information in the image coordinate system corresponding to the video stream; and performing attribute identification on the image portion included in that image coordinate area in the video stream to obtain the attribute information of the occupant.
  • For example, the mapping relationship between each seat and image coordinates may be established in advance: the main driver's seat corresponds to image coordinate area D1, the front passenger seat corresponds to image coordinate area D2, the rear left seat corresponds to image coordinate area D3, the rear middle seat corresponds to image coordinate area D4, and the rear right seat corresponds to image coordinate area D5.
  • Any image coordinate area can be represented by the coordinates of its four vertices; alternatively, it can be represented by the coordinates of one vertex together with the length and width of the area. For example, the image coordinate area D1 can be represented by the coordinates of its upper-left vertex and its length and width.
  • For example, if the seat information of occupant A is the front passenger seat, it can be determined that the image coordinate area corresponding to the seat information of occupant A is the image coordinate area D2, and attribute identification can be performed on the image portion included in the image coordinate area D2 in at least one frame of image of the video stream to obtain the attribute information of occupant A.
  • In this implementation, the image coordinate area corresponding to the seat information in the image coordinate system corresponding to the video stream in the vehicle cabin is determined according to the pre-established mapping relationship between seats and image coordinates, and attribute identification is performed on the image portion included in that image coordinate area to obtain the attribute information of the occupant. This reduces the interference of image portions that do not belong to the occupant (such as background portions or portions showing other occupants) with the attribute identification, so that the accuracy of the occupant's attribute identification can be improved.
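  • The seat-to-region cropping described above might look as follows in an illustrative sketch (the seat areas and the attribute recognizer are placeholders, not the patented models).

    # Hypothetical sketch: crop the image area mapped to a seat, then run an
    # attribute recognizer on just that portion of the frame.
    import numpy as np

    SEAT_TO_IMAGE_AREA = {  # pre-established seat -> (x0, y0, x1, y1) mapping
        "front_passenger_seat": (320, 0, 640, 360),  # illustrative area "D2"
    }

    def recognize_attributes(patch: np.ndarray) -> dict:
        """Placeholder for an age/gender/emotion recognition model."""
        return {"age": "adult", "gender": "unknown", "emotion": "neutral"}

    def occupant_attributes(frame: np.ndarray, seat: str) -> dict:
        x0, y0, x1, y1 = SEAT_TO_IMAGE_AREA[seat]
        patch = frame[y0:y1, x0:x1]   # exclude background and other occupants
        return recognize_attributes(patch)

    frame = np.zeros((720, 640, 3), dtype=np.uint8)  # dummy cabin frame
    print(occupant_attributes(frame, "front_passenger_seat"))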
  • In the embodiments of the present disclosure, the attribute information of the occupant is obtained by performing attribute identification on the occupant based on the video stream, and interaction control information is generated for controlling the vehicle-mounted robot to interact according to the position information and attribute information of the occupant. In this way, the vehicle-mounted robot can not only interact with the occupant while facing the occupant, but also interact based on the occupant's attribute information, better satisfying the occupant's individual needs.
  • In one possible implementation, generating the interaction control information for controlling the vehicle-mounted robot to interact according to the position information and attribute information of the occupant includes: determining, according to the attribute information of the occupant, the interaction mode information corresponding to the occupant; and generating interaction control information for controlling the vehicle-mounted robot to interact according to the occupant's position information and the interaction mode information.
  • the interaction manner information may include at least one of intonation information, voice template, expression information, action information, and the like.
  • For example, the interaction mode corresponding to children may be livelier, for example with higher intonation and richer expressions and actions; the voice template corresponding to the elderly may contain more honorifics; and the voice template corresponding to certain occupants may be designed to have a motivating effect.
  • the corresponding relationship between the attribute information and the interaction mode information can be established in advance, so that the corresponding interaction mode information of the occupant can be determined according to the corresponding relationship between the attribute information and the interaction mode information and the attribute information of the occupant.
  • In this implementation, the interaction mode information corresponding to the occupant is determined according to the attribute information of the occupant, and interaction control information is generated for controlling the vehicle-mounted robot to interact according to the position information of the occupant and the interaction mode information. Human-computer interaction can therefore be carried out in different ways for different occupants, satisfying the occupants' individual needs, improving ride enjoyment, and letting the occupants feel the warmth of human-computer interaction.
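  • A pre-established attribute-to-interaction-mode correspondence of the kind described above could be sketched as a simple lookup; the age thresholds, voice templates and mode fields below are illustrative assumptions, not values from the patent.

    # Hypothetical sketch: occupant attribute information -> interaction mode.
    INTERACTION_MODES = {
        "child":   {"intonation": "high",   "voice_template": "Hi {name}, off we go!"},
        "elderly": {"intonation": "gentle", "voice_template": "Hello {name}, welcome aboard."},
        "default": {"intonation": "neutral",
                    "voice_template": "{name}, hello, I am your smart assistant."},
    }

    def interaction_mode(attributes: dict) -> dict:
        age = attributes.get("age")
        if isinstance(age, (int, float)) and age < 12:
            return INTERACTION_MODES["child"]
        if isinstance(age, (int, float)) and age >= 65:
            return INTERACTION_MODES["elderly"]
        return INTERACTION_MODES["default"]

    print(interaction_mode({"age": 8})["voice_template"].format(name="Xiao Ming"))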
  • In one example, the interaction mode information corresponding to the occupant may be configured according to an interaction mode configuration request. In this way, occupants can customize the way the vehicle-mounted robot interacts according to their personal preferences.
  • In one example, the interaction mode information corresponding to the occupant may be regenerated according to an interaction mode reset request.
  • In this way, occupants can re-customize the way the vehicle-mounted robot interacts as their personal preferences change.
  • In one possible implementation, the appellation of the occupant may be determined according to the attribute information of the occupant, and interaction control information may be generated for controlling the vehicle-mounted robot to interact according to the position information and the appellation of the occupant.
  • The appellation of the occupant may be a general title such as "Ms.", "Mr." or "Children", or a title combined with identity such as "Ms. Zhang" or "Mr. Li".
  • For example, the appellation of the occupant may be determined according to the identity information, age information and gender information in the attribute information of the occupant.
  • In one possible implementation, the attribute identification includes at least one of age identification, gender identification, skin color identification, emotion identification and identity identification, and the attribute information includes at least one of age information, gender information, skin color information, emotion information and identity information.
  • In this way, the vehicle-mounted robot can interact with the occupant based on at least one of the occupant's age information, gender information, skin color information, emotion information and identity information, which satisfies the occupant's individual needs, makes occupants feel the warmth of human-computer interaction, and improves the pertinence and fluency of the interaction.
  • In one possible implementation, the method further includes: acquiring voice information. Determining, based on the video stream, the position information of the occupant of the vehicle cabin includes: detecting, based on the video stream, the position information of the occupant who uttered the voice information among the occupants of the vehicle cabin. Generating, according to the position information of the occupant, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin includes: generating, according to the position information of the occupant who uttered the voice information, the rotation control information of the rotating part of the vehicle-mounted robot disposed in the vehicle cabin.
  • speech recognition may be performed by the control device of the vehicle-mounted robot to determine whether speech information is detected.
  • In other examples, speech recognition may be performed by a vehicle head unit or another speech recognition device disposed in the vehicle cabin to determine whether voice information is detected.
  • the voice information may include voice interaction instructions, and may also include other voice information, which is not limited herein.
  • the voice information can be used to wake up the in-vehicle robot, start the in-vehicle robot, control the in-vehicle robot to sleep, turn off the in-vehicle robot, answer the phone, open and close the windows, adjust the air conditioner, play audio and video, and navigate.
  • In one example, the position information of the occupant who uttered the voice information among the occupants of the vehicle cabin may be determined based on the audio data in the video stream.
  • For example, in response to acquiring the voice information, an audio segment corresponding to the voice information may be obtained from the audio data of the video stream, and sound source localization may be performed on the audio segment to obtain the position information of the occupant who uttered the voice information.
  • The audio segment corresponding to the voice information in the audio data may represent the audio segment to which the voice information belongs, that is, the audio segment containing the voice content of the voice information.
  • By performing sound source localization on the audio segment corresponding to the voice information, the position information of the occupant who uttered the voice information can be determined accurately.
  • In another example, in response to acquiring the voice information, an audio segment corresponding to the voice information may be obtained from the audio data of the video stream; voiceprint recognition may be performed on the audio segment to determine the identity information of the occupant who uttered the voice information; and face recognition may be performed on at least one frame of image in the video stream to determine the position information of the occupant corresponding to that identity information.
  • mouth shape detection may be performed based on the video stream to obtain the position information of the occupant who sends out the voice information among the occupants of the vehicle cabin.
  • In this implementation, the position information of the occupant who uttered the voice information among the occupants in the cabin is detected based on the video stream, and the rotation control information of the rotating part of the vehicle-mounted robot disposed in the cabin is generated according to that position information, so that the vehicle-mounted robot can be controlled to interact while facing the occupant who uttered the voice information. This improves the pertinence and fluency of the voice interaction between the vehicle-mounted robot and the occupant, and helps improve the efficiency of the voice interaction.
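  • For the sound source localization variant, a direction-of-arrival estimate from the microphone array can be mapped to the seat of the speaking occupant; the azimuth sectors below are illustrative assumptions, not calibrated cabin values.

    # Hypothetical sketch: direction-of-arrival angle -> seat of the speaker.
    SEAT_SECTORS = {  # illustrative azimuth sectors (degrees) around the array
        "main_driver_seat": (-90.0, -30.0),
        "rear_seats": (-30.0, 30.0),
        "front_passenger_seat": (30.0, 90.0),
    }

    def speaker_seat(doa_deg: float) -> str:
        for seat, (lo, hi) in SEAT_SECTORS.items():
            if lo <= doa_deg < hi:
                return seat
        return "unknown"

    print(speaker_seat(-60.0))  # main_driver_seat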
  • In one possible implementation, the method further includes: acquiring voice window control information; performing sound source localization on the voice window control information and/or performing detection based on the video stream to determine the position information of the occupant who uttered the voice window control information; determining the target window in the vehicle cabin corresponding to the position information of the occupant who uttered the voice window control information; and generating control information for controlling the target window.
• sound source localization may be performed on the voice window control information through a sound array (e.g., a microphone array) to determine the position information of the occupant who issued the voice window control information.
• alternatively, a video clip matching the acquisition time of the voice window control information may be determined from the video stream, and mouth shape detection may be performed on the video clip to determine the position information of the occupant who issued the voice window control information.
• for example, if the position information of the occupant who issued the voice window control information is the front passenger seat, the target window may be the front right window; if it is the rear left seat, the target window may be the rear left window.
• in this way, the position information of the occupant who issued the voice window control information is determined, the target window in the cabin corresponding to that position information is determined, and control information for controlling the target window is generated, so that the position of the occupant who issued the voice window control information can be used for precise window control.
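• As an illustration of the position-to-window mapping just described, a sketch is given below; the seat and window names and the window_command() helper are assumptions (the table also presumes a left-hand-drive layout, and a rear middle occupant would need a fallback choice):

    SEAT_TO_WINDOW = {
        "driver": "front_left", "front_passenger": "front_right",
        "rear_left": "rear_left", "rear_right": "rear_right",
    }

    def window_command(speaker_seat: str, action: str) -> dict:
        """Build control information for the window corresponding to the
        position of the occupant who issued the voice command."""
        target = SEAT_TO_WINDOW.get(speaker_seat, "front_left")  # assumed fallback
        return {"target_window": target, "action": action}  # e.g. "open" / "close"

    print(window_command("rear_left", "open"))  # targets the rear left window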
  • face recognition may be performed based on a video stream outside the vehicle cabin to obtain a face recognition result outside the vehicle cabin.
• the state information of the vehicle door may be acquired in response to the face recognition result outside the vehicle cabin being that the face recognition is successful. If the door state is not unlocked, the door is controlled to unlock, or to unlock and open; if the door state is unlocked but not opened, the door is controlled to open. The door can thus be opened for the user automatically based on face recognition, without the user pulling the door open manually, which improves the convenience of using the car.
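• The door-state handling above amounts to a small decision rule; a minimal sketch follows, in which the state names and the on_face_recognized() helper are assumptions made for illustration:

    def on_face_recognized(door_state: str, open_after_unlock: bool = True) -> list:
        """Door actions implied by the passage above: a locked door is
        unlocked (and optionally opened); an unlocked, closed door is opened."""
        if door_state == "locked":
            return ["unlock", "open"] if open_after_unlock else ["unlock"]
        if door_state == "unlocked_closed":
            return ["open"]
        return []  # already open: nothing to do

    print(on_face_recognized("locked"))           # ['unlock', 'open']
    print(on_face_recognized("unlocked_closed"))  # ['open']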
• as an example of this implementation, in response to the face recognition result outside the vehicle cabin being that the face recognition is successful, the vehicle door is controlled to unlock and/or open and, at the same time, the in-vehicle robot is started or woken up.
• that is, the timing of starting or waking up the in-vehicle robot is the moment the face recognition outside the vehicle cabin succeeds.
• the process of “controlling the door to unlock and/or open” and the process of “starting or waking up the vehicle-mounted robot” can be triggered and executed in parallel in response to the successful face recognition outside the cabin, rather than sequentially, so that the vehicle-mounted robot can be started as soon as possible.
• starting or waking up the vehicle-mounted robot usually takes a certain amount of time. By starting or waking the robot while the door is controlled to unlock and/or open upon successful face recognition outside the cabin, the robot can be brought up immediately after the recognition succeeds, and the period between the successful face recognition outside the cabin and the occupant entering the vehicle can be used to make the robot ready to interact with the occupant.
• after the occupant enters the vehicle, the vehicle-mounted robot can therefore provide services for the occupant more quickly, improving the pertinence and fluency of the interaction.
• before the face recognition succeeds, the vehicle-mounted robot may be in an off state or a dormant state, which saves the power consumption required for realizing the vehicle's human-machine interaction through the vehicle-mounted robot.
• as another example of this implementation, in response to the face recognition result outside the vehicle cabin being that the face recognition is successful, the first camera is started or woken up while the door is controlled to unlock and/or open.
• that is, the timing of starting or waking up the first camera is the moment the face recognition outside the vehicle cabin succeeds: the process of “controlling the door to unlock and/or open” and the process of “starting or waking up the first camera” are triggered in parallel, rather than sequentially.
  • “parallel triggering” is not limited to the strict alignment of trigger timestamps.
• “controlling the door to unlock and/or open” and “starting or waking up the first camera” may be executed in parallel in response to the successful face recognition outside the cabin, so that the first camera can be started as soon as possible.
• in this way, the first camera installed in the vehicle cabin can be started or woken up immediately after the face recognition outside the cabin succeeds; the period between the successful face recognition outside the cabin and the occupant entering the vehicle is used to bring the first camera up, so that the first camera can collect the in-cabin video stream in time and interaction with the occupant can begin promptly after the occupant enters the cabin.
• before the face recognition succeeds, the first camera may be in an off state or a sleep state, thereby saving power consumption required for the vehicle's human-computer interaction.
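• The two parallel-trigger examples above (door plus robot, door plus first camera) can be sketched with ordinary concurrency primitives. The sketch below assumes threads and placeholder actions; “parallel” here means concurrently started tasks, not strictly aligned timestamps:

    import threading

    def unlock_and_open_door(): print("door: unlock/open")   # placeholder actions
    def wake_robot():           print("robot: starting up")
    def wake_first_camera():    print("camera: starting up")

    def on_face_recognition_success():
        """Trigger the processes in parallel rather than sequentially, so the
        robot and in-cabin camera come up while the occupant is still boarding."""
        tasks = [threading.Thread(target=f)
                 for f in (unlock_and_open_door, wake_robot, wake_first_camera)]
        for t in tasks:
            t.start()
        for t in tasks:
            t.join()

    on_face_recognition_success()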
• as an example of detecting the occupant's boarding intent, video analysis may be performed on the in-cabin video stream collected by the first camera to determine whether an occupant gets into the vehicle. If, within a preset period after the face recognition outside the vehicle cabin succeeds, the in-cabin video stream shows that an occupant has boarded, it can be determined that the occupant corresponding to the face recognition result has boarded the vehicle. Boarding may likewise be detected within that preset period through a door sensor (the door being pulled open from outside) or a seat sensor (an occupant being seated).
• as another example, after the face recognition outside the cabin succeeds, face recognition may be performed on the in-cabin video stream collected by the first camera; if the occupant corresponding to the face recognition result is recognized, it can be determined that the occupant corresponding to the face recognition result has boarded the vehicle.
• as an example of detecting the occupant's intention to get off the vehicle, video analysis may be performed on the in-cabin video stream collected by the first camera to determine whether any occupant intends to get off; for example, when the in-cabin video stream shows an occupant getting up, the intention to get off can be determined. As another example of this implementation, the intention to get off may be detected through a door sensor: if the door sensor of the front right door detects that the door is opened from inside the vehicle, the front-passenger occupant's intention to get off can be determined. As another example of this implementation, the intention to get off may be detected through a seat sensor: if the seat sensor of the rear middle seat detects that the occupant gets up, the intention of the occupant in the rear middle seat to get off can be determined.
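• A minimal sketch of fusing the alighting cues listed above (video analysis, door sensor, seat sensor) is given below; the signal names and the any-cue-counts rule are assumptions for illustration:

    def alighting_intent(stood_up_in_video: bool,
                         door_opened_from_inside: bool,
                         seat_became_empty: bool) -> bool:
        """Treat any one of the cues named in the passage above as
        indicating the occupant's intention to get off the vehicle."""
        return stood_up_in_video or door_opened_from_inside or seat_became_empty

    print(alighting_intent(False, True, False))  # True: door opened from inside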
• the present disclosure also provides a vehicle, a control apparatus for a vehicle-mounted robot, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the control methods for a vehicle-mounted robot provided by the present disclosure; for the corresponding technical solutions and technical effects, refer to the corresponding descriptions in the method section, which are not repeated here.
  • FIG. 2 shows a schematic diagram of a vehicle provided by an embodiment of the present disclosure.
• the vehicle includes: a camera 210, which is disposed in the vehicle cabin and is used to collect the video stream of the cabin; a controller 220, which is connected to the camera 210 and is used to obtain the video stream of the cabin from the camera 210, determine the position information of an occupant of the cabin based on the video stream, generate rotation control information for the rotating component of the vehicle-mounted robot 230 disposed in the cabin according to the occupant's position information, and perform rotation control on the vehicle-mounted robot 230 according to that information; and the vehicle-mounted robot 230, which is connected to the controller 220, disposed in the cabin, and used to rotate according to the rotation control information.
  • the controller 220 may be installed in an invisible area in the vehicle cabin.
  • the controller 220 may be configured to control the camera 210 to capture the video stream of the vehicle cabin.
  • the vehicle-mounted robot 230 may adopt an intelligent robot system (Intelligent Robot System, IRS).
• in this embodiment, the camera collects the video stream of the vehicle cabin; the controller determines the position information of the occupant of the cabin based on the video stream, generates the rotation control information for the rotating component of the vehicle-mounted robot disposed in the cabin according to that position information, and performs rotation control on the robot accordingly; and the vehicle-mounted robot rotates according to the rotation control information.
• the vehicle-mounted robot can thus be controlled, based on the cabin video stream, to turn toward the occupant and interact with the occupant while facing the occupant, so that the interaction between the vehicle-mounted robot and the occupant is more in line with the habits of interaction between people and more natural, and the pertinence and fluency of the interaction can be improved.
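• One pass of the camera, controller, and robot pipeline just described could look like the sketch below; the Camera/Locator/Robot interfaces and the bearing-based control are stand-ins assumed for illustration, not the disclosed implementation:

    class Camera:    # stand-ins for the real devices; these interfaces are assumptions
        def read(self): return "frames"

    class Locator:
        def occupant_position(self, frames):
            return {"seat": "front_passenger", "bearing_deg": 40.0}

    class Robot:
        current_heading_deg = 0.0
        def rotate(self, control): print("rotate:", control)

    def controller_step(camera, locator, robot):
        """Read the cabin video, estimate the occupant position, derive
        rotation control information, and drive the rotating component."""
        pos = locator.occupant_position(camera.read())
        if pos is not None:
            delta = pos["bearing_deg"] - robot.current_heading_deg
            robot.rotate({"direction": "cw" if delta > 0 else "ccw",
                          "angle_deg": abs(delta)})

    controller_step(Camera(), Locator(), Robot())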
  • FIG. 3 shows another schematic diagram of a vehicle provided by an embodiment of the present disclosure.
  • the camera 210 includes: a first camera 211 , which is arranged in the vehicle cabin and is used to collect video streams in the vehicle cabin; and/or a second camera 212 , which is set outside the vehicle cabin and used to collect video streams outside the vehicle cabin.
  • the first camera 211 may include an OMS camera, a DMS camera, and the like.
  • the number of the first cameras 211 may be one or more.
  • the first camera 211 can be set at any position in the cabin.
  • the first camera 211 may be installed in at least one of the following positions: a dashboard, a dome light, an interior rearview mirror, a center console, and a front windshield.
  • the number of the second cameras 212 may be one or more.
  • the second camera 212 may be installed on at least one of the following positions: at least one B-pillar, at least one door, at least one exterior rearview mirror, and a cross member.
  • the second camera 212 may be mounted on the B-pillar on the main driver's seat side of the vehicle.
• for example, if the main driver's seat is on the left side, the second camera 212 may be installed on the B-pillar on the left side of the vehicle.
  • the second camera 212 may be installed on the two B-pillars and the trunk door.
  • the second camera 212 may adopt a ToF camera, a binocular camera, or the like.
  • FIG. 4 shows a block diagram of a control apparatus of a vehicle-mounted robot provided by an embodiment of the present disclosure.
• the control device of the vehicle-mounted robot includes: a first acquisition module 41, configured to acquire the video stream of the vehicle cabin; a first determination module 42, configured to determine the position information of an occupant of the cabin based on the video stream; a first generation module 43, configured to generate, according to the occupant's position information, rotation control information for the rotating component of the vehicle-mounted robot disposed in the cabin; and a rotation control module 44, configured to perform rotation control on the vehicle-mounted robot according to the rotation control information.
• the vehicle-mounted robot includes a body and the rotating component; the rotation control module 44 is configured to: according to the rotation control information, drive the rotating component of the vehicle-mounted robot so that it carries the body of the vehicle-mounted robot in rotation.
• the first generation module 43 is configured to: determine the target orientation corresponding to the occupant's position information according to a pre-established mapping relationship between position information and orientations of the vehicle-mounted robot; and perform at least one of the following: generating rotation control information that controls the rotating component of the vehicle-mounted robot disposed in the cabin to rotate so that the robot turns to the target orientation; determining the rotation direction and rotation angle of the rotating component according to the robot's current orientation and the target orientation, and generating rotation control information that controls the rotating component to rotate by that rotation direction and rotation angle; and determining the rotation direction and rotation time of the rotating component according to the robot's current orientation, the target orientation, and a preset angular velocity of the rotating component disposed in the cabin, and generating rotation control information that controls the rotating component to rotate in that rotation direction for that rotation time.
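• The three variants above can be captured in one small function; the sketch below assumes headings measured in degrees, and the rotation_control_info() helper name is not from the disclosure:

    def rotation_control_info(target_heading_deg: float,
                              current_heading_deg: float,
                              preset_angular_velocity_dps=None) -> dict:
        """Variant (a): target orientation only (a stop condition).
        Variant (b): rotation direction plus rotation angle.
        Variant (c): direction plus rotation time from a preset angular velocity."""
        # Signed shortest rotation from current to target, in (-180, 180]
        delta = (target_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0
        info = {"target_heading_deg": target_heading_deg,                 # (a)
                "direction": "clockwise" if delta >= 0 else "counterclockwise",
                "angle_deg": abs(delta)}                                  # (b)
        if preset_angular_velocity_dps:                                   # (c)
            info["duration_s"] = abs(delta) / preset_angular_velocity_dps
        return info

    print(rotation_control_info(40.0, -20.0, preset_angular_velocity_dps=30.0))
    # {'target_heading_deg': 40.0, 'direction': 'clockwise', 'angle_deg': 60.0, 'duration_s': 2.0}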
  • the apparatus further includes: a second generating module configured to generate display control information for controlling the display component of the vehicle-mounted robot to display content to the occupant.
• the first determination module 42 is configured to: determine, in the image coordinate system corresponding to at least one frame of image in the video stream, the image coordinate area where at least one body part of the occupant is located; and determine the position information of the occupant according to the image coordinate area.
• the position information of the occupant includes: first relative position information of the occupant in the image; the first determination module 42 is configured to: use the image coordinate area as the first relative position information of the occupant in the image.
• the position information of the occupant includes: second relative position information of the occupant in the cabin; the first determination module 42 is configured to: determine, according to the mapping relationship between the image coordinate system and the spatial coordinate system of the cabin, the in-cabin spatial coordinate area corresponding to the image coordinate area, and use the in-cabin spatial coordinate area as the second relative position information of the occupant in the cabin.
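• As a minimal sketch of this image-to-cabin mapping, a pre-calibrated planar homography H can stand in for the full intrinsic/extrinsic camera calibration the passage refers to; the numeric values of H below are placeholders, not calibration results:

    # Assumed 3x3 homography from image pixels to a cabin reference plane.
    H = [[0.01, 0.0, -3.2],
         [0.0, 0.012, -2.4],
         [0.0, 0.0, 1.0]]

    def image_to_cabin(u: float, v: float) -> tuple:
        """Map one image coordinate (u, v) to cabin-plane coordinates."""
        x, y, w = (H[r][0] * u + H[r][1] * v + H[r][2] for r in range(3))
        return (x / w, y / w)

    # Map the corners of a detected body part's bounding box into the cabin:
    corners = [(640, 360), (760, 360), (640, 500), (760, 500)]
    print([image_to_cabin(u, v) for (u, v) in corners])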
• the first generation module 43 is configured to: in response to detecting the occupant's intention to get on or off the vehicle according to the video stream, generate, according to the occupant's position information, the rotation control information for the rotating component of the vehicle-mounted robot disposed in the cabin.
• the apparatus further includes: a face recognition module, configured to perform face recognition on the occupant according to the video stream of the vehicle cabin; a second determination module, configured to determine the attribute information of the occupant according to the face recognition result corresponding to the occupant; and a third generation module, configured to generate, according to the attribute information of the occupant, interaction control information for controlling the vehicle-mounted robot to interact with the occupant.
• the apparatus further includes: an attribute recognition module, configured to perform attribute recognition on the occupant based on the video stream to obtain the attribute information of the occupant; and a fourth generation module, configured to generate interaction control information for controlling the vehicle-mounted robot to interact according to the position information and attribute information of the occupant.
• the fourth generation module is configured to: determine the interaction mode information corresponding to the occupant according to the attribute information of the occupant; and generate interaction control information for controlling the vehicle-mounted robot to interact according to the position information of the occupant and in the manner indicated by the interaction mode information.
• the attribute recognition includes at least one of age recognition, gender recognition, skin color recognition, emotion recognition, and identity recognition, and/or the attribute information includes at least one of age information, gender information, skin color information, emotion information, and identity information.
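• As one illustration of selecting interaction mode information from attribute information, the sketch below encodes the examples given elsewhere in this description (a livelier tone for children, more honorifics for elderly occupants, encouraging wording for a low mood); the thresholds and field names are assumptions:

    def interaction_mode(attrs: dict) -> dict:
        """Map recognized attributes to interaction mode information."""
        mode = {"tone": "neutral", "template": "default"}
        if attrs.get("age", 30) < 12:        # assumed child threshold
            mode.update(tone="lively", template="playful")
        elif attrs.get("age", 30) > 65:      # assumed elderly threshold
            mode.update(template="honorific")
        if attrs.get("emotion") == "sad":
            mode.update(template="encouraging")
        return mode

    print(interaction_mode({"age": 8}))                     # lively, playful
    print(interaction_mode({"age": 70, "emotion": "sad"}))  # encouraging overrides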
• the apparatus further includes: a second acquisition module, configured to acquire voice information; the first determination module 42 is configured to: detect, based on the video stream, the position information of the occupant who uttered the voice information among the occupants of the cabin; and the first generation module 43 is configured to: generate, according to the position information of the occupant who uttered the voice information, the rotation control information for the rotating component of the vehicle-mounted robot disposed in the cabin.
• the apparatus further includes: a third acquisition module, configured to acquire voice window control information; a speaking-source detection module, configured to perform sound source localization on the voice window control information and/or speaking-source detection based on the video stream, to determine the position information of the occupant who issued the voice window control information; a third determination module, configured to determine the target window in the cabin corresponding to the position information of the occupant who issued the voice window control information; and a fifth generation module, configured to generate control information for controlling the target window.
• in the embodiments of the present disclosure, the video stream of the vehicle cabin is obtained, the position information of an occupant of the cabin is determined based on the video stream, rotation control information for the rotating component of the vehicle-mounted robot disposed in the cabin is generated according to the occupant's position information, and rotation control is performed on the vehicle-mounted robot according to that information.
• the vehicle-mounted robot can thus be controlled, based on the cabin video stream, to turn toward the occupant and interact with the occupant while facing the occupant, so that the interaction between the vehicle-mounted robot and the occupant is more in line with the habits of interaction between people and more natural, and the pertinence and fluency of the interaction can be improved.
• in some embodiments, the functions or modules of the apparatuses and vehicles provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation and technical effects, refer to the descriptions of those embodiments, which are not repeated here for brevity.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program, including computer-readable codes, when the computer-readable codes are executed in an electronic device, the processor in the electronic device executes the above method.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to execute the operations of the control method for a vehicle-mounted robot provided by any of the foregoing embodiments.
• Embodiments of the present disclosure further provide an electronic device, including: one or more processors; and a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored in the memory to execute the above method.
  • the electronic device may be provided as a terminal, server or other form of device.
• the electronic device may be a controller, a domain controller, a processor, or a head unit connected to the vehicle-mounted robot, or may be a device host used for performing data processing operations on images and other data in an OMS or DMS.
  • FIG. 5 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure.
  • the electronic device 800 may be an in-vehicle device, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc. terminals.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816 .
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
• Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
• for example, the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and the keypad of the electronic device 800; the sensor assembly 814 can also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
• the electronic device 800 can access a wireless network based on a communication standard, such as wireless fidelity (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G)/Long Term Evolution (LTE), fifth-generation mobile communication technology (5G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
• in an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
• in an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
• a non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card with instructions stored thereon or a raised structure in a groove, and any suitable combination of the above.
• computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electric wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
• a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in that computing/processing device.
• Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk and C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
• the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
• the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
• in some embodiments, custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing the state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
• these computer-readable program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
• these computer-readable program instructions may also be stored in a computer-readable storage medium; the instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
• the computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, so that the instructions executed on the computer, other programmable apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
• each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
• it should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
• in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

A control method and apparatus for a vehicle-mounted robot, a vehicle, an electronic device, and a medium. The method includes: acquiring a video stream of a vehicle cabin (S11); determining position information of an occupant of the cabin based on the video stream (S12); generating, according to the occupant's position information, rotation control information for a rotating component of a vehicle-mounted robot disposed in the cabin (S13); and performing rotation control on the vehicle-mounted robot according to the rotation control information (S14).

Description

车载机器人的控制方法及装置、车辆、电子设备和介质
本申请要求在2020年9月3日提交中国专利局、申请号为202010916165.9、申请名称为“车载机器人的控制方法及装置、车辆、电子设备和介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及车辆技术领域,尤其涉及一种车载机器人的控制方法及装置、车辆、电子设备和介质。
背景技术
随着车辆技术和计算机技术的发展,车辆的人机交互功能越来越受到用户的关注。人机交互是指人与计算机之间使用某种对话语言,以一定的交互方式,为完成确定任务的人与计算机之间的信息交换过程。车辆的人机交互旨在实现车辆的乘员与车辆之间的交互,在车辆领域具有重要意义。
发明内容
本公开提供了一种车载机器人的控制技术方案。
根据本公开的一方面,提供了一种车载机器人的控制方法,包括:
获取车舱的视频流;
基于所述视频流,确定所述车舱的乘员的位置信息;
根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息;
根据所述转动控制信息对所述车载机器人进行转动控制。
根据本公开的一方面,提供了一种车载机器人的控制装置,包括:
第一获取模块,用于获取车舱的视频流;
第一确定模块,用于基于所述视频流,确定所述车舱的乘员的位置信息;
第一生成模块,用于根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息;
转动控制模块,用于根据所述转动控制信息对所述车载机器人进行转动控制。
根据本公开的一方面,提供了一种车辆,包括:
摄像头,设置于车舱,用于采集车舱的视频流;
控制器,与所述摄像头连接,用于从所述摄像头获取所述车舱的视频流,基于所述视频流,确定所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人进行转动控制;
所述车载机器人,与所述控制器连接,设置于所述车舱内,用于根据所述转动控制信息进行转动。
根据本公开的一方面,提供了一种电子设备,包括:一个或多个处理器;用于存储可执行指令的存储器;其中,所述一个或多个处理器被配置为调用所述存储器存储的可执行指令,以执行上述方法。
根据本公开的一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。
根据本公开的一方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述方法。
在本公开实施例中,通过获取车舱的视频流,基于所述视频流,确定所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人进行转动控制,由此能够基于车舱的视频流,控制车载机器人转向所述乘员,以使车载机器人在朝向所述乘员的状态下与所述乘员进行交互,从而能够使车载机器人与乘员的交互方式更加符合人与人之间的交互习惯,交互过更加自然,能够提高交互的针对性和流畅性。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,而非限制本公开。
根据下面参考附图对示例性实施例的详细说明,本公开的其它特征及方面将变得清楚。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。
图1示出本公开实施例提供的车载机器人的控制方法的流程图。
图2示出本公开实施例提供的车辆的示意图。
图3示出本公开实施例提供的车辆的另一示意图。
图4示出本公开实施例提供的车载机器人的控制装置的框图。
图5示出本公开实施例提供的一种电子设备800的框图。
具体实施方式
以下将参考附图详细说明本公开的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中术语“至少一种”表示多种中的任意一种或多种中的至少两种的任意组合,例如,包括A、B、C中的至少一种,可以表示包括从A、B和C构成的集合中选择的任意一个或多个元素。
另外,为了更好地说明本公开,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本公开同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本公开的主旨。
本公开实施例提供了一种车载机器人的控制方法及装置、车辆、电子设备和存储介质,通过获取车舱的视频流,基于所述视频流,确定所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人进行转动控制,由此能够基于车舱的视频流,控制车载机器人转向所述乘员,以使车载机器人在朝向所述乘员的状态下与所述乘员进行交互,从而能够使车载机器人与乘员的交互方式更加符合人与人 之间的交互习惯,交互过更加自然,能够提高车载机器人与乘员交互的针对性和流畅性。
图1示出本公开实施例提供的车载机器人的控制方法的流程图。所述车载机器人的控制方法的执行主体可以是车载机器人的交互装置或车载机器人的控制装置。例如,所述车载机器人的控制方法可以由终端设备或其它处理设备执行。其中,终端设备可以是车载设备、用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字助理(Personal Digital Assistant,PDA)、手持设备、计算设备或者可穿戴设备等。其中,所述车载设备可以是设置在车舱,且与车载机器人连接的控制器、域控制器、处理器或者车机,还可以是OMS(Occupant Monitoring System,乘员监控系统)或者DMS(Driver Monitor System,驾驶员监控系统)中用于执行图像等数据处理操作的设备主机等。在一些可能的实现方式中,所述车载机器人的控制方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。在一种可能的实现方式中,所述车载机器人的控制方法可以应用于可驾驶的机器设备,例如智能车辆、模拟车辆驾驶的智能车舱等。如图1所示,所述车载机器人的控制方法包括步骤S11至步骤S14。
在步骤S11中,获取车舱的视频流。
在一种可能的实现方式中,所述车舱的视频流可以包括车舱内的视频流。作为该实现方式的一个示例,可以通过第一摄像头采集所述车舱内的视频流,并可以从所述第一摄像头获取所述车舱内的视频流。其中,所述第一摄像头可以包括OMS摄像头、DMS摄像头等。所述第一摄像头的数量可以是一个或多个。所述第一摄像头可以设置在车舱内的任意位置。作为该实现方式的一个示例,所述第一摄像头可以安装在以下至少一个位置:仪表板、顶灯、内后视镜、A柱、中控台、前挡风玻璃。
在另一种可能的实现方式中,所述车舱的视频流可以包括车舱外的视频流。作为该实现方式的一个示例,可以通过第二摄像头采集车舱外的视频流,并可以从所述第二摄像头获取所述车舱外的视频流。其中,所述第二摄像头的数量可以是一个或多个。作为该实现方式的一个示例,所述第二摄像头可以安装在以下至少一个位置上:至少一根B柱、至少一个车门、至少一个外后视镜、横梁。例如,所述第二摄像头可以安装在所述车辆的主驾驶座侧的B柱上。例如,主驾驶座在左侧,则所述第二摄像头可以安装在所述车辆的左侧的B柱上。又如,所述第二摄像头可以安装在两根B柱和后备箱门上。作为该实现方式的一个示例,所述第二摄像头可以采用ToF(Time of Flight,飞行时间)摄像头、双目摄像头等。
在步骤S12中,基于所述视频流,确定所述车舱的乘员的位置信息。
在本公开实施例中,所述车舱的乘员可以是搭乘所述车舱所属的车辆的任意人员。例如,所述车舱的乘员可以包括搭乘所述车辆的驾驶员、非驾驶员、乘客、大人、老人、小孩、前排人员、后排人员等中的至少之一。
在本公开实施例中,所述乘员的位置信息可以表示所述乘员在车舱内出现的位置信息。例如,所述乘员的位置信息可以包括所述乘员在车舱内的停留位置信息、上车位置信息、下车位置信息等中的至少之一。其中,所述乘员在车舱内的停留位置信息可以表示所述乘员乘坐的位置信息;所述乘员的上车位置信息可以表示所述乘员上车的车门位置;所述乘员的下车位置信息可以表示所述乘员下车的车门位置。
在一种可能的实现方式中,所述乘员的位置信息可以包括所述乘员的座位信息、方向信息、角度信息、坐标信息等中的至少之一。其中,所述乘员的座位信息可以表示所述乘员在车舱内乘坐的座位。 例如,所述乘员的座位信息可以是主驾驶座、副驾驶座、后排左侧座位、后排中间座位、后排右侧座位等。所述乘员的方向信息可以是所述乘员相对于车载机器人的安装位置或车舱内其他固定位置(如方向盘位置)的方向信息。例如,若所述乘员在所述车载机器人的右前方,则所述乘员的方向信息可以是右前方;若所述乘员在所述车载机器人的左前方,则所述乘员的方向信息可以是左前方;若所述乘员在所述车载机器人的正前方,则所述乘员的方向信息可以是正前方。所述乘员的角度信息可以表示所述乘员相对于车载机器人的安装位置或车舱内其他固定位置(例如方向盘位置)的方向与预设方向的夹角。例如,预设方向可以例如是由车载机器人指向后排中间座位的方向。所述乘员的坐标信息可以表示所述乘员在车舱内的空间坐标系中的坐标信息,或者,所述乘员的坐标信息可以表示所述乘员在车舱的视频流对应的图像坐标系中的坐标信息。
在本公开实施例中,可以对所述视频流中的至少一帧图像进行人体检测和/或人脸检测,得到人体检测结果和/或人脸检测结果;根据人体检测结果和/或人脸检测结果中的人体边界框和/或人脸边界框的位置信息,可以得到所述车舱的乘员的位置信息。例如,可以将所述人体边界框和/或所述人脸边界框的位置信息,作为所述车舱的乘员的位置信息。又如,可以预先建立人体边界框和/或人脸边界框的位置信息与乘员的位置信息之间的对应关系,根据预先建立的人体边界框和/或人脸边界框的位置信息与乘员的位置信息之间的对应关系,以及根据人体检测结果和/或人脸检测结果中的人体边界框和/或人脸边界框的位置信息,确定所述车舱的乘员的位置信息。其中,人体检测可以用于检测所述视频流中的至少一帧图像中的人体的位置信息,人脸检测可以用于检测所述视频流中的至少一帧图像中的人脸的位置信息。
在一种可能的实现方式中,所述基于所述视频流,确定所述车舱的乘员的位置信息,包括:在所述视频流中的至少一帧图像对应的图像坐标系中,确定所述乘员的至少一个身体部位所在的图像坐标区域;根据所述图像坐标区域,确定所述乘员的位置信息。
在该实现方式中,所述图像坐标系可以是所述视频流中的图像所对应的二维坐标系。作为该实现方式的一个示例,可以通过预先训练好的区域生成网络(Region Proposal Network,RPN)对所述视频流中的至少一帧图像中乘员的至少一个身体部位进行区域检测,其中,至少一个身体部位可以包括但不限于脸部、手部,躯干等。通过区域检测,可以确定所述乘员的至少一个身体部位所在的图像坐标区域。
在该实现方式中,可以将所述图像坐标区域作为所述乘员的位置信息,或者,可以根据预先建立的图像坐标区域与乘员的位置信息之间的映射关系,以及所述图像坐标区域,确定所述乘员的位置信息。
在该实现方式中,通过检测图像中乘员的至少一个身体部位在图像坐标系中的位置,能够快速地检测出乘员的位置信息。
作为该实现方式的一个示例,所述乘员的位置信息包括:所述乘员在所述图像中的第一相对位置信息;所述根据所述图像坐标区域,确定所述乘员的位置信息,包括:将所述图像坐标区域作为所述乘员在所述图像中的所述第一相对位置信息。在该示例中,通过确定出所述乘员在所述图像中的所述第一相对位置信息,有利于实现在图像坐标系内根据第一相对位置信息确定车载机器人的转动控制参数,提升车载机器人的转动控制效率。
作为该实现方式的另一个示例,所述乘员的位置信息包括:所述乘员在所述车舱内的第二相对位 置信息;所述根据所述图像坐标区域,确定所述乘员的位置信息,包括:根据所述图像坐标系与所述车舱内的空间坐标系之间的映射关系,确定所述图像坐标区域对应的车舱内空间坐标区域,并将所述车舱内空间坐标区域作为所述乘员在所述车舱内的所述第二相对位置信息。在该示例中,所述空间坐标系可以是三维世界坐标系,确定出的所述车舱内空间坐标区域可以表示所述乘员的至少一个身体部位所在的图像坐标区域在所述空间坐标系中对应的空间坐标区域,也即所述乘员的至少一个身体部位所在的空间坐标区域。在该示例中,可以预先建立所述图像坐标系与空间坐标系之间的映射关系,例如预先标定采集视频流的摄像头的内外参数,根据摄像头的内外参数确定图像坐标系与空间坐标系的映射关系。由此在确定所述乘员的至少一个身体部位所在的图像坐标区域之后,可以根据预先建立的所述图像坐标系与所述车舱内的空间坐标系之间的映射关系,确定所述图像坐标区域在所述空间坐标系中对应的车舱内空间坐标区域。该示例通过根据所述图像坐标系与所述车舱内的空间坐标系之间的映射关系,确定所述图像坐标区域对应的车舱内空间坐标区域,并将所述车舱内空间坐标区域作为所述乘员在所述车舱内的所述第二相对位置信息,由此能够准确地获取所述乘员在车舱内的三维位置信息,以便后续根据三维位置信息对车载机器人进行精准的转动控制。
在步骤S13中,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息。
在本公开实施例中,所述车载机器人可以是实体机器人,例如,所述车载机器人可以安装在仪表板、中控台等位置。
在本公开实施例中,所述车载机器人的转动部件可以表示所述车载机器人中能够转动的部件,车载机器人的朝向随转动部件的转动而变化。可以根据乘员的位置信息,确定车载机器人与乘员交互的方向,进而生成用于控制车载机器人转向与乘员交互的方向的转动控制信息,使得车载机器人可以在该转动控制信息的控制下转向交互的乘员。
在本公开实施例中,可以根据所述乘员的位置信息,确定所述车载机器人的转动部件的转动控制参数和/或转动停止条件,并生成包含所述转动控制参数和/或所述转动停止条件的转动控制信息。其中,所述转动控制参数可以包括但不限于转动方向、转动角速度、转动时间等中的至少之一。所述转动停止条件表示转动部件停止转动的条件,可以包括但不限于以转动角作为约束的条件和/或以目标朝向作为约束的条件。
所述车载机器人可以包括一个或多个转动部件,所述转动控制信息可以用于控制所述车载机器人的一个或多个转动部件转动。例如,可以根据所述乘员的位置信息,生成所述车载机器人的第一转动部件的转动控制信息,其中,所述第一转动部件可以指用于带动所述车载机器人的本体转动的转动部件,所述第一转动部件的转动控制信息可以用于控制所述第一转动部件带动所述车载机器人的本体转向所述乘员。又如,可以根据所述乘员的位置信息,生成所述车载机器人的第一转动部件的转动控制信息,并生成所述车载机器人的第二转动部件和/或第三转动部件的转动控制信息,其中,所述第二转动部件可以指用于带动所述车载机器人的左臂转动的转动部件,所述第二转动部件的转动控制信息可以用于控制所述第二转动部件带动所述车载机器人的左臂转动,所述第三转动部件可以指用于带动所述车载机器人的右臂转动的转动部件,所述第三转动部件的转动控制信息可以用于控制所述第三转动部件带动所述车载机器人的右臂转动。通过控制所述第二转动部件带动所述车载机器人的左臂转动和/或控制所述第三转动部件带动所述车载机器人的右臂转动,能够使车载机器人展示动作,例如拍 手的欢迎动作、挥动手臂的再见动作等。
需要说明的是,上述包含用于控制左臂转动的第二转动部件和用于控制右臂转动的第三转动部件的车载机器人仅仅是一个示例,在本公开的实施例中,车载机器人的第二转动部件和第三转动部件还可以控制车载机器人的其他身体部位,如头部,腿部,等等;车载机器人也可以包括三个以上分别控制不同的身体部位的转动部件,在此不作特殊限定。
在一种可能的实现方式中,所述根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,包括:根据预先建立的位置信息与车载机器人的朝向之间的映射关系,确定与所述乘员的位置信息对应的目标朝向;以及以下至少之一:生成控制设置于所述车舱内的车载机器人的转动部件进行转动使得所述车载机器人转向所述目标朝向的转动控制信息;根据所述车载机器人的当前朝向和所述目标朝向确定所述车载机器人的转动部件的转动方向和转动角,并生成控制设置于所述车舱内的车载机器人的转动部件按照所述转动方向和所述转动角进行转动的转动控制信息;根据所述车载机器人的当前朝向、所述目标朝向以及设置于所述车舱内的车载机器人转动部件的预设角速度,确定所述车载机器人的转动部件的转动方向和转动时间,并生成控制设置于所述车舱内的车载机器人的转动部件按照所述转动方向和所述转动时间进行转动的转动控制信息。
在该实现方式中,所述目标朝向可以表示与所述乘员的位置信息对应的车载机器人的朝向。通过控制所述车载机器人转向所述目标朝向,可以控制所述车载机器人转向所述乘员,例如,可以控制所述车载机器人的正面对着所述乘员的方向。
作为该实现方式的一个示例,可以根据预先建立的位置信息与车载机器人的朝向之间的映射关系,确定与所述乘员的位置信息对应的目标朝向,并生成控制设置于所述车舱内的车载机器人的转动部件进行转动使得所述车载机器人转向所述目标朝向的转动控制信息。在该示例中,可以生成包含所述目标朝向的转动控制信息,即,所述转动控制信息可以用于控制所述车载机器人转向所述目标朝向。所述车载机器人的转动部件可以在所述转动控制信息的控制下转动,直至转动至所述目标朝向,从而可以使所述车载机器人转向所述目标朝向。在该示例中,通过根据所述目标朝向生成所述车载机器人的转动部件的转动控制信息,由此能够通过转动停止条件控制车载机器人准确地转动至所述目标朝向。
作为该实现方式的另一个示例,可以根据预先建立的位置信息与车载机器人的朝向之间的映射关系,确定与所述乘员的位置信息对应的目标朝向,并根据所述车载机器人的当前朝向和所述目标朝向确定所述车载机器人的转动部件的转动方向和转动角,并生成控制设置于所述车舱内的车载机器人的转动部件按照所述转动方向和所述转动角进行转动的转动控制信息。在该示例中,所述转动角可以表示所述车载机器人由当前朝向转动至目标朝向需要转动的角度,也即车载机器人的当前朝向与目标朝向之间的夹角。例如,转动角可以是20°、30°、45°等。在该示例中,可以生成包含所述转动方向和所述转动角的转动控制信息,即,所述转动控制信息可以用于控制所述车载机器人的转动部件按照所述转动方向和所述转动角转动。所述车载机器人的转动部件在所述转动控制信息的控制下,可以按照所述转动方向和所述转动角转动,以使所述车载机器人转向所述目标朝向。在该示例中,通过根据所述转动方向和转动角生成所述车载机器人的转动部件的转动控制信息,由此能够通过转动控制参数和转动停止条件控制车载机器人准确地转动至所述目标朝向。
作为该实现方式的另一个示例,可以根据预先建立的位置信息与车载机器人的朝向之间的映射关系,确定与所述乘员的位置信息对应的目标朝向,并根据所述车载机器人的当前朝向、所述目标朝向 以及设置于所述车舱内的车载机器人转动部件的预设角速度,确定所述车载机器人的转动部件的转动方向和转动时间,并生成控制设置于所述车舱内的车载机器人的转动部件按照所述转动方向和所述转动时间进行转动的转动控制信息。在该示例中,预设角速度是车载机器人的转动角速度,可以是系统默认的,也可以是用户设置的,在此不作限定。在该示例中,可以生成包含所述转动方向和所述转动时间的转动控制信息,即,所述转动控制信息可以用于控制所述车载机器人的转动部件按照所述转动方向和所述转动时间转动。所述车载机器人的转动部件在所述转动控制信息的控制下,可以按照所述预设角速度、所述转动方向和所述转动时间转动,以使所述车载机器人转向所述目标朝向。在该示例中,通过根据所述转动方向和所述转动时间生成所述车载机器人的转动部件的转动控制信息,由此能够通过转动控制参数准确地控制车载机器人转动至所述目标朝向。
在另一种可能的实现方式中,还可以根据所述乘员的位置信息,以及预先建立的位置信息与转动控制信息之间的映射关系,确定与所述乘员的位置信息对应的所述车载机器人的转动部件的转动控制信息。在该实现方式中,所述转动控制信息可以包含所述乘员的位置信息对应的目标朝向,即,所述转动控制信息可以用于控制所述车载机器人转向所述目标朝向。
在步骤S14中,根据所述转动控制信息对所述车载机器人进行转动控制。
在本公开实施例中,可以将转动控制信息发送至车载机器人的转动部件,以控制所述车载机器人的转动部件转动,可以控制所述车载机器人的整体和/或部分发生转动,从而能够根据所述转动控制信息实现对所述车载机器人的转动控制。
在本公开实施例中,通过利用车载机器人与所述车辆的乘员进行交互,由此在车辆的人机交互中,实现了车辆的拟人化,从而使得人机交互的方式更加符合人的交互习惯,交互过更加自然,使乘员感受到人机交互的温暖,提升乘车乐趣、舒适感和陪护感。通过提升乘车乐趣和陪护感,有助于使驾驶员保持注意力集中,从而有利于降低驾驶的安全风险。
在本公开实施例中,通过获取车舱的视频流,基于所述视频流,确定所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人进行转动控制,由此能够基于车舱的视频流,控制车载机器人转向所述乘员,以使车载机器人在朝向所述乘员的状态下与所述乘员进行交互,从而能够使车载机器人与乘员的交互方式更加符合人与人之间的交互习惯,交互过更加自然,能够提高交互的针对性和流畅性。
在一种可能的实现方式中,所述车载机器人包括本体和所述转动部件;所述根据所述转动控制信息对所述车载机器人进行转动控制,包括:根据所述转动控制信息,驱动所述车载机器人的转动部件带动所述车载机器人的本体进行转动。作为该实现方式的一个示例,所述本体可以包括躯干和头部。作为该实现方式的另一个示例,所述本体可以包括躯干和头部,还可以包括左臂、右臂、左腿、右腿中的至少之一。例如,所述本体可以包括躯干、头部和双臂。在该实现方式中,通过根据所述转动控制信息,驱动所述车载机器人的转动部件带动所述车载机器人的本体进行转动,由此能够使车载机器人在本体在朝向所述乘员的状态下与所述乘员进行交互。
在一种可能的实现方式中,所述方法还包括:生成控制所述车载机器人的显示部件向所述乘员展示内容的显示控制信息。在该实现方式中,所述车载机器人的显示部件可以表示所述车载机器人中具有显示功能的部件。例如,所述车载机器人的显示部件可以包括所述车载机器人的显示屏。所述显示部件可以用于展示表情(例如笑脸)、文字、动画等内容。在该实现方式中,通过生成控制所述车载 机器人的显示部件向所述乘员展示内容的显示控制信息,在车载机器人与乘员交互的过程中,可以配合不同的表情等展示内容,从而能够使得交互过程更有情感和趣味。
在一种可能的实现方式中,所述根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,包括:响应于根据所述视频流检测到所述乘员的上车意图或下车意图,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息。
在该实现方式中,可以基于车舱外的视频流和/或车舱内的视频流,检测乘员的上车意图。作为该实现方式的一个示例,可以基于所述车舱外的视频流和/或所述车舱内的视频流,检测乘员是否从车舱外进入车舱内,若是,则可以判定检测到所述乘员的上车意图。作为该实现方式的另一个示例,可以基于所述车舱外的视频流和/或所述车舱内的视频流,检测乘员是否从车外打开车门,若是,则可以确定检测到所述乘员的上车意图。
在该实现方式中,可以基于所述车舱外的视频流和/或车舱内的视频流,检测乘员的下车意图。作为该实现方式的一个示例,可以基于所述车舱外的视频流和/或所述车舱内的视频流,检测乘员是否从由车舱内向车舱外方向移动,若是,则可以判定检测到所述乘员的下车意图。作为该实现方式的另一个示例,可以基于所述车舱外的视频流和/或所述车舱内的视频流,检测乘员是否从车内打开车门,若是,则可以确定检测到所述乘员的下车意图。
作为该实现方式的一个示例,可以获取车舱外的人脸识别结果对应的身份信息,其中,所述车舱外的人脸识别结果是基于车舱外的视频流进行人脸识别得到的;对所述车舱内的视频流进行人脸识别,确定所述身份信息对应的人脸区域的位置;根据所述人脸区域的位置,确定所述乘员的位置信息。在示例中,可以在刷脸开车门的场景下,基于所述车舱外的视频流进行人脸识别,得到所述车舱外的人脸识别结果,其中,所述车舱外的视频流可以是第二摄像头采集的。例如,车舱外的人脸识别结果对应的身份信息是乘员B,则可以根据乘员B的人脸信息(例如乘员B的人脸图像或者人脸特征),对所述车舱内的视频流进行人脸识别,确定乘员B的人脸区域的位置,从而可以根据乘员B的人脸区域的位置,以及预先建立的人脸区域的位置与乘员的位置信息之间的映射关系,确定乘员B的位置信息。在这个例子中,可以将用于刷脸开车门的第二摄像头与用于乘员监控的第一摄像头联动起来,得到所述乘员的位置信息。
在一个例子中,还可以根据所述车舱外的人脸识别结果对应的第二摄像头的位置,确定所述乘员的上车位置。其中,所述车舱外的人脸识别结果对应的第二摄像头,可以表示采集所述车舱外的人脸识别结果对应的车舱外的视频流的第二摄像头。例如,若所述车舱外的人脸识别结果对应的第二摄像头安装在左前门外,则可以确定所述乘员的上车位置是驾驶座位置。
在该实现方式中,通过响应于根据所述视频流检测到所述乘员的上车意图,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,由此控制所述车载机器人与上下车的乘员进行针对性的交互,从而能够通过车载机器人实现更多场景下的个性化服务。
作为该实现方式的一个示例,所述方法还包括:根据所述车舱的视频流,对所述乘员进行人脸识别;根据所述乘员对应的人脸识别结果,确定所述乘员的属性信息;根据所述乘员的属性信息,生成控制所述车载机器人与所述乘员进行交互的交互控制信息。
在一个例子中,可以根据车舱外的视频流,对所述乘员进行人脸识别,得到所述乘员对应的人脸识别结果。其中,所述车舱外的视频流可以在刷脸开车门的场景下获取。在这个例子中,可以基于所 述车舱外的视频流中的至少一帧图像进行人脸识别,得到所述乘员对应的人脸识别结果。例如,可以提取所述车舱外的视频流中的至少一帧图像的人脸特征,将所述车舱外的视频流中的至少一帧图像的人脸特征与预注册的人脸特征进行比对,判断是否属于同一个人的人脸特征,从而得到所述乘员对应的人脸识别结果。其中,预注册的人脸特征可以包括但不限于以下至少之一:所述车辆的车主的人脸特征、所述车辆的常用人(例如所述车辆的车主的家人)的人脸特征、所述车辆的借用人(例如共享车的借用人)的人脸特征、所述车辆的乘客(例如网约车的乘客)的人脸特征。
在另一个例子中,可以根据车舱内的视频流,对所述乘员进行人脸识别,得到所述乘员对应的人脸识别结果。在这个例子中,可以基于所述车舱内的视频流中的至少一帧图像进行人脸识别,得到所述乘员对应的人脸识别结果。例如,可以提取所述车舱内的视频流中的至少一帧图像的人脸特征,将所述车舱内的视频流中的至少一帧图像的人脸特征与预注册的人脸特征进行比对,判断是否属于同一个人的人脸特征,从而得到所述乘员对应的人脸识别结果。
在一个例子中,可以根据人脸识别结果中的身份信息,获取所述人脸识别结果对应的属性信息。例如,可以根据人脸识别结果中的身份信息,从存储器或者服务器中获取预先存储的所述乘员的性别信息、年龄信息等属性信息。
在该示例中,可以根据所述乘员对应的人脸识别结果,确定所述乘员的身份信息,根据所述乘员的身份信息,确定所述乘员的属性信息,从而可以结合所述乘员的身份信息,得到所述乘员对应的交互方式信息,由此能够基于所述乘员的更丰富的信息得到更适合所述乘员的交互方式信息,从而能够更加满足乘员的个性化需求。
在一个例子中,可以根据所述属性信息中的身份信息,确定所述乘员的称呼;根据所述属性信息中的年龄信息、性别信息、肤色信息、情绪信息中的至少之一,确定所述乘员对应的交互方式信息;根据所述乘员的称呼,以及所述乘员的年龄信息、性别信息、肤色信息、情绪信息中的至少之一,生成控制所述车载机器人与所述乘员进行交互的交互控制信息。例如,所述乘员的称呼是XX,则所述交互控制信息可以包括“XX,您好,我是您的智能小助手”的语音信息或者“XX,您好,欢迎乘车”的语音信息等。
在该示例中,通过响应于检测到所述乘员的上车意图,根据所述乘员的位置信息和属性信息,控制所述车载机器人与乘员进行上车交互,由此能够通过车载机器人实现个性化的迎宾服务。通过响应于检测到所述乘员的下车意图,根据所述乘员的属性信息和所述位置信息,控制所述车载机器人与所述乘员进行下车交互,由此能够通过车载机器人实现个性化的欢送服务。
在另一种可能的实现方式中,可以通过车门传感器确定所述乘员的位置信息。例如,若通过左前门的车门传感器检测到有乘员上车(即通过左前门的车门传感器检测到左前门从车外被拉开),则可以确定所述乘员的上车位置信息为主驾驶座;若通过右前门的车门传感器检测到有乘员上车(即通过右前门的车门传感器检测到右前门从车外被拉开),则可以确定所述乘员的上车位置信息是副驾驶座;若通过后门的车门传感器检测到有乘员上车(即通过后门的车门传感器检测到后门从车外被拉开),则可以确定所述乘员的上车位置信息是后排座位。
在另一种可能的实现方式中,可以通过座位传感器确定所述乘员的上车位置信息。例如,若通过主驾驶座的座位传感器检测到有乘员落座,则可以确定所述乘员的上车位置信息为主驾驶座。
在一种可能的实现方式中,所述方法还包括:基于所述视频流,对所述乘员进行属性识别,得到 所述乘员的属性信息;生成控制所述车载机器人根据所述乘员的位置信息和属性信息进行交互的交互控制信息。
在该实现方式中,可以基于所述视频流中的至少一帧图像,对所述乘员进行属性识别,得到所述乘员的属性信息。例如,可以根据乘员的位置信息与图像坐标之间的映射关系,确定所述位置信息在所述视频流对应的图像坐标系中对应的图像坐标区域;对所述视频流中所述图像坐标区域包含的图像部分进行属性识别,得到所述乘员的属性信息。
作为该实现方式的一个示例,所述位置信息包括座位信息;所述基于所述视频流,对所述乘员进行属性识别,得到所述乘员的属性信息,包括:根据预先建立的座位与图像坐标之间的映射关系,确定所述座位信息在所述视频流对应的图像坐标系中对应的图像坐标区域;对所述视频流中所述图像坐标区域包含的图像部分进行属性识别,得到所述乘员的属性信息。
在该示例中,可以预先建立各个座位与图像坐标之间的映射关系。例如,主驾驶座对应于图像坐标区域D 1,副驾驶座对应于图像坐标区域D 2,后排左侧座位对应于图像坐标区域D 3,后排中间座位对应于图像坐标区域D 4,后排右侧座位对应于图像坐标区域D 5。其中,任一图像坐标区域可以采用该图像坐标区域的4个顶点的坐标来表示;或者,任一图像坐标区域可以采用该图像坐标区域的其中一个顶点的坐标以及该图像坐标区域的长和宽来表示,例如,图像坐标区域D 1可以采用图像坐标区域D 1的左上角的顶点的坐标以及图像坐标区域D 1的长和宽来表示。若乘员A的座位信息是副驾驶座,则可以确定乘员A的座位信息对应的图像坐标区域是图像坐标区域D 2,进而可以对所述视频流的至少一帧图像中图像坐标区域D 2包含的图像部分进行属性识别,得到乘员A的属性信息。
在该示例中,通过根据预先建立的座位与图像坐标之间的映射关系,确定所述座位信息在所述车舱内的视频流对应的图像坐标系中对应的图像坐标区域,并对所述视频流中所述图像坐标区域包含的图像部分进行属性识别,得到所述乘员的属性信息,由此能够减少车舱内的视频流的图像中的不属于所述乘员的图像部分(例如背景图像部分、其他乘员的图像部分)对所述乘员进行属性识别的影响,从而能够提高对乘员进行属性识别的准确性。
在该实现方式中,通过基于所述视频流,对所述乘员进行属性识别,得到所述乘员的属性信息,并生成控制所述车载机器人根据所述乘员的位置信息和属性信息进行交互的交互控制信息,由此不仅能够使车载机器人在朝向所述乘员的状态下与所述乘员进行交互,还能使车载机器人基于所述乘员的属性信息与所述乘员进行交互,从而能够满足所述乘员的个性化需求。
作为该实现方式的一个示例,所述生成控制所述车载机器人根据所述乘员的位置信息和属性信息进行交互的交互控制信息,包括:根据所述乘员的属性信息,确定所述乘员对应的交互方式信息;生成控制所述车载机器人根据所述乘员的位置信息、按照所述交互方式信息进行交互的交互控制信息。
在该示例中,交互方式信息可以包括语调信息、语音模板、表情信息、动作信息等中的至少之一。例如,小孩对应的交互方式可以是较为活泼的,例如语调可以较高,表情和动作可以较丰富;又如,老人对应的语音模板中可以包含较多的敬语;又如,情绪较低的乘员对应的语音模板可以是具有激励效果的。
在该示例中,可以预先建立属性信息与交互方式信息的对应关系,从而可以根据属性信息与交互方式信息的对应关系,以及所述乘员的属性信息,确定所述乘员对应的交互方式信息。
在该示例中,通过根据所述乘员的属性信息,确定所述乘员对应的交互方式信息,并生成控制所 述车载机器人根据所述乘员的位置信息、按照所述交互方式信息进行交互的交互控制信息,由此能够通过不同的交互方式进行人机交互,即,不同乘员对应的人机交互方式可以不同,从而能够满足乘员的个性化需求,提升乘车乐趣,使乘员感受到人机交互的温暖。
在一个例子中,可以根据交互方式配置请求,配置所述乘员对应的交互方式信息。根据这个例子,乘员可以根据个人喜好定制车载机器人的交互方式。
在一个例子中,可以根据交互方式重置请求,重新生成所述乘员对应的交互方式信息。根据这个例子,乘员可以随着个人喜好的变化重新定制车载机器人的交互方式。
作为该实现方式的一个示例,可以根据所述乘员的属性信息,确定所述乘员的称呼,并生成控制所述车载机器人根据所述乘员的位置信息和所述乘员的称呼进行交互的交互控制信息。例如,可以根据所述乘员的属性信息中的年龄信息和性别信息,确定所述乘员的称呼,例如“女士”“先生”“小朋友”等。又如,可以根据所述乘员的属性信息中的身份信息、年龄信息和性别信息,确定所述乘员的称呼,例如“张女士”“李先生”等。
作为该实现方式的一个示例,所述属性识别包括年龄识别、性别识别、肤色识别、情绪识别、身份识别中的至少之一,和/或,所述属性信息包括年龄信息、性别信息、肤色信息、情绪信息、身份信息中的至少之一。在该示例中,通过基于所述视频流,对所述乘员进行年龄识别、性别识别、肤色识别、情绪识别、身份识别中的至少之一,得到所述乘员的年龄信息、性别信息、肤色信息、情绪信息、身份信息中的至少之一,由此车载机器人能够基于所述乘员的年龄信息、性别信息、肤色信息、情绪信息、身份信息中的至少之一与所述乘员进行交互,满足乘员的个性化需求,使乘员感受到人机交互的温暖,提高交互的针对性和流畅性。
在一种可能的实现方式中,所述方法还包括:获取语音信息;所述基于所述视频流,确定所述车舱的乘员的位置信息,包括:基于所述视频流,检测所述车舱的乘员中发出所述语音信息的乘员的位置信息;所述根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,包括:根据所述发出语音信息的乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息。
作为该实现方式的一个示例,可以通过所述车载机器人的控制装置进行语音识别,以判断是否检测到语音信息。作为该实现方式的另一个示例,可以通过设置在车舱内的车机或其他语音识别设备进行语音识别,以判断是否检测到语音信息。在该实现方式中,语音信息可以包括语音交互指令,也可以包括其他语音信息,在此不作限定。例如,所述语音信息可以用于唤醒车载机器人、启动车载机器人、控制车载机器人休眠、关闭车载机器人、接听电话、开关车窗、调整空调、播放音视频、导航等。
作为该实现方式的一个示例,可以基于所述视频流中的音频数据,确定所述车舱的乘员中发出所述语音信息的乘员的位置信息。在该示例中,可以响应于获取到语音信息,从所述视频流的音频数据中,获取所述语音信息对应的音频片段;根据所述语音信息对应的音频片段,确定发出所述语音信息的乘员的位置信息。
在一个例子中,可以响应于获取到语音信息,从所述视频流的音频数据中,获取所述语音信息对应的音频片段;对所述语音信息对应的音频片段进行发声源定位,得到发出所述语音信息的乘员的位置信息。其中,所述音频数据中所述语音信息对应的音频片段,可以表示所述音频数据中所述语音信息所属的音频片段。即,所述语音信息对应的音频片段包含所述语音信息的语音内容。在这个例子中, 通过响应于获取到语音信息,从视频流的音频数据中,获取所述语音信息对应的音频片段,并对所述语音信息对应的音频片段进行声源定位,得到发出所述语音信息的乘员的位置信息,由此能够准确地确定发出所述语音信息的乘员的位置信息。
在另一个例子中,可以响应于获取到语音信息,从所述视频流的音频数据中,获取所述语音信息对应的音频片段;对所述语音信息对应的音频片段进行声纹识别,确定发出所述语音信息的乘员的身份信息;对所述视频流中的至少一帧图像进行人脸识别,确定所述身份信息对应的乘员的位置信息。
作为该实现方式的另一个示例,可以基于所述视频流进行嘴型检测,得到所述车舱的乘员中发出所述语音信息的乘员的位置信息。
在该实现方式中,通过获取语音信息,基于所述视频流,检测所述车舱的乘员中发出所述语音信息的乘员的位置信息,并根据所述发出语音信息的乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,由此能够控制车载机器人在朝向发出语音信息的乘员的状态下与所述乘员进行交互,从而能够提高车载机器人与乘员进行语音交互的针对性和流畅性,进而有助于提高语音交互的效率。
在一种可能的实现方式中,所述方法还包括:获取语音车窗控制信息;对所述语音车窗控制信息进行声源定位和/或基于所述视频流进行发声源检测,确定发出所述语音车窗控制信息的乘员的位置信息;确定所述车舱内与所述发出所述语音车窗控制信息的乘员的位置信息对应的目标车窗;生成对所述目标车窗进行控制的控制信息。
在该实现方式中,可以通过声阵列(例如传声器阵列或麦克风阵列)对所述语音车窗控制信息进行声源定位,确定发出所述语音车窗控制信息的乘员的位置信息。或者,可以从所述视频流中,确定与所述语音车窗控制信息的获取时间匹配的视频片段,对所述视频片段进行嘴型检测,确定发出所述语音车窗控制信息的乘员的位置信息。
例如,发出所述语音车窗控制信息的乘员的位置信息是副驾驶座,则目标车窗可以是右前方车窗;发出所述语音车窗控制信息的乘员的位置信息是后排左侧座位,则目标车窗可以是左后方车窗。
根据该实现方式,通过获取语音车窗控制信息,对所述语音车窗控制信息进行声源定位和/或基于所述视频流进行发声源检测,确定发出所述语音车窗控制信息的乘员的位置信息,确定所述车舱内与所述发出所述语音车窗控制信息的乘员的位置信息对应的目标车窗,并生成对所述目标车窗进行控制的控制信息,由此能够利用发出语音车窗控制信息的乘员的位置进行精准的车窗控制。
在一种可能的实现方式中,可以基于车舱外的视频流进行人脸识别,得到车舱外的人脸识别结果。可以响应于所述车舱外的人脸识别结果为人脸识别成功,获取所述车门的状态信息。若所述车门的状态信息为未解锁,则控制所述车门解锁或者控制车门解锁并打开;若所述车门的状态信息为已解锁且未打开,则控制所述车门打开,由此能够基于人脸识别自动为用户开车门,而无需用户手动拉开车门,从而能够提高用车的便捷性。
作为该实现方式的一个示例,响应于所述车舱外的人脸识别结果为人脸识别成功,在控制车门解锁和/或打开的同时,启动或唤醒所述车载机器人。在该示例中,启动或唤醒所述车载机器人的时机是“响应于所述车舱外的人脸识别结果为人脸识别成功”。即,响应于所述车舱外的人脸识别结果为人脸识别成功,并行触发“控制车门解锁和/或打开”的进程和“启动或唤醒所述车载机器人”的进程,而非先后触发“控制车门解锁和/或打开”的进程和“启动或唤醒所述车载机器人”的进程。其中,“并行触发” 不局限于触发的时间戳严格对齐。在该示例中,“控制车门解锁和/或打开”与“启动或唤醒所述车载机器人”可以响应于所述车舱外的人脸识别结果为人脸识别成功并行执行,由此能够尽快启动车载机器人。
通常,车载机器人启动或唤醒需要一定的时间。在该示例中,通过响应于所述车舱外的人脸识别结果为人脸识别成功,在控制车门解锁和/或打开的同时,启动或唤醒所述车载机器人,以通过所述车载机器人进行人机交互,由此能够在车舱外人脸识别成功后,立即启动或唤醒车载机器人,利用从车舱外人脸识别成功到所述乘员进入车辆之间的这一段时间,使车载机器人做好与乘员进行交互的准备,从而在所述乘员进入车辆后,车载机器人能够更快地为所述乘员提供服务,进而能够提高交互的针对性和流畅性。
在一个例子中,在人脸识别成功之前,所述车载机器人处于关闭状态或者休眠状态,由此能够节省通过车载机器人实现车辆的人机交互所需的功耗。
作为该实现方式的一个示例,响应于所述车舱外的人脸识别结果为人脸识别成功,在控制车门解锁和/或打开的同时,启动或唤醒所述第一摄像头。在该示例中,启动或唤醒第一摄像头的时机是“响应于所述车舱外的人脸识别结果为人脸识别成功”。即,响应于所述车舱外的人脸识别结果为人脸识别成功,并行触发“控制车门解锁和/或打开”的进程和“启动或唤醒所述车载机器人”的进程,而非先后触发“控制车门解锁和/或打开”的进程和“启动或唤醒所述第一摄像头”的进程。其中,“并行触发”不局限于触发的时间戳严格对齐。在该示例中,“控制车门解锁和/或打开”与“启动或唤醒所述第一摄像头”可以响应于所述车舱外的人脸识别结果为人脸识别成功并行执行,由此能够尽快启动第一摄像头。根据该示例,能够在车舱外人脸识别成功后,立即启动或唤醒设置在所述车辆的车舱内的第一摄像头,即,利用从车舱外人脸识别成功到所述乘员进入车辆之间的这一段时间,启动或唤醒所述第一摄像头,使所述第一摄像头能够及时采集到车舱内的视频流,从而在乘员进入车舱后,能够及时与乘员进行交互。
在一个例子中,在人脸识别成功之前,第一摄像头可以处于关闭状态或者休眠状态,由此能够节省车辆的人机交互所需的功耗。
作为该实现方式的一个示例,若在车舱外人脸识别成功之后的预设时长内,检测到乘员上车,则可以判定所述人脸识别结果对应的乘员已上车,即,可以判定检测到所述乘员的上车意图。例如,可以对第一摄像头采集的车舱内的视频流进行视频分析,确定是否有乘员上车。若在车舱外人脸识别成功之后的预设时长内,根据所述车舱内的视频流分析到有乘员上车,则可以判定所述人脸识别结果对应的乘员已上车。又如,可以通过车门传感器检测是否有乘员上车。若在车舱外人脸识别成功之后的预设时长内,通过车门传感器检测到车门从车外被拉开,则可以判定所述人脸识别结果对应的乘员已上车。又如,可以通过座位传感器检测是否有乘员上车。若在车舱外人脸识别成功之后的预设时长内,通过座位传感器检测到有乘员落座,则可以判定所述人脸识别结果对应的乘员已上车。
作为该实现方式的另一个示例,在车舱外人脸识别成功之后,可以对第一摄像头采集的车舱内的视频流进行人脸识别,若识别到所述车辆识别结果对应的乘员,则可以判定所述人脸识别结果对应的乘员已上车。
作为该实现方式的一个示例,可以对第一摄像头采集的车舱内的视频流进行视频分析,确定是否有乘员有下车意图。例如,可以在从所述车舱内的视频流中检测到有乘员起身的情况下,判定检测到 乘员的下车意图。作为该实现方式的另一个示例,可以通过车门传感器检测乘员的下车意图。例如,若通过右前门的车门传感器检测到右前门从车内被打开,则可以判定检测到副驾驶座的乘员的下车意图。作为该实现方式的另一个示例,可以通过作为传感器检测乘员的下车意图。例如,若通过后排中间座位的座位传感器检测到乘员起身,则可以判定检测到后排中间座位的乘员的下车意图。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
此外,本公开还提供了车辆、车载机器人的控制装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开提供的任一种车载机器人的控制方法,相应技术方案和技术效果可参见方法部分的相应记载,不再赘述。
图2示出本公开实施例提供的车辆的示意图。如图2所示,所述车辆包括:摄像头210,设置于车舱,用于采集车舱的视频流;控制器220,与所述摄像头210连接,用于从所述摄像头210获取所述车舱的视频流,基于所述视频流,确定所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人230的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人230进行转动控制;所述车载机器人230,与所述控制器220连接,设置于所述车舱内,用于根据所述转动控制信息进行转动。
在本公开实施例中,所述控制器220可以安装在车舱内的不可见区域。
在一种可能的实现方式中,所述控制器220可以用于控制所述摄像头210采集车舱的视频流。
在一种可能的实现方式中,所述车载机器人230可以采用智能机器人系统(Intelligent Robot System,IRS)。
在本公开实施例中,通过摄像头采集车舱的视频流,控制器基于所述车舱的乘员的位置信息,根据所述乘员的位置信息生成设置于所述车舱内的车载机器人的转动部件的转动控制信息,并根据所述转动控制信息对所述车载机器人进行转动控制,车载机器人根据所述转动信息进行转动,由此能够基于车舱的视频流,控制车载机器人转向所述乘员,以使车载机器人在朝向所述乘员的状态下与所述乘员进行交互,从而能够使车载机器人与乘员的交互方式更加符合人与人之间的交互习惯,交互过更加自然,能够提高交互的针对性和流畅性。
图3示出本公开实施例提供的车辆的另一示意图。如图3所示,在一种可能的实现方式中,所述摄像头210包括:第一摄像头211,设置于车舱内,用于采集车舱内的视频流;和/或,第二摄像头212,设置于车舱外,用于采集车舱外的视频流。
其中,所述第一摄像头211可以包括OMS摄像头、DMS摄像头等。所述第一摄像头211的数量可以是一个或多个。所述第一摄像头211可以设置在车舱内的任意位置。作为该实现方式的一个示例,所述第一摄像头211可以安装在以下至少一个位置:仪表板、顶灯、内后视镜、中控台、前挡风玻璃。
所述第二摄像头212的数量可以是一个或多个。作为该实现方式的一个示例,所述第二摄像头212可以安装在以下至少一个位置上:至少一根B柱、至少一个车门、至少一个外后视镜、横梁。例如,所述第二摄像头212可以安装在所述车辆的主驾驶座侧的B柱上。例如,主驾驶座在左侧,则所述第二摄像头212可以安装在所述车辆的左侧的B柱上。又如,所述第二摄像头212可以安装在两根B柱和后备箱门上。作为该实现方式的一个示例,所述第二摄像头212可以采用ToF摄像头、双目摄像头等。
Fig. 4 shows a block diagram of a control apparatus of a vehicle-mounted robot provided by an embodiment of the present disclosure. As shown in Fig. 4, the control apparatus of the vehicle-mounted robot includes: a first acquisition module 41 configured to acquire a video stream of a vehicle cabin; a first determination module 42 configured to determine, based on the video stream, position information of an occupant of the cabin; a first generation module 43 configured to generate, according to the occupant's position information, rotation control information for a rotating component of a vehicle-mounted robot arranged in the cabin; and a rotation control module 44 configured to perform rotation control on the vehicle-mounted robot according to the rotation control information.
In one possible implementation, the vehicle-mounted robot includes a body and the rotating component; the rotation control module 44 is configured to drive, according to the rotation control information, the rotating component of the vehicle-mounted robot to rotate the body of the vehicle-mounted robot.
In one possible implementation, the first generation module 43 is configured to: determine, according to a pre-established mapping relationship between position information and orientations of the vehicle-mounted robot, a target orientation corresponding to the occupant's position information; and perform at least one of the following: generating rotation control information that controls the rotating component of the vehicle-mounted robot arranged in the cabin to rotate so that the vehicle-mounted robot turns to the target orientation; determining a rotation direction and a rotation angle of the rotating component according to the current orientation of the vehicle-mounted robot and the target orientation, and generating rotation control information that controls the rotating component to rotate in that rotation direction by that rotation angle; or determining a rotation direction and a rotation time of the rotating component according to the current orientation of the vehicle-mounted robot, the target orientation and a preset angular velocity of the rotating component, and generating rotation control information that controls the rotating component to rotate in that rotation direction for that rotation time.
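The second and third options above reduce to computing a rotation direction plus either an angle or a duration from the current and target orientations. A minimal sketch, assuming headings in degrees and choosing the shorter rotation arc; the shorter-arc choice and the angle conventions are illustration-level assumptions, not requirements of the disclosure.

```python
from typing import Optional

def rotation_plan(current_deg: float, target_deg: float,
                  omega_deg_s: Optional[float] = None):
    """Return (direction, angle) or, given a preset angular velocity,
    (direction, duration) for the rotating component. Angles in degrees;
    the direction is chosen so the rotation never exceeds 180 degrees."""
    delta = (target_deg - current_deg) % 360.0
    if delta <= 180.0:
        direction, angle = "ccw", delta
    else:
        direction, angle = "cw", 360.0 - delta
    if omega_deg_s is None:
        return direction, angle                  # rotation direction + angle
    return direction, angle / omega_deg_s        # rotation direction + time (s)

# Usage: robot faces 350 deg, occupant sits toward 10 deg, preset speed 40 deg/s.
print(rotation_plan(350.0, 10.0))        # -> ('ccw', 20.0)
print(rotation_plan(350.0, 10.0, 40.0))  # -> ('ccw', 0.5), i.e. rotate for 0.5 s
```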
In one possible implementation, the apparatus further includes: a second generation module configured to generate display control information that controls a display component of the vehicle-mounted robot to show content to the occupant.
In one possible implementation, the first determination module 42 is configured to: determine, in an image coordinate system corresponding to at least one frame of image in the video stream, an image coordinate area in which at least one body part of the occupant is located; and determine the occupant's position information according to the image coordinate area.
In one possible implementation, the occupant's position information includes first relative position information of the occupant in the image; the first determination module 42 is configured to take the image coordinate area as the first relative position information of the occupant in the image.
In one possible implementation, the occupant's position information includes second relative position information of the occupant in the vehicle cabin; the first determination module 42 is configured to determine, according to a mapping relationship between the image coordinate system and a spatial coordinate system of the cabin, the in-cabin spatial coordinate area corresponding to the image coordinate area, and take the in-cabin spatial coordinate area as the second relative position information of the occupant in the cabin.
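One common way to realize such an image-to-cabin mapping is a calibrated homography from the image plane to a reference plane in the cabin. In the sketch below, the matrix values are placeholders, and the use of a planar homography at all is an assumption: the disclosure only requires some pre-established mapping between the two coordinate systems.

```python
import numpy as np

# Placeholder calibration: image plane -> cabin reference plane (e.g. seat plane).
H = np.array([[1.2e-3, 0.0,    -0.6],
              [0.0,    1.1e-3, -0.4],
              [0.0,    0.0,     1.0]])

def image_region_to_cabin_region(box):
    """Map an image coordinate area (x1, y1, x2, y2) of a body part to an
    in-cabin spatial coordinate area by transforming its corner points."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1.0], [x2, y1, 1.0],
                        [x2, y2, 1.0], [x1, y2, 1.0]]).T
    mapped = H @ corners
    mapped = (mapped[:2] / mapped[2]).T          # perspective divide
    mins, maxs = mapped.min(axis=0), mapped.max(axis=0)
    return (*mins, *maxs)                        # axis-aligned cabin-plane area

# Usage: a face box detected at image coordinates (480, 220)-(560, 320).
print(image_region_to_cabin_region((480, 220, 560, 320)))
```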
In one possible implementation, the first generation module 43 is configured to: in response to detecting, based on the video stream, the occupant's intention to enter or exit the vehicle, generate, according to the occupant's position information, rotation control information for the rotating component of the vehicle-mounted robot arranged in the cabin.
In one possible implementation, the apparatus further includes: a face recognition module configured to perform face recognition on the occupant according to the video stream of the cabin; a second determination module configured to determine attribute information of the occupant according to the face recognition result corresponding to the occupant; and a third generation module configured to generate, according to the occupant's attribute information, interaction control information that controls the vehicle-mounted robot to interact with the occupant.
In one possible implementation, the apparatus further includes: an attribute recognition module configured to perform attribute recognition on the occupant based on the video stream to obtain attribute information of the occupant; and a fourth generation module configured to generate interaction control information that controls the vehicle-mounted robot to interact according to the occupant's position information and attribute information.
In one possible implementation, the fourth generation module is configured to: determine, according to the occupant's attribute information, interaction mode information corresponding to the occupant; and generate interaction control information that controls the vehicle-mounted robot to interact according to the occupant's position information and in accordance with the interaction mode information.
In one possible implementation, the attribute recognition includes at least one of age recognition, gender recognition, skin color recognition, emotion recognition or identity recognition, and/or the attribute information includes at least one of age information, gender information, skin color information, emotion information or identity information.
In one possible implementation, the apparatus further includes: a second acquisition module configured to acquire voice information; the first determination module 42 is configured to detect, based on the video stream, the position information of the occupant who issued the voice information among the occupants of the cabin; and the first generation module 43 is configured to generate, according to the position information of the occupant who issued the voice information, rotation control information for the rotating component of the vehicle-mounted robot arranged in the cabin.
In one possible implementation, the apparatus further includes: a third acquisition module configured to acquire voice window control information; a sound source detection module configured to perform sound source localization on the voice window control information and/or perform sound source detection based on the video stream, to determine position information of the occupant who issued the voice window control information; a third determination module configured to determine the target window in the cabin corresponding to that position information; and a fifth generation module configured to generate control information for controlling the target window.
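Sound source localization as mentioned here is typically implemented with a microphone array. A minimal two-microphone sketch follows, using plain cross-correlation to estimate the time difference of arrival (TDOA); the sampling rate, microphone spacing and the choice of plain cross-correlation (rather than, say, GCC-PHAT) are all assumptions for illustration.

```python
import numpy as np

FS = 16_000            # sampling rate (Hz), assumed
MIC_SPACING = 0.3      # distance between the two microphones (m), assumed
SPEED_OF_SOUND = 343.0

def estimate_side(left_ch: np.ndarray, right_ch: np.ndarray) -> str:
    """Estimate whether the speaker sits on the left or right of the cabin
    from the time difference of arrival between two microphones."""
    corr = np.correlate(left_ch, right_ch, mode="full")
    lag = int(np.argmax(corr)) - (len(right_ch) - 1)
    # lag < 0: the sound reached the left microphone first (speaker on the left)
    max_lag = int(FS * MIC_SPACING / SPEED_OF_SOUND)
    if abs(lag) > max_lag:
        return "center"  # implausible delay -> treat as centered / undecided
    return "left" if lag < 0 else "right"

# Usage with a synthetic delayed copy: the speaker is nearer the left microphone.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
left, right = sig, np.concatenate([np.zeros(4), sig[:-4]])  # right delayed 4 samples
print(estimate_side(left, right))  # -> 'left'
```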
In the embodiments of the present disclosure, a video stream of a vehicle cabin is acquired; position information of an occupant of the cabin is determined based on the video stream; rotation control information for a rotating component of a vehicle-mounted robot arranged in the cabin is generated according to the occupant's position information; and rotation control is performed on the vehicle-mounted robot according to the rotation control information. The vehicle-mounted robot can thus be controlled, based on the cabin video stream, to turn toward the occupant, so that it interacts with the occupant while facing the occupant. This makes the interaction between the vehicle-mounted robot and the occupant better conform to interaction habits between people, makes the interaction process more natural, and improves the pertinence and fluency of the interaction.
In some embodiments, the functions of or modules included in the apparatus and the vehicle provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for their specific implementations and technical effects, reference may be made to the descriptions of the method embodiments above, which are not repeated here for brevity.
An embodiment of the present disclosure further provides a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program, including computer-readable code, where when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the above method.
An embodiment of the present disclosure further provides another computer program product for storing computer-readable instructions, where the instructions, when executed, cause a computer to perform the operations of the control method of a vehicle-mounted robot provided by any of the above embodiments.
An embodiment of the present disclosure further provides an electronic device, including: one or more processors; and a memory for storing executable instructions, where the one or more processors are configured to call the executable instructions stored in the memory to execute the above method.
The electronic device may be provided as a terminal, a server or a device in another form. The electronic device may be a controller, a domain controller, a processor or a head unit connected to the vehicle-mounted robot, or may be a device host in an OMS or a DMS for performing data processing operations on images and the like.
Fig. 5 shows a block diagram of an electronic device 800 provided by an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a vehicle-mounted device, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device or a personal digital assistant.
Referring to Fig. 5, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communication, camera operations and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or some of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations on the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, etc. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The power component 806 provides power for the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects for the electronic device 800. For example, the sensor component 814 may detect the on/off state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800); the sensor component 814 may also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as wireless network (Wi-Fi), second-generation mobile communication technology (2G), third-generation mobile communication technology (3G), fourth-generation mobile communication technology (4G)/long term evolution (LTE) of universal mobile telecommunication technology, fifth-generation mobile communication technology (5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements, for executing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
The present disclosure may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used here, is not to be construed as being a transient signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described here can be downloaded from the computer-readable storage medium to respective computing/processing devices, or downloaded to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for executing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, an electronic circuit, for example, a programmable logic circuit, a field-programmable gate array (FPGA) or a programmable logic array (PLA), is customized by utilizing state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby implementing various aspects of the present disclosure.
Various aspects of the present disclosure are described here with reference to flowcharts and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus, thereby producing a machine, such that these instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus and/or other devices to work in a specific manner, so that the computer-readable medium having the instructions stored thereon includes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus or another device, so that a series of operation steps are executed on the computer, the other programmable data processing apparatus or the other device to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus or the other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of systems, methods and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of instructions that contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may also occur in an order different from that noted in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that executes the specified functions or actions, or with a combination of dedicated hardware and computer instructions.
The computer program product may be specifically implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is specifically embodied as a computer storage medium; in another optional embodiment, the computer program product is specifically embodied as a software product, such as a software development kit (SDK).
The embodiments of the present disclosure have been described above. The above descriptions are exemplary rather than exhaustive, and are not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used here are chosen to best explain the principles of the embodiments, their practical applications or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed here.

Claims (20)

  1. A control method of a vehicle-mounted robot, comprising:
    acquiring a video stream of a vehicle cabin;
    determining, based on the video stream, position information of an occupant of the vehicle cabin;
    generating, according to the position information of the occupant, rotation control information for a rotating component of a vehicle-mounted robot arranged in the vehicle cabin; and
    performing rotation control on the vehicle-mounted robot according to the rotation control information.
  2. The method according to claim 1, wherein the vehicle-mounted robot comprises a body and the rotating component; and
    the performing rotation control on the vehicle-mounted robot according to the rotation control information comprises:
    driving, according to the rotation control information, the rotating component of the vehicle-mounted robot to rotate the body of the vehicle-mounted robot.
  3. The method according to claim 1 or 2, wherein the generating, according to the position information of the occupant, rotation control information for the rotating component of the vehicle-mounted robot arranged in the vehicle cabin comprises:
    determining, according to a pre-established mapping relationship between position information and orientations of the vehicle-mounted robot, a target orientation corresponding to the position information of the occupant; and at least one of the following:
    generating rotation control information for controlling the rotating component of the vehicle-mounted robot arranged in the vehicle cabin to rotate so that the vehicle-mounted robot turns to the target orientation;
    determining a rotation direction and a rotation angle of the rotating component of the vehicle-mounted robot according to a current orientation of the vehicle-mounted robot and the target orientation, and generating rotation control information for controlling the rotating component of the vehicle-mounted robot arranged in the vehicle cabin to rotate in the rotation direction by the rotation angle; or
    determining a rotation direction and a rotation time of the rotating component of the vehicle-mounted robot according to the current orientation of the vehicle-mounted robot, the target orientation and a preset angular velocity of the rotating component of the vehicle-mounted robot arranged in the vehicle cabin, and generating rotation control information for controlling the rotating component of the vehicle-mounted robot arranged in the vehicle cabin to rotate in the rotation direction for the rotation time.
  4. The method according to any one of claims 1 to 3, further comprising:
    generating display control information for controlling a display component of the vehicle-mounted robot to show content to the occupant.
  5. The method according to any one of claims 1 to 4, wherein the determining, based on the video stream, position information of an occupant of the vehicle cabin comprises:
    determining, in an image coordinate system corresponding to at least one frame of image in the video stream, an image coordinate area in which at least one body part of the occupant is located; and
    determining the position information of the occupant according to the image coordinate area.
  6. The method according to claim 5, wherein
    the position information of the occupant comprises: first relative position information of the occupant in the image; and
    the determining the position information of the occupant according to the image coordinate area comprises: taking the image coordinate area as the first relative position information of the occupant in the image.
  7. The method according to claim 5 or 6, wherein
    the position information of the occupant comprises: second relative position information of the occupant in the vehicle cabin; and
    the determining the position information of the occupant according to the image coordinate area comprises: determining, according to a mapping relationship between the image coordinate system and a spatial coordinate system of the vehicle cabin, an in-cabin spatial coordinate area corresponding to the image coordinate area, and taking the in-cabin spatial coordinate area as the second relative position information of the occupant in the vehicle cabin.
  8. The method according to any one of claims 1 to 7, wherein the generating, according to the position information of the occupant, rotation control information for the rotating component of the vehicle-mounted robot arranged in the vehicle cabin comprises:
    in response to detecting, based on the video stream, an intention of the occupant to enter or exit the vehicle, generating, according to the position information of the occupant, rotation control information for the rotating component of the vehicle-mounted robot arranged in the vehicle cabin.
  9. The method according to claim 8, further comprising:
    performing face recognition on the occupant according to the video stream of the vehicle cabin;
    determining attribute information of the occupant according to a face recognition result corresponding to the occupant; and
    generating, according to the attribute information of the occupant, interaction control information for controlling the vehicle-mounted robot to interact with the occupant.
  10. The method according to any one of claims 1 to 9, further comprising:
    performing attribute recognition on the occupant based on the video stream to obtain attribute information of the occupant; and
    generating interaction control information for controlling the vehicle-mounted robot to interact according to the position information and the attribute information of the occupant.
  11. The method according to claim 10, wherein the generating interaction control information for controlling the vehicle-mounted robot to interact according to the position information and the attribute information of the occupant comprises:
    determining, according to the attribute information of the occupant, interaction mode information corresponding to the occupant; and
    generating interaction control information for controlling the vehicle-mounted robot to interact according to the position information of the occupant and in accordance with the interaction mode information.
  12. The method according to claim 10 or 11, wherein the attribute recognition comprises at least one of age recognition, gender recognition, skin color recognition, emotion recognition or identity recognition, and/or the attribute information comprises at least one of age information, gender information, skin color information, emotion information or identity information.
  13. The method according to any one of claims 1 to 12, further comprising:
    acquiring voice information;
    wherein the determining, based on the video stream, position information of an occupant of the vehicle cabin comprises:
    detecting, based on the video stream, position information of the occupant who issued the voice information among the occupants of the vehicle cabin; and
    the generating, according to the position information of the occupant, rotation control information for the rotating component of the vehicle-mounted robot arranged in the vehicle cabin comprises:
    generating, according to the position information of the occupant who issued the voice information, rotation control information for the rotating component of the vehicle-mounted robot arranged in the vehicle cabin.
  14. The method according to any one of claims 1 to 13, further comprising:
    acquiring voice window control information;
    performing sound source localization on the voice window control information and/or performing sound source detection based on the video stream, to determine position information of the occupant who issued the voice window control information;
    determining a target window in the vehicle cabin corresponding to the position information of the occupant who issued the voice window control information; and
    generating control information for controlling the target window.
  15. A control apparatus of a vehicle-mounted robot, comprising:
    a first acquisition module configured to acquire a video stream of a vehicle cabin;
    a first determination module configured to determine, based on the video stream, position information of an occupant of the vehicle cabin;
    a first generation module configured to generate, according to the position information of the occupant, rotation control information for a rotating component of a vehicle-mounted robot arranged in the vehicle cabin; and
    a rotation control module configured to perform rotation control on the vehicle-mounted robot according to the rotation control information.
  16. A vehicle, comprising:
    a camera, arranged on the vehicle cabin and configured to capture a video stream of the vehicle cabin;
    a controller, connected to the camera and configured to acquire the video stream of the vehicle cabin from the camera, determine position information of an occupant of the vehicle cabin based on the video stream, generate, according to the position information of the occupant, rotation control information for a rotating component of a vehicle-mounted robot arranged in the vehicle cabin, and perform rotation control on the vehicle-mounted robot according to the rotation control information; and
    the vehicle-mounted robot, connected to the controller, arranged in the vehicle cabin and configured to rotate according to the rotation control information.
  17. The vehicle according to claim 16, wherein the camera comprises:
    a first camera, arranged inside the vehicle cabin and configured to capture a video stream of the cabin interior;
    and/or,
    a second camera, arranged outside the vehicle cabin and configured to capture a video stream of the cabin exterior.
  18. An electronic device, comprising:
    one or more processors; and
    a memory for storing executable instructions;
    wherein the one or more processors are configured to call the executable instructions stored in the memory to execute the method according to any one of claims 1 to 14.
  19. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 14.
  20. A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method according to any one of claims 1 to 14.
PCT/CN2021/078671 2020-09-03 2021-03-02 车载机器人的控制方法及装置、车辆、电子设备和介质 WO2022048118A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010916165.9A CN112026790B (zh) 2020-09-03 2020-09-03 车载机器人的控制方法及装置、车辆、电子设备和介质
CN202010916165.9 2020-09-03

Publications (1)

Publication Number Publication Date
WO2022048118A1 true WO2022048118A1 (zh) 2022-03-10

Family

ID=73591875

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078671 WO2022048118A1 (zh) 2020-09-03 2021-03-02 车载机器人的控制方法及装置、车辆、电子设备和介质

Country Status (2)

Country Link
CN (1) CN112026790B (zh)
WO (1) WO2022048118A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112026790B (zh) * 2020-09-03 2022-04-15 上海商汤临港智能科技有限公司 车载机器人的控制方法及装置、车辆、电子设备和介质
CN113488043B (zh) * 2021-06-30 2023-03-24 上海商汤临港智能科技有限公司 乘员说话检测方法及装置、电子设备和存储介质
CN113486760A (zh) * 2021-06-30 2021-10-08 上海商汤临港智能科技有限公司 对象说话检测方法及装置、电子设备和存储介质
CN113524214A (zh) * 2021-07-16 2021-10-22 广东汇天航空航天科技有限公司 一种交互方法、装置、载人设备和介质
CN115214505B (zh) * 2022-06-29 2024-04-26 重庆长安汽车股份有限公司 车辆座舱音效的控制方法、装置、车辆及存储介质
WO2024113839A1 (zh) * 2022-11-29 2024-06-06 华人运通(上海)云计算科技有限公司 机械臂的控制方法、车辆以及电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030055532A1 (en) * 2001-08-22 2003-03-20 Yoshiaki Sakagami Autonomous action robot
CN101815634A (zh) * 2007-10-04 2010-08-25 日产自动车株式会社 信息提示系统
CN108664123A (zh) * 2017-12-15 2018-10-16 蔚来汽车有限公司 人车交互方法、装置、车载智能控制器及系统
CN109366497A (zh) * 2018-11-12 2019-02-22 奇瑞汽车股份有限公司 车载机器人、车载机器人的控制方法、装置及存储介质
CN110728256A (zh) * 2019-10-22 2020-01-24 上海商汤智能科技有限公司 基于车载数字人的交互方法及装置、存储介质
CN110781799A (zh) * 2019-10-22 2020-02-11 上海商汤智能科技有限公司 车舱内图像处理方法及装置
CN111124123A (zh) * 2019-12-24 2020-05-08 苏州思必驰信息科技有限公司 基于虚拟机器人形象的语音交互方法及装置、车载设备智能控制系统
CN112026790A (zh) * 2020-09-03 2020-12-04 上海商汤临港智能科技有限公司 车载机器人的控制方法及装置、车辆、电子设备和介质

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8831826B2 (en) * 2011-11-16 2014-09-09 Flextronics Ap, Llc Gesture recognition for on-board display
US20130204457A1 (en) * 2012-02-06 2013-08-08 Ford Global Technologies, Llc Interacting with vehicle controls through gesture recognition
US9082239B2 (en) * 2012-03-14 2015-07-14 Flextronics Ap, Llc Intelligent vehicle for assisting vehicle occupants
US9230556B2 (en) * 2012-06-05 2016-01-05 Apple Inc. Voice instructions during navigation
CN104085395A (zh) * 2013-04-01 2014-10-08 德尔福电子(苏州)有限公司 一种基于鸟瞰系统的辅助泊车方法
CN103488299B (zh) * 2013-10-15 2016-11-23 大连市恒芯科技有限公司 一种融合人脸和手势的智能终端人机交互方法
KR20150076627A (ko) * 2013-12-27 2015-07-07 한국전자통신연구원 차량 운전 학습 시스템 및 방법
JP2016162164A (ja) * 2015-03-02 2016-09-05 シャープ株式会社 操作装置および操作方法
US10169995B2 (en) * 2015-09-25 2019-01-01 International Business Machines Corporation Automatic selection of parking spaces based on parking space attributes, driver preferences, and vehicle information
US9764694B2 (en) * 2015-10-27 2017-09-19 Thunder Power Hong Kong Ltd. Intelligent rear-view mirror system
JP6583199B2 (ja) * 2016-09-27 2019-10-02 株式会社デンソー 運転交代制御装置、及び運転交代制御方法
JP6643969B2 (ja) * 2016-11-01 2020-02-12 矢崎総業株式会社 車両用表示装置
KR20180056867A (ko) * 2016-11-21 2018-05-30 엘지전자 주식회사 디스플레이 장치 및 그의 동작 방법
KR101982774B1 (ko) * 2016-11-29 2019-05-27 엘지전자 주식회사 자율 주행 차량
US10272925B1 (en) * 2017-10-30 2019-04-30 Ford Global Technologies, Llc Integrated performance braking
CN109050396A (zh) * 2018-07-16 2018-12-21 浙江合众新能源汽车有限公司 一种车载智能机器人
WO2020017716A1 (ko) * 2018-07-20 2020-01-23 엘지전자 주식회사 차량용 로봇 및 상기 로봇의 제어 방법
CN109545219A (zh) * 2019-01-09 2019-03-29 北京新能源汽车股份有限公司 车载语音交互方法、系统、设备及计算机可读存储介质
CN109960407A (zh) * 2019-03-06 2019-07-02 中山安信通机器人制造有限公司 一种车载机器人主动交互的方法、计算机装置以及计算机可读存储介质
CN110502116A (zh) * 2019-08-20 2019-11-26 广东远峰汽车电子有限公司 汽车情感机器人与乘车人员的互动方法及装置
CN110992946A (zh) * 2019-11-01 2020-04-10 上海博泰悦臻电子设备制造有限公司 一种语音控制方法、终端及计算机可读存储介质
CN111016785A (zh) * 2019-11-26 2020-04-17 惠州市德赛西威智能交通技术研究院有限公司 一种基于人眼位置的平视显示系统调节方法
CN111325129A (zh) * 2020-02-14 2020-06-23 上海商汤智能科技有限公司 交通工具通勤控制方法及装置、电子设备、介质和车辆

Also Published As

Publication number Publication date
CN112026790B (zh) 2022-04-15
CN112026790A (zh) 2020-12-04

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863189

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/08/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21863189

Country of ref document: EP

Kind code of ref document: A1