WO2024113839A1 - Control method for a robotic arm, vehicle, and electronic device - Google Patents
Control method for a robotic arm, vehicle, and electronic device
- Publication number
- WO2024113839A1 (PCT/CN2023/104108)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- target
- cabin
- action
- screen
- Prior art date
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/023—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Definitions
- the present application relates to the field of artificial intelligence, and in particular to a control method for a robotic arm, a vehicle, and an electronic device.
- As automatic control devices that can imitate the functions of human arms and complete various tasks, robotic arms have been widely used in industrial manufacturing, medical rescue, aerospace, and other fields. However, installing robotic arms in vehicle cabins to provide intelligent services to the occupants is a field that few have explored.
- the present application provides a control method for a robotic arm, a vehicle, and an electronic device to solve the problems existing in the related art.
- a method for controlling a robotic arm, including: when it is detected that a target person makes a clapping action, determining the sound source position of the clapping sound, the clapping sound being the sound generated by the clapping action; based on the sound source position, controlling the vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located; and controlling the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area.
- a vehicle comprising: an on-board server, an on-board screen robotic arm and an on-board screen; the on-board server is used to execute the control method of the robotic arm provided in an embodiment of the present application to control the on-board screen robotic arm to perform a target setting action in a target cabin area; the on-board screen robotic arm is used to perform a target setting action in a target cabin area under the control of the on-board server; and the on-board screen is connected to the on-board screen robotic arm.
- an electronic device comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the method of any embodiment of the present application.
- FIG1 shows a flow chart of a method for controlling a robotic arm provided in an embodiment of the present application.
- Figure 2 shows a schematic diagram of a vehicle-mounted screen robotic arm provided in an embodiment of the present application.
- FIG3 shows a schematic diagram of a vehicle provided in an embodiment of the present application.
- FIG. 4 shows a block diagram of an electronic device used to implement the control method of the robotic arm provided in an embodiment of the present application.
- FIG. 1 is a flow chart of the control method of the robotic arm provided in the embodiment of the present disclosure.
- the control method of the mechanical arm shown in Figure 1 includes the following steps.
- Step S101: when it is detected that the target person makes a clapping action, determine the sound source position of the clapping sound, the clapping sound being the sound generated by the clapping action.
- Step S102: based on the sound source position, control the vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located.
- Step S103: control the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area.
- the control method of the robotic arm provided in the embodiment of the present application determines the sound source position of the clapping sound when the target person makes a clapping action. After the sound source position is determined, the vehicle-mounted screen robotic arm is controlled to move to the target cabin area where the sound source position is located, and is then controlled to perform the target setting action in that area.
- in this way, the vehicle-mounted screen robotic arm can be moved to the target cabin area where the sound source is located and made to perform the target setting action automatically, without the occupants having to control it manually, thereby improving the user experience of the occupants.
- moreover, after the vehicle-mounted screen robotic arm has been moved to the target cabin area, it can be controlled to perform the target setting action there. The robotic arm can therefore interact with the occupants based on their clapping actions, improving the driving or riding pleasure of the occupants and further improving their user experience.
- the executor of the control method of the robotic arm provided in the embodiment of the present application is generally an on-board server on the vehicle, or it may be a cloud server corresponding to the vehicle.
- the present application places no specific limitation on the executor of the control method of the robotic arm.
- the so-called target person is a person on the vehicle, which includes the vehicle driver and the passengers.
- the passengers can be further divided according to the number of seats in the vehicle. For example, when the number of seats in the vehicle is 5, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the rear row of the vehicle, the passenger sitting in the middle of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- when the number of seats in the vehicle is 7, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the middle row of the vehicle, the passenger sitting on the right side of the middle row of the vehicle, the passenger sitting on the left side of the rear row of the vehicle, the passenger sitting in the middle of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- when the number of seats in the vehicle is 4, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- the so-called clapping action generally refers to the action made by the target person clapping his or her palms.
- there are many kinds of clapping actions.
- for example, a clapping action made with the palms of the hands striking together, and a clapping action made with the backs of the hands striking together.
- another example is a single clapping action and a continuous clapping action.
- a designated clapping action can be pre-configured, and it can be set that only when the physical action made by a person on the vehicle is the pre-configured designated clapping action is the target person considered to have been detected making a clapping action.
- the specific process of detecting whether the target person has made a clapping action can be as follows: first, the vehicle-mounted image acquisition device collects a sequence of image frames of the target person within a specified time period; then, from the image frame sequence, the physical action made by the target person within the specified time period is determined; finally, it is determined whether the physical action is a clapping action. If so, it is determined that the target person has been detected making a clapping action; if not, it is determined that the target person has not been detected making a clapping action.
- the above detection can be implemented in the following manner: input the image frame sequence to be recognized into a trained action recognition model, and obtain the recognition result output by the action recognition model.
- the recognition result is either that the target person is detected making a clapping action, or that the target person is not detected making a clapping action.
- the action recognition model is trained using image sequence samples and corresponding annotated recognition results; for an image frame sequence to be recognized, it determines the body movement made by a specific person and outputs the recognition result.
- determining whether the body movement is a clapping action can be implemented as: determining whether the body movement matches a pre-configured clapping action; if so, it is determined that the target person is detected making the clapping action; if not, it is determined that the target person is not detected making the clapping action.
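- as an illustrative sketch only (not part of the patent disclosure), the detection logic above can be expressed as follows; the `predict` API, label names, and confidence threshold are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Sequence

# Hypothetical labels the action-recognition model might emit; only
# pre-configured clapping actions count as a detection.
CLAP_ACTIONS = {"palms_together", "backs_together", "single_clap", "continuous_clap"}

@dataclass
class RecognitionResult:
    action_label: str   # body movement recognized from the frame sequence
    confidence: float   # model confidence in [0, 1]

def detect_clap(frames: Sequence[bytes], model, threshold: float = 0.8) -> bool:
    """Run a trained action-recognition model on an image frame sequence
    and decide whether the target person made a pre-configured clapping action."""
    result: RecognitionResult = model.predict(frames)  # hypothetical model API
    if result.confidence < threshold:
        return False
    return result.action_label in CLAP_ACTIONS
```

- any real implementation would also need to handle multiple occupants in frame and model failure cases, which the patent leaves to the implementer.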
- determining the sound source position of the clapping sound (the clapping sound being the sound generated by the clapping action) can be implemented by the following steps:
- determine the body movement made by the target person within the specified time period;
- obtain the specified audio collected by the vehicle-mounted audio acquisition device during the time period in which the target person makes the clapping movement;
- identify the clapping sound in the specified audio;
- localize the clapping sound to determine the sound source position.
- because the specified audio is collected during the time period in which the target person makes the clapping action, and the clapping sound is identified within that audio, it can be ensured as far as possible that the identified clapping sound is the sound produced by the target person's clapping action, thereby ensuring the accuracy of the sound source position and, in turn, smooth control of the vehicle-mounted screen robotic arm.
- localizing the clapping sound to determine the sound source position may be done as follows: first, perform sound source localization on the clapping sound to determine the position of the clapping sound in the vehicle cabin; then, determine that position as the sound source position. Directly taking the in-cabin position of the clapping sound as the sound source position improves the efficiency of determining the sound source position, and thus the efficiency of controlling the vehicle-mounted screen robotic arm.
- the sound source position can also be determined in the following manner: first, based on the image area occupied by the target person in the image frame sequence, determine the cabin area where the target person is located; then, perform sound source localization on the clapping sound to determine its position in the cabin; finally, determine the sound source position based on both the cabin area where the target person is located and the position of the clapping sound in the cabin.
- the process of determining the cabin area where the target person is located based on the image area in the image frame sequence is: determining the cabin area where the target person is located based on the correspondence between image areas in the image frame sequence and cabin areas.
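- a minimal sketch of the fusion step above, under the assumption (not stated in the patent) that disagreement between the two modalities is resolved in favor of the camera-derived area; the area names and `(row, side)` keys are illustrative:

```python
# Cabin areas keyed by a coarse (row, side) audio localization result,
# using an assumed 5-seat layout.
AREA_OF_POSITION = {
    ("front", "left"): "driver_area",
    ("front", "right"): "copilot_area",
    ("rear", "left"): "rear_left_area",
    ("rear", "middle"): "rear_middle_area",
    ("rear", "right"): "rear_right_area",
}

def fuse_sound_source(person_area: str, clap_position: tuple) -> str:
    """Combine the image-derived cabin area of the target person with the
    audio-derived clap position; prefer the person's area when they disagree."""
    audio_area = AREA_OF_POSITION.get(clap_position)
    return audio_area if audio_area == person_area else person_area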
- the cabin area is generally an area of the cabin divided according to the seats of the vehicle; that is, the target cabin area is one of the cabin areas where the vehicle seats are located. Specifically, the cabin can be divided into different cabin areas according to the number of seats in the vehicle. For example, when the number of seats in the vehicle is 5, the cabin can be divided into: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left rear seat of the vehicle is located, the cabin area where the middle rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
- when the number of seats in the vehicle is 7, the cabin can be divided into: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left middle seat of the vehicle is located, the cabin area where the right middle seat of the vehicle is located, the cabin area where the left rear seat of the vehicle is located, the cabin area where the middle rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
- when the number of seats in the vehicle is 4, the cabin can be divided into: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
- correspondingly, when the number of seats in the vehicle is 5, the target cabin area may be one of the following cabin areas: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left rear seat of the vehicle is located, the cabin area where the middle rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
- when the number of seats in the vehicle is 7, the target cabin area may be one of the following cabin areas: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left middle seat of the vehicle is located, the cabin area where the right middle seat of the vehicle is located, the cabin area where the left rear seat of the vehicle is located, the cabin area where the middle rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
- when the number of seats in the vehicle is 4, the target cabin area may be one of the following cabin areas: the cabin area where the main driver's seat is located, the cabin area where the co-driver's seat is located, the cabin area where the left rear seat of the vehicle is located, and the cabin area where the right rear seat of the vehicle is located.
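- the seat-count-dependent division above can be sketched as a simple lookup; only the 5-, 7-, and 4-seat layouts described in the embodiment are covered, and the area names are illustrative:

```python
def cabin_areas(seat_count: int) -> list:
    """Return the cabin areas for the seat layouts described above.
    Other seat counts are not specified in the embodiment, so they raise."""
    layouts = {
        5: ["driver", "copilot", "rear_left", "rear_middle", "rear_right"],
        7: ["driver", "copilot", "middle_left", "middle_right",
            "rear_left", "rear_middle", "rear_right"],
        4: ["driver", "copilot", "rear_left", "rear_right"],
    }
    if seat_count not in layouts:
        raise ValueError(f"no cabin-area layout configured for {seat_count} seats")
    return layouts[seat_count]
```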
- the number of vehicle-mounted image acquisition devices and vehicle-mounted audio acquisition devices needs to be configured according to the number of seats in the vehicle. The more seats the vehicle has, the more vehicle-mounted image acquisition devices and vehicle-mounted audio acquisition devices need to be configured.
- the so-called vehicle-mounted screen robotic arm refers to a robotic arm installed in the vehicle cabin for carrying and controlling the vehicle-mounted screen.
- generally, the vehicle-mounted screen robotic arm carries a vehicle-mounted screen, but in special cases it may not carry one.
- the vehicle-mounted screen robotic arm can be controlled to perform target setting actions in the target cabin area.
- Figure 2 is a schematic diagram of a vehicle-mounted screen robotic arm provided in the embodiment of the present application.
- the vehicle-mounted screen robotic arm in the embodiment of the present application specifically includes: a multi-degree-of-freedom adjustment mechanism fixed to the back of the vehicle-mounted screen, and a plurality of telescopic units installed on the multi-degree-of-freedom adjustment mechanism.
- the plurality of telescopic units are used to drive the vehicle-mounted screen to flip up, down, left, and right,
- and the multi-degree-of-freedom adjustment mechanism is used to drive the vehicle-mounted screen to rotate and translate, wherein up, down, left, and right refer to the vehicle-mounted screen tilting backward at its top, bottom, left, or right edge relative to the initial position in which the screen stands vertical facing the user.
- the vehicle-mounted screen robotic arm has three controllable degrees of freedom in three-dimensional space.
- the following is a detailed description of the three controllable degrees of freedom of the vehicle-mounted screen robotic arm in three-dimensional space, in conjunction with the three-dimensional space coordinate system shown in FIG. 2.
- specifically, the three controllable degrees of freedom of the vehicle-mounted screen robotic arm in three-dimensional space are: free rotation around the x-axis, free rotation around the y-axis, and free rotation around the z-axis of the three-dimensional space.
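- as a worked illustration (not part of the patent disclosure), the three rotational degrees of freedom correspond to the standard rotation matrices about the x-, y-, and z-axes; the coordinate convention follows the later embodiment, with the z-axis pointing to the roof and the x-axis pointing to the rear of the vehicle:

```python
import numpy as np

def rotation(axis: str, theta: float) -> np.ndarray:
    """Standard 3x3 rotation matrix about the x-, y-, or z-axis (radians),
    illustrating one of the arm's three rotational degrees of freedom."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    if axis == "z":
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    raise ValueError(f"unknown axis {axis!r}")
```

- for example, rotating the rear-pointing x unit vector by 90° about the roof-pointing z-axis yields the y unit vector.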
- the number of vehicle-mounted screen robotic arms is generally one, but it can also be multiple.
- the vehicle-mounted screen robotic arm can be used alone to control the movement of the display screen, or it can be used in conjunction with the vehicle-mounted infotainment system, instrument panel, head-up display, streaming media rearview mirror, ambient light, smart doors, and smart speakers in the vehicle cabin to complete corresponding preset actions in preset scenes.
- for example, a left-and-right swing action can be performed in conjunction with the flashing of the ambient light.
- a preset action corresponding to the clapping action can be obtained first, and then determined as the target setting action, so as to control the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area.
- because the target setting action is determined according to the preset action corresponding to the clapping action, the vehicle-mounted screen robotic arm can be controlled to perform different setting actions when the target person makes different clapping actions. This enhances the interactivity between the target person and the vehicle-mounted screen robotic arm, improving the driving or riding pleasure of the occupants and further improving their user experience.
- the method for obtaining the preset action corresponding to the clapping action is: first, obtain a preset first correspondence list, which includes the correspondences between preset clapping actions and preset actions; then, determine the preset action corresponding to the detected clapping action in the first correspondence list.
- for example, the correspondences included in the first correspondence list are: when the clapping action is the first type of clapping action, execute the first preset action; when the clapping action is the second type, execute the second preset action; when the clapping action is the third type, execute the third preset action; when the clapping action is the fourth type, execute the fourth preset action; when the clapping action is the fifth type, execute the fifth preset action.
- in this example, when the detected clapping action is the third type of clapping action, determining the preset action corresponding to the clapping action as the target setting action means determining the third preset action as the target setting action.
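- the first correspondence list amounts to a lookup table from clapping-action type to preset action; a minimal sketch, with assumed key and value names:

```python
# Sketch of the first correspondence list: clapping-action type -> preset action.
FIRST_CORRESPONDENCE = {
    "type_1": "preset_action_1",
    "type_2": "preset_action_2",
    "type_3": "preset_action_3",
    "type_4": "preset_action_4",
    "type_5": "preset_action_5",
}

def target_setting_action(clap_type: str) -> str:
    """Look up the preset action for the detected clapping-action type."""
    try:
        return FIRST_CORRESPONDENCE[clap_type]
    except KeyError:
        raise KeyError(f"no preset action configured for clapping action {clap_type!r}")
```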
- alternatively, the preset action corresponding to the target cabin area can be obtained first, and then determined as the target setting action, so as to control the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area.
- because the target setting action is determined according to the preset action corresponding to the target cabin area, the vehicle-mounted screen robotic arm can be controlled to perform different setting actions in different cabin areas, so the setting actions performed in different cabin areas can be personalized. In this way, the driving or riding pleasure of the occupants can be improved, as can their user experience.
- the method of obtaining the preset action corresponding to the target cabin area is as follows: first, obtain a preset second correspondence list, which includes the correspondences between cabin areas and preset actions; then, determine the preset action corresponding to the target cabin area in the second correspondence list.
- for example, the correspondences included in the second correspondence list are: when the target cabin area is cabin area 1, execute preset action 1; when the target cabin area is cabin area 2, execute preset action 2; when the target cabin area is cabin area 3, execute preset action 3; when the target cabin area is cabin area 4, execute preset action 4; when the target cabin area is cabin area 5, execute preset action 5.
- in this example, when the target cabin area is cabin area 2, determining the preset action corresponding to the target cabin area as the target setting action means determining preset action 2 as the target setting action.
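- the second correspondence list follows the same pattern, keyed by cabin area instead of clapping-action type; again a sketch with assumed names:

```python
# Sketch of the second correspondence list: cabin area -> preset action.
SECOND_CORRESPONDENCE = {f"cabin_area_{i}": f"preset_action_{i}" for i in range(1, 6)}

def action_for_area(target_area: str) -> str:
    """Look up the preset action configured for the target cabin area."""
    return SECOND_CORRESPONDENCE[target_area]
```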
- the vehicle-mounted server or the cloud server corresponding to the vehicle can control the vehicle-mounted screen robotic arm by issuing control instructions.
- in-vehicle screens have become standard equipment for many vehicles. And as the functions of in-vehicle information systems become more and more powerful, in-vehicle screens can provide more and more intelligent services. Therefore, the occupants of the vehicle are becoming more and more dependent on in-vehicle screens.
- the control method of the robotic arm provided in the first embodiment of the present application can, after controlling the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area, further determine the setting position information corresponding to the vehicle-mounted screen according to the target person's position-setting requirements for the screen, and then use the setting position information to control the vehicle-mounted screen robotic arm to adjust the screen, wherein the setting position information indicates the spatial position at which the vehicle-mounted screen is set in the cabin.
- specifically, after controlling the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area, first determine the setting position information corresponding to the vehicle-mounted screen according to the target person's position-setting requirements for the screen, and then use the setting position information to control the vehicle-mounted screen robotic arm to adjust the screen.
- the so-called target personnel include vehicle drivers and passengers.
- the passengers can be further divided according to the number of seats in the vehicle. For example, when the number of seats in the vehicle is 5, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the rear row of the vehicle, the passenger sitting in the middle of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- when the number of seats in the vehicle is 7, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the middle row of the vehicle, the passenger sitting on the right side of the middle row of the vehicle, the passenger sitting on the left side of the rear row of the vehicle, the passenger sitting in the middle of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- when the number of seats in the vehicle is 4, the passengers can be further divided into: the passenger sitting in the co-pilot's seat, the passenger sitting on the left side of the rear row of the vehicle, and the passenger sitting on the right side of the rear row of the vehicle.
- the so-called vehicle-mounted screen robotic arm refers to a robotic arm installed in the vehicle cabin for carrying and controlling the vehicle-mounted screen. Please refer to Figure 2 again.
- the vehicle-mounted screen robotic arm in the second embodiment of the present application also includes: a multi-degree-of-freedom adjustment mechanism fixed to the back of the vehicle-mounted screen, and a plurality of telescopic units installed on the multi-degree-of-freedom adjustment mechanism.
- the plurality of telescopic units are used to drive the vehicle-mounted screen to flip up, down, left, and right, and the multi-degree-of-freedom adjustment mechanism is used to drive the vehicle-mounted screen to rotate and translate, wherein up, down, left, and right refer to the vehicle-mounted screen tilting backward at its top, bottom, left, or right edge relative to the initial position in which the screen stands vertical facing the user.
- the vehicle-mounted screen robotic arm has three controllable degrees of freedom in three-dimensional space.
- the vehicle-mounted screen robotic arm can rotate freely around the z-axis in the corresponding three-dimensional space, where the z-axis is the coordinate axis pointing to the roof in the three-dimensional space coordinate system constructed for the three-dimensional space, the x-axis in the three-dimensional space coordinate system points to the rear of the vehicle, and the y-axis in the three-dimensional space coordinate system is perpendicular to the plane formed by the x-axis and the z-axis.
- vehicle-mounted screen robotic arm having three controllable degrees of freedom in three-dimensional space and the vehicle-mounted screen robotic arm being able to rotate freely around the z-axis in the corresponding three-dimensional space, in conjunction with the three-dimensional space coordinate system shown in FIG. 2.
- the vehicle-mounted screen robotic arm has three controllable degrees of freedom in the three-dimensional space: the vehicle-mounted screen robotic arm can rotate freely around the x-axis in the three-dimensional space, the vehicle-mounted screen robotic arm can rotate freely around the y-axis in the three-dimensional space, and the vehicle-mounted screen robotic arm can rotate freely around the z-axis in the three-dimensional space.
- the vehicle-mounted screen robotic arm has three controllable degrees of freedom in three-dimensional space, which can still mean: the vehicle-mounted screen robotic arm can rotate freely around the x-axis in three-dimensional space, the vehicle-mounted screen robotic arm can rotate freely around the y-axis in three-dimensional space, and the vehicle-mounted screen robotic arm can rotate freely around the z-axis in three-dimensional space.
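The three rotational degrees of freedom described above can be modeled with standard rotation matrices about the x-, y- and z-axes. The sketch below is illustrative only (the function names and the pure-Python matrix form are assumptions, not part of the application); the axis convention follows the coordinate system described here: x toward the rear of the vehicle, z toward the roof.

```python
import math

def rot_x(a):
    """Rotation matrix about the x-axis (pointing to the rear of the vehicle)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Rotation matrix about the y-axis (perpendicular to the x-z plane)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    """Rotation matrix about the z-axis (pointing to the roof)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Composing these rotations would, for instance, first tilt the screen about the x-axis and then swivel it around the z-axis toward a seat.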
- the number of vehicle-mounted screen robotic arms is generally one, but it can also be multiple.
- the number of vehicle-mounted screens is generally one, but it can also be multiple.
- the vehicle-mounted screen robotic arm can be used alone to control the movement of the display screen, or it can be coordinated with the vehicle-mounted infotainment system, instrument panel, head-up display, streaming-media rearview mirror, ambient light, smart doors and smart speakers in the vehicle cabin to complete corresponding preset actions in preset scenarios. For example, a left-and-right swinging action is performed in conjunction with the flashing of the ambient light.
- the vehicle-mounted screen robotic arm in the second embodiment of the present application has three controllable degrees of freedom in three-dimensional space. Therefore, the vehicle-mounted screen robotic arm can control the vehicle-mounted screen in three degrees of freedom.
- the spatial position in the second embodiment of the present application refers to the three-dimensional spatial position of the vehicle-mounted screen in the vehicle cabin, including but not limited to: the horizontal position of the vehicle-mounted screen in the vehicle cabin, the vertical position of the vehicle-mounted screen in the vehicle cabin, and the orientation of the vehicle-mounted screen in the vehicle cabin.
- the vehicle-mounted screen robotic arm in the second embodiment of the present application can not only set the horizontal position and vertical position of the vehicle-mounted screen in the vehicle cabin, but also set the orientation of the vehicle-mounted screen in the vehicle cabin.
- the so-called orientation of the vehicle-mounted screen in the cabin specifically includes at least one of the orientation of the vehicle-mounted screen relative to the horizontal plane and the orientation of the vehicle-mounted screen relative to the vertical plane.
- the orientation of the vehicle screen relative to the horizontal plane is represented by the first inclination angle of the vehicle screen relative to the horizontal plane.
- the orientation of the vehicle screen relative to the vertical plane is represented by the second inclination angle of the vehicle screen relative to the vertical plane.
- the setting position information refers to the three-dimensional spatial position information of the vehicle screen in the vehicle cabin, including but not limited to: the horizontal position information of the vehicle screen in the vehicle cabin, the vertical position information of the vehicle screen in the vehicle cabin and the orientation information of the vehicle screen in the vehicle cabin.
- the location setting requirement includes at least one of a target person's setting requirement for a first tilt angle of the vehicle screen relative to a horizontal plane and a target person's setting requirement for a second tilt angle of the vehicle screen relative to a vertical plane.
- the screen control method provided in the second embodiment of the present application can set the vehicle-mounted screen more flexibly.
- the target person's position in the cabin can be determined in response to the voice control command triggered by the target person for the vehicle-mounted screen. Then, based on the target person's position in the cabin and the voice control command, the position setting requirements are determined. After that, based on the position setting requirements, the spatial position of the vehicle-mounted screen in the cabin is determined. Finally, based on the spatial position of the vehicle-mounted screen in the cabin, the setting position information is obtained.
- the position setting requirement is determined, and then the spatial position of the vehicle screen in the vehicle cabin is determined, so that the target person can more conveniently control the spatial position of the vehicle screen in the vehicle cabin. Therefore, the screen control method provided in the second embodiment of the present application can bring a better vehicle screen control experience to the target person, and enable the vehicle screen to provide more convenient services to the target person.
- the target person's position in the vehicle cabin may be determined by performing sound source localization on the voice control command to determine the target person's position in the vehicle cabin.
- the target person's position in the vehicle cabin includes the vehicle seat position where the target person is located.
- the specific implementation of determining the position setting requirement based on the position of the target person in the cabin and the voice control command is as follows: first, when the position of the target person in the cabin is determined, the voice control command is parsed to obtain the target person's position adjustment requirement for the vehicle screen. Then, the current spatial position of the vehicle screen in the cabin is determined. Finally, the position setting requirement is determined based on the position of the target person in the cabin, the current spatial position of the vehicle screen in the cabin, and the target person's position adjustment requirement for the vehicle screen.
- the corresponding default setting position of the vehicle screen is pre-configured for different vehicle seat positions.
- the target person's voice control command is a control command for indicating that he or she needs to use the vehicle screen, or is a control command for indicating that the person sitting in another seat needs to use the vehicle screen.
- the determined position setting requirements are: setting the vehicle screen at the default setting position of the vehicle screen pre-configured for the vehicle seat position where the target person is located, or setting the vehicle screen at the default setting position of the vehicle screen pre-configured for the vehicle seat position where other target persons are located.
- the determined position setting requirement is: set the vehicle screen to the vehicle screen default setting position pre-configured for the main driver's seat.
- when the target person's vehicle seat is the main driver's seat, if the target person's voice control command is "The person in the co-pilot seat needs to use the vehicle screen" or "The vehicle screen plays a video for the person in the co-pilot seat", etc., then the determined position setting requirement is: set the vehicle screen to the vehicle screen default setting position pre-configured for the co-pilot seat.
- the voice control command is a command for instructing to adjust the setting position of the vehicle screen
- the target person's voice control command is "Move the vehicle screen closer"
- the position setting requirement is: set the vehicle screen at a position closer to the target person than the current spatial position.
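The decision logic in the preceding bullets can be sketched as a small lookup plus an adjustment step. Everything below is a hypothetical simplification: the seat names, default coordinates, intent strings and the 0.1 m step are invented for illustration, and the application itself leaves the parsing and geometry unspecified.

```python
# Hypothetical default screen positions per seat, as (x, y, z) cabin
# coordinates with x pointing toward the rear of the vehicle.
DEFAULT_POSITIONS = {
    "driver": (0.4, -0.3, 1.0),
    "co-pilot": (0.4, 0.3, 1.0),
}

def position_setting_requirement(seat, command, current_pos):
    """Map a parsed voice command to a target screen position.

    `seat` is the seat the requirement refers to, `command` a simplified
    intent string, and `current_pos` the screen's current (x, y, z) position.
    """
    if command == "use_screen":
        # Fall back to the default position pre-configured for that seat.
        return DEFAULT_POSITIONS[seat]
    if command == "move_closer":
        # Move 0.1 m toward the occupants (x-axis points to the rear).
        x, y, z = current_pos
        return (x + 0.1, y, z)
    raise ValueError(f"unknown command: {command}")
```

A real system would derive `command` from speech recognition and would clamp the result to the arm's reachable workspace.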
- the implementation of determining the position of the target person in the vehicle cabin can be: first, determine the identity of the target person based on the voiceprint feature information corresponding to the voice control command. Then, based on the identity, determine the person position of the target person in the face image collected by the vehicle-mounted image acquisition device. Finally, based on the person position, determine the position of the target person in the vehicle cabin.
- the two position determination methods, namely determining the position of the target person in the cabin by sound source localization and determining the position of the target person in the cabin from the person position, can be combined to determine the position of the target person in the cabin.
- the position of the target person in the cabin is considered determined only when the position of the target person in the cabin determined by sound source localization is consistent with the position of the target person in the cabin determined from the person position.
- the process of determining the position of the target person in the cabin is as follows: first, the correspondence between the image area in the face image and the cabin area is obtained. Then, the image area where the person's position is located in the face image is determined. Finally, the cabin area corresponding to the image area where the person's position is located in the face image is determined in the cabin, and the position of the target person in the cabin is determined based on the cabin area.
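The correspondence between image areas and cabin areas described above can be represented as a simple lookup table. A minimal sketch, assuming the in-cabin camera image is partitioned into horizontal pixel bands (the pixel boundaries and area names are made up for illustration):

```python
# Hypothetical mapping from pixel column ranges of the in-cabin camera
# image to cabin areas; boundaries are invented for illustration.
IMAGE_TO_CABIN = [
    ((0, 320), "driver seat area"),
    ((320, 640), "co-pilot seat area"),
    ((640, 960), "rear-left seat area"),
]

def cabin_area_for(person_x):
    """Return the cabin area whose image region contains the person's
    horizontal pixel position, or None when no region matches."""
    for (lo, hi), area in IMAGE_TO_CABIN:
        if lo <= person_x < hi:
            return area
    return None
```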
- the screen control method provided in the second embodiment of the present application can also control the vehicle screen to provide corresponding personalized services for the target person based on the identity of the person after the vehicle screen mechanical arm is controlled to adjust the vehicle screen. For example, if the identity of the target person is user 1, and user 1 likes to watch TV series, after the vehicle screen mechanical arm is controlled to adjust the vehicle screen, the vehicle screen can be controlled to automatically play the TV series that user 1 likes to watch.
- the target person's eye tracking can be performed first to determine the target person's line of sight focus. Then, based on the line of sight focus, the position setting requirements are determined. Then, based on the position setting requirements, the spatial position of the vehicle-mounted screen in the cabin is determined. Finally, based on the spatial position of the vehicle-mounted screen in the cabin, the setting position information is obtained.
- the position setting requirements are determined, and further based on the position setting requirements, the spatial position of the vehicle screen in the cabin is determined, so that the spatial position of the vehicle screen in the cabin can be more in line with the user's visual field requirements, so that the target person can have a better viewing angle and visual range when using the vehicle screen. Therefore, while enabling the target person to control the vehicle screen more flexibly, more convenient services can also be provided to the target person.
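One way to turn a gaze estimate into a screen pose is to place the screen a fixed distance along the line of sight and orient it back toward the viewer. The geometric sketch below rests on that assumption; the 0.6 m distance and the vector formulation are illustrative, as the application does not specify the computation.

```python
import math

def screen_pose_from_gaze(eye_pos, gaze_dir, distance=0.6):
    """Place the screen `distance` metres along the gaze ray and face it
    back toward the eyes; returns (position, facing_unit_vector)."""
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    unit = tuple(c / norm for c in gaze_dir)
    position = tuple(e + distance * u for e, u in zip(eye_pos, unit))
    facing = tuple(-u for u in unit)  # screen normal points back at the viewer
    return position, facing
```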
- after the vehicle-mounted screen mechanical arm is controlled to adjust the vehicle-mounted screen, the vehicle-mounted screen can also be controlled to switch the screen brightness and the screen mode.
- when the target person is a child, after the vehicle-mounted screen mechanical arm is controlled to adjust the vehicle-mounted screen, the vehicle-mounted screen can be further controlled to switch the screen mode to a children's entertainment screen.
- the screen brightness can be further adjusted so that the adjusted screen brightness can be more suitable for the current cabin environment.
- the vehicle-mounted server and the cloud server corresponding to the vehicle can control the vehicle-mounted screen robotic arm by issuing control instructions.
- the embodiment of the present application also provides a control device for a robotic arm, which is applied to a vehicle and includes: a sound source position determination module, which is used to determine the sound source position of the clapping sound when a target person is detected making a clapping action, and the clapping sound is the sound generated by the clapping action; a first control module, which is used to control the vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located based on the sound source position; and a second control module, which is used to control the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area.
- a sound source position determination module which is used to determine the sound source position of the clapping sound when a target person is detected making a clapping action, and the clapping sound is the sound generated by the clapping action
- a first control module which is used to control the vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located based on the sound source position
- the second control module is specifically used to obtain a preset action corresponding to the clapping action; determine the preset action corresponding to the clapping action as the target setting action to control the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area.
- obtaining a preset action corresponding to a clapping action includes: obtaining a preset first correspondence list, the first correspondence list including correspondences between preset clapping actions and preset actions; and determining the preset action corresponding to the clapping action in the first correspondence list.
- the second control module is specifically used to obtain a preset action corresponding to the target cabin area; determine the preset action corresponding to the target cabin area as the target setting action to control the vehicle-mounted screen robotic arm to perform the target setting action in the target cabin area.
- obtaining the preset action corresponding to the target cabin area includes: obtaining a preset second correspondence list, the second correspondence list including correspondences between cabin areas and preset actions; and determining the preset action corresponding to the target cabin area in the second correspondence list.
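Both correspondence lists can be implemented as plain dictionary lookups. A minimal sketch, in which every clap name, area name and action name is a hypothetical placeholder rather than something specified by the application:

```python
# Hypothetical first correspondence list: preset clapping action -> preset action.
FIRST_CORRESPONDENCE = {
    "single_clap": "nod",
    "double_clap": "swing_left_right",
}

# Hypothetical second correspondence list: cabin area -> preset action.
SECOND_CORRESPONDENCE = {
    "co-pilot seat area": "bow",
    "rear-left seat area": "wave",
}

def target_setting_action(clap_action=None, cabin_area=None):
    """Resolve the target setting action from either correspondence list;
    the clap-based list is consulted first when both keys are given."""
    if clap_action is not None and clap_action in FIRST_CORRESPONDENCE:
        return FIRST_CORRESPONDENCE[clap_action]
    if cabin_area is not None:
        return SECOND_CORRESPONDENCE.get(cabin_area)
    return None
```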
- the target cabin area includes a cabin area where vehicle seats are located.
- the sound source position determination module is specifically used to determine the body movements made by the target person within a specified time period based on the image frame sequence captured by the vehicle-mounted image acquisition device for the target person within the specified time period; when the body movement is a clapping action, obtain the specified audio collected by the vehicle-mounted audio acquisition device during the time period when the target person makes the clapping action; determine the clapping sound in the specified audio; and locate the clapping sound to determine the sound source position.
- locating the clapping sound to determine the location of the sound source includes: locating the sound source of the clapping sound to determine the location of the clapping sound in the vehicle cabin; and determining the location of the clapping sound in the vehicle cabin as the location of the sound source.
- the clapping sound is located to determine the sound source position, including: determining the cabin area where the target person is located based on the image area of the target person in the image frame sequence; performing sound source localization on the clapping sound to determine the position of the clapping sound in the cabin; and determining the sound source position based on the cabin area where the target person is located and the position of the clapping sound in the cabin.
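One plausible way to combine the two cues above is to keep the acoustic estimate when it is consistent with the cabin area in which the target person was seen, and otherwise fall back to that area. The coordinates, area names and the 0.5 m gate below are all invented for illustration; the application does not prescribe this fusion rule.

```python
# Hypothetical cabin-area centres as (x, y) cabin coordinates.
AREA_CENTRES = {
    "driver": (0.5, -0.4),
    "co-pilot": (0.5, 0.4),
    "rear-left": (1.6, -0.4),
}

def refine_sound_source(raw_xy, person_area):
    """Refine a raw sound-source estimate using the cabin area where the
    target person was seen: keep the raw estimate when it lies near that
    area's centre, otherwise fall back to the centre itself."""
    cx, cy = AREA_CENTRES[person_area]
    dx, dy = raw_xy[0] - cx, raw_xy[1] - cy
    if dx * dx + dy * dy <= 0.5 ** 2:  # within 0.5 m of the area centre
        return raw_xy
    return (cx, cy)
```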
- the device also includes: a spatial position determination module, which is used to determine the setting position information corresponding to the vehicle-mounted screen according to the target person's position setting requirement for the vehicle-mounted screen after the vehicle-mounted screen robotic arm is controlled to perform the target setting action in the target cabin area, the setting position information being used to indicate the spatial position of the vehicle-mounted screen in the cabin; and a vehicle-mounted screen setting module, which is used to control the vehicle-mounted screen robotic arm to adjust the vehicle-mounted screen using the setting position information.
- a spatial position determination module which is used to determine the setting position information corresponding to the vehicle-mounted screen according to the target person's position setting requirement for the vehicle-mounted screen after the vehicle-mounted screen robotic arm is controlled to perform the target setting action in the target cabin area
- the setting position information is used to indicate the spatial position of the vehicle-mounted screen in the cabin
- a vehicle-mounted screen setting module which is used to control the vehicle-mounted screen robotic arm to adjust the vehicle-mounted screen using the setting position information.
- the spatial position determination module is specifically used to determine the position of the target person in the vehicle cabin in response to the voice control command triggered by the target person to the vehicle screen; determine the position setting requirements based on the position of the target person in the vehicle cabin and the voice control command; determine the spatial position of the vehicle screen in the vehicle cabin according to the position setting requirements; and obtain setting position information based on the spatial position of the vehicle screen in the vehicle cabin.
- the position of the target person in the vehicle cabin is determined, including: performing sound source localization for the voice control command to determine the position of the target person in the vehicle cabin.
- the position of the target person in the vehicle cabin is determined, including: determining the identity of the target person based on voiceprint feature information corresponding to the voice control command; based on the identity of the person, determining the position of the target person in a face image captured by an on-board image acquisition device; based on the position of the person, determining the position of the target person in the vehicle cabin.
- the position of the target person in the vehicle cabin includes the vehicle seat position where the target person is located.
- the spatial position determination module is specifically used to track the eyes of a target person to determine the target person's line of sight focus; based on the line of sight focus, determine the position setting requirements; according to the position setting requirements, determine the spatial position of the vehicle-mounted screen in the cabin; based on the spatial position of the vehicle-mounted screen in the cabin, obtain the setting position information.
- the position setting requirement includes at least one of the target person's setting requirement for a first tilt angle of the vehicle-mounted screen relative to the horizontal plane and the target person's setting requirement for a second tilt angle of the vehicle-mounted screen relative to the vertical plane.
- the vehicle-mounted screen robotic arm can rotate freely around the z-axis in the corresponding three-dimensional space, where the z-axis is the coordinate axis pointing to the roof of the vehicle in the three-dimensional space coordinate system constructed for the three-dimensional space, the x-axis in the three-dimensional space coordinate system points to the rear of the vehicle, and the y-axis in the three-dimensional space coordinate system is perpendicular to the plane formed by the x-axis and the z-axis.
- the fourth embodiment of the present application further provides a vehicle, as shown in FIG. 3, which is a schematic diagram of a vehicle provided in the embodiment of the present application.
- the vehicle 300 includes: an on-board server 301 , an on-board screen mechanical arm 302 , and an on-board screen 303 .
- the vehicle-mounted server 301 is used to execute the control method of the robotic arm provided in the embodiment of the present application to control the vehicle-mounted screen robotic arm 302 to perform the target setting action in the target cabin area.
- the vehicle-mounted screen robot arm 302 is used to perform target setting actions in a target cabin area under the control of the vehicle-mounted server 301 .
- the vehicle-mounted screen 303 is connected to the vehicle-mounted screen mechanical arm 302 .
- the vehicle in this embodiment can be a fuel vehicle, an electric vehicle, a solar vehicle, or any other powered vehicle.
- the vehicle in this embodiment can be an autonomous driving vehicle.
- Embodiment 5 of the present application also provides an electronic device, a readable storage medium and a computer program product.
- Fig. 4 shows a schematic block diagram of an example electronic device 400 that can be used to implement an embodiment of the present application.
- Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- Electronic devices can also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present application described and/or claimed herein.
- the device 400 includes a computing unit 401, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 402 or a computer program loaded from a storage unit 408 into a random access memory (RAM) 403.
- various programs and data required for the operation of the device 400 can also be stored in the RAM 403.
- the computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404.
- An input/output (I/O) interface 405 is also connected to the bus 404.
- a number of components in the device 400 are connected to the I/O interface 405, including: an input unit 406, such as a keyboard, a mouse, etc.; an output unit 407, such as various types of displays, speakers, etc.; a storage unit 408, such as a disk, an optical disk, etc.; and a communication unit 409, such as a network card, a modem, a wireless communication transceiver, etc.
- the communication unit 409 allows the device 400 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
- the computing unit 401 may be a variety of general and/or special processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital signal processors (DSPs), and any appropriate processors, controllers, microcontrollers, etc.
- the computing unit 401 performs the various methods and processes described above, such as the control method of the vehicle-mounted robotic arm.
- the control method of the vehicle-mounted robotic arm may be implemented as a computer software program, which is tangibly contained in a machine-readable medium, such as a storage unit 408.
- part or all of the computer program may be loaded and/or installed on the device 400 via the ROM 402 and/or the communication unit 409.
- when the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the control method of the vehicle-mounted robotic arm described above may be executed.
- the computing unit 401 may be configured to execute the control method of the vehicle-mounted robotic arm in any other appropriate manner (e.g., by means of firmware).
- Various implementations of the systems and techniques described above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software, and/or combinations thereof.
- These various embodiments may include: being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a dedicated or general-purpose programmable processor, and may receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- the program code for implementing the method of the present application can be written in any combination of one or more programming languages. These program codes can be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing device, so that the program code, when executed by the processor or controller, implements the functions/operations specified in the flow chart and/or block diagram.
- the program code can be executed entirely on the machine, partially on the machine, partially on the machine and partially on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, device, or equipment.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any suitable combination of the foregoing.
- a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
- Other types of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
- the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer with a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end components, middleware components, or front-end components.
- the components of the system may be interconnected by any form or medium of digital data communication (e.g., a communications network). Examples of communications networks include: a local area network (LAN), a wide area network (WAN), and the Internet.
- a computer system may include a client and a server.
- the client and the server are generally remote from each other and usually interact through a communication network.
- the relationship of client and server is generated by computer programs running on respective computers and having a client-server relationship with each other.
- the server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Transportation (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
Abstract
A control method for a robotic arm, comprising: when it is detected that a target person makes a clapping action, determining the sound source position of the clapping sound, the clapping sound being the sound generated by the clapping action; based on the sound source position, controlling a vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located; and controlling the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area. A vehicle (300) and an electronic device are also provided.
Description
This application claims priority to the Chinese patent application No. 202211516214.5, filed with the China National Intellectual Property Administration on November 29, 2022 and entitled "Control Method for Robotic Arm, Vehicle, and Electronic Device", the entire contents of which are incorporated herein by reference.
This application claims priority to the Chinese patent application No. 202211515888.3, filed with the China National Intellectual Property Administration on November 29, 2022 and entitled "Control Method for Screen, Vehicle, and Electronic Device", the entire contents of which are incorporated herein by reference.
This application relates to the field of artificial intelligence, and in particular to a control method for a robotic arm, a vehicle, and an electronic device.
As an automatic control device that imitates the functions of a human arm and can complete various operations, the robotic arm has been widely used in fields such as industrial manufacturing, medical rescue, and aerospace. However, installing a robotic arm in a vehicle cabin to provide intelligent services for the occupants is a field that few have touched upon.
In the process of installing a robotic arm in a vehicle cabin to provide intelligent services for the occupants, how to control the robotic arm has become a problem that the relevant technical personnel have to face.
Summary of the Invention
This application provides a control method for a robotic arm, a vehicle, and an electronic device, to solve the problems existing in the related art.
According to one aspect of the present application, a control method for a robotic arm is provided, comprising: when it is detected that a target person makes a clapping action, determining the sound source position of the clapping sound, the clapping sound being the sound generated by the clapping action; based on the sound source position, controlling a vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located; and controlling the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area.
According to another aspect of the present application, a vehicle is provided, comprising: a vehicle-mounted server, a vehicle-mounted screen robotic arm, and a vehicle-mounted screen; the vehicle-mounted server is used to execute the control method for a robotic arm provided in the embodiments of the present application, to control the vehicle-mounted screen robotic arm to perform a target setting action in a target cabin area; the vehicle-mounted screen robotic arm is used to perform the target setting action in the target cabin area under the control of the vehicle-mounted server; and the vehicle-mounted screen is connected to the vehicle-mounted screen robotic arm.
According to another aspect of the present application, an electronic device is provided, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the method of any embodiment of the present application.
The above summary is provided for the purpose of the description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the present application will be readily apparent by reference to the drawings and the following detailed description.
In the drawings, unless otherwise specified, the same reference numerals throughout the drawings denote the same or similar components or elements. These drawings are not necessarily drawn to scale. It should be understood that these drawings depict only some embodiments disclosed in accordance with the present application and should not be regarded as limiting the scope of the present application.
FIG. 1 shows a flowchart of a control method for a robotic arm provided in an embodiment of the present application.
FIG. 2 shows a schematic diagram of a vehicle-mounted screen robotic arm provided in an embodiment of the present application.
FIG. 3 shows a schematic diagram of a vehicle provided in an embodiment of the present application.
FIG. 4 shows a block diagram of an electronic device used to implement the control method for a robotic arm provided in the embodiments of the present application.
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive.
Embodiment 1
The control method for a robotic arm provided in the embodiment of the present application is shown in FIG. 1, which is a flowchart of a control method for a robotic arm provided in an embodiment of the present disclosure. The control method for a robotic arm shown in FIG. 1 includes the following steps.
Step S101: When it is detected that a target person makes a clapping action, determine the sound source position of the clapping sound, the clapping sound being the sound generated by the clapping action.
Step S102: Based on the sound source position, control the vehicle-mounted screen robotic arm to move to the target cabin area where the sound source position is located.
Step S103: Control the vehicle-mounted screen robotic arm to perform a target setting action in the target cabin area.
In the control method for a robotic arm provided in the embodiment of the present application, when it is detected that the target person makes a clapping action, the sound source position of the clapping sound is determined. After the sound source position of the clapping sound is determined, the vehicle-mounted screen robotic arm is further controlled, based on the sound source position, to move to the target cabin area where the sound source position is located, and the vehicle-mounted screen robotic arm is controlled to perform the target setting action in the target cabin area.
Since, when it is detected that the target person makes a clapping action, the vehicle-mounted screen robotic arm can be automatically controlled to move to the target cabin area where the sound source position is located and to perform the target setting action there, the target setting action can be performed in the target cabin area without the occupants manually controlling the vehicle-mounted screen robotic arm, which improves the user experience of the occupants.
In addition, after the vehicle-mounted screen robotic arm is controlled to move to the target cabin area where the sound source position is located, it can be further controlled to perform the target setting action in the target cabin area. The vehicle-mounted screen robotic arm can therefore interact with the occupants based on their clapping actions, which increases the enjoyment of driving or riding and further improves the user experience of the occupants.
It should be noted that the control method provided in the embodiments of this application is generally executed by the vehicle's on-board server, though it may also be executed by a cloud server corresponding to the vehicle; the embodiments place no specific restriction on the executing entity.
In the embodiments of this application, the target person is an occupant of the vehicle, which includes the driver and the passengers. Depending on the seat count, the passengers can be subdivided further. For example, for a 5-seat vehicle the passengers may be subdivided into: the passenger in the front passenger seat, and the passengers on the rear left, rear middle, and rear right. For a 7-seat vehicle: the passenger in the front passenger seat, the passengers on the middle left and middle right, and the passengers on the rear left, rear middle, and rear right. For a 4-seat vehicle: the passenger in the front passenger seat, the passenger on the middle left, and the passenger on the rear left.
A clap gesture generally refers to a gesture the target person makes by striking their palms together, and many variants are possible, for example a clap made palm-to-palm or palm against the back of the hand, or a single clap versus consecutive claps.
In the embodiments of this application, designated clap gestures can be configured in advance, and it can be stipulated that a target person is considered to have made a clap gesture only when the body movement they make belongs to the pre-configured designated clap gestures. The detection process may be as follows: first, obtain the image frame sequence captured by the in-vehicle image capture device for the target person during a specified time period; then, from the frame sequence, determine the body movement made by the target person during that period; finally, determine whether that movement is a clap gesture. If so, a clap gesture by the target person is detected; if not, it is not.
In practice, the above detection can be implemented as follows: the frame sequence to be recognized is input into a trained action-recognition model, and the recognition result output by the model is obtained, the result indicating either that a clap gesture by the target person was detected or that it was not.
It should be noted that the action-recognition model is trained with image-sequence samples and correspondingly annotated recognition results, and is used to determine, for a frame sequence to be recognized, the body movement made by a given person and to determine and output the recognition result.
Determining whether the body movement is a clap gesture may be implemented as: determining whether the movement belongs to the pre-configured designated clap gestures; if so, a clap gesture by the target person is detected, and if not, it is not.
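The membership check against pre-configured clap gestures can be sketched as follows; the gesture names are assumptions for the sketch, and the model's per-frame output is abstracted as a list of recognized gesture labels.

```python
# Assumed names of the pre-configured designated clap gestures.
CONFIGURED_CLAPS = {"palm_to_palm", "palm_to_back", "double_clap"}


def is_configured_clap(gesture):
    """A body movement counts as a clap only if it was configured in advance."""
    return gesture in CONFIGURED_CLAPS


def detect_clap(recognized_gestures):
    """Return the first configured clap found among the gestures recognized
    from the image frame sequence, or None when no clap was detected."""
    for gesture in recognized_gestures:
        if is_configured_clap(gesture):
            return gesture
    return None
```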
In the embodiments of this application, determining the sound-source position of the clap sound when a target person is detected making a clap gesture may specifically comprise the following steps:
First, from the image frame sequence captured by the in-vehicle image capture device for the target person during a specified time period, determine the body movement made by the target person during that period. Second, when the movement is a clap gesture, obtain the designated audio captured by the in-vehicle audio capture device during the time period in which the clap gesture was made. Third, identify the clap sound within the designated audio. Finally, localize the clap sound to determine the sound-source position.
Obtaining the audio captured while the target person was making the clap gesture, and identifying the clap sound within it, helps ensure as far as possible that the identified clap sound is indeed the sound produced by the target person's clap gesture. This safeguards the accuracy of the sound-source position and, in turn, ensures smooth control of the vehicle-mounted screen robotic arm.
In one possible implementation, localizing the clap sound to determine the sound-source position may be: first, perform sound-source localization on the clap sound to determine its position within the cabin; then take that in-cabin position as the sound-source position. Using the clap sound's in-cabin position directly as the sound-source position makes the determination more efficient and, in turn, makes control of the screen robotic arm more efficient.
To improve the precision of the sound-source position, it can also be determined as follows: first, based on the image region of the target person in the frame sequence, determine the cabin region in which the target person is located; then perform sound-source localization on the clap sound to determine its position within the cabin; finally, determine the sound-source position based on both the target person's cabin region and the clap sound's in-cabin position.
Determining the target person's cabin region from the image region in the frame sequence is done using a correspondence between image regions in the frame sequence and cabin regions.
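A minimal sketch of the refined approach, fusing the camera-derived cabin region with the audio-derived position. The image-region map and the fusion rule (the audio position must fall inside the person's cabin region) are assumptions made for illustration.

```python
# Assumed mapping from image regions of the camera frame to cabin regions.
IMAGE_REGION_TO_CABIN = {
    "frame_left": "rear_left",
    "frame_middle": "rear_middle",
    "frame_right": "rear_right",
}


def fuse_source_position(image_region, audio_region):
    """Accept the audio-localized position only when it agrees with the cabin
    region derived from the target person's image region; otherwise report
    no reliable sound-source position."""
    cabin_region = IMAGE_REGION_TO_CABIN.get(image_region)
    if cabin_region is not None and audio_region == cabin_region:
        return cabin_region
    return None
```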
In the embodiments of this application, a cabin region is generally a region of the cabin demarcated according to the vehicle's seats; that is, the target cabin region includes the cabin region in which a vehicle seat is located. Specifically, the cabin can be divided into different regions according to the seat count. For example, for a 5-seat vehicle the cabin regions may be: the region of the driver's seat, the region of the front passenger seat, and the regions of the rear-left, rear-middle, and rear-right seats. For a 7-seat vehicle they may be: the regions of the driver's seat, the front passenger seat, the middle-left seat, the middle-right seat, and the rear-left, rear-middle, and rear-right seats. For a 4-seat vehicle they may be: the regions of the driver's seat, the front passenger seat, the rear-left seat, and the rear-right seat.
Accordingly, whether the vehicle has 5, 7, or 4 seats, the target cabin region may be any one of the cabin regions enumerated above for that seat configuration.
It should be noted that the numbers of in-vehicle image capture devices and in-vehicle audio capture devices need to be configured according to the seat count: the more seats the vehicle has, the more such devices are required.
In the embodiments of this application, a vehicle-mounted screen robotic arm is a robotic arm installed inside the cabin to carry and control a vehicle-mounted screen. The arm normally carries a screen, but in special cases it may not; either way, it can be controlled to perform the target preset action in the target cabin region. The structure of the arm is shown in Fig. 2, a schematic diagram of a vehicle-mounted screen robotic arm provided in an embodiment of this application.
Specifically, the arm comprises a multi-degree-of-freedom adjustment mechanism fixed to the back of the screen, and a plurality of telescopic units mounted on that mechanism. The telescopic units tilt the screen up, down, left, and right, while the adjustment mechanism rotates and translates it. Here, "up, down, left, and right" means that, with the screen in a vertical, user-facing orientation, its top, bottom, left, or right edge tilts backward relative to the initial position.
In the embodiments of this application, the arm has three controllable degrees of freedom in three-dimensional space, explained below with reference to the coordinate system shown in Fig. 2.
In the coordinate system shown in Fig. 2, the x-axis points toward the rear of the vehicle, the z-axis toward the roof, and the y-axis is perpendicular to the plane formed by the x-axis and the z-axis. The three controllable degrees of freedom mean that the arm can rotate freely about the x-axis, the y-axis, and the z-axis in three-dimensional space.
It should be noted that other coordinate systems may also be constructed; in any such system, the three controllable degrees of freedom can still mean free rotation about the x-, y-, and z-axes.
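The rotational degrees of freedom can be written down as ordinary rotation matrices in the Fig. 2 frame. The sketch below shows the z-axis case only; it is a generic illustration, not the patent's control code.

```python
import math


def rot_z(theta):
    """3x3 rotation about the z-axis (pointing to the roof) in the Fig. 2
    frame, where x points to the rear of the vehicle and y completes the
    right-handed coordinate system."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]


def apply_rotation(m, v):
    """Multiply a 3x3 matrix by a 3-vector (plain lists, no dependencies)."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Rotations about the x- and y-axes follow the same pattern with the rows and columns permuted.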
It should be noted that there is generally one screen robotic arm, but there may be several. The arm may be used on its own to move the display screen, or it may cooperate with the in-vehicle infotainment system, instrument cluster, head-up display, streaming rear-view mirror, ambient lighting, smart doors, smart speakers, and so on, to complete corresponding preset actions in preset scenarios, for example swaying left and right in time with flashing ambient lights.
In one possible implementation, when controlling the arm to perform the target preset action in the target cabin region, the preset action corresponding to the clap gesture may first be obtained and then determined as the target preset action, so as to control the arm to perform it in the target cabin region.
Determining the target preset action from the action corresponding to the clap gesture allows the arm to perform different preset actions when the target person makes different clap gestures. This strengthens the interaction between the target person and the arm, makes driving or riding more enjoyable, and thus improves the occupants' user experience.
In the embodiments of this application, the preset action corresponding to the clap gesture is obtained as follows: first, obtain a preset first correspondence list containing correspondences between preset clap gestures and preset actions; then determine, in the first correspondence list, the preset action corresponding to the clap gesture.
Specifically, suppose the first correspondence list contains the following correspondences: for the first clap gesture, perform the first preset action; for the second clap gesture, the second preset action; for the third, the third; for the fourth, the fourth; and for the fifth, the fifth.
That is, if the clap gesture is the third clap gesture, the corresponding preset action is the third preset action, and determining it as the target preset action means taking the third preset action as the target preset action.
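The first correspondence list is, in effect, a lookup table. A minimal sketch, with gesture and action names that are assumptions rather than anything specified in the patent:

```python
# The first correspondence list as a plain lookup table (entries illustrative).
FIRST_CORRESPONDENCE = {
    "clap_1": "nod",
    "clap_2": "shake",
    "clap_3": "spin",
}


def preset_action_for_clap(clap_gesture, default_action="greet"):
    """Determine the target preset action for a clap gesture; the fallback
    default is an assumption for gestures with no configured entry."""
    return FIRST_CORRESPONDENCE.get(clap_gesture, default_action)
```

The second correspondence list (cabin region to preset action, described below) works the same way with cabin regions as keys.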
In another possible implementation, when controlling the arm to perform the target preset action in the target cabin region, the preset action corresponding to the target cabin region may first be obtained and then determined as the target preset action, so as to control the arm to perform it in the target cabin region.
Determining the target preset action from the action corresponding to the target cabin region allows the arm to perform different preset actions in different cabin regions, so the actions it performs can be personalized per region. This makes driving or riding more enjoyable and improves the occupants' user experience.
In the embodiments of this application, the preset action corresponding to the target cabin region is obtained as follows: first, obtain a preset second correspondence list containing correspondences between cabin regions and preset actions; then determine, in the second correspondence list, the preset action corresponding to the target cabin region.
Specifically, suppose the second correspondence list contains the following correspondences: for cabin region 1, perform preset action 1; for cabin region 2, preset action 2; for cabin region 3, preset action 3; for cabin region 4, preset action 4; and for cabin region 5, preset action 5.
That is, if the target cabin region is cabin region 2, the corresponding preset action is preset action 2, and determining it as the target preset action means taking preset action 2 as the target preset action.
In the embodiments of this application, the vehicle server or the vehicle's corresponding cloud server may control the screen robotic arm by issuing control instructions.
In addition, vehicle-mounted screens have nowadays become standard equipment on many vehicles. As in-vehicle information systems grow more capable, the intelligent services a screen can provide keep multiplying, and occupants rely on the screen ever more heavily.
In the course of using the screen to serve the occupants, how to set its position more flexibly becomes a problem engineers must face. To solve this, the control method of Embodiment 1 may, after the arm performs the target preset action in the target cabin region, further determine the screen's setting-position information according to the target person's position-setting requirement for the screen, and then use that information to control the arm to adjust the screen, where the setting-position information indicates the spatial position at which the screen is set within the cabin. See Embodiment 2 for the corresponding process.
Embodiment 2
After the arm performs the target preset action in the target cabin region, first determine, according to the target person's position-setting requirement for the vehicle-mounted screen, the setting-position information corresponding to the screen; then use that information to control the screen robotic arm to adjust the screen. The setting-position information indicates the spatial position at which the screen is set within the cabin.
In Embodiment 2, the target person likewise includes the driver and the passengers, and in practice the passengers can be subdivided according to the vehicle's seat count in the same way as described in Embodiment 1 for the 5-seat, 7-seat, and 4-seat cases.
In Embodiment 2, a vehicle-mounted screen robotic arm is likewise a robotic arm installed in the cabin to carry and control a vehicle-mounted screen. Referring again to Fig. 2, the arm in Embodiment 2 also comprises a multi-degree-of-freedom adjustment mechanism fixed to the back of the screen and a plurality of telescopic units mounted on that mechanism; the telescopic units tilt the screen up, down, left, and right, while the adjustment mechanism rotates and translates it, with "up, down, left, and right" having the same meaning as in Embodiment 1.
In Embodiment 2, the arm has three controllable degrees of freedom in three-dimensional space. In one example, the arm can rotate freely about the z-axis of the corresponding three-dimensional space, where the z-axis is the axis, pointing toward the roof, of the coordinate system constructed for that space, the x-axis points toward the rear of the vehicle, and the y-axis is perpendicular to the plane formed by the x-axis and the z-axis.
In the coordinate system shown in Fig. 2, the x-axis points toward the rear of the vehicle, the z-axis toward the roof, and the y-axis is perpendicular to the plane formed by the x-axis and the z-axis. The three controllable degrees of freedom then mean that the arm can rotate freely about the x-axis, the y-axis, and the z-axis in three-dimensional space.
It should be noted that other coordinate systems may also be constructed in Embodiment 2, with the three controllable degrees of freedom still meaning free rotation about the x-, y-, and z-axes. There is generally one screen robotic arm, but there may be several, and correspondingly one or several screens. The arm may be used on its own to move the display screen, or it may cooperate with the in-vehicle infotainment system, instrument cluster, head-up display, streaming rear-view mirror, ambient lighting, smart doors, smart speakers, and so on, to complete corresponding preset actions in preset scenarios, for example swaying left and right in time with flashing ambient lights.
Since the arm in Embodiment 2 has three controllable degrees of freedom in three-dimensional space, it can control the screen in all three. The spatial position in Embodiment 2 therefore means the screen's three-dimensional position within the cabin, including but not limited to its horizontal position, its vertical position, and its orientation in the cabin. In other words, the arm in Embodiment 2 can set not only the screen's horizontal and vertical position in the cabin but also its orientation, where the orientation specifically includes at least one of the screen's orientation relative to the horizontal plane and its orientation relative to the vertical plane.
In Embodiment 2, the screen's orientation relative to the horizontal plane is expressed by a first tilt angle relative to the horizontal plane, and its orientation relative to the vertical plane by a second tilt angle relative to the vertical plane.
Correspondingly, the setting-position information means the screen's three-dimensional position information within the cabin, including but not limited to its horizontal position information, its vertical position information, and its orientation information within the cabin.
In one possible implementation, the position-setting requirement includes at least one of the target person's requirement for the first tilt angle of the screen relative to the horizontal plane and their requirement for the second tilt angle relative to the vertical plane.
Because the arm can set the screen's orientation as well as its horizontal and vertical position within the cabin, the screen control method provided in Embodiment 2 can set the screen more flexibly.
In Embodiment 2, when determining the screen's setting-position information according to the target person's position-setting requirement, one may first determine the target person's position within the cabin in response to a voice control instruction triggered by the target person for the screen; then determine the position-setting requirement based on that position and the instruction; then determine, from the requirement, the spatial position at which the screen is to be set within the cabin; and finally obtain the setting-position information from that spatial position.
Determining the position-setting requirement, and hence the screen's spatial position, from the target person's in-cabin position together with the voice instruction lets the target person control the screen's placement more conveniently. The screen control method of Embodiment 2 therefore offers the target person a better control experience and lets the screen serve them more conveniently.
In practice, determining the target person's position in response to the voice instruction may be implemented as: performing sound-source localization on the voice control instruction to determine the target person's position within the cabin. That position includes the vehicle seat at which the target person sits.
In Embodiment 2, determining the position-setting requirement from the target person's position and the voice instruction is implemented as follows: first, once the target person's position is determined, parse the voice instruction to obtain the person's position-adjustment requirement for the screen; then determine the screen's current spatial position within the cabin; finally, determine the position-setting requirement from the target person's position, the screen's current spatial position, and the position-adjustment requirement.
In Embodiment 2, a default screen setting position is pre-configured for each vehicle seat. Specifically, if the target person's voice instruction indicates that they themselves need to use the screen, or that another target person sitting in a designated seat needs to use it, the resulting position-setting requirement is: set the screen at the default position pre-configured for the target person's seat, or set it at the default position pre-configured for the other target person's seat.
For example, when the target person sits in the driver's seat and the voice instruction is "I want to use the screen" or "screen, play a video for me", the position-setting requirement is to set the screen at the default position pre-configured for the driver's seat. Likewise, when the target person sits in the driver's seat and the instruction is "the person in the front passenger seat needs the screen" or "screen, play a video for the front passenger", the requirement is to set the screen at the default position pre-configured for the front passenger seat.
In addition, when the voice instruction is one that asks to adjust the screen's position, the requirement must be determined from the target person's position, the screen's current spatial position, and the adjustment request together. For instance, if the instruction is "screen, come a bit closer", the position-setting requirement is to set the screen at a position closer to the target person than its current one.
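The two kinds of commands can be sketched as one resolver. The per-seat default positions, the coordinate values, the parsed command labels, and the 10 cm adjustment step are all assumptions made for illustration.

```python
# Assumed per-seat default screen positions (x, y, z in metres).
DEFAULT_POSITION = {
    "driver": (0.4, -0.3, 0.9),
    "front_passenger": (0.4, 0.3, 0.9),
}


def resolve_setting_position(command, target_seat, current_position):
    """Turn a parsed voice command into the screen's setting position:
    'use_screen' snaps to the seat's pre-configured default, while
    'come_closer' moves relative to the current spatial position."""
    if command == "use_screen":        # e.g. "I want to use the screen"
        return DEFAULT_POSITION[target_seat]
    if command == "come_closer":       # e.g. "screen, come a bit closer"
        x, y, z = current_position
        return (x - 0.1, y, z)         # assumed 10 cm step toward the seat
    return current_position            # unrecognized request: leave unchanged
```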
In practice, determining the target person's in-cabin position in response to the voice control instruction may also be implemented as: first, determine the target person's identity from the voiceprint feature information corresponding to the instruction; then, based on the identity, locate the target person in the face images captured by the in-vehicle image capture device; finally, determine the person's in-cabin position from that located position.
In Embodiment 2, different methods may thus be used to determine the target person's in-cabin position, which makes the screen control method more widely applicable. Furthermore, to improve the accuracy of the determined position, the two methods, sound-source localization and face-image based localization, can be combined: the target person's in-cabin position is considered determined only when the position obtained by sound-source localization agrees with the position obtained from the face images.
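The consistency rule described above reduces to a simple agreement check between the two independently determined positions; a minimal sketch:

```python
def confirm_cabin_position(audio_region, face_region):
    """Consistency rule from the text: the target person's in-cabin position
    is taken as determined only when sound-source localization and the
    face-image based localization agree; otherwise no position is returned."""
    return audio_region if audio_region == face_region else None
```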
Determining the in-cabin position from the person's located position proceeds as follows: first, obtain the correspondence between image regions in the face image and cabin regions; then determine the image region of the face image in which the person appears; finally, find the cabin region corresponding to that image region and determine the target person's in-cabin position from that cabin region.
It should be noted that the screen control method of Embodiment 2 may, after the arm has adjusted the screen, further control the screen to provide the target person with corresponding personalized services based on the person's identity. For example, if the target person is identified as User 1, and User 1 is known to enjoy watching TV series, the screen may automatically start playing User 1's favorite series once the arm has adjusted it.
In Embodiment 2, when determining the screen's setting-position information according to the target person's position-setting requirement, one may alternatively first perform eye tracking on the target person to determine their gaze focus; then determine the position-setting requirement from the gaze focus; then determine, from the requirement, the spatial position at which the screen is set within the cabin; and finally obtain the setting-position information from that spatial position.
Determining the position-setting requirement from the target person's gaze focus, and then the screen's spatial position from the requirement, makes the screen's placement in the cabin better match the user's field of view, giving the target person a better viewing angle and viewing range when using the screen. This lets the target person control the screen more flexibly while also serving them more conveniently.
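One plausible geometric reading of the eye-tracking approach is to place the screen a fixed viewing distance along the gaze direction, facing back toward the eyes. The 0.6 m distance and the whole placement rule are assumptions for the sketch, not taken from the patent.

```python
def screen_pose_from_gaze(eye_position, gaze_direction, viewing_distance=0.6):
    """Place the screen centre `viewing_distance` metres along the (normalized)
    gaze direction from the eyes, with the screen normal pointing back at the
    user; the 0.6 m viewing distance is an illustrative assumption."""
    norm = sum(c * c for c in gaze_direction) ** 0.5
    unit = tuple(c / norm for c in gaze_direction)
    centre = tuple(e + viewing_distance * c for e, c in zip(eye_position, unit))
    facing = tuple(-c for c in unit)    # screen faces back toward the eyes
    return centre, facing
```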
It should be noted that, after the arm adjusts the screen, Embodiment 2 may also control the screen to switch its brightness and its screen mode. For example, when the person's identity is a child, the screen mode may further be switched to a children's entertainment mode; or the screen brightness may further be adjusted so that it better suits the current cabin environment.
In Embodiment 2, the vehicle server or the vehicle's corresponding cloud server may control the screen robotic arm by issuing control instructions.
Embodiment 3
Corresponding to the control methods provided in Embodiments 1 and 2, an embodiment of this application further provides a control apparatus for a robotic arm, applied to a vehicle, comprising: a sound-source position determination module, configured to determine, when a target person is detected making a clap gesture, the sound-source position of the clap sound, the clap sound being the sound produced by the clap gesture; a first control module, configured to control, based on the sound-source position, the vehicle-mounted screen robotic arm to move to the target cabin region in which the sound-source position lies; and a second control module, configured to control the arm to perform a target preset action in the target cabin region.
In one implementation, the second control module is specifically configured to obtain the preset action corresponding to the clap gesture and determine it as the target preset action, so as to control the arm to perform the target preset action in the target cabin region.
In one implementation, obtaining the preset action corresponding to the clap gesture comprises: obtaining a preset first correspondence list containing correspondences between preset clap gestures and preset actions; and determining, in the first correspondence list, the preset action corresponding to the clap gesture.
In one implementation, the second control module is specifically configured to obtain the preset action corresponding to the target cabin region and determine it as the target preset action, so as to control the arm to perform the target preset action in the target cabin region.
In one implementation, obtaining the preset action corresponding to the target cabin region comprises: obtaining a preset second correspondence list containing correspondences between cabin regions and preset actions; and determining, in the second correspondence list, the preset action corresponding to the target cabin region.
In one implementation, the target cabin region includes the cabin region in which a vehicle seat is located.
In one implementation, the sound-source position determination module is specifically configured to: determine, from the image frame sequence captured by the in-vehicle image capture device for the target person during a specified time period, the body movement made by the target person during that period; when the movement is a clap gesture, obtain the designated audio captured by the in-vehicle audio capture device during the time period in which the clap gesture was made; identify the clap sound in the designated audio; and localize the clap sound to determine the sound-source position.
In one implementation, localizing the clap sound to determine the sound-source position comprises: performing sound-source localization on the clap sound to determine its position within the cabin; and determining that in-cabin position as the sound-source position.
In one implementation, localizing the clap sound to determine the sound-source position comprises: determining, based on the image region of the target person in the frame sequence, the cabin region in which the target person is located; performing sound-source localization on the clap sound to determine its position within the cabin; and determining the sound-source position from the target person's cabin region and the clap sound's in-cabin position.
In one implementation, the apparatus further comprises: a spatial position determination module, configured to determine, after the arm performs the target preset action in the target cabin region, the setting-position information corresponding to the vehicle-mounted screen according to the target person's position-setting requirement for the screen, the setting-position information indicating the spatial position at which the screen is set within the cabin; and a screen setting module, configured to control, using the setting-position information, the screen robotic arm to adjust the screen.
In one implementation, the spatial position determination module is specifically configured to: determine the target person's in-cabin position in response to a voice control instruction triggered by the target person for the screen; determine the position-setting requirement based on that position and the instruction; determine, from the requirement, the spatial position at which the screen is set within the cabin; and obtain the setting-position information from that spatial position.
In one implementation, determining the target person's in-cabin position in response to the voice control instruction comprises: performing sound-source localization on the instruction to determine the target person's position within the cabin.
In one implementation, determining the target person's in-cabin position in response to the voice control instruction comprises: determining the target person's identity from the voiceprint feature information corresponding to the instruction; locating the target person, based on the identity, in the face images captured by the in-vehicle image capture device; and determining the in-cabin position from the located position.
In one implementation, the target person's in-cabin position includes the vehicle seat at which the target person sits.
In one implementation, the spatial position determination module is specifically configured to: perform eye tracking on the target person to determine their gaze focus; determine the position-setting requirement from the gaze focus; determine, from the requirement, the spatial position at which the screen is set within the cabin; and obtain the setting-position information from that spatial position.
In one implementation, the position-setting requirement includes at least one of the target person's requirement for a first tilt angle of the screen relative to the horizontal plane and the target person's requirement for a second tilt angle of the screen relative to the vertical plane.
In one implementation, the screen robotic arm can rotate freely about the z-axis of the corresponding three-dimensional space, the z-axis being the axis, pointing toward the roof, of the coordinate system constructed for that space, with the x-axis pointing toward the rear of the vehicle and the y-axis perpendicular to the plane formed by the x-axis and the z-axis.
For the functions of the units in the apparatuses of the embodiments of this application, refer to the corresponding descriptions in the methods above; they are not repeated here.
In the technical solutions of this application, the acquisition, storage, and use of the personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
Embodiment 4
Corresponding to the control methods provided in Embodiments 1 and 2, Embodiment 4 further provides a vehicle, shown in Fig. 3, a schematic diagram of a vehicle provided in an embodiment of this application. The vehicle 300 comprises: a vehicle server 301, a vehicle-mounted screen robotic arm 302, and a vehicle-mounted screen 303.
Specifically, the vehicle server 301 is configured to execute the control method for a robotic arm provided in the embodiments of this application, so as to control the screen robotic arm 302 to perform the target preset action in the target cabin region.
The screen robotic arm 302 is configured to perform, under the control of the vehicle server 301, the target preset action in the target cabin region.
The screen 303 is connected to the screen robotic arm 302.
Illustratively, the vehicle in this embodiment may be any powered vehicle, whether fueled, electric, solar, or otherwise. Illustratively, the vehicle in this embodiment may be an autonomous vehicle.
Other components of the vehicle in this embodiment, such as the specific structures of the frame and wheels and the connecting and fastening parts, may adopt any technical solutions known now or in the future to those of ordinary skill in the art and are not described in detail here.
Embodiment 5
Embodiment 5 further provides an electronic device, a readable storage medium, and a computer program product.
Fig. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement the embodiments of this application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers, as well as various forms of mobile devices, such as personal digital processing devices, cellular phones, smartphones, wearables, and other similar computing devices. The components shown here, their connections and relationships, and their functions are examples only and are not intended to limit the implementations of this application described and/or claimed herein.
As shown in Fig. 4, the device 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in read-only memory (ROM) 402 or loaded from a storage unit 408 into random access memory (RAM) 403. The RAM 403 may also store the various programs and data required for the operation of the device 400. The computing unit 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404, to which an input/output (I/O) interface 405 is also connected.
A number of components in the device 400 are connected to the I/O interface 405, including: an input unit 406 such as a keyboard or mouse; an output unit 407 such as displays and speakers of various types; a storage unit 408 such as a magnetic disk or optical disc; and a communication unit 409 such as a network card, modem, or wireless transceiver. The communication unit 409 allows the device 400 to exchange information/data with other devices over computer networks such as the Internet and/or various telecommunications networks.
The computing unit 401 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities, including but not limited to central processing units (CPUs), graphics processing units (GPUs), dedicated artificial-intelligence (AI) computing chips, computing units running machine-learning model algorithms, digital signal processors (DSPs), and any appropriate processors, controllers, microcontrollers, and so on. The computing unit 401 performs the methods and processes described above, for example the control method for the vehicle-mounted robotic arm. In some embodiments, the control method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the control method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the control method in any other suitable manner (for example, by means of firmware).
Various implementations of the systems and techniques described herein may be realized in digital electronic circuitry, integrated circuits, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of this application may be written in any combination of one or more programming languages. The code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data-processing apparatus such that, when executed by the processor or controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a standalone software package, or entirely on a remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination thereof. More specific examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described here may be implemented on a computer having a display device (e.g., a CRT or LCD monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also provide interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (visual, auditory, or tactile), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here may be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer with a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication of any form or medium (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
A computer system may include clients and servers, generally remote from each other and typically interacting through a communication network. The client-server relationship arises from computer programs running on the respective computers and having a client-server relationship with each other. A server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recorded in the present disclosure may be executed in parallel, sequentially, or in a different order, provided the desired results of the technical solutions disclosed in this application can be achieved; no limitation is imposed here.
The above specific implementations do not limit the scope of protection of this application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within its scope of protection.
Claims (19)
- A control method for a robotic arm, comprising: when a target person is detected making a clap gesture, determining a sound-source position of a clap sound, the clap sound being the sound produced by the clap gesture; based on the sound-source position, controlling a vehicle-mounted screen robotic arm to move to a target cabin region in which the sound-source position lies; and controlling the vehicle-mounted screen robotic arm to perform a target preset action in the target cabin region.
- The method according to claim 1, wherein the controlling the vehicle-mounted screen robotic arm to perform a target preset action in the target cabin region comprises: obtaining a preset action corresponding to the clap gesture; and determining the preset action corresponding to the clap gesture as the target preset action, so as to control the vehicle-mounted screen robotic arm to perform the target preset action in the target cabin region.
- The method according to claim 2, wherein the obtaining a preset action corresponding to the clap gesture comprises: obtaining a preset first correspondence list, the first correspondence list comprising correspondences between preset clap gestures and preset actions; and determining, in the first correspondence list, the preset action corresponding to the clap gesture.
- The method according to claim 1, wherein the controlling the vehicle-mounted screen robotic arm to perform a target preset action in the target cabin region comprises: obtaining a preset action corresponding to the target cabin region; and determining the preset action corresponding to the target cabin region as the target preset action, so as to control the vehicle-mounted screen robotic arm to perform the target preset action in the target cabin region.
- The method according to claim 4, wherein the obtaining a preset action corresponding to the target cabin region comprises: obtaining a preset second correspondence list, the second correspondence list comprising correspondences between cabin regions and preset actions; and determining, in the second correspondence list, the preset action corresponding to the target cabin region.
- The method according to claim 4 or 5, wherein the target cabin region comprises a cabin region in which a vehicle seat is located.
- The method according to claim 1, wherein the determining, when a target person is detected making a clap gesture, a sound-source position of a clap sound comprises: determining, from an image frame sequence captured by an in-vehicle image capture device for the target person during a specified time period, a body movement made by the target person during the specified time period; when the body movement is the clap gesture, obtaining designated audio captured by an in-vehicle audio capture device during a time period in which the target person made the clap gesture; identifying the clap sound in the designated audio; and localizing the clap sound to determine the sound-source position.
- The method according to claim 7, wherein the localizing the clap sound to determine the sound-source position comprises: performing sound-source localization on the clap sound to determine a position of the clap sound within a cabin; and determining the position of the clap sound within the cabin as the sound-source position.
- The method according to claim 7, wherein the localizing the clap sound to determine the sound-source position comprises: determining, based on an image region of the target person in the image frame sequence, a cabin region in which the target person is located; performing sound-source localization on the clap sound to determine a position of the clap sound within the cabin; and determining the sound-source position based on the cabin region in which the target person is located and the position of the clap sound within the cabin.
- The method according to claim 1, wherein after the controlling the vehicle-mounted screen robotic arm to perform a target preset action in the target cabin region, the method further comprises: determining, according to a position-setting requirement of the target person for a vehicle-mounted screen, setting-position information corresponding to the vehicle-mounted screen, the setting-position information indicating a spatial position at which the vehicle-mounted screen is set within a cabin; and controlling, using the setting-position information, the vehicle-mounted screen robotic arm to adjust the vehicle-mounted screen.
- The method according to claim 10, wherein the determining, according to a position-setting requirement of the target person for a vehicle-mounted screen, setting-position information corresponding to the vehicle-mounted screen comprises: determining, in response to a voice control instruction triggered by the target person for the vehicle-mounted screen, a position of the target person within the cabin; determining the position-setting requirement based on the position of the target person within the cabin and the voice control instruction; determining, according to the position-setting requirement, the spatial position at which the vehicle-mounted screen is set within the cabin; and obtaining the setting-position information based on the spatial position at which the vehicle-mounted screen is set within the cabin.
- The method according to claim 11, wherein the determining, in response to a voice control instruction triggered by the target person for the vehicle-mounted screen, a position of the target person within the cabin comprises: performing sound-source localization on the voice control instruction to determine the position of the target person within the cabin.
- The method according to claim 11 or 12, wherein the determining, in response to a voice control instruction triggered by the target person for the vehicle-mounted screen, a position of the target person within the cabin comprises: determining an identity of the target person based on voiceprint feature information corresponding to the voice control instruction; determining, based on the identity, a person position of the target person in a face image captured by an in-vehicle image capture device; and determining the position of the target person within the cabin based on the person position.
- The method according to claim 11, wherein the position of the target person within the cabin comprises a vehicle seat position at which the target person is located.
- The method according to claim 10, wherein the determining, according to a position-setting requirement of the target person for a vehicle-mounted screen, setting-position information corresponding to the vehicle-mounted screen comprises: performing eye tracking on the target person to determine a gaze focus of the target person; determining the position-setting requirement based on the gaze focus; determining, according to the position-setting requirement, the spatial position at which the vehicle-mounted screen is set within the cabin; and obtaining the setting-position information based on the spatial position at which the vehicle-mounted screen is set within the cabin.
- The method according to claim 10, 11, or 15, wherein the position-setting requirement comprises at least one of a requirement of the target person for a first tilt angle of the vehicle-mounted screen relative to a horizontal plane, and a requirement of the target person for a second tilt angle of the vehicle-mounted screen relative to a vertical plane.
- The method according to claim 10, wherein the vehicle-mounted screen robotic arm is freely rotatable about a z-axis in a corresponding three-dimensional space, the z-axis being a coordinate axis, pointing toward a roof of the vehicle, of a three-dimensional coordinate system constructed for the three-dimensional space, an x-axis of the coordinate system pointing toward a rear of the vehicle, and a y-axis of the coordinate system being perpendicular to a plane formed by the x-axis and the z-axis.
- A vehicle, comprising: a vehicle server, a vehicle-mounted screen robotic arm, and a vehicle-mounted screen; the vehicle server being configured to execute the method according to any one of claims 1 to 17, so as to control the vehicle-mounted screen robotic arm to perform a target preset action in a target cabin region; the vehicle-mounted screen robotic arm being configured to perform, under control of the vehicle server, the target preset action in the target cabin region; and the vehicle-mounted screen being connected to the vehicle-mounted screen robotic arm.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 17.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211515888.3A CN118107485A (zh) | 2022-11-29 | 2022-11-29 | Control method for screen, vehicle and electronic device
CN202211515888.3 | 2022-11-29 | ||
CN202211516214.5A CN118106949A (zh) | 2022-11-29 | 2022-11-29 | Control method for robotic arm, vehicle and electronic device
CN202211516214.5 | 2022-11-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024113839A1 true WO2024113839A1 (zh) | 2024-06-06 |
Family
ID=91322955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/104108 WO2024113839A1 (zh) | 2022-11-29 | 2023-06-29 | Control method for robotic arm, vehicle and electronic device
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024113839A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009025715A (ja) * | 2007-07-23 | 2009-02-05 | Xanavi Informatics Corp | In-vehicle device and speech recognition method |
CN103077369A (zh) * | 2011-10-26 | 2013-05-01 | Jiangnan University | Human-machine interaction system using a hand clap as the marker action, and recognition method thereof |
CN110021298A (zh) * | 2019-04-23 | 2019-07-16 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Automotive voice control system |
CN112026790A (zh) * | 2020-09-03 | 2020-12-04 | Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. | Control method and apparatus for vehicle-mounted robot, vehicle, electronic device and medium |
CN115158197A (zh) * | 2022-07-21 | 2022-10-11 | 重庆蓝鲸智联科技有限公司 | Control system for in-vehicle smart-cabin entertainment based on sound source localization |
- 2023-06-29: WO application PCT/CN2023/104108 filed (published as WO2024113839A1)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10913463B2 (en) | Gesture based control of autonomous vehicles | |
CN112369051B | Shared environment for vehicle occupants and remote users | |
CN113302664B | Multimodal user interface for a vehicle | |
US8886399B2 (en) | System and method for controlling a vehicle user interface based on gesture angle | |
US9656690B2 (en) | System and method for using gestures in autonomous parking | |
CN110211586A | Voice interaction method and apparatus, vehicle, and machine-readable medium | |
US11256104B2 (en) | Intelligent vehicle point of focus communication | |
CN111694433A | Voice interaction method and apparatus, electronic device, and storage medium | |
WO2024002297A1 | Control method and apparatus for vehicle-mounted robotic arm, in-vehicle display device, and vehicle | |
CN112083795A | Object control method and apparatus, storage medium, and electronic device | |
WO2022116656A1 (en) | Methods and devices for hand-on-wheel gesture interaction for controls | |
CN115525152A | Image processing method, system and apparatus, electronic device, and storage medium | |
CN111638786B | Display control method, apparatus, device, and storage medium for an in-vehicle rear-seat projection display system | |
WO2024113839A1 | Control method for robotic arm, vehicle and electronic device | |
US20230298267A1 (en) | Event routing in 3d graphical environments | |
CN113361361B | Method and apparatus for interacting with an occupant, vehicle, electronic device, and storage medium | |
KR101655826B1 | Wearable glasses, control method thereof, and vehicle control system | |
TWI853358B | Scene display method and apparatus, electronic device, and storage medium | |
US20240177424A1 (en) | Digital assistant object placement | |
CN118790175A | Vehicle interaction control method, apparatus, device, and computer storage medium | |
CN118107485A | Control method for screen, vehicle and electronic device | |
CN118494183A | Vehicle control method, vehicle control apparatus, and vehicle | |
CN117681795A | Control method and apparatus for a vehicle sliding screen, medium, and electronic device | |
TW202414030A | Scene display method and apparatus, electronic device, and storage medium | |
WO2023211844A1 (en) | Content transfer between devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23896003 Country of ref document: EP Kind code of ref document: A1 |