WO2024002297A1 - Method and apparatus for controlling a vehicle-mounted robotic arm, vehicle-mounted display device, and vehicle - Google Patents

Method and apparatus for controlling a vehicle-mounted robotic arm, vehicle-mounted display device, and vehicle Download PDF

Info

Publication number
WO2024002297A1
WO2024002297A1 (PCT/CN2023/104187)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
script
control
sequence
robotic arm
Prior art date
Application number
PCT/CN2023/104187
Other languages
English (en)
French (fr)
Inventor
李谦
Original Assignee
华人运通(江苏)技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华人运通(江苏)技术有限公司
Publication of WO2024002297A1

Links

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02: Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • B60R11/0229: Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof, for displays, e.g. cathodic tubes
    • B60R11/0235: Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof, for displays of flat type, e.g. LCD
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for; electric constitutive elements
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00: Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0042: Arrangements for holding or mounting articles, not otherwise provided for, characterised by mounting means
    • B60R2011/008: Adjustable or movable supports
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present application relates to the field of automatic control technology, and in particular to methods and devices for controlling vehicle-mounted robotic arms, vehicle-mounted display devices, and vehicles.
  • the main components of the smart cockpit include in-vehicle infotainment systems, instrument panels, heads-up displays, streaming rearview mirrors, ambient lights, smart doors, and smart speakers.
  • Various functions in the smart cockpit can be combined to provide more personalized driving services.
  • Robotic arms have been widely used in automation scenarios, but there are few examples of combining them with smart cockpits. How to combine the two to provide drivers and passengers with more diverse intelligent services while ensuring driving safety, and how to coordinate the control instructions for the vehicle-mounted robotic arm under different circumstances, has become an urgent problem to be solved.
  • This application provides a method and device for controlling a vehicle-mounted robotic arm, a vehicle-mounted display device, and a vehicle to solve technical problems existing in related technologies.
  • A method for controlling a vehicle-mounted robotic arm may include the following steps: generating, according to first trigger information, a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm;
  • the first trigger information is determined based on instruction information from different people in the vehicle and/or environmental information of the current vehicle location; when second trigger information is received while the vehicle-mounted controllable components are executing the first control instruction sequence,
  • a second control instruction sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm is generated; the second trigger information is likewise determined based on instruction information from different people in the vehicle and/or environmental information of the current vehicle location; when there is a conflict between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy is determined to resolve the conflict; a conflict exists when the control objects of both the first control instruction sequence and the second control instruction sequence include the vehicle-mounted robotic arm.
  • A control device for a vehicle-mounted robotic arm may include: a first control instruction sequence generation module, configured to generate, based on first trigger information, a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the first trigger information being determined based on instruction information from different people in the vehicle and/or environmental information of the current vehicle location;
  • a second control instruction sequence generation module, configured to generate, when second trigger information is received while the vehicle-mounted controllable components are executing the first control instruction sequence, a second control instruction sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm, the second trigger information being determined in the same manner;
  • and a conflict resolution strategy determination module, configured to determine a conflict resolution strategy to resolve a conflict between the first control instruction sequence and the second control instruction sequence; a conflict exists when the control objects of both sequences include the vehicle-mounted robotic arm.
  • An electronic device including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the method in any embodiment of the present application.
  • A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to execute the method in any embodiment of the present application.
  • A computer program product including a computer program/instructions which, when executed by a processor, implements the method in any embodiment of the present application.
  • A vehicle-mounted display device including: a control unit that performs the method for controlling the vehicle-mounted robotic arm, or a system including such control; and a display module composed of a robotic arm and a vehicle-mounted screen,
  • the robotic arm being used to drive the vehicle-mounted screen to complete at least one target action.
  • A vehicle including: a control unit that performs the method for controlling the vehicle-mounted robotic arm, or a system including such control; and a display module composed of a robotic arm and a vehicle-mounted screen, the robotic arm being used to drive the vehicle-mounted screen to complete at least one target action.
  • When a conflict occurs, a preset conflict resolution strategy is executed to adjust the first control instruction sequence and the second control instruction sequence.
  • For example, the two sequences can be adjusted according to their priorities as the conflict resolution strategy. This supports concurrency of multiple control instruction sequences, with the controllable components performing their respective functions according to the strategy, improving the intelligence of the vehicle's controllable components.
  • Figure 1 shows a flow chart of a control method for a vehicle-mounted robotic arm provided in Embodiment 1 of the present application.
  • FIG. 2 shows a flow chart for determining a conflict resolution strategy to resolve conflicts provided in Embodiment 1 of the present application.
  • Figure 3 shows a schematic diagram of an accessible space provided in Embodiment 1 of the present application.
  • FIG. 4 shows a schematic diagram of the coordinate system of a vehicle-mounted robotic arm provided in Embodiment 1 of the present application.
  • FIG. 5 shows a schematic diagram of a control device for a vehicle-mounted robotic arm provided in Embodiment 4 of the present application.
  • FIG. 6 shows a block diagram of an electronic device used to implement the method for determining a script sequence provided by an embodiment of the present application.
  • Figure 7 shows an overall schematic diagram of a vehicle-mounted robotic arm according to an embodiment of the present application.
  • Figure 8 shows a schematic diagram of the guide rail of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 9 shows a schematic diagram of the rotation mechanism of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • FIG. 10 shows a schematic diagram of another installation method of the linear motion unit of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 11 shows a schematic diagram of the vehicle screen flipping action of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 12 shows a schematic diagram of the vehicle screen translation action of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 13 shows a schematic diagram of the vehicle screen rotation action of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 14 shows a schematic diagram of the forward and backward movement of the vehicle screen of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • Figure 15 shows a schematic diagram of the action of the rotating member of the vehicle-mounted robotic arm according to the embodiment of the present application.
  • the method for determining a script sequence provided in Embodiment 1 of the present application is as shown in Figure 1.
  • the method may include the following steps.
  • S101: Based on the first trigger information, generate a first control instruction sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm; the first trigger information is determined based on instruction information from different people in the vehicle and/or environmental information of the current vehicle location.
  • Vehicle-mounted controllable components can include vehicle-mounted robotic arms, doors, speakers, ambient lights, etc.
  • the control principles of each vehicle-mounted controllable component are the same.
  • a vehicle-mounted manipulator is used as an example for detailed description.
  • The vehicle-mounted robotic arm can be a bracket for the vehicle display screen. Through its movement, the arm can match the display content of the screen; for example, it can cooperate with the screen to swing back and forth, left and right, at a certain angle.
  • The initial position of the vehicle-mounted robotic arm can be embedded in the vehicle console. After receiving a control instruction, the arm can move accordingly; for example, tilting toward the driver, or moving to a position corresponding to a rear passenger.
  • The vehicle-mounted robotic arm can also be an independent vehicle-mounted component; for example, it can be installed at the armrest, with its initial position serving as the armrest.
  • After receiving a control command, the arm moves based on that command. Moreover, the arm can perform different actions in conjunction with the music played by the speakers and the different colors of the ambient lights, so as to meet users' personalized needs. This application does not limit the specific location or function of the vehicle-mounted robotic arm.
  • The (first or second) trigger information can be issued by different people in the vehicle.
  • For example, the first trigger information may be issued by the driver,
  • and the second trigger information by the passenger sitting behind the driver.
  • the trigger information may also be information detected by a vehicle-side sensor.
  • the trigger information can be a control instruction for the vehicle's controllable components issued by the driver or passenger through voice, movement or touch.
  • Control instructions issued to the vehicle-mounted controllable components may include those issued through gesture adjustment
  • in the standard directions: up, down, left, and right.
  • For example, the vehicle-mounted robotic arm can be rotated left and right and tilted up and down through gesture recognition.
  • The user's adjustment instructions made through fist gestures can be detected in real time.
  • When a fist is held in front of the gesture recognition system for a certain period (for example, 2 seconds), gesture adjustment can be activated.
  • The image acquisition device in the gesture recognition system captures the movement direction of the user's gesture (such as a fist), the image analysis module determines the up, down, left, or right movement of the gesture, and ultimately the left-right rotation or up-down pitch of the vehicle-mounted robotic arm can be controlled.
  • The vehicle-mounted robotic arm can also be controlled to stop rotating;
  • for example, when the user pauses briefly (for example, not more than 1 second) and then moves in the other direction (to the right).
  • The gesture adjustment mode can likewise be controlled to exit,
  • with the central control screen displaying a prompt that gesture adjustment has exited.
  • the following gestures can also be included.
  • The gesture may include multiple repetitions in which the user's five fingers are together, the palm faces down, the fingers bend, and the fingertips move toward/away from the palm;
  • each repetition may represent a movement of the vehicle-mounted robotic arm toward the controller.
  • The gesture may also include detecting that the user's five fingers are together, the palm faces upward, the fingers bend, and the fingertips repeatedly move toward/away from the palm, etc.
  • the trigger information can be generated by the user editing through the front-end device.
  • The front-end device may be a smartphone app, a car terminal app, or a web terminal (editing page or app), etc. Take the front-end device being an app as an example.
  • A visual programming interface can be prefabricated in the app on the smartphone, car, or web. Users (car owners) can enter the visual editing page by opening the app. In the visual programming interface, an icon drag-and-drop editing method can be used to edit the peripheral atomized function blocks corresponding to control command icons such as vehicle robotic arm movement, speaker sound effects, door opening status, and ambient light changes.
  • the visual programming interface is provided with control instruction icons for a vehicle-mounted robotic arm, a control instruction icon for a speaker, a control instruction icon for an ambient light, a control instruction icon for a car door, etc.
  • When receiving the user's drag command for the control instruction icon of the vehicle-mounted robotic arm, that icon can be set to editable, while the control instruction
  • icons of other vehicle-mounted components are made uneditable. Uneditable icons can be grayed out or overlaid with a non-editable layer, etc.
  • For editable control instruction icons, submenus can be displayed.
  • the submenus can display prefabricated actions of the vehicle-mounted robotic arm, such as swinging left and right (shaking head), swinging up and down (nodding), etc.
  • The submenu can also display single control actions, such as moving forward one step, moving up one step, turning 5° to the left, etc.
  • the dragging instruction may be to select the control instruction icon of a single vehicle-mounted component, or it may be to select the control instruction icons of multiple vehicle-mounted components in succession.
  • the execution sequence of the multiple vehicle-mounted components can also be controlled in conjunction with the timeline.
  • the order of execution can be serial or parallel, etc.
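One way to picture the timeline-based editing described above is as a list of (offset, component, action) entries: entries sharing a timeline offset run in parallel, and different offsets run serially. This encoding and the component/action names are assumptions for illustration, not the patent's own format.

```python
# Illustrative encoding of a user-edited control instruction sequence.
# Each entry pairs a timeline offset (seconds) with a component action.

sequence = [
    (0.0, "robotic_arm", "swing_left_right"),
    (0.0, "ambient_light", "dim"),        # parallel with the arm action
    (2.5, "speaker", "play_chime"),       # serial: runs after the first group
    (2.5, "door", "open"),
]

def group_by_offset(seq):
    """Group actions by timeline offset to get the serial execution order.

    Returns a list of groups; actions within one group execute in parallel,
    and groups execute one after another in offset order.
    """
    groups: dict[float, list] = {}
    for t, component, action in seq:
        groups.setdefault(t, []).append((component, action))
    return [groups[t] for t in sorted(groups)]
```

Running `group_by_offset(sequence)` yields two serial steps, the first containing the arm and ambient-light actions in parallel.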
  • the first trigger information may be trigger information generated first in time series.
  • a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm can be generated.
  • the first control instruction sequence generated using the first trigger information may be directed only to the vehicle-mounted robotic arm, or may be directed to multiple vehicle-mounted controllable components such as the vehicle-mounted robotic arm, the door, the speaker, and the ambient light.
  • the second control instruction sequence generated using the second trigger information can also be targeted at the vehicle-mounted robotic arm, or can also be targeted at multiple vehicle-mounted controllable components such as car doors, speakers, and ambient lights.
  • When a conflict occurs, a preset conflict resolution strategy is executed to adjust the first control instruction sequence and the second control instruction sequence.
  • For example, the two sequences can be adjusted according to their priorities as the conflict resolution strategy. This supports concurrency of multiple control instruction sequences, with the controllable components performing their respective functions according to the strategy, improving the intelligence of the vehicle's controllable components.
  • The types of the first trigger information and the second trigger information may be determined first. Then, if a specified type exists, or if the priority of the specified type of trigger information is higher than that of the other trigger information, the other trigger information is ignored.
  • Types of the first trigger information and the second trigger information may include security or non-security.
  • the safety category can be triggered by vehicle braking.
  • the automatic emergency braking signal (AEB) sent by the Auto-driving Domain Controller Module (ADCM) can correspond to safety trigger information.
  • the control signals sent through the Infotainment Domain Controller Module (IDCM) and the control signals sent through the Body Domain Controller Module (BDCM) can both be used as non-safety trigger information.
  • The specified type can be the safety class; that is, safety-class trigger information may have the highest priority. For example, when the (first) control instruction sequence corresponding to non-safety trigger information is interrupted by the (second) control instruction sequence corresponding to safety trigger information and the two conflict, the sequence corresponding to the safety trigger information has the higher level; even though the (first) sequence corresponding to the non-safety trigger information came earlier, it can still be ignored.
  • the priority of each signal can be determined in advance. For example, the priority of control signals corresponding to high-frequency, fixed-function scenarios sent through BDCM may be higher than the priority of control signals corresponding to user-initiated adjustment scenarios.
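The trigger-priority rule above can be sketched as a small lookup: safety-class triggers (such as an AEB signal from the ADCM) outrank non-safety triggers, and among non-safety triggers, BDCM high-frequency fixed-function scenes outrank user-initiated adjustments. The numeric levels and labels are illustrative assumptions.

```python
# Hedged sketch of the trigger-priority rule; levels are illustrative.

PRIORITY = {
    "safety_adcm": 3,   # e.g. automatic emergency braking (AEB), highest
    "bdcm_fixed": 2,    # high-frequency, fixed-function scenes via BDCM
    "user_adjust": 1,   # user-initiated adjustment scenes
}

def winning_trigger(first: str, second: str) -> str:
    """Return the trigger whose instruction sequence should run.

    The lower-priority trigger is ignored; on a tie, the earlier (first)
    trigger wins, matching a first-come, first-served fallback.
    """
    return first if PRIORITY[first] >= PRIORITY[second] else second
```

So a safety trigger arriving mid-sequence displaces an earlier user adjustment, while two user adjustments fall back to arrival order.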
  • The co-pilot mode can include: after detecting that the front passenger is getting in, the vehicle-mounted robotic arm rotates left and right so that the vehicle display screen faces the front passenger side, making it easier for the passenger to get in and operate the car.
  • the power-off return mode can be the reset of the vehicle-mounted manipulator after power-off.
  • Track mode can be when the vehicle starts racing, the on-board robotic arm resets, or it tilts at a certain angle in conjunction with the vehicle's steering, etc.
  • the triggering condition for the rear-seat free space mode can be the parking state, or the driving speed is 0 km/h. In this case, it is also necessary to ensure that there is no occupation signal for the front seats for 5 seconds.
  • the main and passenger seats move toward the front of the car to their closest position and fold forward.
  • the vehicle-mounted robotic arm (carrying the display device) moves toward the rear of the vehicle to the optimal position (for example, centered and extended forward), and the ambient light performs dim light display, etc.
  • Normal interruptions of the rear-seat free space mode can include the user ending the scene independently. In that case the scene is not triggered again this time (this instance of the mode ends), while other untriggered conditions can still trigger the scene.
  • Abnormal interruptions of the rear-seat free space mode may include a higher-priority scene being executed (this instance of the mode is suspended); the scene will be retriggered when its trigger conditions are met again.
  • User active adjustment scenarios can include manual power adjustment, steering wheel control adjustment, voice adjustment, motion adjustment, etc.
  • The user's active adjustment scene may include a control instruction sequence (jog command) created by the user in a personalized manner; it may also include an existing control instruction sequence selected by the user, referred to here as the control sequence corresponding to a called scene card.
  • The control sequence corresponding to a scene card requires the scene engine; scene cards are stored in the scene engine.
  • Scene cards come in two forms: preset scenes (created at the factory) and user-created scenes (customized, compiled, and saved by the user, subject to rationality requirements).
  • A scene card itself is a public resource, which can be called through script pre-embedding, UI human-computer interaction, voice, steering wheel controls, etc., and then used to control the vehicle-mounted robotic arm controller.
  • the trigger signal of the jog command directly calls the vehicle-mounted robotic arm service (Bot Service), thereby controlling the vehicle-mounted robotic arm controller.
  • Bot Service: the vehicle-mounted robotic arm service.
  • conflict resolution strategies can be adjusted during this process.
  • The jog command has a certain degree of randomness. For example, it can be a control command issued spontaneously by the driver or a passenger during the ride, such as "move a little to the left" or "come closer to me" uttered by the driver through voice.
  • The conflict resolution strategy is determined by determining the priorities of the first control instruction sequence and the second control instruction sequence. If they have the same priority, they can be executed on a first-come, first-served basis, or in parallel if their controlled objects do not conflict.
  • That is, the first control instruction sequence and the second control instruction sequence can be executed in parallel, or executed sequentially according to their reception time.
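The equal-priority rule just described reduces to a disjointness check on control objects: sequences that touch no common component may run in parallel, while overlapping sequences fall back to first-come, first-served. The data shapes below are assumptions for illustration.

```python
# Sketch of the equal-priority scheduling rule for two instruction sequences.

def schedule(first_objects: set, second_objects: set) -> str:
    """Decide how two equal-priority control instruction sequences execute.

    first_objects / second_objects are the sets of vehicle-mounted
    controllable components each sequence controls.
    """
    if first_objects & second_objects:
        # Shared control objects (e.g. both drive the robotic arm):
        # execute sequentially by reception time.
        return "first_come_first_served"
    # Disjoint control objects: safe to run concurrently.
    return "parallel"
```

For example, an arm sequence and an ambient-light sequence can run in parallel, but two sequences that both include the robotic arm cannot.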
  • the method further includes the following process: directly sending the control instruction sequence corresponding to the specified type of trigger information to the vehicle-mounted controllable component.
  • The specified type of trigger information may be the aforementioned safety trigger information, that is, the trigger information corresponding to the automatic emergency braking signal sent by the ADCM; or it may be the trigger information corresponding to the control signal sent through the BDCM.
  • the trigger information can also be sent through IDCM.
  • the MCU directly controls the vehicle-mounted robotic arm.
  • This shortens the control signal transmission path, ensuring that the vehicle-mounted robotic arm can be accurately controlled at the earliest moment.
  • the types of the first trigger information and the second trigger information may also be determined first. Then, if the types are the same, the attributes of the first control instruction sequence and the second control instruction sequence are determined, and the attributes include template class attributes or customized class attributes. Finally, if the attributes are different, or if the attributes are the same and belong to the customized category, the vehicle-mounted robotic arm is controlled to pause the action. When a new control instruction sequence is received, the vehicle-mounted robotic arm is controlled to execute the new control instruction sequence.
  • The attributes can include template class attributes or customized class attributes.
  • the previously mentioned welcome mode, co-pilot convenience mode, power-off return mode, track mode, etc. can correspond to template attributes. That is, each of the above modes is a complete set of preset control sequences, which can be executed according to the corresponding sequence when triggered.
  • Customized attributes can be attributes corresponding to fine-tuning control instructions triggered by the user through voice, gesture, or touch instructions. For example, “a little higher”, “a little closer to me”, “turn 10° towards me”, etc.
  • Such a control instruction sequence is not a complete preset sequence but a control instruction, or sequence of instructions, issued spontaneously by the user; accordingly, it can be classified under the customized class attributes.
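The attribute rule described above — same trigger type, then pause the arm when the attributes differ or are both customized, so that a newly received sequence can take over — can be sketched as a small decision function. The attribute labels and outcome strings are illustrative assumptions.

```python
# Minimal sketch of the same-type attribute rule for conflicting sequences.
# attr_* is "template" or "customized"; outcome strings are illustrative.

def resolve_same_type(attr_first: str, attr_second: str) -> str:
    """Resolve a conflict between two sequences of the same trigger type."""
    if attr_first != attr_second or attr_first == "customized":
        # Differing attributes, or two customized sequences: pause the arm
        # and execute whatever new control instruction sequence arrives.
        return "pause_and_await_new_sequence"
    # Both template class: fall through to a priority comparison.
    return "compare_priority"
```

Two template sequences thus go on to a priority comparison, while any combination involving a customized sequence pauses the arm first.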
  • Suppose the vehicle-mounted robotic arm is executing the control instruction sequence corresponding to the co-pilot convenience mode (the first control instruction sequence, with template class attributes),
  • and the driver issues instructions through side controls, voice, gestures, etc., generating the second trigger information,
  • such that the second control instruction sequence corresponding to the second trigger information conflicts with the first control instruction sequence corresponding to the first trigger information.
  • If the vehicle-mounted robotic arm is in a running state when it receives a new jog control command (at this time the jog control command of the second control instruction sequence is not cached), the arm is controlled to stop and enters a state ready to receive new control commands.
  • A first-come, first-served conflict resolution strategy is then implemented
  • for whatever control instruction sequence arrives next. That is, the newly received control instruction sequence may be a newly received first or second control instruction sequence, or a third control instruction sequence different from both.
  • the conflict resolution strategy can also be obtained in the same way.
  • When the attributes are the same and both are customized attributes, the priorities of the first control instruction sequence and the second control instruction sequence can be compared first. Then, based on the comparison result, the vehicle-mounted robotic arm is controlled to execute the higher-priority control instruction sequence.
  • When the attributes are the same and both are template class attributes, the priorities of the two can likewise be compared first.
  • the priority of the control signal corresponding to the high-frequency, fixed-function scene sent through the BDCM can be higher than the priority of the control signal corresponding to the user-initiated adjustment operation sent by the user using the steering wheel.
  • the priority of the control signal corresponding to the high-frequency and fixed-function scenario can be medium (the priority of the safety category is high).
  • the priority of control signals in user co-creation scenarios can be medium or low.
  • User co-created scenes can be scenes composed of user-defined action control instructions. Since such a scene has been created by the user and passed the rationality test, it can also be treated as a template type.
  • the priority of the control signal corresponding to the user's active adjustment scene sent by the user using the steering wheel may be low.
  • Human-computer interaction scenes can be several prefabricated modes. For example, they can include a safety steward mode (such as safety management, autonomous driving, etc.), a test drive introduction mode (such as vehicle function introduction), an awakening mode (such as interacting with the driver), an intelligent volume adjustment mode (such as intelligent noise reduction, or muting when answering calls or talking), KTV mode, smart weather broadcast mode, New Year lion dance mode (multimedia playback on specific holidays), children's mode (playing cartoons or cartoon songs), low battery mode, etc.
  • the priority of the control signal corresponding to the automatic adjustment scene can be low.
  • the follow-up automatic adjustment scene can be an accompanying-type control instruction sequence; for example, the color of the ambient light can be adjusted automatically as the music from the speaker changes.
  • follow-up automatic adjustment scenes can correspond to template types.
  • the conflict resolution strategy can continue the first-come-first-served approach of the previous steps.
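The priority-plus-first-come-first-served strategy described above can be sketched as follows. The scene names and numeric priority values are illustrative assumptions consistent with the high/medium/low ordering above, not values specified by this application:

```python
from dataclasses import dataclass, field
from itertools import count

# Assumed numeric priorities; scene names follow the scenes described above.
PRIORITY = {
    "safety": 3,
    "high_frequency_fixed_function": 2,
    "user_co_creation": 1,
    "user_active_adjustment": 1,
    "follow_up_automatic_adjustment": 1,
}

_arrival = count()  # monotonically increasing arrival order

@dataclass
class ControlSequence:
    scene: str
    instructions: list
    order: int = field(default_factory=lambda: next(_arrival))

def resolve_conflict(first: ControlSequence, second: ControlSequence) -> ControlSequence:
    """Pick the sequence to execute when two sequences conflict.

    The higher scene priority wins; equal priorities fall back to
    first-come-first-served, matching the strategy described above.
    """
    p1, p2 = PRIORITY[first.scene], PRIORITY[second.scene]
    if p1 != p2:
        return first if p1 > p2 else second
    return first if first.order < second.order else second
```

A safety-scene sequence would thus always preempt a steering-wheel adjustment, while two equal-priority sequences execute in arrival order.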
  • the following steps may also be included: first, detect the status of the vehicle-mounted controllable components in real time, where the status is either a normal status or an abnormal status. Then, determine the executability of the first control instruction sequence and/or the second control instruction sequence based on the detected status of the vehicle-mounted controllable components.
  • the status of a vehicle-mounted controllable component may be a normal status or an abnormal status. For the vehicle-mounted manipulator, its possible statuses can be as shown in Table 1.
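The executability gate described above can be sketched minimally. The status values ("normal"/"abnormal") follow the description; the function and component names are assumptions for illustration:

```python
def is_executable(component_status: dict, required: list) -> bool:
    """A control instruction sequence is executable only if every
    vehicle-mounted controllable component it requires reports a
    normal status in the real-time status detection."""
    return all(component_status.get(c) == "normal" for c in required)
```

A sequence requiring the robotic arm would be rejected the moment the arm reports an abnormal status, before any instruction is dispatched.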
  • the vehicle-mounted robotic arm can be directly controlled through the MCU of the IDCM.
  • the above information transmission process can use the CAN protocol. This ensures the stability and timeliness of control.
  • the vehicle-mounted robotic arm can be controlled based on the vehicle-mounted robotic arm service (Bot Service).
  • Bot Service can implement functions such as communication, scene encapsulation, vehicle-mounted robotic arm status query, and vehicle-mounted robotic arm driving.
  • the communication function includes exchanging data with external data interfaces (such as an Open API) and receiving control instructions in .json format from the upper-layer scene engine.
  • Scene encapsulation may refer to encapsulating the aforementioned different scenes to obtain an encapsulated control instruction sequence to control the vehicle-mounted robotic arm.
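Receiving a .json-format control instruction from the upper-layer scene engine and encapsulating it into a flat control instruction sequence might look like the following sketch. The field names (`scene`, `instructions`, `action`) are hypothetical, since the application does not specify the schema:

```python
import json

# Hypothetical .json control instruction, as might arrive from the
# upper-layer scene engine over the Bot Service communication function.
raw = '''
{
  "scene": "welcome",
  "instructions": [
    {"action": "move_forward", "distance_mm": 50},
    {"action": "tilt", "angle_deg": 10}
  ]
}
'''

def encapsulate_scene(payload: str) -> list:
    """Parse the JSON control instruction and encapsulate it into a
    flat (action, parameters) sequence for the vehicle-mounted arm."""
    doc = json.loads(payload)
    return [(step["action"], {k: v for k, v in step.items() if k != "action"})
            for step in doc["instructions"]]
```

The resulting sequence can then be dispatched to the arm driver over CAN as described above.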
  • when there is no conflict between the first control instruction sequence and the second control instruction sequence, the vehicle-mounted controllable component is controlled to execute the two sequences in parallel.
  • a first control instruction sequence and a second control instruction sequence that do not conflict may be executed in parallel, which enables personalized experiences and services for the driver, the co-driver, and rear passengers. In summary, by combining safety, user experience, service priority, and trigger timing, different services can be provided to the driver, the co-driver, and rear passengers at the same time.
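A minimal sketch of the parallel-when-non-conflicting behavior, assuming for illustration that two sequences conflict exactly when they target a shared controllable component (an assumption; the application's conflict criterion is broader):

```python
from concurrent.futures import ThreadPoolExecutor

def components_of(seq):
    """The set of controllable components a sequence touches."""
    return {component for component, _ in seq}

def run(seq, log):
    for component, action in seq:
        log.append((component, action))  # stand-in for actuating the component

def execute(first, second):
    """Run conflicting sequences one after the other; run
    non-conflicting sequences in parallel."""
    log = []
    if components_of(first) & components_of(second):
        run(first, log)
        run(second, log)
    else:
        with ThreadPoolExecutor(max_workers=2) as pool:
            pool.map(lambda s: run(s, log), [first, second])
    return log
```

For example, an arm gesture and an ambient-light effect share no component and so dispatch concurrently, while two arm sequences fall back to ordered execution.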
  • the first control instruction sequence is obtained at least by script parsing of a received target script sequence that includes vehicle-mounted robotic arm control instructions.
  • the step of determining the target script sequence may be as shown in Embodiment 2 of the present application.
  • Embodiment 2 of the present application provides a method for processing a script sequence.
  • the steps for determining the target script sequence may also be as shown in Embodiment 3 of the present application.
  • Embodiment 3 of the present application provides a method for determining a script sequence.
  • As an automatic control device that imitates the functions of a human arm and can complete various tasks, the robotic arm has been widely used in industrial manufacturing, medical rescue, aerospace, and other fields. However, installing robotic arms in the vehicle cockpit to provide intelligent driving services to vehicle occupants is an application field that few have explored.
  • Embodiment 2 of the present application provides a script sequence processing method for determining a target script sequence.
  • the script sequence processing method provided in Embodiment 2 of the present application may include the following steps: first, while the user edits the target script sequence using the control script set, determine, within the control script set, the next control script available for selection after the control script currently selected by the user.
  • the currently selected control script is a script used to control the vehicle-mounted manipulator to perform a preset action.
  • the selectable control script is a control script that, after the vehicle-mounted manipulator completes the preset action corresponding to the currently selected control script, can further control the manipulator to execute its corresponding preset action within the preset reachable space. This selectable control script is then determined as the next control script available for the user to select.
  • a selectable control script can, after the vehicle-mounted robotic arm completes the preset action corresponding to the currently selected control script, control the arm to further execute its corresponding preset action within the preset reachable space. Therefore, when the selectable control scripts are determined as the next control scripts available to the user, it is guaranteed that whichever control script the user selects after the currently selected one can, once the arm has finished the preset action of the currently selected script, control the arm to execute its own preset action within the preset reachable space.
  • the execution subject of the script sequence processing method provided in the embodiment of this application is generally the server, and may also be the client.
  • the so-called server can be a cloud server or cloud server cluster that provides data processing, storage, forwarding and other services, or it can be a traditional server or traditional server cluster that provides data processing, storage, forwarding and other services.
  • the traditional server is generally implemented as a computing device.
  • the so-called client is an application (APP), program, or software that at least has the function of processing the target script sequence.
  • the client can be deployed and run on a vehicle-mounted electronic device, on a mobile electronic device, or on a browser web page (web).
  • the client deployed and run on the vehicle-mounted electronic device is the vehicle-mounted client
  • the client deployed and run on the mobile electronic device is the mobile client
  • the client deployed and run on the browser web page is the web client.
  • a common example of a mobile client is a mobile-phone APP.
  • the so-called vehicle-mounted robotic arm is a robotic arm installed in the vehicle cabin and used to provide intelligent driving services to people on the vehicle.
  • the number of vehicle-mounted robotic arms may be one or multiple.
  • the target script sequence can be processed separately for the multiple vehicle-mounted robotic arms, or the target script sequence can be processed for any one of the multiple vehicle-mounted robotic arms.
  • the following is a detailed description of the script sequence processing method provided in the embodiment of the present application, taking only one vehicle-mounted robotic arm as an example.
  • the vehicle-mounted robotic arm can independently perform corresponding preset actions to provide intelligent driving services for vehicle personnel.
  • the vehicle-mounted robotic arm can be used alone to control the movement of the display screen, such as controlling the preset action of moving the vehicle display back and forth or adjusting the angle, or independently performing the preset action of swinging left and right.
  • the vehicle-mounted robotic arm can also cooperate with the vehicle infotainment system, instrument panel, head-up display, streaming rearview mirror, ambient light, smart doors, smart speakers, and so on in the vehicle cockpit to complete corresponding preset actions in preset scenarios, for example, performing a left-and-right swinging action in conjunction with the flashing of the ambient light.
  • the vehicle-mounted client realizes control of the vehicle-mounted robotic arm by parsing and executing the control script. Specifically, the vehicle-mounted client can parse the control script and control the vehicle-mounted robotic arm according to the preset actions determined by the control script.
  • the control script set includes control scripts corresponding to at least two preset actions.
  • the so-called preset actions are preset execution actions for the vehicle-mounted robotic arm. During the execution of the preset actions, the vehicle-mounted robotic arm is within the reachable space.
  • the preset action may specifically be a separate basic action, such as upward movement, downward movement, or leftward movement, etc.
  • Preset actions can also be complex actions composed of basic actions, such as swinging left and right, moving up and down, shaking your head or waving your hands, etc.
  • the specific process of determining the next selectable control script in the control script set may include the following steps: first, predict the stopping position of the vehicle-mounted manipulator after it completes the preset action corresponding to the currently selected control script. Then, for each control script in the control script set, predict, based on the stopping position, the real-time position of the manipulator during further execution of the corresponding preset action. Finally, determine the selectable control scripts based on the real-time positions.
  • determining the selectable control scripts from the real-time positions of the vehicle-mounted mechanical arm, predicted from the stopping position during further execution of the corresponding preset actions, ensures that a selectable control script is one whose corresponding preset action keeps the arm within the reachable space throughout execution, and excludes any control script whose execution would cause the arm to exceed the reachable space.
  • a selectable control script can control the vehicle-mounted manipulator to further execute its corresponding preset action within the preset reachable space. Therefore, using the real-time positions to determine the selectable control scripts ensures that the manipulator stays within the reachable space while it executes, in order, the preset actions corresponding to the edited target script sequence.
  • for example, for the second control script in the target script sequence, predict the stopping position of the vehicle-mounted robotic arm after it executes the corresponding preset action; then traverse each control script in the control script set, predicting, based on the stopping position, the real-time position of the arm during further execution of the corresponding preset action; finally, use the real-time position corresponding to each control script to determine, within the control script set, the control scripts selectable as the third control script in the target script sequence.
  • first, determine the spatial range limited by the reachable space. Then, for each control script, detect whether the real-time position is within the spatial range. Finally, determine the control scripts whose real-time positions are within the spatial range as selectable control scripts.
  • determining the control scripts whose real-time positions are within the spatial range as selectable ensures that, when the vehicle-mounted manipulator executes in sequence the preset actions corresponding to the edited target script sequence, all of those actions remain within the reachable space. Personalized control of the vehicle-mounted robotic arm can therefore be achieved through the edited target script sequence.
  • the specific implementation method of determining the spatial range limited by the accessible space is: first, using the installation position of the vehicle-mounted robotic arm in the vehicle cockpit as the coordinate origin, a three-dimensional coordinate system for the vehicle-mounted robotic arm is constructed.
  • Figure 4 is a schematic diagram of the coordinate system of a vehicle-mounted robotic arm provided in Embodiment 2 of the present application.
  • the x-axis of the three-dimensional coordinate system shown in Figure 4 points to the rear of the car, and the z-axis points to the roof.
  • the spatial range can be represented by the pitch angle range, yaw angle range and spatial distance of the vehicle-mounted manipulator arm.
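Testing a point against a spatial range expressed as a pitch-angle range, a yaw-angle range, and a spatial distance, in the coordinate system of Figure 4 (origin at the installation position, x toward the rear of the car, z toward the roof), might look like the sketch below. The concrete limits are illustrative assumptions, not values from this application:

```python
import math

# Assumed limits for illustration only.
PITCH_RANGE = (-30.0, 45.0)   # degrees
YAW_RANGE = (-60.0, 60.0)     # degrees
MAX_DISTANCE = 0.35           # metres from the installation origin

def within_reachable_space(x: float, y: float, z: float) -> bool:
    """Check a point in the arm's coordinate system against the
    pitch range, yaw range, and spatial distance bound."""
    distance = math.sqrt(x * x + y * y + z * z)
    pitch = math.degrees(math.atan2(z, math.hypot(x, y)))
    yaw = math.degrees(math.atan2(y, x))
    return (distance <= MAX_DISTANCE
            and PITCH_RANGE[0] <= pitch <= PITCH_RANGE[1]
            and YAW_RANGE[0] <= yaw <= YAW_RANGE[1])
```

Representing the range in angle-plus-distance form matches how a pivoting arm sweeps space and avoids enumerating a box that the arm could never fully reach.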
  • in Embodiment 2 of this application, the specific implementation of predicting, based on the stopping position, the real-time position of the vehicle-mounted robotic arm during further execution of the corresponding preset action is as follows:
  • on the basis of the stopping position, for each control script in the control script set, the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action is predicted from the spatial position change of the arm relative to its installation position during execution of that action.
  • the spatial position change of the vehicle-mounted manipulator relative to its installation position during execution of each preset action is configured in advance. The real-time position of the manipulator during execution of the corresponding preset action can therefore be predicted from the stopping position together with this pre-configured spatial position change.
  • the vehicle-mounted robotic arm may be composed of at least one joint or connecting rod.
  • each joint or link may exceed the reachable space during execution of a preset action. Therefore, when predicting the real-time position of the vehicle-mounted manipulator during execution of the corresponding preset action, the real-time position of each joint or link in the manipulator must be predicted.
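The per-joint prediction and filtering described above can be sketched as follows, assuming each control script's per-joint displacement trajectory is pre-configured as stated. All names and the reachability bound are illustrative assumptions:

```python
REACH = 0.35  # assumed spatial bound (metres) around the installation origin

def inside(p):
    x, y, z = p
    return (x * x + y * y + z * z) ** 0.5 <= REACH

def selectable_scripts(stop_positions, script_set):
    """stop_positions: predicted per-joint positions after the currently
    selected script finishes. script_set: {name: list of per-step
    per-joint displacement deltas}. Returns the scripts for which every
    joint stays within the reachable space at every step."""
    chosen = []
    for name, steps in script_set.items():
        joints = [list(p) for p in stop_positions]  # fresh copy per script
        ok = True
        for deltas in steps:                        # one delta per joint per step
            for joint, (dx, dy, dz) in zip(joints, deltas):
                joint[0] += dx; joint[1] += dy; joint[2] += dz
            if not all(inside(j) for j in joints):
                ok = False
                break
        if ok:
            chosen.append(name)
    return chosen
```

Only the scripts returned here would then be set to a user-selectable state in the editor.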
  • when the execution subject is a client, the currently selected control script may be determined as follows: in response to a selection operation triggered by the user for a control script, determine the currently selected control script. In addition, after the selectable control scripts are determined, they can further be set to a state available for user selection. Under this determination method, the client needs the target script sequence editing function in addition to the target script sequence processing function.
  • when the target script sequence processing function and the target script sequence editing function are deployed separately, editing of the target script sequence is often delayed by poor network communication between the different electronic devices. Deploying both functions on the same client at the same time effectively prevents this problem.
  • the currently selected control script can also be determined using first prompt information sent by the server for the selectable control scripts.
  • second prompt information for prompting the control scripts selectable by the user can further be sent, so that the first electronic device uses the second prompt information to set the control scripts selectable by the user to a state available for user selection.
  • the client in addition to having at least a target script sequence processing function, can also have a target script sequence editing function.
  • the currently selected control script is the control script selected by the user, through the human-computer interaction interface, from the target application, program, or software running on the first electronic device.
  • the target script sequence processing function and the target script sequence editing function are deployed separately, which can reduce the resource consumption of the target script sequence processing function and the target script sequence editing function on the electronic devices used to support their respective operations.
  • when the execution subject is the server, the currently selected control script may be determined as follows: use third prompt information, sent by the second electronic device for the currently selected control script, to determine the currently selected control script.
  • fourth prompt information for prompting the control scripts selectable by the user can further be sent to the second electronic device, so that the second electronic device uses the fourth prompt information to set the control scripts selectable by the user to a state available for user selection.
  • the currently selected control script is the control script selected by the user by triggering, through the human-computer interaction interface of the second electronic device, the target application, program, or software running on the second electronic device.
  • the target script sequence processing function is deployed on the server, which not only improves the determination speed of control scripts available for users to select, but also provides target script sequence processing function services for different users.
  • the detailed process of processing the target script sequence may include the following steps: Step 1, use the third prompt information sent by the second electronic device for the currently selected control script to determine the currently selected control script.
  • Step 2 For each control script in the control script set, predict the real-time position of the vehicle-mounted robotic arm during its further execution of the corresponding preset action based on the stopping position.
  • Step 3: check whether the real-time position is within the spatial range. If so, determine the control script whose real-time position is within the spatial range as a selectable control script; if not, determine the control script whose real-time position is not within the spatial range as a control script that cannot be selected.
  • the control scripts available for selection are set to a user-selectable state, so that the user can visually perceive which control script can be selected next. This improves both the user's experience and efficiency in editing the target script sequence.
  • to improve the visualization of the control scripts available for selection and improve the user experience, when displaying them, the selectable control scripts can first be set to a highlighted display mode and then displayed with highlighting.
  • alternatively, the control scripts available for user selection can be set to a user-selectable mode and those not available set to a non-selectable mode; for example, selectable control scripts are set to a highlighted mode and non-selectable control scripts are set to a grayed-out mode.
  • control scripts that are not available for selection by the user can also be set to a hidden (not displayed) mode.
  • As an automatic control device that imitates the functions of a human arm and can complete various tasks, the robotic arm has been widely used in industrial manufacturing, medical rescue, aerospace, and other fields. However, installing robotic arms in the vehicle cockpit to provide intelligent driving services to vehicle occupants is an application field that few have explored.
  • Embodiment 3 of the present application provides a script sequence determination method for determining a target script sequence.
  • the method for determining the script sequence may include the following steps: first, for each control script in the first script sequence, detect the reachability of the vehicle-mounted manipulator while it executes the preset actions according to the first script sequence.
  • the control script is a script used to control the vehicle-mounted manipulator to perform the preset action.
  • the reachability is used to indicate whether the vehicle-mounted manipulator exceeds the preset reachable space.
  • the first script sequence is used to determine the target script sequence for controlling the vehicle-mounted manipulator to perform the preset action.
  • the script sequence determination method provided in Embodiment 3 of the present application can sequentially detect the reachability of the vehicle-mounted robotic arm during execution of the preset actions according to the first script sequence, and use the first script sequence, according to the corresponding reachability, to determine the target script sequence for controlling the arm to perform the preset actions. Since the target script sequence is a script sequence used to control the vehicle-mounted robotic arm to perform preset actions, it can be used to achieve personalized control of the arm, so that the arm executes every preset action in the first script sequence.
  • the execution subject of the script sequence determination method provided in Embodiment 3 of the present application is generally a server, and may also be a client.
  • the so-called server can be a cloud server or cloud server cluster that provides services such as data processing, storage, and forwarding, or it can be a traditional server or traditional server cluster that provides services such as data processing, storage, and forwarding.
  • the traditional server is generally implemented as a computing device.
  • the so-called client is an application (Application, APP), application or software that at least has a script sequence determination function.
  • the client can be deployed and run on a vehicle-mounted electronic device, on a mobile electronic device, or on a browser web page (web).
  • the client deployed and run on the vehicle-mounted electronic device is the vehicle-mounted client
  • the client deployed and run on the mobile electronic device is the mobile client
  • the client deployed and run on the browser web page is the web client.
  • a common example of a mobile client is a mobile-phone APP.
  • the vehicle-mounted client responds to the script sequence editing request triggered by the user through the human-computer interaction interface and displays the pre-designed control script on the designated page.
  • the vehicle-mounted client selects the corresponding control script in response to the selection operation triggered by the user through the human-computer interaction interface for the control script, and uses the selected control script to generate the first script sequence according to the order in which the user selects the control script.
  • the first script sequence may also be generated by the target application, program, or software in response to an operation triggered by the user through the human-computer interaction interface of the electronic device.
  • the so-called target application, program or software is an application, program or software that at least has a script editing function.
  • the so-called electronic devices run target applications, programs or software, and the specific implementation methods include but are not limited to mobile phones, computers and vehicle-mounted electronic devices.
  • the first script sequence may also be generated by other clients in response to the user's triggering of the human-computer interaction interface.
  • the other clients also need to have the script sequence editing function. That is, when the target application, program, or software has both a script sequence editing function and a script sequence processing function, the target application, program, or software is another such client.
  • after receiving the first script sequence forwarding request, the server sends a target script sequence determination request to the vehicle-mounted client. Finally, after receiving the target script sequence determination request, the vehicle-mounted client parses it and obtains the first script sequence carried in it.
  • the so-called first script sequence acquisition process can be as follows: first, the client, in response to a script sequence editing request triggered by the user through the human-computer interaction interface, displays the preconfigured control scripts on the specified page. Second, in response to selection operations triggered by the user on the control scripts through the human-computer interaction interface, the client selects the corresponding control scripts and, according to the order in which the user selects them, uses the selected control scripts to generate the first script sequence.
  • the client sends a target script sequence determination request to the server. Finally, after receiving the target script sequence determination request, the server parses the target script sequence determination request and obtains the first script sequence carried in the target script sequence determination request.
  • the selection order is determined according to the order in which the user selects control scripts
  • the first script sequence is a script sequence generated for the control script selected by the user based on the selection order. Therefore, the first script sequence can reflect the user's personalized needs for the vehicle-mounted robotic arm to perform specified preset actions in a specified execution sequence.
  • since the first script sequence is generated in response to the control scripts selected by the user from the control script set, it can meet the user's control requirements for the vehicle-mounted robotic arm. And because the target script sequence is determined using the first script sequence, the target script sequence can also meet the user's control requirements for the vehicle-mounted robotic arm.
  • the target script sequence is further determined from the first script sequence according to the reachability situation, so that the vehicle-mounted manipulator is controlled, according to the target script sequence, to execute the corresponding preset actions within the preset reachable space. Therefore, while generating the first script sequence, the user does not need to consider whether a selected control script would cause the manipulator to exceed the reachable space, which improves the user experience.
  • the first script sequence in Embodiment 3 of the present application can be obtained not only through the above method but also as follows: first, the merchant writes the first script sequence according to preset action requirements using specific programming software. Then, after the first script sequence is written, the merchant uploads it to the server. When the execution subject is the server, the server performs script sequence determination on the first script sequence sent by the merchant. When the execution subject is a client, the server forwards the first script sequence sent by the merchant to the corresponding client, so that the client performs script sequence determination on it.
  • the so-called vehicle-mounted robotic arm is a robotic arm installed in the vehicle cabin and used to provide intelligent driving services to people on the vehicle.
  • the number of vehicle-mounted robotic arms may be one or multiple.
  • the script sequence can be determined separately for the multiple vehicle-mounted robotic arms, or the script sequence can be determined for any one of the multiple vehicle-mounted robotic arms.
  • the method for determining the script sequence provided in Embodiment 3 of the present application will be described in detail below, taking only one vehicle-mounted robotic arm as an example.
  • the vehicle-mounted robotic arm can independently perform corresponding preset actions to provide intelligent driving services for vehicle personnel.
  • the vehicle-mounted robotic arm can be used alone to control the movement of the display screen, such as controlling the preset action of moving the vehicle display back and forth or adjusting the angle, or independently performing the preset action of swinging left and right.
  • the vehicle-mounted robotic arm can also cooperate with the vehicle infotainment system, instrument panel, head-up display, streaming rearview mirror, ambient light, smart doors, smart speakers, and so on in the vehicle cockpit to complete corresponding preset actions in preset scenarios, for example, performing a left-and-right swinging action in conjunction with the flashing of the ambient light.
  • the vehicle-mounted client realizes control of the vehicle-mounted robotic arm by parsing and executing the control script. Specifically, the vehicle-mounted client can parse the control script and control the vehicle-mounted robotic arm according to the preset actions determined by the control script.
  • the so-called preset actions are preset execution actions for the vehicle-mounted robotic arm. During the execution of the preset actions, the vehicle-mounted robotic arm is within the reachable space.
  • the preset action may specifically be a separate basic action, such as upward movement, downward movement, or left movement, etc.
  • Preset actions can also be complex actions composed of basic actions, such as swinging left and right, moving up and down, head-shaking, or hand-waving motions.
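As a minimal illustration (not part of the embodiments), the basic/composite action distinction above could be modeled as unit displacement primitives combined into composite preset actions. All names and displacement vectors here (`BASIC_ACTIONS`, `net_displacement`, the specific axis signs) are hypothetical:

```python
# Hypothetical sketch: basic actions as unit displacements (dx, dy, dz)
# in the arm's coordinate frame; composite actions are lists of basics.
BASIC_ACTIONS = {
    "up":    (0, 0, 1),
    "down":  (0, 0, -1),
    "left":  (0, 1, 0),
    "right": (0, -1, 0),
}

# A composite "swing left and right" preset action built from basics.
COMPOSITE_ACTIONS = {
    "swing_lr": ["left", "right", "right", "left"],  # ends at the start pose
}

def net_displacement(action_name):
    """Return the net (dx, dy, dz) produced by a basic or composite action."""
    steps = COMPOSITE_ACTIONS.get(action_name, [action_name])
    dx = dy = dz = 0
    for step in steps:
        sx, sy, sz = BASIC_ACTIONS[step]
        dx, dy, dz = dx + sx, dy + sy, dz + sz
    return (dx, dy, dz)
```

A composite action such as the swing above returns to its start pose, which is why its net displacement is zero.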
  • When using the first script sequence to determine the target script sequence according to the reachability situation, the target control script can first be obtained when the detected reachability situations include a first reachability situation. The first reachability situation is used to indicate that the vehicle-mounted robotic arm exceeds the reachable space, and the target control script is the control script that causes the vehicle-mounted robotic arm to exceed the reachable space.
  • the reset script is a script used to control the vehicle-mounted manipulator to perform a reset action.
  • Then, for each control script in the second script sequence, the reachability situation of the vehicle-mounted robotic arm during execution of the preset actions according to the second script sequence is detected in turn, and the second script sequence is used to determine the target script sequence according to the detected reachability situations.
  • Although inserting the reset script ensures that the vehicle-mounted robotic arm can execute the preset action corresponding to the target control script, it is still unclear whether the arm can execute the preset actions corresponding to the control scripts after the target control script. Therefore, the second script sequence is used to determine the target script sequence.
  • If the compiler compiles and detects the first control script and finds that the vehicle-mounted robotic arm 301 stays within the reachable space 302 during execution of the preset action corresponding to the first control script, this proves that the arm can execute that preset action according to the first script sequence. It can then be further detected whether the arm can perform the preset action corresponding to the second control script according to the first script sequence.
  • If it is detected that the vehicle-mounted robotic arm 301 may exceed the reachable space 302 while further executing the preset action corresponding to the second control script, detection of the remaining control scripts in the first script sequence must stop. That is, further detection of the third control script and the fourth control script is stopped, and the second control script is determined as the target control script.
  • In this case, a reset script needs to be inserted into the first script sequence as the previous control script of the target control script to obtain the second script sequence.
  • the script execution order in the second script sequence is: the first control script (the first control script to be executed in the second script sequence), the reset script (the second to be executed), the second control script (the third to be executed), the third control script (the fourth to be executed), and the fourth control script (the fifth to be executed).
  • The second script sequence not only includes all the control scripts in the first script sequence, but also ensures that, before executing the second control script according to the second script sequence, the vehicle-mounted robotic arm 301 first executes the reset script to reset. Once the vehicle-mounted robotic arm 301 is reset, it is ensured that the second control script will not cause it to exceed the reachable space 302 during execution.
  • the step of using the second script sequence to determine the target script sequence according to the reachability situation includes: determining the second script sequence as the target script sequence when the reachability situations are all second reachability situations, where the second reachability situation is used to indicate that the vehicle-mounted robotic arm does not exceed the reachable space.
  • determining the second script sequence as the target script sequence can ensure that the target script sequence realizes personalized control of the vehicle-mounted robotic arm, so that the vehicle-mounted robotic arm executes every preset action in the first script sequence.
  • the step of determining the target script sequence using the second script sequence according to the reachability situation also includes: first, obtaining the target control script when the detected reachability situations include the first reachability situation; then inserting the reset script into the second script sequence as the previous control script of the target control script to obtain a third script sequence, and so on until the target script sequence is determined.
  • the step of determining the target script sequence may also include: when the reachability situations are all second reachability situations, determining the first script sequence as the target script sequence, where the second reachability situation is used to indicate that the vehicle-mounted robotic arm does not exceed the reachable space.
  • determining the first script sequence as the target script sequence can ensure that the target script sequence can realize personalized control of the vehicle-mounted robotic arm, so that the vehicle-mounted robotic arm can execute the specified preset action in accordance with the specified execution sequence.
  • the steps of determining the target script sequence using the first script sequence according to the reachability situation can also be as follows. Step 1: sequentially detect whether a first reachability situation exists while the vehicle-mounted robotic arm performs the preset actions according to the first script sequence. Step 2: if a first reachability situation exists, obtain the target control script, insert the reset script into the first script sequence as the previous control script of the target control script, and re-execute Step 1. Step 3: if no first reachability situation exists, determine the first script sequence as the target script sequence.
  • the so-called sequential detection of whether a first reachability situation exists when the vehicle-mounted robotic arm performs the preset actions according to the first script sequence means: sequentially detecting whether there is any situation in which the vehicle-mounted robotic arm exceeds the reachable space while performing the preset actions according to the first script sequence.
  • the specific implementation method is: if there is a situation in which the vehicle-mounted robotic arm exceeds the reachable space, insert the reset script into the first script sequence as the previous control script of the target control script.
  • the so-called determining the first script sequence as the target script sequence if there is no first reachability situation means: if the vehicle-mounted robotic arm never exceeds the reachable space during execution of the preset actions according to the first script sequence, the first script sequence is determined as the target script sequence.
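The insert-reset-and-redetect loop described above can be sketched in a simplified one-dimensional model. Everything here is illustrative, not part of the embodiments: the helper names, the toy reachable range of |pos| &lt;= 2, and the use of numeric displacements as stand-in control scripts are assumptions.

```python
# Hypothetical sketch of the iterative target-script-sequence determination:
# repeatedly find the first script that drives the arm out of reach (the
# "target control script"), insert a reset script before it, and re-detect
# from the start, until a full pass succeeds.
RESET = "reset"

def determine_target_sequence(scripts, within_reach, max_iters=100):
    """within_reach(prev_end, script) -> (ok, new_end); prev_end is the
    arm's predicted stop position before the script runs."""
    seq = list(scripts)
    for _ in range(max_iters):
        pos = 0  # installation/reset position (1-D stand-in)
        target = None
        for i, script in enumerate(seq):
            ok, pos = within_reach(pos, script)
            if not ok:
                target = i  # this script causes the arm to exceed reach
                break
        if target is None:
            return seq  # all reachability situations are "in reach"
        seq.insert(target, RESET)  # reset script precedes the target script
    # A script unreachable even from the reset pose would loop forever,
    # so bail out after max_iters.
    raise RuntimeError("no reachable sequence found")

# 1-D toy model: scripts are displacements; reach is |pos| <= 2; reset -> 0.
def toy_reach(pos, script):
    new = 0 if script == RESET else pos + script
    return abs(new) <= 2, new

result = determine_target_sequence([1, 2, 1], toy_reach)
```

In this toy run, the second and third scripts each overshoot the range, so a reset script is inserted before each of them across successive passes.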
  • the detection method of the reachability situation is as follows: first, determine the spatial range limited by the reachability space. Then, based on the currently detected control script, the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action is predicted. Finally, the spatial range and real-time location are used to detect reachability.
  • the specific implementation method of determining the spatial range defined by the accessible space is as follows: first, using the installation position of the vehicle-mounted robotic arm in the vehicle cockpit as the coordinate origin, a three-dimensional coordinate system for the vehicle-mounted robotic arm is constructed. Please refer to Figure 4 again.
  • the x-axis of the three-dimensional coordinate system shown in Figure 4 points to the rear of the car, and the z-axis points to the roof.
  • the spatial range can be represented by the pitch angle range, yaw angle range, and spatial distance of the vehicle-mounted robotic arm.
  • the specific implementation of predicting the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action is: based on the stop position and on the spatial position changes of the vehicle-mounted robotic arm relative to its installation position during execution of the preset action corresponding to the currently detected control script, predict the real-time position of the vehicle-mounted robotic arm during execution of that preset action.
  • the spatial position change of the vehicle-mounted robotic arm relative to its installation position during execution of each preset action is configured in advance. Accordingly, the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action can be predicted based on the stop position and the preconfigured spatial position change.
  • the vehicle-mounted robotic arm may be composed of at least one joint or connecting rod.
  • each joint or link may exceed the reachable space during execution of a preset action. Therefore, when predicting the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action, the real-time position of each joint or link in the arm must be predicted, and the reachability situation is detected according to the real-time position of each joint or link during execution of the corresponding preset action.
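The pitch-angle/yaw-angle/spatial-distance representation of the reachable space, checked for every joint or link, can be sketched as follows. The specific angle and distance limits, the function names, and any coordinate conventions beyond those stated in the text (x toward the rear of the car, z toward the roof, origin at the installation position) are illustrative assumptions:

```python
import math

# Hypothetical sketch: the reachable space as pitch/yaw/distance ranges
# around the installation origin, checked for every predicted joint/link
# position along a preset action.
REACH = {"pitch": (-45.0, 45.0), "yaw": (-60.0, 60.0), "dist": (0.0, 0.5)}

def in_reach(point):
    """True if a single (x, y, z) position lies inside the reachable space."""
    x, y, z = point
    dist = math.sqrt(x * x + y * y + z * z)
    if dist == 0:
        return True  # the installation origin itself
    pitch = math.degrees(math.asin(z / dist))  # elevation above the x-y plane
    yaw = math.degrees(math.atan2(y, x))       # rotation about the z axis
    return (REACH["pitch"][0] <= pitch <= REACH["pitch"][1]
            and REACH["yaw"][0] <= yaw <= REACH["yaw"][1]
            and REACH["dist"][0] <= dist <= REACH["dist"][1])

def action_reachable(joint_trajectories):
    """joint_trajectories: per-joint lists of predicted (x, y, z) positions.
    The action is reachable only if every joint stays in reach throughout."""
    return all(in_reach(p) for traj in joint_trajectories for p in traj)
```

The all-joints check mirrors the requirement above that no individual joint or link may leave the reachable space at any predicted position.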
  • If the execution subject of the script sequence determination method provided in Embodiment 3 of the present application is the server, then in order to enable the vehicle-mounted client to achieve personalized control of the vehicle-mounted robotic arm through the target script sequence, so that the arm executes every preset action in the first script sequence, the target script sequence needs to be further sent to the vehicle-mounted client, which calls and parses the target script sequence and controls the vehicle-mounted robotic arm to perform the preset actions accordingly.
  • If the execution subject of the script sequence determination method provided in Embodiment 3 of the present application is the vehicle-mounted client, then in order to achieve personalized control of the vehicle-mounted robotic arm through the target script sequence, so that the arm executes every preset action in the first script sequence, the vehicle-mounted client needs to first call and parse the target script sequence and, based on the parsed result, control the vehicle-mounted robotic arm to execute the preset actions according to the target script sequence.
  • Embodiment 4 of the present application provides a control device for a vehicle-mounted robotic arm.
  • the device may include: a first control instruction sequence generation module 501, configured to generate a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm according to first trigger information, where the first trigger information is determined according to the instruction information of different people in the vehicle and/or the environmental information of the current vehicle location; and a second control instruction sequence generation module 502, configured to generate, upon receiving second trigger information while the vehicle-mounted controllable components are executing the first control instruction sequence, a second control instruction sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm, where the second trigger information is likewise determined according to the instruction information of different people in the vehicle and/or the environmental information of the current vehicle location. The control objects of the two control instruction sequences both include the vehicle-mounted robotic arm.
  • the conflict resolution strategy determination module 503 may further include: a type determination sub-module, used to determine the types of the first trigger information and the second trigger information; and a conflict resolution strategy determination execution sub-module, used to ignore the other trigger information when one trigger information is of a specified type, or when the priority of the specified-type trigger information is higher than the priority of the other trigger information.
  • control device of the vehicle-mounted robotic arm may further include: a sending module, specifically configured to directly send the control instruction sequence corresponding to the specified type of trigger information to the vehicle-mounted controllable component.
  • the conflict resolution strategy determination module 503 may further include: a type determination sub-module, used to determine the types of the first trigger information and the second trigger information; an attribute determination sub-module, used to determine, when the types are the same, the attributes of the first control instruction sequence and the second control instruction sequence, where the attributes include template class attributes or customized class attributes; and a conflict resolution strategy determination execution sub-module, used to control the vehicle-mounted robotic arm to pause when the attributes are different, or when the attributes are the same and belong to the customized class, and to control the vehicle-mounted robotic arm to execute a new control instruction sequence when the new sequence is received.
  • the conflict resolution strategy determination module 503 may further include: a priority comparison sub-module, used to compare the priorities of the first control instruction sequence and the second control instruction sequence when the attributes are the same and both belong to the customized class; the conflict resolution strategy determination execution sub-module is also used to control the vehicle-mounted robotic arm to execute the control instruction sequence with the higher priority based on the comparison result.
  • the control device of the vehicle-mounted robotic arm may also include: a state detection module, used to detect the state of the vehicle-mounted controllable components in real time, where the state includes a normal state or an abnormal state; and an executability determination module, used to determine the executability of the first control instruction sequence and/or the second control instruction sequence according to the detection result of the state of the vehicle-mounted controllable components.
  • the control device of the vehicle-mounted robotic arm may further include: an execution control module, configured to control the vehicle-mounted controllable components to execute the first control instruction sequence and the second control instruction sequence in parallel when there is no conflict between the two sequences.
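The conflict-resolution flow across the sub-modules above can be sketched as one function. The trigger/sequence field names, the "safety" specified type, and the tie-breaking choices are all hypothetical; the embodiments describe alternative branches (pausing vs. priority comparison) that are combined here into one plausible policy:

```python
# Hypothetical sketch of the conflict-resolution strategy between two
# control instruction sequences that both target the robotic arm.
def resolve(trig1, trig2, seq1, seq2, specified_type="safety"):
    """trig: {'type': str, 'priority': int}; seq: {'attr': 'template'|'custom',
    'priority': int}. Returns 'seq1', 'seq2', or 'pause'."""
    # Specified-type trigger information with the higher priority wins
    # outright: the other trigger information is ignored.
    if trig1["type"] == specified_type and trig1["priority"] > trig2["priority"]:
        return "seq1"
    if trig2["type"] == specified_type and trig2["priority"] > trig1["priority"]:
        return "seq2"
    if trig1["type"] == trig2["type"]:
        if seq1["attr"] == seq2["attr"] == "custom":
            # Same type, both customized: the higher-priority sequence runs.
            return "seq1" if seq1["priority"] >= seq2["priority"] else "seq2"
        # Attributes differ (or both template): pause the arm and wait for
        # a new control instruction sequence.
        return "pause"
    return "pause"
```

A design note on the sketch: collapsing "pause, then execute the new sequence" into a `'pause'` return keeps the function pure; the caller would own the waiting behavior.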
  • the first control instruction sequence generation module 501 is specifically configured to obtain the first control instruction sequence based on script parsing of a received target script sequence containing vehicle-mounted manipulator control instructions.
  • the first control instruction sequence generation module 501 includes: a selectable script determination subunit, used to determine, when the user edits a script sequence using the control script set, the next control scripts in the control script set that can be selected after the control script currently selected by the user.
  • the control script is a script used to control the vehicle-mounted robotic arm to perform preset actions.
  • the selectable control script is a control script that can control the vehicle-mounted robotic arm to further execute the corresponding preset action within the preset reachable space after the arm has completed the preset action corresponding to the currently selected control script; the selectable script determination subunit is used to determine such a control script as the next control script available for user selection.
  • the selectable script determination subunit may include:
  • a stop position prediction subunit, used to predict the stop position of the vehicle-mounted robotic arm after it executes the preset action corresponding to the currently selected control script;
  • the real-time position prediction subunit is used to predict the real-time position of the vehicle-mounted robotic arm during its further execution of the corresponding preset action based on the stop position for each control script in the control script set;
  • a first selectable-script determining subunit, used to determine the selectable control scripts based on the real-time position.
  • the first selectable-script determining subunit may include:
  • a spatial range determination subunit, used to determine the spatial range defined by the reachable space;
  • a real-time position detection subunit, used to detect, for each control script, whether the real-time position is within the spatial range;
  • a second selectable-script determining subunit, used to determine the control scripts whose real-time positions are within the spatial range as the selectable control scripts.
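The selectable-script filtering described by these subunits can be sketched in a one-dimensional toy model: from the stop position left by the currently selected script, predict the trajectory of each candidate script and keep only candidates that stay within the reachable range. All names, scripts, and numeric values are illustrative assumptions:

```python
# Hypothetical 1-D sketch of selectable-script filtering during editing.
REACH_RANGE = (-2, 2)  # illustrative 1-D reachable space

# Each script maps to the step-by-step displacements of its preset action.
SCRIPTS = {
    "nudge_right": [1],
    "swing": [1, -2, 1],      # right, left-left, right (back to start)
    "far_right": [1, 1, 1],
}

def selectable(current_stop):
    """Return scripts the user may select next, given the predicted stop
    position after the currently selected script finishes."""
    ok = []
    for name, steps in SCRIPTS.items():
        pos, fits = current_stop, True
        for step in steps:  # predict the real-time position at each step
            pos += step
            if not (REACH_RANGE[0] <= pos <= REACH_RANGE[1]):
                fits = False  # this candidate would leave the reachable space
                break
        if fits:
            ok.append(name)
    return ok
```

Usage: with a stop position of 1, a three-step rightward script overshoots the range and is filtered out, while shorter or self-returning scripts remain selectable.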
  • the selectable script determining subunit may include: a selectable script first determining subunit, configured to determine the currently selected control script in response to a selection operation triggered by the user for the control script;
  • the device also includes: a state setting unit, used to set the selectable control scripts to a state in which they are available for selection by the user.
  • the selectable script determining subunit may include: a second selectable script determining subunit, configured to determine the selectable control scripts using the first prompt information sent by the server for the selectable control scripts;
  • the device also includes: a first prompt information sending unit, configured to send, to the server, second prompt information for prompting the control scripts available for user selection, so that the first electronic device uses the second prompt information to make the selectable control scripts available for user selection.
  • the device also includes: a second prompt information sending unit, configured to send, to the second electronic device, fourth prompt information for prompting the control scripts available for user selection, so that the second electronic device uses the fourth prompt information to make the selectable control scripts available for user selection.
  • the first control instruction sequence generation module 501 includes: a reachability detection subunit, configured to sequentially detect, for each control script in the first script sequence, the reachability situation of the vehicle-mounted robotic arm while it executes the preset actions according to the first script sequence.
  • the control script is a script used to control the vehicle-mounted robotic arm to perform preset actions.
  • the reachability situation is used to indicate whether the vehicle-mounted robotic arm exceeds the preset reachable space; a target script sequence determining subunit is used to determine, based on the reachability situation and using the first script sequence, a target script sequence for controlling the vehicle-mounted robotic arm to perform preset actions.
  • the target script sequence determining subunit may include: a first target control script obtaining subunit, configured to obtain the target control script when the detected reachability situations include a first reachability situation, where the first reachability situation is used to indicate that the vehicle-mounted robotic arm exceeds the reachable space, and the target control script is the control script that causes the vehicle-mounted robotic arm to exceed the reachable space; a second script sequence obtaining subunit, configured to insert a reset script into the first script sequence as the previous control script of the target control script to obtain the second script sequence, where the reset script is a script used to control the vehicle-mounted robotic arm to perform a reset action; the reachability detection subunit is further configured to sequentially detect, for each control script in the second script sequence, the reachability situation of the vehicle-mounted robotic arm while it performs the preset actions according to the second script sequence; and a first target script sequence determining subunit, configured to use the second script sequence to determine the target script sequence according to the reachability situation.
  • the first target script sequence determining subunit may include: a second target script sequence determining subunit, configured to determine the second script sequence as the target script sequence when the reachability situations are all second reachability situations, where the second reachability situation is used to indicate that the vehicle-mounted robotic arm does not exceed the reachable space.
  • the first target script sequence determining subunit may further include: a second target control script obtaining subunit, configured to obtain the target control script when the detected reachability situations include the first reachability situation; and a third script sequence obtaining subunit, configured to insert a reset script into the second script sequence as the previous control script of the target control script to obtain a third script sequence, and so on until the target script sequence is determined.
  • the target script sequence determining subunit may include: a third target script sequence determining subunit, configured to determine the first script sequence as the target script sequence when the reachability situations are all second reachability situations, where the second reachability situation is used to indicate that the vehicle-mounted robotic arm does not exceed the reachable space.
  • the reachability detection subunit may include: a spatial range determination subunit, configured to determine the spatial range defined by the reachable space; and a real-time position prediction subunit, configured to predict, for the currently detected control script, the real-time position of the vehicle-mounted robotic arm during execution of the corresponding preset action; the spatial range and the real-time position are then used to detect the reachability situation.
  • the device may further include: a target script sequence sending unit, configured to send the target script sequence to the vehicle-mounted client, which calls and parses the target script sequence, so that the vehicle-mounted client controls the vehicle-mounted robotic arm to perform the preset actions according to the target script sequence.
  • the device may further include: a target script sequence parsing unit, configured to call and parse the target script sequence, so that the vehicle-mounted robotic arm is controlled to execute the preset actions according to the parsed target script sequence.
  • the fifth embodiment of the present application provides a vehicle-mounted display device, including: a control unit, configured to execute the control method of the vehicle-mounted robotic arm provided in the first embodiment of the present application, or including the control device of the vehicle-mounted robotic arm provided in the fourth embodiment of the present application; and a display module composed of a vehicle-mounted robotic arm and a vehicle-mounted screen, where the vehicle-mounted robotic arm is used to drive the vehicle-mounted screen to complete at least one target action.
  • the present application also provides an electronic device, a readable storage medium and a computer program product.
  • FIG. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present application.
  • Electronic devices are intended to refer to various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions are examples only and are not intended to limit the implementation of the present application as described and/or claimed herein.
  • the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600.
  • Computing unit 601, ROM 602 and RAM 603 are connected to each other via bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604.
  • Multiple components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or an optical disk; and a communication unit 609, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 609 allows the device 600 to exchange information/data with other devices through computer networks such as the Internet and/or various telecommunications networks.
  • Computing unit 601 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc.
  • the computing unit 601 performs the various methods and processes described above, such as the control method of the vehicle-mounted robotic arm.
  • the control method of the vehicle-mounted robotic arm may be implemented as a computer software program, which is tangibly included in a machine-readable medium, such as the storage unit 608.
  • part or all of the computer program may be loaded and/or installed onto device 600 via ROM 602 and/or communication unit 609.
  • When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the control method of the vehicle-mounted robotic arm described above may be performed.
  • the computing unit 601 may be configured to execute the control method of the vehicle-mounted robotic arm in any other suitable manner (eg, by means of firmware).
  • Various implementations of the systems and techniques described above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various embodiments may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing device, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the systems and techniques described herein may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual, auditory, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
  • the systems and techniques described herein may be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user's computer having a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communications network). Examples of communication networks include: local area network (LAN), wide area network (WAN), and the Internet.
  • Computer systems may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact over a communications network.
  • the relationship of client and server is created by computer programs running on corresponding computers and having a client-server relationship with each other.
  • the server can be a cloud server, a distributed system server, or a server combined with a blockchain.
  • a vehicle-mounted robotic arm, including: a multi-degree-of-freedom adjustment mechanism fixed on the back of the vehicle-mounted screen 3, and multiple telescopic units installed on the multi-degree-of-freedom adjustment mechanism; the vehicle-mounted robotic arm is used to drive the vehicle-mounted screen 3 to complete any one or more of the following four actions.
  • the four actions include: a vehicle-mounted screen translation action, a vehicle-mounted screen flipping action, a vehicle-mounted screen rotation action, and a vehicle-mounted screen forward-and-backward movement.
  • vehicle-mounted screen translation action: as shown in Figure 12, the front of the vehicle screen 3 translates, in any direction within a plane perpendicular to the initial axis;
  • vehicle-mounted screen flipping action: as shown in Figure 11, the front of the vehicle screen 3 flips, and after the flip is completed there is an angle between the front of the vehicle screen 3 and the initial axis;
  • vehicle-mounted screen rotation action: as shown in Figure 13, the front of the vehicle screen 3 rotates around the initial axis or around an axis parallel to the initial axis; vehicle-mounted screen forward-and-backward movement: as shown in Figure 14, the front of the vehicle screen 3 moves in the front-rear direction, the moving direction being parallel to the initial axis.
  • multiple telescopic units are used to drive the vehicle-mounted screen 3 to flip up, down, left, and right, and a multi-degree-of-freedom adjustment mechanism is used to drive the vehicle-mounted screen 3 to rotate and translate.
  • Up, down, left, and right are defined with the above-mentioned vehicle-mounted screen 3 facing the user; the vehicle-mounted screen 3 performs movements such as tilting its upper, lower, left, or right part toward the rear.
  • the vehicle-mounted central control screen adjustment mechanism involved in this application may omit the above-mentioned multi-degree-of-freedom adjustment mechanism and directly connect the vehicle-mounted screen 3 with several telescopic units 9, so that the control screen only swings up, down, left, and right according to usage requirements.
  • the vehicle-mounted central control screen adjustment mechanism involved in this application can also directly connect the vehicle-mounted screen 3 with the multi-degree-of-freedom adjustment mechanism without providing the above-mentioned telescopic units, thereby realizing rotational and translational sliding of the control screen itself.
  • each telescopic unit is connected to a multi-degree-of-freedom adjustment mechanism, and the driving end of each telescopic unit is connected to a driving part.
  • the driving part is a center console inside the car.
  • a corresponding control system is provided in the center console, and the control system is used to control the telescopic actions of several telescopic units and the movement of the multi-degree-of-freedom adjustment mechanism.
  • the telescopic unit can also be a bendable rod with a ball-head structure, the telescopic rod being press-fitted, via the ball head, against the side of the multi-degree-of-freedom adjustment mechanism facing away from the vehicle screen 3.
  • the user can manually apply force on the vehicle-mounted screen 3, so that the multi-degree-of-freedom adjustment mechanism, as part of the force transmission, exerts force on the ball-head structure; when the ball-head structure swings to a certain angle, sufficient friction is generated between the ball-head structure and the multi-degree-of-freedom adjustment mechanism to maintain the current position of the vehicle screen 3.
  • each telescopic unit includes: a linear motion unit 10 and a multi-degree-of-freedom connector.
  • one end of the linear motion unit 10 is connected to the multi-degree-of-freedom connector, and the multi-degree-of-freedom connector is installed on the multi-degree-of-freedom adjustment mechanism.
  • the multi-degree-of-freedom connector is a ball joint structure or a universal joint structure.
  • the ball joint structure includes: a ball joint 11 and a ball socket slider 12.
  • the ball joint 11 is fixedly connected to the linear motion unit 10, each ball joint 11 is installed in a ball socket slider 12, and each ball socket slider 12 is installed on the multi-degree-of-freedom adjustment mechanism.
  • each ball socket slider 12 has a spherical recess that matches the ball joint 11 .
  • the universal-joint structure includes: a first rotating part, a second rotating part, and a hinge part connecting the first and second rotating parts; one end of the first rotating part is fixed to the telescopic unit, the other end of the first rotating part is connected to one end of the hinge part, the other end of the hinge part is rotatably connected to one end of the second rotating part, and the other end of the second rotating part is fixedly connected to the multi-degree-of-freedom adjustment mechanism.
  • the linear motion unit 10 is an electric push rod or a manual push rod. When the linear motion unit 10 is a manual push rod, the user can manually push the vehicle-mounted screen 3 to make it perform the corresponding actions; when the electric push rod of the vehicle-mounted robotic arm is in a power-off state, the electric push rod should allow the user to manually drive it to extend and retract accordingly to complete the movement of the vehicle screen 3.
  • the contact surfaces between the ball joint 11, the linear motion unit 10, the multi-degree-of-freedom adjustment mechanism, and the other movable parts of the vehicle-mounted robotic arm have a certain frictional resistance with the corresponding connection parts; this frictional resistance is used to maintain the stability of the current posture while the vehicle is in motion.
  • the electric push rod or the vehicle-mounted screen 3 is provided with a force sensing part.
  • the force sensing part is used to obtain information about the external force at the corresponding position.
  • the force sensing part determines the source of the force from the information about the external force: when the source of the force is a passenger, that is, when the passenger pushes the vehicle screen 3, the force sensing part converts the information about the external force into action information and causes the vehicle-mounted robotic arm to perform corresponding actions according to the action information, providing assistance while the passenger pushes the vehicle-mounted screen so that the passenger can easily drive the vehicle-mounted screen 3 to complete the corresponding actions; when the source of the force is an external force that is not an intentional push by the passenger, that is, when the vehicle encounters bumps or a passenger performs a touch operation on the vehicle screen, the vehicle-mounted robotic arm does not move, or drives the corresponding driving part in reverse, to keep the vehicle screen 3 in its current state.
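As an illustrative sketch of the force-sensing decision above (the threshold values and the `Force` type are assumptions for illustration, not values given in the application), a sustained force is treated as a deliberate passenger push that the arm assists, while short spikes such as road bumps or touch taps are rejected and the pose is held:

```python
# Sketch of the force-sensing decision described above.
# Thresholds and the Force type are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Force:
    magnitude: float   # newtons
    duration_s: float  # how long the force has been applied

def classify_and_react(force: Force, push_threshold: float = 5.0,
                       sustain_s: float = 0.3) -> str:
    """Return the arm's reaction to an external force on the screen.

    A sustained force above the threshold is treated as a deliberate
    passenger push, so the arm assists the motion; short spikes (road
    bumps, touch taps) are rejected and the arm holds its pose.
    """
    if force.magnitude >= push_threshold and force.duration_s >= sustain_s:
        return "assist"   # drive the arm along the push direction
    return "hold"         # keep the current screen pose

print(classify_and_react(Force(8.0, 0.5)))    # deliberate push -> assist
print(classify_and_react(Force(20.0, 0.05)))  # bump spike -> hold
```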
  • a vehicle collision detection system is installed on the car.
  • the vehicle collision detection system is used to detect the driving information of the vehicle in real time.
  • when a collision is detected, the vehicle-mounted robotic arm immediately drives the vehicle-mounted screen 3 to move away from the passengers quickly, to prevent the passengers from being injured by colliding with the vehicle-mounted screen 3 due to the inertia of the collision.
  • the linear motion unit 10 is an unpowered telescopic rod.
  • the linear motion unit 10 is a hydraulic push rod.
  • each guide rail 13 is installed on a multi-degree-of-freedom adjustment mechanism, and each ball socket slider 12 is slidably installed on a guide rail 13.
  • the vehicle-mounted robotic arm includes three telescopic units.
  • the number of guide rails 13 , ball joints 11 , ball socket sliders 12 and linear motion units 10 are each three.
  • the three guide rails 13 are arranged at 110 degrees to each other in pairs; that is, the extension lines of the three guide rails 13 converge at one point, and adjacent extension lines are spaced 110 degrees apart.
  • the multi-degree-of-freedom adjustment mechanism includes: a sliding mechanism 5 and a rotating mechanism 2, the rotating mechanism 2 is connected to the sliding mechanism 5, and one of the rotating mechanism 2 and the sliding mechanism 5 is connected to the vehicle screen 3, The other one of the rotating mechanism 2 and the sliding mechanism 5 is connected to the telescopic unit.
  • the rotating mechanism 2 includes: a support part, a motor 21, a worm 22, a worm gear 23, and a sector gear 24.
  • the motor 21, the worm 22, and the worm gear 23 are all installed on the support part.
  • the motor 21 is connected to the worm 22, the worm 22 is drivingly connected to the worm gear 23, the worm gear 23 meshes with the sector gear 24, and the sector gear 24 is installed on the vehicle screen 3 or the sliding mechanism 5.
  • the support part is connected with several telescopic units or connected with the sliding mechanism 5 .
  • the support part has a shell-type structure, which accommodates the motor 21, the worm 22, the worm gear 23, and the sector gear 24 within the support part.
  • the outer edge of one end of the sector gear 24 protrudes radially outward to form an arcuate portion.
  • An arcuate rack is provided on the arcuate portion. The tooth tips of the rack point radially inward. The rack is drivingly connected to the worm gear 23.
  • the optional embodiment of the present application also includes: two rotation stop blocks 25; the two rotation stop blocks 25 are installed on the support part and can respectively abut against the two ends of the sector gear 24.
  • the rotating mechanism 2 also includes: a rotating shaft; one end of the rotating shaft is fixedly installed on the support part, and the other end of the rotating shaft is rotatably installed, via a bearing or the like, on the back of the vehicle screen 3 or on the sliding mechanism 5.
  • the sliding mechanism 5 includes: a first sliding part, a second sliding part, and a sliding driving device.
  • the first sliding part is slidably connected to the second sliding part, the sliding direction is perpendicular to the rotation axis of the rotating mechanism 2, and the sliding driving device is installed between the first sliding part and the second sliding part.
  • the sliding driving device is used to drive the relative sliding between the first sliding part and the second sliding part.
  • Optional embodiments of the present application also include: a visual sensing device; the visual sensor is installed on the front of the vehicle screen 3, is used to detect the position of the user's eyes, and is connected to the control system. With the help of the angle adjustment mechanism and the multi-degree-of-freedom adjustment mechanism, the visual sensor and the control system orient the front of the vehicle screen 3 toward the driver as much as possible, where the visual sensor is a smart camera or a human-body position sensor.
  • the visual sensing device includes a plurality of visual sensors, and the plurality of visual sensors are arranged on the front of the vehicle screen 3 and/or at any position in the cab of the automobile.
  • the visual sensor is used to identify specified gestures of the car's passengers, and based on the different gestures recognized, the vehicle-mounted robotic arm controls the vehicle screen 3 to perform matching actions according to the gesture information obtained by the visual sensor. For example, a gesture can move the vehicle screen forward or backward, or a certain application scenario can trigger the vehicle screen to turn toward the user.
  • Optional embodiments of the present application also include: a mechanism controller used to control the vehicle-mounted robotic arm; the mechanism controller can collect information about passengers in the vehicle, including but not limited to personal information such as height, weight, or gender, and can also collect the corresponding passenger's seat posture information; by processing the passenger's personal information and seat posture information, it automatically controls the vehicle-mounted robotic arm or the seat posture adjustment mechanism so that the front of the vehicle screen 3 faces the passenger.
  • the mechanism controller also continuously collects information on the position of the vehicle screen 3 relative to the car's steering wheel, and limits the action range of the vehicle screen 3 by calculating a safe distance between the vehicle screen 3 and the steering wheel; that is, when controlling the movement of the vehicle-mounted robotic arm, the mechanism controller keeps the distance between the vehicle screen 3 and the steering wheel always greater than or equal to the above-mentioned safe distance.
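A minimal sketch of the safe-distance limit just described, assuming 2D positions in a shared coordinate frame (the function name, frame, and values are illustrative assumptions, not from the application):

```python
import math

def clamp_to_safe_distance(screen_xy, wheel_xy, safe_distance):
    """Return a screen position no closer to the steering wheel than
    safe_distance. If the commanded position would violate the limit,
    the screen is pushed back along the wheel->screen line.
    Assumes the two positions do not coincide (dist > 0)."""
    dx = screen_xy[0] - wheel_xy[0]
    dy = screen_xy[1] - wheel_xy[1]
    dist = math.hypot(dx, dy)
    if dist >= safe_distance:
        return screen_xy                       # already safe
    scale = safe_distance / dist               # push back radially
    return (wheel_xy[0] + dx * scale, wheel_xy[1] + dy * scale)

# A commanded position 1 unit from the wheel, with a 2-unit safe
# distance, is pushed back out to exactly 2 units.
print(clamp_to_safe_distance((1.0, 0.0), (0.0, 0.0), 2.0))  # (2.0, 0.0)
```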
  • Optional embodiments of the present application also include: a sound sensing device.
  • the sound sensing device includes several sound receivers. The plurality of sound receivers are arranged on the outer edge of the vehicle screen 3 or in the cab of the car.
  • the sound sensing device is connected to the control system. Furthermore, the sound sensing device is used to detect the user's speaking position, thereby adjusting the orientation position of the vehicle-mounted screen 3 .
  • the present application also provides another technical solution for the above-mentioned displacement, specifically as follows: the driving end of each telescopic unit also has a rotating member 4, and each rotating member 4 is installed on the driving part.
  • the moving ends are all rotatably connected to a rotating member 4; that is, the adaptive displacement that originally occurred on the guide rail 13 is transferred to the rotation of the telescopic unit itself to match the displacement of the ball joint 11 of the telescopic unit.
  • the multi-degree-of-freedom connector of the telescopic unit is no longer connected to the guide rail 13, but is directly connected to the multi-degree-of-freedom adjustment mechanism.
  • the ball socket slider 12 is directly fixed on the multi-degree-of-freedom adjustment mechanism, and the ball joint 11 is rotatably installed on the ball socket slider 12 .
  • the rotating member 4 is rotatably connected to the middle part of the linear motion unit 10 .
  • the rotating member 4 is arranged in a shaft-shaped structure.
  • the driving part has a shell structure, and the three rotating parts 4 are all fixedly installed on the shell.
  • the axes of the three rotating members 4 intersect and are arranged at intervals of 110 degrees.
  • the vehicle-mounted central control screen includes a vehicle-mounted screen 3 and any one of the above vehicle-mounted robotic arms; that is, the corresponding vehicle-mounted screen 3 is a central control screen, and the central control screen is arranged on the console in the front cabin of the car.
  • several telescopic units and the multi-degree-of-freedom adjustment mechanism work together to drive the central control screen to complete, within the small space inside the car, the translation, flipping, rotation, and forward-and-backward movement of the vehicle screen 3.
  • individual or combined implementations of the above actions can also present various other application scenarios, such as flipping and/or rotating toward the user (the driver or a passenger in the car).
  • other examples include: specific flipping actions in certain car-machine interaction scenes showing a rocking, head-shaking, or head-tilting effect; specific flipping actions when an over-the-air (OTA) upgrade succeeds; facing the user when triggered by a certain car-machine interaction scene (such as using the vehicle screen as a makeup mirror); triggering the forward-and-backward movement of the vehicle screen with gesture operations or other motion capture; triggering rotation of the vehicle screen with specific content or motion capture; and adjusting the amount of movement of each of the above single actions or action combinations by voice.
  • An embodiment of the present application also provides a vehicle-mounted display device, which may include the above-mentioned electronic device, the vehicle-mounted robotic arm and the vehicle-mounted screen of any embodiment of the application.
  • Embodiments of the present application also provide a vehicle-mounted display device, which may include a vehicle-mounted robotic arm control unit and the vehicle-mounted robotic arm and vehicle-mounted screen of any embodiment of the application, wherein the vehicle-mounted robotic arm control unit is used to execute the control method of any embodiment of the application, or the robotic arm control unit may include the control device of any embodiment of the present application.
  • the vehicle-mounted robotic arm control unit can also be referred to as the control unit for short.
  • An embodiment of the present application also provides a vehicle, which may include the above-mentioned electronic device, the robotic arm and the vehicle-mounted screen of any embodiment of the present application.
  • Embodiments of the present application also provide a vehicle, which may include a vehicle-mounted robotic arm control unit and the vehicle-mounted robotic arm and vehicle-mounted screen of any embodiment of the present application, wherein the vehicle-mounted robotic arm control unit is used to execute the control method of any embodiment of the present application, or the vehicle-mounted robotic arm control unit may include the control device of any embodiment of the present application.
  • the electronic device may be at least one of a Body Domain Control Module (BDCM), an Infotainment Domain Control Module (IDCM), a Vehicle Domain Control Module (VDCM), an Automated-driving Domain Control Module (ADCM), or a Robotic Arm Controller (RAC).
  • the vehicle in this embodiment can be driven by any power source, such as a fuel vehicle, an electric vehicle, or a solar vehicle.
  • the vehicle in this embodiment may be an autonomous vehicle.
  • connection and fastening components can be adopted from various technical solutions known to those of ordinary skill in the art now and in the future, and will not be described in detail here.
  • the vehicle-mounted screen can implement at least one action driven by the vehicle-mounted mechanical arm.
  • the action can be a telescopic action along the X-axis, Y-axis, or Z-axis, or a rotation about one of these axes.
  • the X-axis is the vehicle length direction, with its positive direction pointing toward the rear of the vehicle; the Y-axis is the vehicle width direction; and the Z-axis is the vehicle height direction, as shown in Figure 17.
  • more detailed actions of the vehicle screen can be realized, such as nodding, head-shaking, trembling, and other anthropomorphic actions.
  • the vehicle screen can be any display screen installed on the vehicle, such as the central control screen (CID, Center Informative Display, also called the central information display), the passenger screen, the head-up display (HUD), the rear screen, and so on.
  • the vehicle screen in this embodiment is a central control screen.
  • the vehicle-mounted robotic arm can use a multi-degree-of-freedom vehicle-mounted digital robot to drive the vehicle-mounted screen to complete actions with multiple degrees of freedom.
  • the position of the vehicle-mounted screen can be characterized by screen coordinates or vehicle-mounted robotic arm coordinates, for example, the coordinates of one or more key points on the vehicle-mounted screen or the vehicle-mounted robotic arm are used as the position of the vehicle-mounted screen.
  • After the target position of the vehicle-mounted screen is determined based on the target user's pose information, the vehicle-mounted screen, driven by the vehicle-mounted robotic arm, can move from the current position to the target position and complete adaptive adjustment, so that the vehicle-mounted screen provides the target user with a better viewing angle, thereby improving vehicle intelligence and user experience.
  • any one or more steps in the vehicle-mounted robotic arm control method can be executed in real time, at a preset time interval, or after certain trigger conditions are met; any one or more steps in the vehicle-mounted robotic arm control method can be executed once or multiple times.
  • the triggering conditions may include the user turning on the screen adaptive adjustment switch; or detecting a change in the pose information of the target user; or detecting a change in the position of the vehicle screen, etc.
  • the position of the vehicle screen can be collected through the gyroscope sensor of the vehicle screen, and whether the position of the vehicle screen has changed is then determined by comparison.
  • Using the gyroscope sensor to collect the position of the vehicle screen can improve the anti-pinch success rate during movement of the vehicle-mounted robotic arm, ensure the stability of the vehicle screen during movement and reduce shaking caused by motion, and help keep the display's orientation constant while the vehicle screen rotates.
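The gyroscope-based change trigger above can be sketched as a simple tolerance comparison on attitude readings (the axis order and tolerance value are illustrative assumptions):

```python
def pose_changed(prev_deg, curr_deg, tol_deg=1.0):
    """Return True when any attitude axis (e.g. roll, pitch, yaw in
    degrees) differs by more than tol_deg, which would trigger the
    screen's adaptive re-adjustment."""
    return any(abs(a - b) > tol_deg for a, b in zip(prev_deg, curr_deg))

print(pose_changed((0.0, 0.0, 0.0), (0.0, 2.5, 0.0)))  # True: pitch moved
print(pose_changed((0.0, 0.0, 0.0), (0.5, 0.0, 0.0)))  # False: within tolerance
```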
  • an embodiment of the present application provides a vehicle control system, including an information domain controller (IDCM), a vehicle screen module, and a robotic arm control unit (RAC).
  • the above control method may be executed by the RAC; that is, the RAC executes the method of controlling the vehicle-mounted robotic arm.
  • the "car” can also be called a vehicle, and the “vehicle-mounted robotic arm” can also be called a screen adjustment mechanism.
  • "2/5" in Figures 11, 14, and 15 represents the rotating mechanism 2 and/or the sliding mechanism 5.


Abstract

A method and apparatus for controlling a vehicle-mounted robotic arm, a vehicle-mounted display device, and a vehicle, relating to the field of automatic control technology. The method includes: generating, according to first trigger information, a first control-command sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm; generating a second control-command sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm when second trigger information is received while the vehicle-mounted controllable components are executing the first control-command sequence; the first and second trigger information being determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location; and, when the first control-command sequence and the second control-command sequence conflict, determining a conflict-resolution strategy to resolve the conflict, a conflict existing when the control targets of both the first and the second control-command sequence include the vehicle-mounted robotic arm.

Description

Method and Apparatus for Controlling a Vehicle-Mounted Robotic Arm, Vehicle-Mounted Display Device, and Vehicle
This application claims priority to Chinese patent application No. 202210766008.5, entitled "Method, Apparatus, Electronic Device and Vehicle for Determining a Script Sequence", filed with the China National Intellectual Property Administration on June 30, 2022, the entire contents of which are incorporated herein by reference.
This application claims priority to Chinese patent application No. 202210765976.2, entitled "Vehicle-Mounted Robotic Arm, and Method and Apparatus for Controlling the Same", filed with the China National Intellectual Property Administration on June 30, 2022, the entire contents of which are incorporated herein by reference.
This application claims priority to Chinese patent application No. 202210766469.0, entitled "Method, Apparatus, Electronic Device and Vehicle for Processing a Script Sequence", filed with the China National Intellectual Property Administration on June 30, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of automatic control technology, and in particular to a method and apparatus for controlling a vehicle-mounted robotic arm, a vehicle-mounted display device, and a vehicle.
Background
A smart cockpit mainly consists of an in-vehicle infotainment system, an instrument panel, a head-up display, a streaming rearview mirror, ambient lighting, smart doors, smart speakers, and the like. The functions of a smart cockpit can be combined to provide more personalized driving services. Robotic arms have been widely used in automation scenarios, but examples combining robotic arms with smart cockpits are rare. How to combine the two so as to provide drivers and passengers with more diverse intelligent services while ensuring driving safety, and to reconcile the control commands for the vehicle-mounted robotic arm under different circumstances, has become an urgent problem to be solved.
Summary
This application provides a method and apparatus for controlling a vehicle-mounted robotic arm, a vehicle-mounted display device, and a vehicle, to solve technical problems existing in the related art.
According to one aspect of this application, a method for controlling a vehicle-mounted robotic arm is provided. The method may include the following steps: generating, according to first trigger information, a first control-command sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the first trigger information being determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location; generating a second control-command sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm when second trigger information is received while the vehicle-mounted controllable components are executing the first control-command sequence, the second trigger information being determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location; and, when the first control-command sequence and the second control-command sequence conflict, determining a conflict-resolution strategy to resolve the conflict, a conflict existing when the control targets of both the first and the second control-command sequence include the vehicle-mounted robotic arm.
According to another aspect of this application, an apparatus for controlling a vehicle-mounted robotic arm is provided. The apparatus may include: a first control-command-sequence generation module, configured to generate, according to first trigger information, a first control-command sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the first trigger information being determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location; a second control-command-sequence generation module, configured to generate a second control-command sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm when second trigger information is received while the vehicle-mounted controllable components are executing the first control-command sequence, the second trigger information being determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location; and a conflict-resolution-strategy determination module, configured to determine, when the first control-command sequence and the second control-command sequence conflict, a conflict-resolution strategy to resolve the conflict, a conflict existing when the control targets of both the first and the second control-command sequence include the vehicle-mounted robotic arm.
According to another aspect of this application, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method of any embodiment of this application.
According to another aspect of this application, a non-transitory computer-readable storage medium storing computer instructions is provided, the computer instructions being used to cause a computer to perform the method of any embodiment of this application.
According to another aspect of this application, a computer program product is provided, including a computer program/instructions which, when executed by a processor, implement the method of any embodiment of this application.
According to another aspect of this application, a vehicle-mounted display device is provided, including: a control unit configured to perform the method of controlling the vehicle-mounted robotic arm, or including a system for controlling the vehicle-mounted robotic arm; and a display module composed of a robotic arm and a vehicle-mounted screen, the robotic arm being used to drive the vehicle-mounted screen to complete at least one target action.
According to another aspect of this application, a vehicle is provided, including: a control unit configured to perform the method of controlling the vehicle-mounted robotic arm, or including a system for controlling the vehicle-mounted robotic arm; and a display module composed of a robotic arm and a vehicle-mounted screen, the robotic arm being used to drive the vehicle-mounted screen to complete at least one target action.
According to the technique of this application, when the detection result indicates a conflict, a preset conflict-resolution strategy is executed to adjust the first control-command sequence and the second control-command sequence. For example, the priorities of the control-command sequences may serve as the conflict-resolution strategy for adjusting the first and second control-command sequences. Concurrent execution of multiple control-command sequences is thereby supported, each controllable component performs its own function in accordance with the conflict-resolution strategy, and the intelligence of the vehicle-mounted controllable components is improved.
The above summary is for illustrative purposes only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of this application will be readily apparent by reference to the drawings and the following detailed description.
Brief Description of the Drawings
In the drawings, unless otherwise specified, the same reference numerals denote the same or similar parts or elements throughout. The drawings are not necessarily drawn to scale. It should be understood that the drawings depict only some embodiments disclosed in this application and should not be regarded as limiting the scope of this application.
Fig. 1 shows a flowchart of a method for controlling a vehicle-mounted robotic arm provided in Embodiment 1 of this application.
Fig. 2 shows a flowchart of determining a conflict-resolution strategy to resolve a conflict, provided in Embodiment 1 of this application.
Fig. 3 shows a schematic diagram of a reachable space provided in Embodiment 1 of this application.
Fig. 4 shows a schematic diagram of a coordinate system of the vehicle-mounted robotic arm provided in Embodiment 1 of this application.
Fig. 5 shows a schematic diagram of an apparatus for controlling a vehicle-mounted robotic arm provided in Embodiment 4 of this application.
Fig. 6 shows a block diagram of an electronic device used to implement the script-sequence determination method provided by embodiments of this application.
Fig. 7 shows an overall schematic diagram of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 8 shows a schematic diagram of the guide rails of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 9 shows a schematic diagram of the rotating mechanism of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 10 shows a schematic diagram of another installation embodiment of the linear motion unit of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 11 shows a schematic diagram of the vehicle-mounted screen flipping action of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 12 shows a schematic diagram of the vehicle-mounted screen translation action of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 13 shows a schematic diagram of the vehicle-mounted screen rotation action of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 14 shows a schematic diagram of the vehicle-mounted screen forward-and-backward movement of the vehicle-mounted robotic arm of an embodiment of this application.
Fig. 15 shows a schematic diagram of the rotating member action of the vehicle-mounted robotic arm of an embodiment of this application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of this application. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
Embodiment 1
Embodiment 1 of this application provides a method for determining a script sequence. As shown in Fig. 1, the method may include the following steps.
S101: Generate, according to first trigger information, a first control-command sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm; the first trigger information is determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location.
S102: When second trigger information is received while the vehicle-mounted controllable components are executing the first control-command sequence, generate a second control-command sequence for the vehicle-mounted controllable components including the vehicle-mounted robotic arm; the second trigger information is determined according to command information from different occupants of the vehicle and/or environmental information of the vehicle's current location.
S103: When the first control-command sequence and the second control-command sequence conflict, determine a conflict-resolution strategy to resolve the conflict; a conflict exists when the control targets of both the first and the second control-command sequence include the vehicle-mounted robotic arm.
The subject executing the above process of this application may be the vehicle side. The vehicle-mounted controllable components may include the vehicle-mounted robotic arm, the doors, the speakers, the ambient lights, and so on. The control principle is the same for each vehicle-mounted controllable component; in the current embodiment, the vehicle-mounted robotic arm is taken as an example for detailed description.
The vehicle-mounted robotic arm may be a bracket for the vehicle display screen; through the motion of the robotic arm, it can coordinate with the content displayed on the screen. For example, it may swing the screen forward and backward or left and right by a certain angle in coordination with the display. The initial position of the robotic arm may be embedded in the vehicle console, and upon receiving a control command it may move based on the command; for example, it may tilt toward the driver or move to a position corresponding to a rear passenger. Alternatively, the robotic arm may be an independent vehicle-mounted component, for example arranged at the armrest; its initial position may serve as the armrest, and upon receiving a control command it moves based on the command. Moreover, the robotic arm may also perform different actions in coordination with music played by the speakers or with different colors of the ambient lights, thereby meeting users' personalized needs. This application does not limit the specific installation position or function of the vehicle-mounted robotic arm.
The (first or second) trigger information may be issued by different occupants of the vehicle. For example, the first trigger command is issued by the driver, and the second trigger command is issued by a passenger seated behind the driver. The trigger information is information that triggers control commands for controlling the vehicle-mounted controllable components. Alternatively, the trigger information may be information detected by vehicle-side sensors.
Taking trigger information issued by different occupants of the vehicle as an example, the trigger information may be a control command for the vehicle-mounted controllable components issued by the driver or a passenger by voice, gesture, touch, or other means.
The control commands issued for the vehicle-mounted controllable components may include commands issued by gesture adjustment. As shown in Fig. 2, 90-degree sectors formed by spreading 45 degrees symmetrically to each side of the standard directions (up, down, left, right) serve as the standard directions and as the decision intervals for execution. When the gesture recognition system is working normally and enabled, left-right rotation and up-down pitch adjustment of the vehicle-mounted robotic arm can be performed through gesture recognition. For example, adjustment commands issued by the user with a fist gesture can be detected in real time. When the fist is held in front of the gesture recognition system for a certain duration (for example, 2 seconds), gesture adjustment can be activated. After activation, a feedback sound can be played, and the central control screen displays content indicating that the gesture has been activated, signifying that the gesture adjustment function is fully activated. The image capture device in the gesture recognition system captures the movement direction of the user's gesture (for example, a clenched fist), the image analysis module in the gesture recognition system determines the up, down, left, or right movement of the gesture, and finally the left-right rotation or up-down pitch of the robotic arm can be controlled. When the user is detected to stop moving the gesture, the robotic arm can be controlled to stop rotating as well. In addition, a brief pause (for example, no more than 1 second) during the user's movement (to the left) followed by movement in another direction (to the right) can be detected; in this case, the direction can be re-determined and the robotic arm controlled to move in the new direction (to the right). When the user is detected to move the fist out of the sensing area, when recognition fails because the user changes the gesture, or when the user is detected to keep the fist posture stationary for more than 1 second, the gesture adjustment mode can be exited. After exiting the gesture adjustment mode, the central control screen displays content indicating that gesture adjustment has exited. In addition, gesture control may also include the following gestures. For example, a gesture may include detecting the user's five fingers together, palm down, fingers bent, with the fingertips repeatedly moving toward/away from the palm; illustratively, each repetition may indicate controlling the robotic arm to move toward the position of the person issuing the command. Alternatively, a gesture may include detecting the five fingers together, palm up, fingers bent, with the fingertips repeatedly moving toward/away from the palm, and so on.
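The fist-gesture adjustment flow just described (hold to activate, track movement, short pause allowed, exit on removal, unrecognized gesture, or prolonged stillness) can be sketched as a small state machine. The durations come from the text; the class and return labels are illustrative assumptions:

```python
class GestureAdjuster:
    """Minimal state machine for the fist-gesture adjustment flow
    described above. Durations are the ones given in the text."""
    ACTIVATE_HOLD_S = 2.0  # hold the fist this long to activate
    EXIT_STATIC_S = 1.0    # a still fist longer than this exits

    def __init__(self):
        self._reset()

    def _reset(self):
        self.active = False
        self.hold_s = 0.0
        self.static_s = 0.0

    def update(self, gesture, moving, dt):
        """gesture: 'fist', any other label, or None; moving: bool;
        dt: seconds since the last tick. Returns the arm command."""
        if gesture != "fist":           # fist removed or unrecognized
            was_active = self.active
            self._reset()
            return "exit" if was_active else "idle"
        if not self.active:             # still waiting for activation
            self.hold_s += dt
            if self.hold_s >= self.ACTIVATE_HOLD_S:
                self.active = True
                return "activated"      # play feedback sound here
            return "idle"
        if moving:
            self.static_s = 0.0
            return "track"              # rotate/pitch with the fist
        self.static_s += dt
        if self.static_s > self.EXIT_STATIC_S:
            self._reset()
            return "exit"
        return "pause"                  # brief stop, keep listening
```

A tick-by-tick run reproduces the flow in the text: two seconds of a held fist activates, movement tracks, a short stop pauses, and over a second of stillness exits.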
Further, touch control is taken as an example for elaboration. The trigger information may be generated by the user through editing on a front-end device. Illustratively, the front-end device may be a smartphone app, a vehicle head-unit app, or a web page (an editing page or app). Taking an app as an example, a visual programming interface may be prebuilt in the smartphone, head-unit, or web app. The user (vehicle owner) opens the app to enter the visual editing page. In the visual programming interface, an icon drag-and-drop editing mode can be used to edit the peripheral atomic function blocks corresponding to control-command icons such as robotic arm actions, speaker sound effects, door opening states, and ambient light changes. For example, the visual programming interface provides control-command icons for the robotic arm, the speakers, the ambient lights, the doors, and so on. Taking control of the robotic arm as an example, upon receiving a drag command on the robotic arm's control-command icon, the robotic arm's icon can be set editable while the control-command icons of the other vehicle components are set non-editable. Non-editable may mean grayed out, overlaid with a non-editable layer, and so on. For an editable control-command icon, a submenu can be displayed; for example, the submenu may show prebuilt robotic arm actions such as swinging left and right (head shaking) or up and down (nodding). As another example, the submenu may show single control actions such as forward one step, up one step, or turning 5° to the left. The drag command may select the control-command icon of a single vehicle component, or the control-command icons of multiple vehicle components selected in sequence.
Further, when the control-command icons of multiple vehicle components are selected in sequence, a time axis can also be used to control the execution order of the multiple components. For example, the execution order may be serial or parallel. Upon receiving the user's editing-complete command, the trigger information can be obtained, thereby triggering the control commands for controlling the vehicle-mounted controllable components.
The received scripts containing robotic arm control commands in different script formats are converted into a normalized format, for example the .json format. After conversion to .json, the control commands can be parsed and obtained from the .json file during subsequent script parsing.
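As a hedged illustration of the normalized .json script just mentioned (the component names, field layout, and helper function are assumptions for illustration; the application does not define a schema), a script could carry per-component commands with start times on a shared time axis:

```python
import json

# Hypothetical normalized .json control script: component names and
# field layout are illustrative, not defined by the application.
script = json.loads("""
{
  "sequence": [
    {"component": "robotic_arm", "action": "tilt", "angle_deg": 5,
     "start_ms": 0},
    {"component": "ambient_light", "action": "color", "value": "blue",
     "start_ms": 0},
    {"component": "speaker", "action": "play", "clip": "welcome",
     "start_ms": 500}
  ]
}
""")

def commands_for(component, script):
    """Extract the ordered commands addressed to one controllable part."""
    return [step for step in script["sequence"]
            if step["component"] == component]

print(len(commands_for("robotic_arm", script)))  # -> 1
```

Equal `start_ms` values express parallel execution; increasing values express serial execution, matching the time-axis editing described above.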
另外,触发信息也可以是车端传感器检测到的信息。例如,在检测到车辆制动幅度超过对应阈值,或者检测到车辆前方存在障碍物主动触发制动的情况下,可以生成触发信息。在车辆制动情况下,触发信息可以是控制车载机械臂执行复位指令,从而保障安全。或者,在检测到驾驶员与车辆的距离小于对应阈值的情况下,可以生成触发信息,从而触发控制车载机械臂进行复位动作的控制指令。
第一触发信息可以是在时序上先生成的触发信息。根据第一触发信息,可以生成对包括车载机械臂在内的车载可控部件的第一控制指令序列。例如,利用第一触发信息生成的第一控制指令序列可以是仅针对车载机械臂的,还可以是针对车载机械臂、车门、音箱、氛围灯等多个车载可控部件的。同理,利用第二触发信息所生成的第二控制指令序列同样可以是针对车载机械臂的,也可是针对车门、音箱、氛围灯等多个车载可控部件的。
在车载可控部件执行第一控制指令序列的过程中，接收到第二控制指令序列的情况下，有可能会存在冲突。
由此,便需要确定冲突化解策略以化解冲突。例如,冲突化解策略可以是根据接收到第一触发信息、第二触发信息的时间确定的;也可以是根据预先确定的优先级确定的。示例性地,第一控制指令序列是后排乘客下达的控制车载机械臂朝向后排座椅移动。在移动过程中,检测到驾驶员下达的第二触发信息对应的第二控制指令序列,驾驶员下达的是控制车载机械臂朝向驾驶员移动;或者,在移动过程中,检测到第二触发信息对应的第二控制指令序列是在车辆紧急制动情况下产生的。在上述情况下,第一控制指令序列和第二控制指令序列中均包含对车载机械臂的控制,且后到的第二控制指令序列的优先级高于第一控制指令序列的优先级。由此,会存在第一控制指令序列和第二控制指令序列的冲突情况。
反之,如果第一控制指令序列是驾驶员下达的,第二控制指令序列是后排乘客下达的。在上述情况下,第一控制指令序列的优先级高于第二控制指令序列,由此,不会存在第一控制指令序列和第二控制指令序列的冲突情况。对于不存在冲突的情况,可以按照接收控制指令序列的顺序控制可控设备进行执行。
通过上述过程，在检测结果为存在冲突的情况下，执行预先设定的冲突化解策略调整第一控制指令序列和第二控制指令序列。例如，可以将控制指令序列的优先级作为冲突化解策略的依据，调整第一控制指令序列和第二控制指令序列。从而支持多控制指令序列的并发，各可控部件按照冲突化解策略各司其职执行对应的功能，提高车载可控部件的智能化程度。
在一种实施方式中,在确定冲突化解策略以对冲突进行化解时,可以先确定第一触发信息和第二触发信息的类型。然后,再在存在指定类型,或者指定类型的触发信息的优先级高于另一触发信息的优先级的情况下,忽略另一触发信息。
第一触发信息和第二触发信息的类型可以包括安全类或非安全类。其中安全类可以是由车辆制动触发的。例如,自动驾驶域控制器(ADCM,Auto-driving Domain Controller Module)发出的自动紧急制动信号(AEB),可以对应安全类触发信息。又如,通过信息娱乐域控制器(IDCM,Infotainment Domain Controller Module)发出的控制信号,以及通过车身域控制器(BDCM,Body Domain Controller Module)发出的控制信号,都可以作为非安全类触发信息。
指定类型可以是安全类。即,安全类的触发信息的优先级可以是最高级别。例如,在非安全类的触发信息对应的(第一)控制指令序列被安全类的触发信息对应的(第二)控制指令序列打断,且二者存在冲突的情况下,由于安全类的触发信息对应的(第二)控制指令序列对应有更高级别,在此情况下,即便非安全类的触发信息对应的(第一)控制指令序列是在先的,依然可以被忽略。
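按触发信息类型的优先级忽略低优先级指令序列的冲突化解策略，可以用如下Python草图示意。其中的优先级数值与类型名称均为示例假设：

```python
# 冲突化解的示意草图：安全类触发信息优先级最高，
# 与非安全类冲突时忽略非安全类的指令序列（优先级数值为示例假设）。
PRIORITY = {"safety": 3, "bdcm_fixed": 2, "user_adjust": 1}

def resolve(first, second):
    """first/second 为 (触发类型, 指令序列) 元组，返回应执行的指令序列列表。"""
    p1, p2 = PRIORITY[first[0]], PRIORITY[second[0]]
    if p1 == p2:
        return [first[1], second[1]]          # 同级：先到先得，按接收顺序执行
    return [first[1]] if p1 > p2 else [second[1]]

result = resolve(("user_adjust", "move_to_rear"), ("safety", "reset_arm"))
```

例如上例中，即便非安全类的第一控制指令序列在先，仍会被后到的安全类序列覆盖。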
另外,对于IDCM发出的控制信号,以及通过BDCM发出的控制信号,可以事先确定各信号的优先级。例如,通过BDCM发出的高频、功能固定场景对应的控制信号的优先级可以高于用户主动调节场景对应的控制信号的优先级。
其中,高频、功能固定场景对应的控制信号可以包括迎宾模式、副驾模式、下电回归模式、赛道模式、后排畅享空间模式等。迎宾模式可以是在检测到用户距离车辆超过预定距离的情况下,进行多媒体展示。例如,(携带钥匙的)用户在向车辆靠近时,可以车机显示屏显示“欢迎您归来”等字样或者礼花等图像。另外,在迎宾模式,车载机械臂可以进行左右晃动表示挥手等。副驾模式可以包括,在检测到副驾驶上车后,车载机械臂进行左右旋转移动,从而使车机显示屏幕可以朝向副驾驶侧,方便副驾驶上车操作使用。下电回归模式可以是断电后的车载机械臂复位。赛道模式可以是车辆开启竞速后,车载机械臂复位,或者配合车辆的转向等进行对应的一定角度的倾斜等。后排畅享空间模式的触发条件可以是驻车状态,或者可以是行车速度为0公里/时的状态。在此情况下,还需要满足前排座椅均保持5秒没有占位信号等。主副驾座椅向车头方向移动至最靠近位置,并向前折叠。(承载有显示设备的)车载机械臂向车尾方向移动至最优位置(例如居中前伸),氛围灯进行昏暗灯光显示等。后排畅享空间模式的正常中断可以包括:用户自主结束场景。本情况不再触发场景(本次后排畅享空间模式结束),其余未触发情况仍可触发场景。后排畅享空间模式的异常中断可以包括:更高优先级场景被执行(本次后排畅享空间模式暂停),再次满足触发条件会重新触发本场景。
用户主动调节场景可以是手动助力调节、方向盘控制调节、语音调节和动作调节等。用户主动调节场景可以包括用户个性化创建的控制指令序列（点动命令）；用户主动调节场景还可以包括用户选择的已有控制指令序列，在此称为调用场景卡对应的控制序列。调用场景卡对应的控制序列需要借助场景引擎，场景引擎中保存有场景卡。场景卡包括预置场景（出厂时已创建）和用户共建场景（用户自定义编译并保存，满足合理性要求）2种形式。场景卡本身是公共资源，可以通过脚本预埋、UI界面人机交互、语音、方向盘控制等方式调用车载机械臂服务，进而控制车载机械臂控制器。
点动命令的触发信号直接调用车载机械臂服务(Bot Service),进而控制车载机械臂控制器。在此过程可以进行冲突化解策略调整。点动命令具有一定随机性,例如,可以是驾驶员或乘客在乘车过程中随机化的发出的控制指令。示例性地,可以是驾驶员通过语音发出的“向左移动一点”、“离我近一点”等。
通过上述过程，可以通过确定第一控制指令序列和第二控制指令序列的优先级确定冲突化解策略。如果优先级相同，可以先到先得，也可以在不存在被控对象冲突的情况下并行执行。
不难理解,在不存在冲突的情况下,第一控制指令序列和第二控制指令序列可以并行执行,也可以按照接收时间先后执行。
在一种实施方式中,还包括以下过程:将指定类型的触发信息所对应的控制指令序列直接发送至车载可控部件。
在当前实施方式中,指定类型触发信息可以是前述安全类触发信息,即,对应为ADCM发出的自动紧急制动信号所对应的触发信息;或者,还可以是通过BDCM发出的控制信号所对应的触发信息。
以安全类触发信息为示例，在接收到ADCM发出的自动紧急制动信号所对应的触发信息的情况下，需要执行将车载机械臂复位至原始位姿的控制指令序列。在此情况下，AEB信号经过信息娱乐域控制器（IDCM）的主控芯片（MCU），直接控制车载机械臂。即，在同时存在安全类和非安全类的情况下，可以忽略非安全类控制指令序列。
同理，在接收到通过BDCM发出的控制信号所对应的触发信息的情况下，同样可以经过IDCM的MCU，直接控制车载机械臂。
由此，通过直接控制车载机械臂的方式，可以缩短控制信号传输路径，确保第一时间实现对车载机械臂的准确控制。
在一种实施方式中,在确定冲突化解策略以对冲突进行化解时,还可以先确定第一触发信息和第二触发信息的类型。然后,再在类型相同的情况下,确定第一控制指令序列、第二控制指令序列的属性,属性包括模板类属性或定制类属性。最后,在属性不同,或属性相同且属于定制类的情况下,控制车载机械臂暂停动作,在接收到新的控制指令序列的情况下,控制车载机械臂执行新的控制指令序列。
在当前实施方式中,类型相同可以是对应非安全类。确定触发信息的类型可以是利用触发信息的来源渠道进行。例如,通过BDCM或通过IDCM发出的控制信号所对应的触发信息,可以对应为相同类型(非安全类型)。
在第一触发信息和第二触发信息的类型均为非安全类的情况,可以进一步确定第一触发信息所对应的第一控制指令序列的属性、以及第二触发信息所对应的第二控制指令序列的属性。
属性可以包括模板类属性或定制类属性。例如,前已述及的迎宾模式、副驾便利模式、下电回归模式、赛道模式等可以对应为模板类属性。即,上述各模式已是预先设定好的一套完整的控制序列,在触发的情况下即可按照对应的序列执行即可。
定制类属性可以是用户通过语音、手势或者触控指令触发的微调类控制指令所对应的属性。例如“再高一点”、“离我近一点”、“朝向我转10°”等。上述控制指令序列不具有完整的控制序列,而是用户随机下达的一个控制指令或者控制指令序列。对此,可以划分为定制类属性。
在第一触发信息所对应的第一控制指令序列的属性、以及第二触发信息所对应的第二控制指令序列的属性彼此不同（一个是模板类属性、另一个是定制类属性），或者二者都属于定制类属性的情况下，冲突化解策略可以是争夺策略。
例如,车载机械臂等正在执行副驾便利所对应的控制指令序列(第一控制指令序列,模板类属性)。此时,驾驶员通过方控、语音、手势等方式下达指令生成第二触发信息。在第二触发信息对应的第二控制指令序列与第一触发信息对应的第一控制指令序列相冲突的情况下,如果车载机械臂接到新的点动控制命令时正处于运行状态(此时不缓存第二控制指令序列的点动控制命令),控制车载机械臂停止并进入准备接收新控制指令的状态。对于后续的动作执行,实行先到先得的冲突化解策略。即,在车载机械臂暂停后,后续的控制指令序列以先到为准。即,接收到新的控制指令序列可以是重新接收到的第一控制指令序列、第二控制指令序列,也可以是不同于第一控制指令序列、第二控制指令序列的第三控制指令序列。
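上述"争夺策略"（机械臂暂停、不缓存运行中到达的点动命令、后续先到先得）可以用如下Python草图示意。类名与方法名均为示例假设：

```python
# 争夺策略的示意草图：属性不同或均为定制类时，
# 机械臂先暂停并丢弃在运行中到达的点动命令，随后先到先得地接收新序列。
class ArmArbiter:
    def __init__(self):
        self.state = "running"       # 假设正在执行第一控制指令序列

    def on_conflict(self):
        self.state = "ready"         # 暂停动作，进入准备接收新指令的状态
        return "paused"

    def on_new_sequence(self, seq):
        if self.state != "ready":
            return None              # 运行中不缓存点动控制命令
        self.state = "running"
        return seq                   # 先到的新控制指令序列被执行

arb = ArmArbiter()
```

新序列既可以是重新下达的第一、第二控制指令序列，也可以是全新的第三控制指令序列。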
又例如,在类型相同,且控制指令序列的属性均为定制类属性的情况下,也可以采用同样的方式得到冲突化解策略。
通过上述过程,可以实现冲突化解策略的确定。
在一种实施方式中,在确定冲突化解策略以对冲突进行化解时,可以先在属性相同,且属于定制类属性的情况下,比较第一控制指令序列和第二控制指令序列的优先级。然后,再根据比较结果,控制车载机械臂执行优先级高的控制指令序列。
在第一触发信息所对应的第一控制指令序列的属性、以及第二触发信息所对应的第二控制指令序列的属性相同,且都属于定制类属性的情况下,可以首先比较二者的优先级。
前已述及,在非安全类型中,通过BDCM发出的高频、功能固定场景对应的控制信号的优先级可以高于用户利用方向盘发出的用户主动调节的操作对应的控制信号的优先级。
即高频、功能固定场景对应的控制信号的优先级可以是中级(安全类的优先级为高级)。
另外,还可以具有以下几个场景,各场景的优先级如下详述:
用户共创类场景的控制信号的优先级可以是中低级。用户共创类场景可以是用户自定义添加的动作控制指令组成的场景。由于是用户已经创建完成并且通过合理性检测的场景,因此对于用户共创场景而言,也可以作为模板类型。
用户利用方向盘发出的用户主动调节场景对应的控制信号的优先级可以是低级。
人机互动场景对应的控制信号的优先级可以是低级。人机互动场景可以是预制的几种模式。例如,可以包括安全管家模式(例如安全管理、自动驾驶等)、试乘试驾介绍模式(例如车辆功能介绍)、醒神模式(例如与驾驶员互动)、智能音量调节模式(例如智能降噪、接电话或交谈时的静音)、KTV模式、智能天气播报模式、新春舞狮模式(特定节假日的多媒体播放)、儿童模式(播放动画片或者卡通歌曲)、低电量模式等。
随动自动调节场景对应的控制信号的优先级可以是低级。随动自动调节场景可以是伴随类的控制指令序列。例如,氛围灯的颜色变化可以随着音箱中乐曲的变化而自动调节。随动自动调节场景可以对应为模板类型。
通过比较优先级,可以保障优先级高的控制指令序列的正常进行。另外,对于优先级相同的控制指令,可以延续前述步骤先到先得的方式得到冲突化解策略。

在一种实施方式中,还可以包括以下步骤:首先,实时检测车载可控部件的状态,状态包括正常状态或非正常状态。然后,根据车载可控部件的状态的检测结果,确定第一控制指令序列和/或第二控制指令序列的可执行性。
车载可控部件的状态可以包括正常状态或非正常状态(异常状态)。对于车载机械臂而言,其状态具体可以如表1所示。

表1
通过前述ADCM发出的自动紧急制动信号所对应的触发信息，以及通过BDCM发出的控制信号所对应的触发信息，可以经过IDCM的MCU，直接控制车载机械臂，上述信息传输过程可以利用CAN协议，由此保障控制的稳定性、时效性。对于IDCM发出的控制信号所对应的触发信息，可以基于车载机械臂服务(Bot Service)对车载机械臂进行控制。例如，Bot Service可以实现通信、场景封装、车载机械臂状态查询以及车载机械臂驱动等功能。通信功能原理包括：与外部数据接口（例如Open API）进行数据交换，接收上层场景引擎的.json格式的控制指令。场景封装可以是指将前述不同的场景进行封装处理，得到封装后的控制指令序列，以实现对于车载机械臂的控制。
对于封装后的控制指令序列,可以结合车载可控部件的当前状态、目标状态,以及触发条件,综合判断是否满足控制指令序列的执行。如果不满足,则返回相应返回值,例如,返回值可以含错误码等信息。如果满足,则执行控制指令序列。
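结合当前状态、目标状态与触发条件判断控制指令序列是否可执行，并在不满足时返回含错误码的返回值，可以用如下Python草图示意。其中的错误码与状态取值均为示例假设：

```python
# 可执行性综合判断的示意草图：依次检查部件状态、触发条件与目标状态，
# 不满足时返回含错误码的返回值（错误码数值为示例假设）。
def check_executable(current_state, target_state, trigger_ok):
    if current_state != "normal":
        return {"ok": False, "err": 1001, "msg": "device abnormal"}
    if not trigger_ok:
        return {"ok": False, "err": 1002, "msg": "trigger condition not met"}
    if target_state not in ("reset", "extended", "rotated"):
        return {"ok": False, "err": 1003, "msg": "unknown target state"}
    return {"ok": True, "err": 0, "msg": ""}

ret = check_executable("normal", "reset", True)
```

满足判断条件时才执行控制指令序列，否则将返回值反馈给调用方。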
在一种实施方式中,在第一控制指令序列和第二控制指令序列不存在冲突的情况下,控制车载可控部件并行执行第一控制指令序列和第二控制指令序列。
对于不存在冲突的第一控制指令序列和第二控制指令序列,可以并行执行。可以实现主驾、副驾、后排乘客,千人千面的个性化体验与服务。综上,结合安全、用户体验、服务优先级、触发时序,在同一时刻为主驾、副驾、后排乘客提供不同的服务。在车载可控部件存在冲突的情况下,需要从安全、体验等维度,结合冲突化解策略,对存在冲突的车载可控部件及与该车载可控部件相关联的部件进行优先级仲裁。
本申请实施例中,第一控制指令序列至少基于对接收到的包含车载机械臂控制指令的目标脚本序列进行脚本解析获取。其中,目标脚本序列的确定步骤可以如本申请实施例二所示,本申请实施例二提供了一种脚本序列的处理方法。另外,目标脚本序列的确定步骤也可以如本申请实施例三所示,本申请实施例三提供了一种脚本序列的确定方法。
实施例二
车载机械臂作为一种具有模仿人类手臂功能并可完成各种作业的自动控制设备,已经被广泛应用于工业制造、医疗救援以及航空航天等领域。但将车载机械臂安装在车辆座舱内,以用于为车上人员提供智能化的驾驶服务,却是鲜有人涉及的车载机械臂的应用领域。
而如果想要将车载机械臂安装在车辆座舱内,以用于为车上人员提供智能化的驾驶服务,如何实现对车载机械臂的个性化控制,就成为了不得不面临的问题。
为了解决上述问题,本申请实施例二提供了一种脚本序列的处理方法,以用于确定目标脚本序列。本申请实施例二中提供的脚本序列的处理方法可以包括如下步骤:首先,在用户利用控制脚本集编辑目标脚本序列的过程中,针对用户当前选中的控制脚本,在控制脚本集中确定下一个可被选中的控制脚本,控制脚本为用于控制车载机械臂执行预设动作的脚本,可被选中的控制脚本为在车载机械臂执行完与当前选中的控制脚本对应的预设动作后,可控制车载机械臂在预设的可达空间内进一步执行对应预设动作的控制脚本。然后,将可被选中的控制脚本确定为下一个可供用户选取的控制脚本。
本申请实施例二中的脚本序列的处理方法,在用户利用控制脚本集编辑目标脚本序列的过程中,每当用户选中一个控制脚本后,都能够自动将可被选中的控制脚本确定为下一个可供用户选取的控制脚本。
由于可被选中的控制脚本为在车载机械臂执行完与当前选中的控制脚本对应的预设动作后,可控制车载机械臂在预设的可达空间内进一步执行对应预设动作的控制脚本。因此,在将可被选中的控制脚本确定为下一个可供用户选取的控制脚本的情况下,能够确保用户在当前选中的控制脚本后选中的下一个控制脚本一定为在车载机械臂执行完与当前选中的控制脚本对应的预设动作后,可控制车载机械臂在预设的可达空间内进一步执行对应预设动作的控制脚本。
本申请实施例二中的脚本序列的处理方法，能够使车载机械臂在按照编辑好的目标脚本序列依次执行各个控制脚本对应的预设动作的过程中，均处在可达空间内。从而通过编辑好的目标脚本序列能够实现对车载机械臂的个性化控制，以使车载机械臂能够按照编辑好的目标脚本序列依次执行各个控制脚本对应的预设动作。
本申请实施例中提供的脚本序列的处理方法的执行主体一般为服务端,也可以为客户端。
所谓服务端可以为用于提供数据处理、存储、转发等服务的云端服务器或者云端服务器集群,也可以为用于提供数据处理、存储、转发等服务的传统服务器或者传统服务器集群。其中,该传统服务器的一般实现方式为计算设备。所谓客户端为至少具有目标脚本序列处理功能的应用程序 (Application,APP)、应用或者软件。
本申请实施例二中,客户端可以部署、运行在车载电子设备上,也可以部署、运行在应用电子设备上,还可以部署、运行在浏览器网页(web)上。其中,部署、运行在车载电子设备上的客户端为车载客户端,部署、运行在应用电子设备上的客户端为移动客户端,部署、运行在浏览器网页上的客户端为web客户端。
常见的移动客户端为手机客户端。
本申请实施例二中,所谓车载机械臂为安装在车辆座舱内,用于为车上人员提供智能化驾驶服务的机械臂。
需要说明的是,车载机械臂的数目可以为一个,也可以为多个。在车载机械臂的数目为多个时,可以针对多个车载机械臂分别进行目标脚本序列的处理,也可以针对多个车载机械臂中任意一个进行目标脚本序列的处理。以下仅以车载机械臂的数目是一个为例,来对本申请实施例中提供的脚本序列的处理方法进行详细说明。
该车载机械臂可以单独执行相应的预设动作,以用于为车上人员提供智能化驾驶服务。具体的,该车载机械臂可以单独用于控制显示屏幕移动,例如控制车载显示屏前后移动的预设动作或者调整角度的预设动作,或者单独执行左右摆动的预设动作。
另外,该车载机械臂也可以与车辆座舱内的车载信息娱乐系统、仪表盘、抬头显示、流媒体后视镜、氛围灯、智能车门以及智能音箱等进行配合,在预设场景下完成相应的预设动作。例如,配合氛围灯的闪烁来执行左右摇摆动作。
本申请实施例二中,车载客户端通过解析并执行控制脚本来实现对车载机械臂的控制。具体的,车载客户端能够解析控制脚本并能够按照该控制脚本所确定的预设动作来对车载机械臂进行控制。
本申请实施例二中,控制脚本集中至少包括两个预设动作。所谓预设动作为针对车载机械臂预先设置好的执行动作,在执行预设动作过程中车载机械臂均处在可达空间内。该预设动作具体可以为单独的基础动作,例如:向上运动、向下运动或者向左运动等。预设动作具体也可以是由基础动作组成的复杂动作,例如:左右摇摆、上下移动、摇头或者摆手等。
本申请实施例二中,针对用户当前选中的控制脚本,在控制脚本集中确定下一个可被选中的控制脚本的具体过程可以包括如下步骤:首先,预测车载机械臂在执行完与当前选中的控制脚本对应的预设动作后所处的停止位置。然后,针对控制脚本集中的各个控制脚本,预测车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置。最后,根据实时位置,确定可被选中的控制脚本。
本申请实施例二中,根据车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置,来确定可被选中的控制脚本,能够确保可被选中的控制脚本为在车载机械臂在执行相应预设动作的过程中均在可达空间内的控制脚本。从而能够避免可被选中的控制脚本中存在虽然在控制车载机械臂执行完相应预设动作后能够使车载机械臂所处的停止位置在可达空间内,但控制车载机械臂执行相应预设动作的过程中会导致车载机械臂超出可达空间的控制脚本。
由于在将可被选中的控制脚本确定为下一个可供用户选取的控制脚本的情况下，能够确保用户在当前选中的控制脚本后选中的下一个控制脚本一定为在车载机械臂执行完与当前选中的控制脚本对应的预设动作后，可控制车载机械臂在预设的可达空间内进一步执行对应预设动作的控制脚本。因此，通过实时位置来确定可被选中的控制脚本，能够使车载机械臂在按照编辑好的目标脚本序列依次执行各个控制脚本对应的预设动作的过程中，均处在可达空间内。
也就是说,通过编辑好的目标脚本序列能够实现对车载机械臂的个性化控制,以使车载机械臂能够按照编辑好的目标脚本序列依次执行各个目标脚本序列对应的预设动作。
具体的,以当前选中的控制脚本为目标脚本序列中的第二个控制脚本为例,对针对用户当前选中的控制脚本,在控制脚本集中确定下一个可被选中的控制脚本的过程进行详细的说明:
首先，在用户选中第二个控制脚本后，针对第二个控制脚本，预测车载机械臂在执行完对应的预设动作后所处的停止位置；然后，遍历控制脚本集中的各个控制脚本，分别预测车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置。最后，利用各个控制脚本对应的实时位置，在控制脚本集中确定可被选中作为目标脚本序列中的第三个控制脚本的控制脚本。
需要说明的是,所谓下一个可被选中的控制脚本中至少包括一个控制脚本。而如果针对用户当前选中的控制脚本,无法在控制脚本集中将任意一个控制脚本确定为下一个可被选中的控制脚本,则证明目标脚本序列的处理过程已经无法继续。此时,需要停止目标脚本序列的处理过程,并生成目标脚本序列。
以下结合图3对根据实时位置，确定可被选中的控制脚本的具体过程进行详细的说明，图3为本申请实施例二中提供的一种可达空间的示意图。图3中的301用于表示车载机械臂，302用于表示由可达空间限定出的空间范围。本申请实施例中，根据实时位置，确定可被选中的控制脚本的具体过程如下：
首先,确定可达空间限定出的空间范围。然后,针对各个控制脚本,检测实时位置是否处在空间范围内。最后,将实时位置处在空间范围内的控制脚本确定为可被选中的控制脚本。
将实时位置处在空间范围内的控制脚本确定为可被选中的控制脚本，能够使车载机械臂在按照编辑好的目标脚本序列依次执行各个控制脚本对应的预设动作的过程中，均处在可达空间内。从而通过编辑好的目标脚本序列能够实现对车载机械臂的个性化控制，以使车载机械臂能够按照编辑好的目标脚本序列依次执行各个控制脚本对应的预设动作。
本申请实施例二中,确定可达空间限定出的空间范围的具体实现方式为:首先,以车载机械臂在车辆座舱内的安装位置为坐标原点,构建针对车载机械臂的三维坐标系。如图4所示,图4为本申请实施例二中提供的一种车载机械臂的坐标系的示意图。图4中示出的三维坐标系x轴指向车尾、z轴指向车顶。然后,确定车载机械臂在车辆座舱的座舱空间在某一个方位上能够到达的坐标点集合,并利用坐标点集合来确定空间范围。空间范围可以通过车载机械臂的俯仰角范围、偏航角范围以及空间距离来表示。
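以安装位置为坐标原点，用俯仰角范围、偏航角范围与空间距离表示可达空间限定出的空间范围，可以用如下Python草图示意。其中的角度与距离上下限均为示例假设：

```python
import math

# 可达空间范围判断的示意草图：以安装位置为原点，
# 用俯仰角、偏航角与空间距离三个量限定空间范围（上下限为示例假设）。
PITCH_RANGE = (-30.0, 30.0)   # 俯仰角范围（度）
YAW_RANGE = (-45.0, 45.0)     # 偏航角范围（度）
DIST_RANGE = (0.0, 0.5)       # 空间距离范围（米）

def in_reach(x, y, z):
    """判断坐标点 (x, y, z) 是否落在可达空间限定出的空间范围内。"""
    dist = math.sqrt(x * x + y * y + z * z)
    yaw = math.degrees(math.atan2(y, x)) if dist > 0 else 0.0
    pitch = math.degrees(math.asin(z / dist)) if dist > 0 else 0.0
    return (PITCH_RANGE[0] <= pitch <= PITCH_RANGE[1]
            and YAW_RANGE[0] <= yaw <= YAW_RANGE[1]
            and DIST_RANGE[0] <= dist <= DIST_RANGE[1])
```

坐标轴方向与图4一致时，x轴指向车尾、z轴指向车顶，坐标点取自机械臂能够到达的坐标点集合。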
本申请实施例二中预测车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置的具体实现方式如下:
在停止位置的基础上,针对控制脚本集中的各个控制脚本,基于车载机械臂在执行对应预设动作过程中相对于安装位置发生的空间位置变化,来预测车载机械臂在执行对应预设动作过程中的实时位置。
具体的,对于每一预设动作,均会预先对应配置有车载机械臂在执行该预设动作过程中相对于安装位置发生的空间位置变化,在预测车载机械臂在执行对应预设动作的过程中的实时位置时,可以基于停止位置以及车载机械臂在执行该预设动作过程中相对于安装位置发生的空间位置变化,来预测车载机械臂在执行对应预设动作的过程中的实时位置。
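基于停止位置以及每一预设动作预先配置的空间位置变化，预测执行过程中的实时位置，可以用如下Python草图示意。其中各动作的位移增量序列均为示例假设：

```python
# 实时位置预测的示意草图：每个预设动作预先配置一串相对位移增量（示例假设），
# 从停止位置出发逐个叠加，得到执行该动作过程中的各个实时位置。
ACTION_DELTAS = {
    "forward": [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.10, 0.0, 0.0)],
    "up":      [(0.0, 0.0, 0.0), (0.0, 0.0, 0.05)],
}

def predict_positions(stop_pos, action):
    """返回在 stop_pos 基础上执行 action 过程中的实时位置列表。"""
    x0, y0, z0 = stop_pos
    return [(x0 + dx, y0 + dy, z0 + dz) for dx, dy, dz in ACTION_DELTAS[action]]

positions = predict_positions((0.1, 0.0, 0.0), "forward")
```

对由多个关节或连杆组成的机械臂，可对每个关节或连杆分别做同样的预测。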
需要说明的是,本申请实施例二中,车载机械臂可以由至少一个关节或者连杆组成,在车载机械臂由两个及其以上个关节或者连杆组成的情况下,每个关节或者连杆都可能在执行预设动作过程中存在超出可达空间的情况。因此,在预测车载机械臂在执行对应预设动作的过程中的实时位置时,需要预测车载机械臂中的每个关节或者连杆在执行对应预设动作的过程中的实时位置。
本申请实施例二中,在执行主体为客户端时,当前选中的控制脚本的确定方式可以为:响应于用户针对控制脚本触发的选中操作,确定当前选中的控制脚本。另外,在确定可被选中的控制脚本之后,还可以进一步将可供用户选取的控制脚本设置为可供用户选取的状态。该种确定方式下,客户端除了需要至少具有目标脚本序列处理功能外,还需要同时具有目标脚本序列编辑功能。
将目标脚本序列处理功能与目标脚本序列编辑功能分别部署,往往会存在由于不同电子设备之间网络通信信号差而导致的目标脚本序列编辑有延迟的问题。将目标脚本序列处理功能以及目标脚本序列编辑功能同时设置在同一客户端上,能够有效防止这一问题的发生。
本申请实施例二中,在执行主体为客户端时,当前选中的控制脚本的确定方式还可以为:利用服务端针对可被选中的控制脚本发送的第一提示信息,确定可被选中的控制脚本。
另外,在确定可被选中的控制脚本之后,还可以进一步向服务端发送用于提示可供用户选取的控制脚本的第二提示信息,以便第一电子设备利用第二提示信息,将可供用户选取的控制脚本设置为可供用户选取的状态。
在该种确定方式下,客户端除了需要至少具有目标脚本序列处理功能外,也可以同时具有目标脚本序列编辑功能,可被选中的控制脚本为用户通过人机交互界面触发运行在第一电子设备上的目标应用、程序或者软件而选中的控制脚本。
该目标应用、程序或者软件为至少具有目标脚本序列编辑功能的应用、程序或者软件。在执行主体为车载客户端的情况下，第一电子设备的实现方式包括但不限于手机以及电脑。在执行主体为移动终端的情况下，所谓第一电子设备包括但不限于手机、电脑以及车载电子设备。
本申请实施例二中,将目标脚本序列处理功能与目标脚本序列编辑功能分别部署,能够降低目标脚本序列处理功能与目标脚本序列编辑功能对用于支持各自运行的电子设备的资源消耗。
本申请实施例二中,在执行主体为服务端时,当前选中的控制脚本的确定方式可以为:利用第二电子设备针对当前选中的控制脚本发送的第三提示信息,确定当前选中的控制脚本。另外,在确定可被选中的控制脚本之后,还可以进一步向第二电子设备发送用于提示可供用户选取的控制脚本的第四提示信息,以便第二电子设备利用第四提示信息,将可供用户选取的控制脚本设置为可供用户选取的状态。
在执行主体为服务端的情况下,可被选中的控制脚本为用户通过第二电子设备的人机交互界面触发第二电子设备上运行的目标应用、程序或者软件而选中的控制脚本。
其中，目标应用、程序或者软件为至少具有目标脚本序列编辑功能的应用、程序或者软件。所谓电子设备包括但不限于手机、电脑以及车载电子设备。
本申请实施例二中,将目标脚本序列处理功能部署在服务端,不仅能够提高可供用户选取的控制脚本的确定速度,还能够为不同的用户提供目标脚本序列处理功能服务。
本申请实施例二中,在执行主体为服务端时,目标脚本序列的处理的详细过程可以包括如下步骤:步骤一,利用第二电子设备针对当前选中的控制脚本发送的第三提示信息,确定当前选中的控制脚本。步骤二,针对控制脚本集中的各个控制脚本,预测车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置。步骤三,检测实时位置是否处在空间范围内,若是,将实时位置处在空间范围内的控制脚本确定为可被选中的控制脚本,若否,将实时位置未处在空间范围内的控制脚本确定为不可被选中的控制脚本。步骤四,向第二电子设备发送用于提示可供用户选取的控制脚本的第四提示信息,以便第二电子设备利用第四提示信息,将可供用户选取的控制脚本设置为可供用户选取的状态,并将不可供用户选取的控制脚本设置为不可供用户选取的状态。
本申请实施例二中,将可供用户选取的控制脚本设置为可供用户选取的状态,能够使用户可视化的感知到哪些控制脚本是下一个可以选取的控制脚本。从而能够提高用户对目标脚本序列编辑的体验,提高用户编辑目标脚本序列的效率。
本申请实施例二中,为了能够提高可供用户选取的控制脚本对用户可视化的程度,提高用户的用户体验,在展示可供用户选取的控制脚本的过程中可以先将可供用户选取的控制脚本设置为高亮展示的模式,再对可供用户选取的控制脚本进行高亮展示。
另外,为了可视化效果更加明显,还可以同时将可供用户选取的控制脚本设置为可供用户选取的模式,以及将不可供用户选取的控制脚本设置为不可供用户选取的模式后,进一步将可供用户选取的控制脚本设置为高亮显示的模式,以及将不可供用户选取的控制脚本设置为置灰显示的模式。
此外,还可以通过其他方式来进一步提高用户对可供用户选取的控制脚本的可视化程度,例如:还可以将不可供用户选取的控制脚本设置为无法展示的模式。
实施例三
车载机械臂作为一种具有模仿人类手臂功能并可完成各种作业的自动控制设备,已经被广泛应用于工业制造、医疗救援以及航空航天等领域。但将车载机械臂安装在车辆座舱内,以用于为车上人员提供智能化的驾驶服务,却是鲜有人涉及的车载机械臂的应用领域。
而如果想要将车载机械臂安装在车辆座舱内,以用于为车上人员提供智能化的驾驶服务,如何实现对车载机械臂的个性化控制,就成为了不得不面临的问题。
为了解决上述问题，本申请实施例三提供了一种脚本序列的确定方法，以用于确定目标脚本序列。该脚本序列的确定方法可以包括如下步骤：首先，针对第一脚本序列中的各个控制脚本，依次检测车载机械臂在按照第一脚本序列执行预设动作过程中的可达情况，控制脚本为用于控制车载机械臂执行预设动作的脚本，可达情况用于表示车载机械臂是否超出预设的可达空间。然后，根据可达情况，利用第一脚本序列，确定用于控制车载机械臂执行预设动作的目标脚本序列。
本申请实施例三中提供的脚本序列的确定方法,能够依次检测车载机械臂在按照第一脚本序列执行预设动作过程中的可达情况,并根据相应的可达情况,利用该第一脚本序列,确定用于控制车载机械臂执行预设动作的目标脚本序列。由于目标脚本序列为用于控制车载机械臂执行预设动作的脚本序列,因此,通过该目标脚本序列能够实现对车载机械臂的个性化控制,以使车载机械臂对第一脚本序列中的各个预设动作均进行动作的执行。
本申请实施例三中提供的脚本序列的确定方法的执行主体一般为服务端,也可以为客户端。
所谓服务端可以为用于提供数据处理、存储、转发等服务的云端服务器或者云端服务器集群，也可以为用于提供数据处理、存储、转发等服务的传统服务器或者传统服务器集群。其中，该传统服务器的一般实现方式为计算设备。
所谓客户端为至少具有脚本序列确定功能的应用程序(Application,APP)、应用或者软件。该客户端可以部署、运行在车载电子设备上,也可以部署、运行在应用电子设备上,还可以部署、运行在浏览器网页(web)上。其中,部署、运行在车载电子设备上的客户端为车载客户端,部署、运行在移动电子设备上的客户端为移动客户端,部署、运行在浏览器网页上的客户端为web客户端。
常见的移动客户端为手机客户端。
需要说明的是,在本申请实施例三中提供的脚本序列的确定方法的执行主体为客户端的情况下,第一脚本序列的确定方式一般为:用户通过人机交互界面触发客户端生成第一脚本序列。此时,客户端需要同时具备脚本序列处理功能以及脚本序列编辑功能。以执行主体为车载客户端为例,用户通过人机交互界面触发客户端生成第一脚本序列的过程如下:
首先,车载客户端响应于用户通过人机交互界面触发的脚本序列编辑请求,在指定页面展示预先设计好的控制脚本。
然后,车载客户端响应于用户通过人机交互界面针对控制脚本触发的选取操作,选中相应的控制脚本,并按照用户对控制脚本的选取顺序,利用选中的控制脚本生成第一脚本序列。
在本申请实施例三中提供的脚本序列的确定方法的执行主体为客户端的情况下，第一脚本序列还可以是由目标应用、程序或者软件响应于用户对电子设备的人机交互界面的触发而生成的。所谓目标应用、程序或者软件为至少具有脚本编辑功能的应用、程序或者软件。所谓电子设备为运行有目标应用、程序或者软件的电子设备，具体实现方式包括但不限于手机、电脑以及车载电子设备。
具体的,在执行主体为车载客户端的情况下,电子设备的实现方式包括但不限于手机以及电脑。在执行主体为移动终端的情况下,所谓电子设备包括但不限于手机、电脑以及车载电子设备。
具体的，第一脚本序列还可以是由其他客户端响应于用户对人机交互界面的触发而生成的，此时，其他客户端还需要同时具备脚本序列编辑功能。也就是说，在目标应用、程序或者软件为同时具备脚本序列编辑功能以及脚本序列处理功能的应用、程序或者软件的情况下，该目标应用、程序或者软件为其他客户端。
以下具体以执行主体为车载客户端,其他客户端为其他手机客户端为例,对第一脚本序列的确定过程进行详细说明:首先,手机客户端响应于用户通过人机交互界面触发的脚本序列编辑请求,在指定页面中展示预先配置好的控制脚本。其次,手机客户端响应于用户通过人机交互界面针对控制脚本触发的选取操作,选中相应的控制脚本,并按照用户对控制脚本的选取顺序,利用选中的控制脚本生成第一脚本序列。再次,手机客户端响应于用户通过人机交互界面触发的第一脚本序列确定操作,向服务端发送第一脚本序列转发请求。服务端在接收到该第一脚本序列转发请求后,会向车载客户端发送目标脚本序列确定请求。最后,车载客户端在接收到目标脚本序列确定请求后,会解析该目标脚本序列确定请求,并获得该目标脚本序列确定请求中携带的第一脚本序列。
本申请实施例三中，在脚本序列的确定方法的执行主体为服务端的情况下，所谓第一脚本序列的获得过程可以如下：首先，客户端响应于用户通过人机交互界面触发的脚本序列编辑请求，在指定页面中展示预先配置好的控制脚本。其次，客户端响应于用户通过人机交互界面针对控制脚本触发的选取操作，选中相应的控制脚本，并按照用户对控制脚本的选取顺序，利用选中的控制脚本生成第一脚本序列。再次，客户端响应于用户通过人机交互界面触发的第一脚本序列确定操作，向服务端发送目标脚本序列确定请求。最后，服务端在接收到目标脚本序列确定请求后，会解析目标脚本序列确定请求，并获得该目标脚本序列确定请求中携带的第一脚本序列。
需要说明的是，由于选取顺序是按照用户对控制脚本选取操作的先后来确定的，且第一脚本序列是基于该选取顺序针对用户选取的控制脚本而生成的脚本序列。因此，该第一脚本序列能够反映用户对车载机械臂按照指定执行顺序执行指定预设动作的个性化需求。
另外,由于第一脚本序列是响应于用户对控制脚本集中选中的控制脚本而生成的,因此,该第一脚本序列能够满足用户对车载机械臂的控制需求。并且由于目标脚本序列是利用第一脚本序列确定的,从而该目标脚本序列也能够满足用户对车载机械臂的控制需求。
此外,由于在生成第一脚本序列后,会根据可达情况在该第一脚本序列的基础上进一步确定目标脚本序列,以控制车载机械臂按照目标脚本序列在预设的可达空间内执行对应的预设动作。因此,在生成第一脚本序列的过程中,无需用户关注所选中的控制脚本是否会导致车载机械臂超出可达空间,从而能够提高用户的使用体验。
需要说明的是，本申请实施例三中的第一脚本序列不仅仅可以通过上述方式获得，还可以通过如下方式获得：首先，由商家根据预设的动作需求，通过特定的编程软件，编写第一脚本序列。然后，在第一脚本序列编写完成后，通过商家端上传至服务端。在执行主体为服务端的情况下，由服务端针对商家端发送的第一脚本序列进行脚本序列的确定。在执行主体为客户端的情况下，再由服务端将商家端发送的第一脚本序列转发至相应的客户端，以使相应的客户端针对第一脚本序列进行脚本序列的确定。
也就是说,本申请实施例三中,对第一脚本序列的获得方式不做具体限定。但是,对于由不同途径获得的第一脚本序列,在获得第一脚本序列后往往需要先进行格式统一。
本申请实施例三中,所谓车载机械臂为安装在车辆座舱内,用于为车上人员提供智能化驾驶服务的机械臂。
需要说明的是,车载机械臂的数目可以为一个,也可以为多个。在车载机械臂的数目为多个时,可以针对多个车载机械臂分别进行脚本序列的确定,也可以针对多个车载机械臂中任意一个进行脚本序列的确定。以下仅以车载机械臂的数目是一个为例,来对本申请实施例三中提供的脚本序列的确定方法进行详细说明。
该车载机械臂可以单独执行相应的预设动作,以用于为车上人员提供智能化驾驶服务。具体的,该车载机械臂可以单独用于控制显示屏幕移动,例如控制车载显示屏前后移动的预设动作或者调整角度的预设动作,或者单独执行左右摆动的预设动作。
另外,该车载机械臂也可以与车辆座舱内的车载信息娱乐系统、仪表盘、抬头显示、流媒体后视镜、氛围灯、智能车门以及智能音箱等进行配合,在预设场景下完成相应的预设动作。例如,配合氛围灯的闪烁来执行左右摇摆动作。
本申请实施例三中,车载客户端通过解析并执行控制脚本来实现对车载机械臂的控制。具体的,车载客户端能够解析控制脚本并能够按照该控制脚本所确定的预设动作来对车载机械臂进行控制。
所谓预设动作为针对车载机械臂预先设置好的执行动作,在执行预设动作过程中车载机械臂均处在可达空间内。该预设动作具体可以为单独的基础动作,例如:向上运动、向下运动或者向左运动等。预设动作具体也可以是由基础动作组成的复杂动作,例如:左右摇摆、上下移动、摇头或者摆手等。
本申请实施例三中，在根据可达情况，利用第一脚本序列，确定目标脚本序列时，可以先在检测到可达情况包括第一可达情况时，获得目标控制脚本，第一可达情况用于表示车载机械臂超出可达空间，目标控制脚本为导致车载机械臂超出可达空间的控制脚本。其次，再在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本，以获得第二脚本序列，复位脚本为用于控制车载机械臂执行复位动作的脚本。再次，针对第二脚本序列中的各个控制脚本，依次检测车载机械臂在按照第二脚本序列执行预设动作过程中的可达情况。最后，根据可达情况，利用第二脚本序列，确定目标脚本序列。
本申请实施例三中，在检测到第一脚本序列中存在导致车载机械臂超出可达空间的控制脚本时，表明车载机械臂无法再按照第一脚本序列继续执行相应的预设动作。此时，即可停止对第一脚本序列中的剩余控制脚本的检测，转而需要将导致车载机械臂超出可达空间的控制脚本确定为目标控制脚本。在确定目标控制脚本后，需要在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本，以获得第二脚本序列。
车载机械臂在按照第二脚本序列执行相应的预设动作的过程中,由于在执行目标控制脚本对应的预设动作之前,会先行执行复位动作。在此情况下,再执行目标控制脚本对应的预设动作,就是在车载机械臂复位的基础上进一步执行该目标控制脚本对应的预设动作,从而能够保证车载机械臂可以对该目标控制脚本对应的预设动作进行动作的执行。
虽然能够保证车载机械臂可以对该目标控制脚本对应的预设动作进行动作的执行，但是对于车载机械臂能否对目标控制脚本之后的控制脚本所对应的预设动作进行动作的执行依然无法确定。因此，在获得第二脚本序列后，仍需要针对第二脚本序列中的各个控制脚本，依次检测车载机械臂在按照第二脚本序列执行预设动作过程中的可达情况，并进一步根据可达情况，利用第二脚本序列，确定目标脚本序列。
以下结合图3对第二脚本序列的生成过程进行详细的说明，图3中的301用于表示车载机械臂，302用于表示由可达空间限定出的空间范围。本申请实施例三中，第二脚本序列的生成过程如下：例如，第一脚本序列中按照先后顺序依次有4个控制脚本，分别记为第一控制脚本、第二控制脚本、第三控制脚本以及第四控制脚本。部署在服务端或者客户端的用于对控制脚本进行编译的编译器在获得第一脚本序列后，会依次检测车载机械臂在按照第一脚本序列执行预设动作过程中的可达情况。
如果编译器对第一控制脚本进行编译、检测后，检测到车载机械臂301在执行第一控制脚本对应的预设动作的过程中均处在可达空间302内，则证明车载机械臂能够按照第一脚本序列执行与第一控制脚本对应的预设动作。此时，可以进一步检测车载机械臂是否能够按照第一脚本序列执行与第二控制脚本对应的预设动作。
具体的，车载机械臂301在执行第二控制脚本对应的预设动作的过程中，需要在执行完第一控制脚本对应的预设动作后所对应的停止位置的基础上，进一步执行第二控制脚本对应的预设动作。此时，编译器会对第二控制脚本进行检测。
如果经检测车载机械臂301在进一步执行第二控制脚本对应的预设动作的过程中,会存在车载机械臂301超出可达空间302的情况。此时,则需要停止对第一脚本序列中的剩余控制脚本的检测。即,停止对第三控制脚本以及第四控制脚本的进一步检测,并将第二控制脚本确定为目标控制脚本。
在确定目标控制脚本后,需要在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本,以获得第二脚本序列。此时,第二脚本序列中的脚本执行顺序为第一控制脚本(第二脚本序列中需要第一个被执行的控制脚本)、复位脚本(第二脚本序列中需要第二个被执行的控制脚本)、第二控制脚本(第二脚本序列中需要第三个被执行的控制脚本)、第三控制脚本(第二脚本序列中需要第四个被执行的控制脚本)以及第四控制脚本(第二脚本序列中需要第五个被执行的控制脚本)。
第二脚本序列中不仅包含有第一脚本序列中的全部控制脚本，并且在按照第二脚本序列执行至第二控制脚本之前，车载机械臂301需要先执行复位脚本，以实现复位。而在车载机械臂301复位的情况下，能够保证第二控制脚本在执行过程中不会存在超出可达空间302的情况。
本申请实施例三中,根据可达情况,利用第二脚本序列,确定目标脚本序列的步骤,包括:在可达情况均为第二可达情况的情况下,将第二脚本序列确定为目标脚本序列,第二可达情况用于表示车载机械臂未超出可达空间。
如果车载机械臂在按照第二脚本序列执行预设动作的过程中,车载机械臂所对应的可达情况均为第二可达情况,则证明车载机械臂在按照第二脚本序列执行预设动作的过程中均不会超出可达空间。此时,将第二脚本序列确定为目标脚本序列,能够确保目标脚本序列可以实现对车载机械臂的个性化控制,以使车载机械臂对第一脚本序列中的各个预设动作均进行动作的执行。
本申请实施例三中，根据可达情况，利用第二脚本序列，确定目标脚本序列的步骤，还包括：首先，在检测到可达情况包括第一可达情况的情况下，获得目标控制脚本。然后，在第二脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本，以获得第三脚本序列，以此类推，直至确定目标脚本序列。
也就是说,只要车载机械臂在按照新生成的脚本序列执行相应预设动作的过程中,还存在具有无法继续执行的预设动作时,就需要不断的确定目标控制脚本,并在当前脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本,以获得新的脚本序列。直至车载机械臂在按照某一脚本序列执行相应预设动作的过程中,车载机械臂均处于可达空间内,此时,才将该脚本序列确定为目标脚本序列。这样,可以确保一定能够生成目标脚本序列,且该目标脚本序列能够实现对车载机械臂的个性化控制,以使车载机械臂对第一脚本序列中的各个预设动作均进行动作的执行。
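上述"检测到第一可达情况即插入复位脚本、直至全程可达"的迭代过程，可以用如下Python草图示意。其中的可达性判断函数reachable_after为示例假设：

```python
# 目标脚本序列确定的示意草图：逐个检查控制脚本的可达情况，
# 一旦发现超出可达空间（第一可达情况），就在该脚本前插入复位脚本，
# 并在复位位姿的基础上继续检查，直至整个序列全程可达。
RESET = "reset"

def make_target_sequence(seq, reachable_after):
    """reachable_after(prev, script) 返回在 prev 之后执行 script 是否全程可达。"""
    seq = list(seq)
    i, prev = 0, RESET               # 初始视为处于复位位姿
    while i < len(seq):
        if reachable_after(prev, seq[i]):
            prev = seq[i]
            i += 1
        else:
            seq.insert(i, RESET)     # 插入复位脚本作为目标控制脚本的前一脚本
            prev = RESET
            i += 1                   # 下一轮在复位位姿基础上检查目标控制脚本
    return seq

def demo_reach(prev, script):
    """示例假设：复位后任何脚本均可达；否则名为"far"的脚本不可达。"""
    return prev == RESET or script != "far"

target = make_target_sequence(["a", "far", "b"], demo_reach)
```

该草图假设复位后执行任一控制脚本均可达，与本文"复位后可保证目标控制脚本可执行"的设定一致。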
本申请实施例三中,根据可达情况,利用第一脚本序列,确定目标脚本序列的步骤还可以包括:在可达情况均为第二可达情况的情况下,将第一脚本序列确定为目标脚本序列,第二可达情况用于表示车载机械臂未超出可达空间。
也就是说，如果车载机械臂在按照第一脚本序列执行预设动作的过程中，车载机械臂所对应的可达情况均为第二可达情况，则证明车载机械臂在按照第一脚本序列执行预设动作的过程中均不会超出可达空间。此时，将第一脚本序列确定为目标脚本序列，能够确保目标脚本序列可以实现对车载机械臂的个性化控制，以使车载机械臂能够按照指定执行顺序执行指定预设动作。
本申请实施例三中，根据可达情况，利用第一脚本序列，确定目标脚本序列的步骤还可以如下：步骤一，依次检测车载机械臂在按照第一脚本序列执行预设动作过程中，是否存在第一可达情况。步骤二，如果存在第一可达情况，则获得目标控制脚本，在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本，并重新执行步骤一。步骤三，如果不存在第一可达情况，则将第一脚本序列确定为目标脚本序列。
具体的,所谓依次检测车载机械臂在按照第一脚本序列执行预设动作过程中,是否存在第一可达情况的具体实现方式为:依次检测车载机械臂在按照第一脚本序列执行预设动作过程中是否存在车载机械臂超出可达空间的情况。
所谓如果存在第一可达情况,则获得目标控制脚本,在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本的具体实现方式包括:如果存在车载机械臂超出可达空间的情况,则在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本。
所谓如果不存在第一可达情况,则将第一脚本序列确定为目标脚本序列的具体实现方式是指:如果车载机械臂在按照第一脚本序列执行预设动作过程不存在超出可达空间的情况,则将第一脚本序列确定为目标脚本序列。
为了能够精准的对可达情况进行实时的检测,本申请实施例三中,所采用的可达情况的检测方式为:首先,确定可达空间限定出的空间范围。然后,针对当前检测的控制脚本,预测车载机械臂在执行对应预设动作的过程中的实时位置。最后,利用空间范围以及实时位置,检测可达情况。
本申请实施例三中,确定可达空间限定出的空间范围的具体实现方式为:首先,以车载机械臂在车辆座舱内的安装位置为坐标原点,构建针对车载机械臂的三维坐标系。请再参照图4,图4中示出的三维坐标系x轴指向车尾、z轴指向车顶。然后,确定车载机械臂在车辆座舱的座舱空间在某一个方位上能够到达的坐标点集合,并利用坐标点集合来确定空间范围。空间范围可以通过车载机械臂的俯仰角范围、偏航角范围以及空间距离来表示。
所谓预测车载机械臂在执行对应预设动作的过程中的实时位置的具体实现方式为:在停止位置的基础上,针对当前检测的控制脚本,基于车载机械臂在执行对应预设动作过程中相对于安装位置发生的空间位置变化,来预测车载机械臂在执行对应预设动作过程中的实时位置。
具体的,对于每一预设动作,均会预先对应配置有车载机械臂在执行该预设动作过程中相对于安装位置发生的空间位置变化,在预测车载机械臂在执行对应预设动作的过程中的实时位置时,可以基于停止位置以及车载机械臂在执行该预设动作过程中相对于安装位置发生的空间位置变化,来预测车载机械臂在执行对应预设动作的过程中的实时位置。
需要说明的是,本申请实施例三中,车载机械臂可以由至少一个关节或者连杆组成,在车载机械臂由两个及其以上个关节或者连杆组成的情况下,每个关节或者连杆都可能在执行预设动作过程中存在超出可达空间的情况。因此,在预测车载机械臂在执行对应预设动作的过程中的实时位置时,需要预测车载机械臂中的每个关节或者连杆在执行对应预设动作的过程中的实时位置,并根据车载机械臂中的每个关节或者连杆在执行对应预设动作的过程中的实时位置,来检测可达情况。
在本申请实施例三中提供的脚本序列的确定方法的执行主体为服务端时，为了使车载客户端能够通过该目标脚本序列实现对车载机械臂的个性化控制，以使车载机械臂对第一脚本序列中的各个预设动作均进行动作的执行，在确定目标脚本序列后，还需要进一步将目标脚本序列发送给用于调用并解析目标脚本序列的车载客户端，以使车载客户端按照目标脚本序列控制车载机械臂执行预设动作。
在本申请实施例三中提供的脚本序列的确定方法的执行主体为车载客户端时，为了能够通过该目标脚本序列实现对车载机械臂的个性化控制，以使车载机械臂对第一脚本序列中的各个预设动作均进行动作的执行，在确定目标脚本序列后，则需要先调用并解析目标脚本序列，并根据解析后的目标脚本序列，来控制车载机械臂按照目标脚本序列执行预设动作。
实施例四
如图5所示,本申请实施例四提供了一种车载机械臂的控制装置。该装置可以包括:第一控制指令序列生成模块501,用于根据第一触发信息,生成对包括车载机械臂在内的车载可控部件的第一控制指令序列;第一触发信息是根据车内不同人员的指令信息,和/或当前车辆所在位置的环境信息确定的;第二控制指令序列生成模块502,用于在车载可控部件执行第一控制指令序列的过程中,接收到第二触发信息的情况下,生成包括车载机械臂在内的车载可控部件的第二控制指令序列;第二触发信息是根据车内不同人员的指令信息,和/或当前车辆所在位置的环境信息确定的;冲突化解策略确定模块503,用于在第一控制指令序列和第二控制指令序列存在冲突的情况下,确定冲突化解策略以对冲突进行化解;存在冲突的情况包括第一控制指令序列和第二控制指令序列的控制对象均包含车载机械臂。
在一种实施方式中,冲突化解策略确定模块503,可以进一步包括:类型确定子模块,用于确定第一触发信息和第二触发信息的类型;冲突化解策略确定执行子模块,用于在存在指定类型,或者指定类型的触发信息的优先级高于另一触发信息的优先级的情况下,忽略另一触发信息。
在一种实施方式中,车载机械臂的控制装置还可以包括:发送模块,具体用于将指定类型的触发信息所对应的控制指令序列直接发送至车载可控部件。
在一种实施方式中,冲突化解策略确定模块503,可以进一步包括:类型确定子模块,用于确定第一触发信息和第二触发信息的类型;属性确定子模块,用于在类型相同的情况下,确定第一控制指令序列、第二控制指令序列的属性,属性包括模板类属性或定制类属性;冲突化解策略确定执行子模块,用于在属性不同,或属性相同且属于定制类的情况下,控制车载机械臂暂停动作,在接收到新的控制指令序列的情况下,控制车载机械臂执行新的控制指令序列。
在一种实施方式中,冲突化解策略确定模块503,还可以进一步包括:优先级比较子模块,用于在属性相同,且属于定制类属性的情况下,比较第一控制指令序列和第二控制指令序列的优先级;冲突化解策略确定执行子模块还用于根据比较结果,控制车载机械臂执行优先级高的控制指令序列。
在一种实施方式中,车载机械臂的控制装置还可以包括:状态检测模块,用于实时检测车载可控部件的状态,状态包括正常状态或非正常状态;可执行性确定模块,用于根据车载可控部件的状态的检测结果,确定第一控制指令序列和/或第二控制指令序列的可执行性。
在一种实施方式中,车载机械臂的控制装置还可以包括:执行控制模块,用于在第一控制指令序列和第二控制指令序列不存在冲突的情况下,控制车载可控部件并行执行第一控制指令序列和第二控制指令序列。
在一种实施方式中,第一控制指令序列生成模块501具体用于,基于对接收到的包含车载机械臂控制指令的目标脚本序列进行脚本解析获取第一控制指令序列。
在一种实施方式中,第一控制指令序列生成模块501包括:可被选中脚本确定子单元,用于在 用户利用控制脚本集编辑脚本序列的过程中,针对用户当前选中的控制脚本,在控制脚本集中确定下一个可被选中的控制脚本,控制脚本为用于控制车载机械臂执行预设动作的脚本,可被选中的控制脚本为在车载机械臂执行完与当前选中的控制脚本对应的预设动作后,可控制车载机械臂在预设的可达空间内进一步执行对应预设动作的控制脚本;可被选取脚本确定子单元,用于将可被选中的控制脚本确定为下一个可供用户选取的控制脚本。
在一种实施方式中,可被选取脚本确定子单元,可以包括:
停止位置预测子单元,用于预测车载机械臂在执行完与当前选中的控制脚本对应的预设动作后所处的停止位置;
实时位置预测子单元,用于针对控制脚本集中的各个控制脚本,预测车载机械臂在停止位置的基础上进一步执行对应预设动作过程中的实时位置;
可供选中脚本第一确定子单元,用于根据实时位置,确定可被选中的控制脚本。
在一种实施方式中,可供选中脚本第一确定子单元,可以包括:
空间范围确定子单元,用于确定可达空间限定出的空间范围;
实时位置检测子单元,用于针对各个控制脚本,检测实时位置是否处在空间范围内;
可供选中脚本第二确定子单元,用于将实时位置处在空间范围内的控制脚本确定为可被选中的控制脚本。
在一种实施方式中,可被选中脚本确定子单元,可以包括:可被选中脚本第一确定子单元,用于响应于用户针对控制脚本触发的选中操作,确定当前选中的控制脚本;
该装置,还包括:状态设置单元,用于将可供用户选取的控制脚本设置为可供用户选取的状态。
在一种实施方式中,可被选中脚本确定子单元,可以包括:可被选中脚本第二确定子单元,用于利用服务端针对可被选中的控制脚本发送的第一提示信息,确定可被选中的控制脚本;
该装置,还包括:提示信息第一发送单元,用于向服务端发送用于提示可供用户选取的控制脚本的第二提示信息,以便第一电子设备利用第二提示信息,将可供用户选取的控制脚本设置为可供用户选取的状态。
在一种实施方式中,可被选中脚本确定子单元,可以包括:可被选中脚本第三确定子单元,用于利用第二电子设备针对当前选中的控制脚本发送的第三提示信息,确定当前选中的控制脚本;
装置,还包括提示信息第二发送单元,用于向第二电子设备发送用于提示可供用户选取的控制脚本的第四提示信息,以便第二电子设备利用第四提示信息,将可供用户选取的控制脚本设置为可供用户选取的状态。
在一种实施方式中,第一控制指令序列生成模块501包括:可达情况检测子单元,用于针对第一脚本序列中的各个控制脚本,依次检测车载机械臂在按照第一脚本序列执行预设动作过程中的可达情况,控制脚本为用于控制车载机械臂执行预设动作的脚本,可达情况用于表示车载机械臂是否超出预设的可达空间;目标脚本序列确定子单元,用于根据可达情况,利用第一脚本序列,确定用于控制车载机械臂执行预设动作的目标脚本序列。
在一种实施方式中,目标脚本序列确定子单元,可以包括:目标控制脚本获得第一子单元,用于在检测到可达情况包括第一可达情况时,获得目标控制脚本,第一可达情况用于表示车载机械臂超出可达空间,目标控制脚本为导致车载机械臂超出可达空间的控制脚本;第二脚本序列获得子单元,用于在第一脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本,以获得第二脚本序列,复位脚本为用于控制车载机械臂执行复位动作的脚本;可达情况检测子单元,用于针对第二脚本序列中的各个控制脚本,依次检测车载机械臂在按照第二脚本序列执行预设动作过程中的可达情况;目标脚本序列确定第一子单元,用于根据可达情况,利用第二脚本序列,确定目标脚本序列。
在一种实施方式中,目标脚本序列确定第一子单元,可以包括:目标脚本序列确定第二子单元,用于在可达情况均为第二可达情况的情况下,将第二脚本序列确定为目标脚本序列,第二可达情况用于表示车载机械臂未超出可达空间。
在一种实施方式中，目标脚本序列确定第一子单元，还可以包括：目标控制脚本获得第二子单元，用于在检测到可达情况包括第一可达情况的情况下，获得目标控制脚本；第三脚本序列获得子单元，用于在第二脚本序列中插入复位脚本作为目标控制脚本的前一控制脚本，以获得第三脚本序列，以此类推，直至确定目标脚本序列。
在一种实施方式中,目标脚本序列确定子单元,可以包括:目标脚本序列确定第三子单元,用于在可达情况均为第二可达情况的情况下,将第一脚本序列确定为目标脚本序列,第二可达情况用于表示车载机械臂未超出可达空间。
在一种实施方式中,可达情况检测子单元,可以包括:空间范围确定子单元,用于确定可达空间限定出的空间范围;实时位置预测子单元,用于针对当前检测的控制脚本,预测车载机械臂在执行对应预设动作的过程中的实时位置;利用空间范围以及实时位置,检测可达情况。
在一种实施方式中,该装置还可以包括:目标脚本序列发送单元,用于将目标脚本序列发送给用于调用并解析目标脚本序列的车载客户端,以使车载客户端按照目标脚本序列控制车载机械臂执行预设动作。
在一种实施方式中,该装置还可以包括:目标脚本序列解析单元,用于调用并解析目标脚本序列;
车载机械臂控制单元,用于根据解析后的目标脚本序列,控制车载机械臂按照目标脚本序列执行预设动作。
本申请实施例各装置中的各单元的功能可以参见上述方法中的对应描述,在此不再赘述。
本申请的技术方案中,所涉及的用户个人信息的获取,存储和应用等,均符合相关法律法规的规定,且不违背公序良俗。
实施例五
本申请实施例五中提供了一种车载显示设备，包括：控制单元，用于执行本申请实施例一中提供的车载机械臂的控制方法，或包括本申请实施例四中提供的车载机械臂的控制装置；由车载机械臂和车载屏幕组成的显示模组，车载机械臂用于驱动车载屏幕完成至少一种目标动作。
实施例六
本申请实施例六中提供了一种车辆,包括:控制单元,用于执行本申请实施例一中提供的车载机械臂的控制方法,或包括本申请实施例四中提供的车载机械臂的控制装置;由车载机械臂和车载屏幕组成的显示模组,车载机械臂用于驱动车载屏幕完成至少一种目标动作。
实施例七
根据本申请实施例,本申请还提供了一种电子设备、一种可读存储介质和一种计算机程序产品。
图6示出了可以用来实施本申请实施例的示例电子设备600的示意性框图。电子设备旨在表示各种形式的数字计算机,诸如,膝上型计算机、台式计算机、工作台、个人数字助理、服务器、刀片式服务器、大型计算机、和其它适合的计算机。电子设备还可以表示各种形式的移动装置,诸如,个人数字处理、蜂窝电话、智能电话、可穿戴设备和其它类似的计算装置。本文所示的部件、它们的连接和关系、以及它们的功能仅仅作为示例,并且不意在限制本文中描述的和/或者要求的本申请的实现。
如图6所示，设备600包括计算单元601，其可以根据存储在只读存储器(ROM)602中的计算机程序或者从存储单元608加载到随机访问存储器(RAM)603中的计算机程序，来执行各种适当的动作和处理。在RAM 603中，还可存储设备600操作所需的各种程序和数据。计算单元601、ROM 602以及RAM 603通过总线604彼此相连。输入/输出(I/O)接口605也连接至总线604。
设备600中的多个部件连接至I/O接口605,包括:输入单元606,例如键盘、鼠标等;输出单元607,例如各种类型的显示器、扬声器等;存储单元608,例如磁盘、光盘等;以及通信单元609,例如网卡、调制解调器、无线通信收发机等。通信单元609允许设备600通过诸如因特网的计算机网络和/或各种电信网络与其他设备交换信息/数据。
计算单元601可以是各种具有处理和计算能力的通用和/或专用处理组件。计算单元601的一些示例包括但不限于中央处理单元(CPU)、图形处理单元(GPU)、各种专用的人工智能(AI)计算芯片、各种运行机器学习模型算法的计算单元、数字信号处理器(DSP)、以及任何适当的处理器、控制器、微控制器等。计算单元601执行上文所描述的各个方法和处理，例如车载机械臂的控制方法。例如，在一些实施例中，车载机械臂的控制方法可被实现为计算机软件程序，其被有形地包含于机器可读介质，例如存储单元608。在一些实施例中，计算机程序的部分或者全部可以经由ROM 602和/或通信单元609而被载入和/或安装到设备600上。当计算机程序加载到RAM 603并由计算单元601执行时，可以执行上文描述的车载机械臂的控制方法的一个或多个步骤。备选地，在其他实施例中，计算单元601可以通过其他任何适当的方式（例如，借助于固件）而被配置为执行车载机械臂的控制方法。
本文中以上描述的系统和技术的各种实施方式可以在数字电子电路系统、集成电路系统、场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、芯片上系统的系统(SOC)、负载可编程逻辑设备(CPLD)、计算机硬件、固件、软件、和/或它们的组合中实现。这些各种实施方式可以包括:实施在一个或者多个计算机程序中,该一个或者多个计算机程序可在包括至少一个可编程处理器的可编程系统上执行和/或解释,该可编程处理器可以是专用或者通用可编程处理器,可以从存储系统、至少一个输入装置、和至少一个输出装置接收数据和指令,并且将数据和指令传输至该存储系统、该至少一个输入装置、和该至少一个输出装置。
用于实施本申请的方法的程序代码可以采用一个或多个编程语言的任何组合来编写。这些程序代码可以提供给通用计算机、专用计算机或其他可编程数据处理装置的处理器或控制器,使得程序代码当由处理器或控制器执行时使流程图和/或框图中所规定的功能/操作被实施。程序代码可以完全在机器上执行、部分地在机器上执行,作为独立软件包部分地在机器上执行且部分地在远程机器上执行或完全在远程机器或服务器上执行。
在本申请的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
为了提供与用户的交互，可以在计算机上实施此处描述的系统和技术，该计算机具有：用于向用户显示信息的显示装置（例如，CRT（阴极射线管）或者LCD（液晶显示器）监视器）；以及键盘和指向装置（例如，鼠标或者轨迹球），用户可以通过该键盘和该指向装置来将输入提供给计算机。其它种类的装置还可以用于提供与用户的交互；例如，提供给用户的反馈可以是任何形式的传感反馈（例如，视觉反馈、听觉反馈、或者触觉反馈）；并且可以用任何形式（包括声输入、语音输入或者触觉输入）来接收来自用户的输入。
可以将此处描述的系统和技术实施在包括后台部件的计算系统(例如,作为数据服务器)、或者包括中间件部件的计算系统(例如,应用服务器)、或者包括前端部件的计算系统(例如,具有图形用户界面或者网络浏览器的用户计算机,用户可以通过该图形用户界面或者该网络浏览器来与此处描述的系统和技术的实施方式交互)、或者包括这种后台部件、中间件部件、或者前端部件的任何组合的计算系统中。可以通过任何形式或者介质的数字数据通信(例如,通信网络)来将系统的部件相互连接。通信网络的示例包括:局域网(LAN)、广域网(WAN)和互联网。
计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。服务器可以是云服务器,也可以为分布式系统的服务器,或者是结合了区块链的服务器。
下面参照图7至图15描述根据本申请实施例的车载机械臂。
如图7所示，示出一种可选实施例的车载机械臂，包括：固定于车载屏幕3的背面的多自由度调整机构、以及安装于多自由度调整机构上的多个伸缩单元；其中，车载机械臂用于驱动车载屏幕3完成下述四种动作中的任一个或多个，四种动作包括：车载屏幕平移动作、车载屏幕翻转动作、车载屏幕旋转动作和车载屏幕前后移动动作。进一步地，以车载屏幕3处于未发生任何动作的状态为初始状态，当车载屏幕3处于初始状态下，在空间中具有一初始轴线，该初始轴线与车载屏幕3于初始状态下所在平面垂直设置；则对于上述的四种动作具有如下的具体解释：车载屏幕平移动作：如图12所示，车载屏幕3的正面进行平移，车载屏幕3的正面在与初始轴线垂直的一平面内作任意角度的平移；车载屏幕翻转动作：如图11所示，车载屏幕3的正面进行翻转，车载屏幕3的正面在翻转完成后与初始轴线之间存在一夹角；车载屏幕旋转动作：如图13所示，车载屏幕3的正面绕初始轴线或一与初始轴线平行的轴线作转动；车载屏幕前后移动动作：如图14所示，车载屏幕3的正面沿前后方向移动，车载屏幕3的正面的移动方向与初始轴线平行设置。换句话说，多个伸缩单元用于带动车载屏幕3进行上下左右的翻转，多自由度调整机构用于带动车载屏幕3进行旋转和平移，其中上下左右是指当上述的车载屏幕3处于面向使用者的竖直状态下，车载屏幕3相对于初始位置进行上部向后侧倾斜、下部向后侧倾斜、左部向后侧倾斜以及右部向后侧倾斜的动作。
进一步,作为一种可选的实施例,本申请涉及的车载中控屏调整机构也可以不设置上述的多自由度调整机构,直接使得车载屏幕3与若干伸缩单元9连接,从而实现控制屏可以根据使用需求只进行上、下、左、右的摆动。
进一步,作为一种可选的实施例,本申请涉及的车载中控屏调整机构也可以不设置上述的若干伸缩单元,直接使得车载屏幕3与多自由度调整机构连接,从而实现控制屏的自身旋转以及平移式的滑动。
在另一个可选的实施例中，每一伸缩单元的运动端均与多自由度调整机构连接，每一伸缩单元的驱动端均与一驱动部连接。
进一步,作为一种可选的实施例,驱动部为汽车的内部的中控台。
进一步,作为一种可选的实施例,中控台内设置有相应的控制系统,控制系统用于控制若干伸缩单元的伸缩动作和多自由度调整机构的运动。
进一步，作为一种可选的实施例，伸缩单元也可为一具有球头结构的可弯曲杆件，该可弯曲杆件通过球头与多自由度调整机构远离车载屏幕3的一侧进行过盈挤压的安装。进一步地，使用者可通过手动施力于车载屏幕3，使得多自由度调整机构作为传递力的部分施力于球头结构上，当摆动至一定角度后，球头结构与多自由度调整机构之间产生足够的摩擦力使得车载屏幕3保持当前位置。
进一步,作为一种可选的实施例,每一伸缩单元的运动端均包括:直线运动单元10和多自由度连接器,直线运动单元10的一端与多自由度连接器连接,多自由度连接器安装于多自由度调整机构上。
进一步,作为一种可选的实施例,多自由度连接器为球头接头结构或万向节接头结构。
进一步,如图8所示,作为一种可选的实施例,球头接头结构包括:球形接头11和球窝滑块12,球形接头11与直线运动单元10固定连接,每一球形接头11均安装于一球窝滑块12内,每一球窝滑块12均安装于多自由度调整机构上。
进一步,作为一种可选的实施例,每一球窝滑块12上均具有一与球形接头11相匹配的球形凹陷。
进一步,作为一种可选的实施例,万向节接头结构包括:第一转动部、第二转动部及连接第一、第二转动部的铰接部,第一转动部的一端与伸缩单元固定连接,第一转动部的另一端与铰接部一端连接,铰接部的另一端与第二转动部的一端可转动地连接,第二转动部的另一端与多自由度调整机构固定连接。
进一步,作为一种可选的实施例,直线运动单元10为电动推杆或手动推杆。进一步地,当直线运动单元10为手动推杆时,使用者可通过手动推动车载屏幕3以使得车载屏幕3做出相应的动作;当车载机械臂的电动推杆处于断电状态,电动推杆应允许使用者通过手动的方式驱使电动推杆进行相应的伸缩以完成车载屏幕3的动作。
进一步，作为一种可选的实施例，球形接头11、直线运动单元10、多自由度调整机构等本车载机械臂的可动部位均与相应的连接部位之间的接触面具有一定的摩擦阻力，摩擦阻力用于使得在车辆行驶的过程中保持当前姿态的稳定。
进一步,作为一种可选的实施例,电动推杆或者车载屏幕3内设置有受力传感部,受力传感部用于获取对应位置处所受外力的信息,受力传感部通过该外力的信息对施力的目标进行判断:当施力的目标为乘客时,即乘客对车载屏幕3进行推动时,受力传感部将外力的信息分析为动作信息,并使得车载机械臂根据动作信息进行相应的动作,以形成对乘客推动车载屏幕过程中的助力,使得乘客能够轻松地驱使车载屏幕3完成相应的动作;当施力的目标为非乘客推动意愿的外力作用时,即车辆遇到颠簸或者乘客对车载屏幕进行触控操作时,车载机械臂不动或驱使相应的驱动部进行反向驱动以使得控制车载屏幕3保持当前的状态。
进一步,作为一种可选的实施例,还包括:车辆碰撞检测系统,车辆碰撞检测系统安装于汽车上,车辆碰撞系统用于实时检测车辆的行驶信息,当车辆即将发生或已经发生碰撞时,车载机械臂立即驱动车载屏幕3快速远离乘客,以避免乘客在碰撞时惯性的作用下与车载屏幕3相碰撞而造成伤害。
进一步,作为一种可选的实施例,直线运动单元10为无动力伸缩杆。
进一步,作为一种可选的实施例,直线运动单元10为液压推杆。
进一步,作为一种可选的实施例,还包括:若干导轨13,每一导轨13均安装于多自由度调整机构上,每一球窝滑块12均可滑动地安装于一导轨13上。
本申请在上述基础上还具有如下实施方式:
本申请的可选的实施例中,车载机械臂包括三个伸缩单元。
本申请的可选的实施例中,导轨13、球形接头11、球窝滑块12和直线运动单元10的数量均为三个。
本申请的可选的实施例中,三导轨13呈两两间隔110度夹角设置。进一步地,即三个导轨13的延长线相交后汇聚一点,并且相邻的两延长线之间间隔110度。
本申请的可选的实施例中,多自由度调整机构包括:滑动机构5和旋转机构2,旋转机构2与滑动机构5连接,旋转机构2和滑动机构5中的一个与车载屏幕3连接,旋转机构2和滑动机构5中的另一个与伸缩单元连接。
如图9所示,本申请的可选的实施例中,旋转机构2包括:支撑部、电机21、蜗杆22、涡轮23和扇形齿轮24,电机21、蜗杆22和涡轮23均安装于支撑部上,电机21与蜗杆22连接,蜗杆22与涡轮23传动连接,涡轮23与扇形齿轮24啮合连接,扇形齿轮24安装于车载屏幕3或滑动机构5上。进一步地,支撑部与若干伸缩单元连接或与滑动机构5连接。支撑部呈壳体式结构,上述的壳体式结构将电机21、蜗杆22、涡轮23和扇形齿轮24容置于支撑部内。扇形齿轮24的一端的外缘径向向外凸出形成有一弧形部,弧形部上设置有一弧形的齿条,齿条的齿尖径向向内设置,齿条与涡轮23传动连接。
本申请的可选的实施例中,还包括:两旋转止挡块25,两旋转止挡块25安装于支撑部上,且两旋转止挡块25分别可操作地与扇形齿轮24的两端相抵设置。
本申请的可选的实施例中,旋转机构2还包括:旋转轴,旋转轴的一端固定安装于支撑部上,旋转轴的另一端通过轴承等可转动地安装于车载屏幕3的背面或滑动机构5上。
本申请的可选的实施例中,滑动机构5包括:第一滑动部、第二滑动部和滑动驱动装置,第一滑动部与第二滑动可滑动的连接,且滑动方向与旋转机构2的旋转轴垂直设置,滑动驱动装置安装于第一滑动部和第二滑动部之间,滑动驱动装置用于驱动第一滑动部和第二滑动部之间的相对滑动。
本申请的可选的实施例中，还包括：视觉传感装置，视觉传感器安装于车载屏幕3的正面，视觉传感器用于检测使用者的眼睛的位置，且视觉传感器与控制系统连接。进一步地，通过视觉传感器以及控制系统，在角度调整机构和多自由度调整机构的帮助下使得车载屏幕3的正面尽可能地朝向驾驶者设置，其中视觉传感器为智能摄像头或人体位置传感器。在另一可选的实施例中，视觉传感装置包括若干视觉传感器，若干视觉传感器设置于车载屏幕3的正面和/或汽车的驾驶室的任意位置。在对视觉传感装置的具体应用中，作为本视觉传感装置的一种使用方法，视觉传感器用于识别汽车的乘客的指定手势动作，并且根据识别到的手势动作的不同，车载机械臂根据视觉传感器获得的手势动作信息控制车载屏幕3进行与之相匹配的动作。例如，手势控制车载屏幕前后移动，或随某个应用场景触发车载屏幕正对使用者。
本申请的可选的实施例中，还包括：机构控制器，机构控制器用于控制车载机械臂，机构控制器可用于对车辆内的乘客的信息进行收集，该信息包括但不限定于相应的乘客的身高信息、体重信息或性别信息等个人信息，同时机构控制器还用于收集相应的乘员的座椅的姿态信息，通过对乘客的个人信息以及座椅的姿态信息进行处理，自动控制车载机械臂或者座椅位姿调整机构使得车载屏幕3的正面朝向乘客。同时，机构控制器还时刻收集车载屏幕3与汽车的方向盘的相对位置信息，通过计算车载屏幕3与方向盘之间的一安全距离对车载屏幕3的动作范围进行限制，即通过机构控制器对车载机械臂的控制保持车载屏幕3与方向盘之间的距离始终大于等于上述的安全距离。
本申请的可选的实施例中,还包括:声音传感装置,声音传感装置包括若干个声音接收器,若干声音接收器布置于车载屏幕3的外缘或者汽车的驾驶室内,声音传感装置与控制系统连接。进一步地,通过声音传感装置用于检测使用者的说话的位置,从而调整车载屏幕3的朝向位置。
如图10和图15所示，本申请的可选的实施例中，相区别于上述的利用球窝滑块12与滑轨的匹配来适应伸缩单元的端部的位移的技术方案，本申请还提供了另一种提供上述的位移的技术方案，具体如下：本申请的每一伸缩单元的驱动端还具有一旋转件4，每一旋转件4均安装于驱动部上，每一伸缩单元的运动端均与一旋转件4可转动地连接。即将原本发生在导轨13上的适应性位移转移至伸缩单元自身的转动以匹配伸缩单元的球形接头11的位移。
本申请的可选的实施例中,对于上述采用旋转件4的实施例,对应的,伸缩单元的多自由度连接器不再与导轨13连接,而是直接与多自由度调整机构连接。
本申请的可选的实施例中,球窝滑块12直接固定于多自由度调整机构上,球形接头11可转动地安装于球窝滑块12上。
本申请的可选的实施例中,旋转件4与直线运动单元10的中部转动连接。
本申请的可选的实施例中,旋转件4呈轴状结构设置。
本申请的可选的实施例中,驱动部为壳体式结构,三旋转件4均固定安装于壳体上。
本申请的可选的实施例中,三旋转件4的轴线相交呈110度夹角间隔布置。
作为一种可选的实施例的车载中控屏，包括车载屏幕3以及上述中任意一项的车载机械臂，即相应的车载屏幕3为中控屏，中控屏设置于汽车的前舱的控制台处，若干个伸缩单元与多自由度调整机构共同参与驱动中控屏在车内狭小空间内完成车载屏幕3平移动作、车载屏幕3翻转动作、车载屏幕3旋转动作和车载屏幕3前后移动动作。除上述已提及的应用场景举例之外，基于以上动作的单独实施或组合实施还可形成各种其他应用场景的呈现，例如通过翻转和/或旋转向使用者（驾驶员或车内乘客）打招呼、某些车机交互场景下的特定翻转动作（展现摇摆效果或摇头效果或歪头效果）、空中升级（OTA）成功时的特定翻转动作、随某一车机交互场景的触发面向使用者（如将车载屏幕作为化妆镜）、随手势操作或其他动作捕捉触发车载屏幕前后移动、随特定内容或动作捕捉触发车载屏幕旋转、随语音调整上述各单一动作或动作组合的运动量等。
本申请实施例还提供一种车载显示设备,可以包括上述的电子设备以及本申请任一实施例的车载机械臂和车载屏幕。
本申请实施例还提供一种车载显示设备,可以包括车载机械臂控制单元以及本申请任一实施例的车载机械臂和车载屏幕,其中,车载机械臂控制单元用于执行本申请任一实施例的控制方法,或,机械臂控制单元可以包括本申请任一实施例的控制装置。车载机械臂控制单元也可以简称为控制单元。
本申请实施例还提供一种车辆,可以包括上述的电子设备以及本申请任一实施例的机械臂和车载屏幕。
本申请实施例还提供一种车辆,可以包括车载机械臂控制单元以及本申请任一实施例的车载机械臂和车载屏幕,其中,车载机械臂控制单元用于执行本申请任一实施例的控制方法,或,车载机械臂控制单元可以包括本申请任一实施例的控制装置。
示例性地,电子设备可以为车身域控制模块(BDCM,Body Domain Control Module)、信息娱乐域控制模块(IDCM,Infotainment Domain Control Module)、行驶域控制模块(VDCM, Vehicle Domain Control Module)、自动驾驶域控制模块(ADCM,Automated-driving Domain Control Module)、机械臂控制单元(RAC,Robotic Arm Controller)中的至少一个。
示例性地，本实施例中的车辆可以是燃油车、电动车、太阳能车等任何动力驱动的车辆。示例性地，本实施例中的车辆可以为自动驾驶车辆。
本实施例的车辆的其他构成,如车架和车轮的具体结构以及连接紧固部件等,可以采用于本领域普通技术人员现在和未来知悉的各种技术方案,这里不再详细描述。
本实施例中，车载屏幕能够在车载机械臂的驱动下，实现至少一种动作，该动作可以是沿X轴或Y轴或Z轴的伸缩动作，也可以是绕X轴、Y轴或Z轴的旋转动作。其中，X轴为车辆长度方向，且X轴正方向指向车尾方向，Y轴为车辆宽度方向，Z轴为车辆高度方向，如图17所示。进一步地，基于这些动作，可以实现车载屏幕更细化的动作，如点头、摇头、摇摆等拟人化动作。
车载屏幕可以为设置在车辆上的任一显示屏,如中控屏(CID,Center Informative Display,也可以叫做中央信息显示器)、副驾屏、平视显示器(Head Up Display,HUD)、后排屏等。优选地,本实施例的车载屏幕为中控屏。
车载机械臂可以采用多自由度车载数字机器人,进而驱动车载屏幕完成多个自由度下的动作。
需要说明的是,本申请实施例对车载机械臂的电气性能、机械结构等内容不作具体限定,只要能够实现由车载机械臂带动车载屏幕运动即可。
在一个示例中,车载屏幕的位置可以用屏幕坐标或车载机械臂坐标来表征,例如以车载屏幕或车载机械臂上的一个或多个关键点的坐标作为车载屏幕的位置。
在基于目标用户的位姿信息确定车载屏幕的目标位置后,在车载机械臂的驱动下,车载屏幕能够从当前位置运动至目标位置,完成自适应调节,从而使车载屏幕能够提供给目标用户较佳视角,进而提升车辆智能性和用户体验。
需要说明的是,本实施例中,车载机械臂控制的方法中的任一个步骤或多个步骤可以实时执行,也可以按照预设的时间间隔执行,还可以是满足一定的触发条件后执行;车载机械臂控制的方法中的任一个步骤或多个步骤可以是单次执行,也可以是多次执行。本实施例对此不作限定。示例性地,触发条件可以包括用户打开了屏幕自适应调节的开关;或检测到目标用户的位姿信息发生了变化;或检测到车载屏幕的位置发生了变化等。
示例性地,车载屏幕的位置可以通过车载屏幕的陀螺仪传感器采集,进而通过比较确定车载屏幕的位置是否发生了变化。通过陀螺仪传感器采集车载屏幕的位置,可以提高车载机械臂运动过程中防夹成功率;保证车载屏幕在运动过程中的平稳,降低由于屏幕的运动或动作造成的晃动;增强了车载屏幕在旋转过程中使车载屏幕的显示画面的朝向始终可以保持的能力。
As shown in FIG. 17, an embodiment of the present application provides a vehicle control system, including an Infotainment Domain Control Module (IDCM), a vehicle screen module, and a Robotic Arm Controller (RAC). The above control method may be executed by the RAC; that is, the RAC executes the vehicle-mounted robotic arm control method.
It should be noted that in this embodiment, "automobile" may also be called "vehicle", "vehicle-mounted robotic arm" may also be called "screen adjustment mechanism", and "2/5" in FIG. 11, FIG. 14, and FIG. 15 denotes the rotation mechanism 2 and/or the sliding mechanism 5.
It should be understood that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above specific embodiments do not constitute a limitation on the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (25)

  1. A method for controlling a vehicle-mounted robotic arm, comprising:
    generating, according to first trigger information, a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the first trigger information being determined according to instruction information of different persons in the vehicle and/or environment information of the current location of the vehicle;
    generating, in a case where second trigger information is received while the vehicle-mounted controllable components are executing the first control instruction sequence, a second control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the second trigger information being determined according to instruction information of different persons in the vehicle and/or environment information of the current location of the vehicle;
    determining, in a case where a conflict exists between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy to resolve the conflict, the case where a conflict exists including that the control objects of both the first control instruction sequence and the second control instruction sequence include the vehicle-mounted robotic arm.
  2. The method according to claim 1, wherein determining, in a case where a conflict exists between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy to resolve the conflict comprises:
    determining the types of the first trigger information and the second trigger information;
    in a case where a specified type exists, or the priority of the trigger information of the specified type is higher than the priority of the other trigger information, ignoring the other trigger information.
  3. The method according to claim 2, further comprising:
    sending the control instruction sequence corresponding to the trigger information of the specified type directly to the vehicle-mounted controllable components.
  4. The method according to claim 1, wherein determining, in a case where a conflict exists between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy to resolve the conflict comprises:
    determining the types of the first trigger information and the second trigger information;
    in a case where the types are the same, determining the attributes of the first control instruction sequence and the second control instruction sequence, the attributes including a template-class attribute or a custom-class attribute;
    in a case where the attributes are different, or the attributes are the same and belong to the custom class, controlling the vehicle-mounted robotic arm to pause its action, and, in a case where a new control instruction sequence is received, controlling the vehicle-mounted robotic arm to execute the new control instruction sequence.
  5. The method according to claim 4, wherein determining, in a case where a conflict exists between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy to resolve the conflict further comprises:
    in a case where the attributes are the same and belong to the custom-class attribute, comparing the priorities of the first control instruction sequence and the second control instruction sequence;
    controlling, according to the comparison result, the vehicle-mounted robotic arm to execute the control instruction sequence with the higher priority.
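As an illustration (not part of the claims), the conflict-resolution logic of claims 1-5 and 7 can be sketched in Python. The type names, the single "specified type", and the exact precedence ordering below are assumptions made for the sketch; the claims leave them open.

```python
from dataclasses import dataclass

@dataclass
class Sequence:
    trigger_type: str   # e.g. "safety" or "interaction" (illustrative names)
    attribute: str      # "template" or "custom", per the claims
    priority: int
    targets: frozenset  # controllable components the sequence acts on

SPECIFIED_TYPE = "safety"  # hypothetical specified type with highest precedence

def resolve(first: Sequence, second: Sequence) -> str:
    """Return a conflict-resolution decision under one assumed reading
    of claims 1-5 and 7."""
    # Claim 1/7: sequences conflict only when both target the robotic arm.
    if "robotic_arm" not in (first.targets & second.targets):
        return "run_in_parallel"
    # Claim 2: when exactly one sequence has the specified trigger type,
    # the other trigger information is ignored.
    if (first.trigger_type == SPECIFIED_TYPE) != (second.trigger_type == SPECIFIED_TYPE):
        return "ignore_other"
    # Claim 4: differing attributes -> pause and wait for a new sequence.
    if first.attribute != second.attribute:
        return "pause_and_wait"
    # Claim 5: same attributes, both custom -> run the higher-priority one.
    if first.attribute == "custom":
        return "run_higher_priority"
    return "pause_and_wait"
```

For example, a "safety" sequence conflicting with an "interaction" sequence resolves to `"ignore_other"`, while two custom sequences of the same type resolve by priority comparison.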
  6. The method according to claim 1, further comprising:
    detecting the states of the vehicle-mounted controllable components in real time, the states including a normal state or an abnormal state;
    determining the executability of the first control instruction sequence and/or the second control instruction sequence according to the detection result of the states of the vehicle-mounted controllable components.
  7. The method according to claim 1, further comprising:
    in a case where no conflict exists between the first control instruction sequence and the second control instruction sequence, controlling the vehicle-mounted controllable components to execute the first control instruction sequence and the second control instruction sequence in parallel.
  8. The method according to claim 1, wherein the first control instruction sequence is obtained at least by performing script parsing on a received target script sequence containing vehicle-mounted robotic arm control instructions.
  9. The method according to claim 8, wherein the target script sequence is determined through the following steps:
    during a process in which a user edits the target script sequence using a control script set, determining, for the control script currently selected by the user, the next selectable control script in the control script set, the control script being a script for controlling the vehicle-mounted robotic arm to execute a preset action, and the selectable control script being a control script that, after the vehicle-mounted robotic arm finishes executing the preset action corresponding to the currently selected control script, can control the vehicle-mounted robotic arm to further execute its corresponding preset action within a preset reachable space;
    determining the selectable control script as the next control script available for the user to select.
  10. The method according to claim 9, wherein determining, for the control script currently selected by the user, the next selectable control script in the control script set comprises:
    predicting the stop position of the vehicle-mounted robotic arm after it finishes executing the preset action corresponding to the currently selected control script;
    for each control script in the control script set, predicting the real-time position of the vehicle-mounted robotic arm while further executing the corresponding preset action starting from the stop position;
    determining the selectable control script according to the real-time position.
  11. The method according to claim 10, wherein determining the selectable control script according to the real-time position comprises:
    determining the spatial range defined by the reachable space;
    for each control script, detecting whether the real-time position is within the spatial range;
    determining the control script whose real-time position is within the spatial range as the selectable control script.
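As an illustration (not part of the claims), the filtering of claims 10-11 can be sketched as follows. The axis-aligned box for the reachable space and the reduction of each control script to a list of predicted way-point offsets are simplifying assumptions; the claims do not prescribe either representation.

```python
# Illustrative reachable space as an axis-aligned box (meters); assumed values.
REACHABLE_SPACE = {"x": (-0.2, 0.2), "y": (-0.3, 0.3), "z": (0.0, 0.15)}

def inside(point, space=REACHABLE_SPACE):
    """Check whether a predicted (x, y, z) position lies within the
    spatial range defined by the reachable space."""
    return all(space[axis][0] <= value <= space[axis][1]
               for axis, value in zip("xyz", point))

def selectable_scripts(stop_position, script_set):
    """From the predicted stop position of the currently selected script,
    keep only the scripts whose entire predicted real-time trajectory
    stays within the reachable space (claims 10-11)."""
    chosen = []
    for name, offsets in script_set.items():
        trajectory = [tuple(s + o for s, o in zip(stop_position, step))
                      for step in offsets]
        if all(inside(p) for p in trajectory):
            chosen.append(name)
    return chosen

scripts = {
    "nod":   [(0.0, 0.0, 0.05), (0.0, 0.0, -0.05)],  # stays inside the box
    "swing": [(0.5, 0.0, 0.0)],                      # would leave the box
}
print(selectable_scripts((0.0, 0.0, 0.05), scripts))  # ['nod']
```

Only `"nod"` survives the filter, so it would be the next script offered to the user in the editor.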
  12. The method according to any one of claims 9-11, wherein the currently selected control script is determined by: in response to a selection operation triggered by the user on the control script, determining the currently selected control script;
    the method further comprising: setting the control script available for the user to select to a user-selectable state.
  13. The method according to any one of claims 9-11, wherein the currently selected control script is determined by: determining the selectable control script using first prompt information sent by a server for the selectable control script;
    the method further comprising: sending, to the server, second prompt information for prompting the control script available for the user to select, so that a first electronic device uses the second prompt information to set the control script available for the user to select to a user-selectable state.
  14. The method according to any one of claims 9-11, wherein the currently selected control script is determined by: determining the currently selected control script using third prompt information sent by a second electronic device for the currently selected control script;
    the method further comprising: sending, to the second electronic device, fourth prompt information for prompting the control script available for the user to select, so that the second electronic device uses the fourth prompt information to set the control script available for the user to select to a user-selectable state.
  15. The method according to claim 8, wherein the target script sequence is determined through the following steps:
    for each control script in a first script sequence, sequentially detecting the reachability status of the vehicle-mounted robotic arm while executing preset actions according to the first script sequence, the control script being a script for controlling the vehicle-mounted robotic arm to execute the preset action, and the reachability status being used to indicate whether the vehicle-mounted robotic arm exceeds a preset reachable space;
    determining, according to the reachability status and using the first script sequence, the target script sequence for controlling the vehicle-mounted robotic arm to execute the preset actions.
  16. The method according to claim 15, wherein determining, according to the reachability status and using the first script sequence, the target script sequence for controlling the vehicle-mounted robotic arm to execute the preset actions comprises:
    when it is detected that the reachability status includes a first reachability status, obtaining a target control script, the first reachability status indicating that the vehicle-mounted robotic arm exceeds the reachable space, and the target control script being the control script that causes the vehicle-mounted robotic arm to exceed the reachable space;
    inserting a reset script into the first script sequence as the control script preceding the target control script to obtain a second script sequence, the reset script being a script for controlling the vehicle-mounted robotic arm to execute a reset action;
    for each control script in the second script sequence, sequentially detecting the reachability status of the vehicle-mounted robotic arm while executing the preset actions according to the second script sequence;
    determining the target script sequence according to the reachability status and using the second script sequence.
  17. The method according to claim 16, wherein determining the target script sequence according to the reachability status and using the second script sequence comprises:
    in a case where the reachability statuses are all a second reachability status, determining the second script sequence as the target script sequence, the second reachability status indicating that the vehicle-mounted robotic arm does not exceed the reachable space.
  18. The method according to claim 17, wherein determining the target script sequence according to the reachability status and using the second script sequence further comprises:
    when it is detected that the reachability status includes the first reachability status, obtaining the target control script;
    inserting the reset script into the second script sequence as the control script preceding the target control script to obtain a third script sequence, and so on, until the target script sequence is determined.
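As an illustration (not part of the claims), the iterative reset-insertion procedure of claims 16-18 can be sketched with a toy one-dimensional arm model. The 1-D state, the numeric "scripts", and the assumption that no single script is unreachable from the reset position are all simplifications for the sketch.

```python
RESET = "reset"  # illustrative name for the reset script

def first_unreachable(sequence, is_reachable):
    """Return the index of the first script that drives the arm out of the
    reachable space, or None. A toy 1-D arm position stands in for the real
    kinematic prediction; the reset script returns the arm to position 0."""
    state = 0
    for i, script in enumerate(sequence):
        state = 0 if script == RESET else state + script
        if not is_reachable(state):
            return i
    return None

def build_target_sequence(sequence, is_reachable):
    """Claims 16-18: repeatedly insert a reset script immediately before the
    offending control script, re-check the whole sequence, and stop once
    every step stays within the reachable space. Assumes each individual
    script is reachable from the reset position, so the loop terminates."""
    seq = list(sequence)
    while (i := first_unreachable(seq, is_reachable)) is not None:
        seq.insert(i, RESET)
    return seq

# Toy example: positions beyond +/-3 count as outside the reachable space.
print(build_target_sequence([2, 2, 2], lambda s: abs(s) <= 3))
```

Here the second and third scripts each trigger a reset insertion, yielding the target sequence `[2, "reset", 2, "reset", 2]`, in which every step keeps the toy arm within bounds.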
  19. The method according to claim 15, wherein determining, according to the reachability status and using the first script sequence, the target script sequence for controlling the vehicle-mounted robotic arm to execute the preset actions comprises:
    in a case where the reachability statuses are all a second reachability status, determining the first script sequence as the target script sequence, the second reachability status indicating that the vehicle-mounted robotic arm does not exceed the reachable space.
  20. The method according to any one of claims 15-19, wherein the reachability status is detected by:
    determining the spatial range defined by the reachable space;
    for the currently detected control script, predicting the real-time position of the vehicle-mounted robotic arm while executing the corresponding preset action;
    detecting the reachability status using the spatial range and the real-time position.
  21. The method according to any one of claims 15-19, further comprising:
    sending the target script sequence to a vehicle-mounted client configured to invoke and parse the target script sequence, so that the vehicle-mounted client controls the vehicle-mounted robotic arm to execute the preset actions according to the target script sequence.
  22. The method according to any one of claims 15-19, further comprising:
    invoking and parsing the target script sequence;
    controlling, according to the parsed target script sequence, the vehicle-mounted robotic arm to execute the preset actions according to the target script sequence.
  23. An apparatus for controlling a vehicle-mounted robotic arm, comprising:
    a first control instruction sequence generation module, configured to generate, according to first trigger information, a first control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the first trigger information being determined according to instruction information of different persons in the vehicle and/or environment information of the current location of the vehicle;
    a second control instruction sequence generation module, configured to generate, in a case where second trigger information is received while the vehicle-mounted controllable components are executing the first control instruction sequence, a second control instruction sequence for vehicle-mounted controllable components including the vehicle-mounted robotic arm, the second trigger information being determined according to instruction information of different persons in the vehicle and/or environment information of the current location of the vehicle;
    a conflict resolution strategy determination module, configured to determine, in a case where a conflict exists between the first control instruction sequence and the second control instruction sequence, a conflict resolution strategy to resolve the conflict, the case where a conflict exists including that the control objects of both the first control instruction sequence and the second control instruction sequence include the vehicle-mounted robotic arm.
  24. An in-vehicle display device, comprising:
    a control unit, configured to execute the control method according to any one of claims 1 to 22, or including the apparatus for controlling a vehicle-mounted robotic arm according to claim 23;
    a display module composed of a vehicle-mounted robotic arm and a vehicle screen, the vehicle-mounted robotic arm being configured to drive the vehicle screen to complete at least one target action.
  25. A vehicle, comprising:
    a control unit, configured to execute the control method according to any one of claims 1 to 22, or including the apparatus for controlling a vehicle-mounted robotic arm according to claim 23;
    a display module composed of a vehicle-mounted robotic arm and a vehicle screen, the vehicle-mounted robotic arm being configured to drive the vehicle screen to complete at least one target action.
PCT/CN2023/104187 2021-11-01 2023-06-29 车载机械臂的控制的方法、装置、车载显示设备及车辆 WO2024002297A1 (zh)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
CN202122647760 2021-11-01
CN202210767008.5A CN116061170A (zh) 2021-11-01 2022-06-30 脚本序列的确定方法、装置、电子设备及车辆
CN202210765976.2A CN116061167A (zh) 2021-11-01 2022-06-30 车载机械臂,及其控制的方法、装置
CN202210766469.0 2022-06-30
CN202210766469.0A CN116061169A (zh) 2021-11-01 2022-06-30 脚本序列的处理方法、装置、电子设备及车辆
CN202210767008.5 2022-06-30
CN202210765976.2 2022-06-30

Publications (1)

Publication Number Publication Date
WO2024002297A1 (zh)

Family

ID=86168807

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/CN2023/104091 WO2024002276A1 (zh) 2021-11-01 2023-06-29 脚本序列的确定方法、装置、电子设备及车辆
PCT/CN2023/104082 WO2024002273A1 (zh) 2021-11-01 2023-06-29 车载机械臂及其控制的方法、系统
PCT/CN2023/104212 WO2024002303A1 (zh) 2021-11-01 2023-06-29 车载屏幕的机械臂控制方法、装置、设备和车辆
PCT/CN2023/104187 WO2024002297A1 (zh) 2021-11-01 2023-06-29 车载机械臂的控制的方法、装置、车载显示设备及车辆

Family Applications Before (3)

Application Number Title Priority Date Filing Date
PCT/CN2023/104091 WO2024002276A1 (zh) 2021-11-01 2023-06-29 脚本序列的确定方法、装置、电子设备及车辆
PCT/CN2023/104082 WO2024002273A1 (zh) 2021-11-01 2023-06-29 车载机械臂及其控制的方法、系统
PCT/CN2023/104212 WO2024002303A1 (zh) 2021-11-01 2023-06-29 车载屏幕的机械臂控制方法、装置、设备和车辆

Country Status (2)

Country Link
CN (15) CN116061820A (zh)
WO (4) WO2024002276A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116061820A (zh) * 2021-11-01 2023-05-05 华人运通(江苏)技术有限公司 车辆的控制方法、装置、系统、机械臂以及车辆
CN219351252U (zh) * 2022-12-09 2023-07-14 华人运通(江苏)技术有限公司 一种屏幕线束支架和车载屏幕

Citations (8)

Publication number Priority date Publication date Assignee Title
US20100312547A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Contextual voice commands
US20150066479A1 (en) * 2012-04-20 2015-03-05 Maluuba Inc. Conversational agent
JP2016053966A (ja) * 2015-10-15 2016-04-14 クラリオン株式会社 情報処理装置、音声操作システム、および、情報処理装置の音声操作方法
CN107499251A (zh) * 2017-04-01 2017-12-22 宝沃汽车(中国)有限公司 用于车载显示屏显示的方法、装置和车辆
CN109658922A (zh) * 2017-10-12 2019-04-19 现代自动车株式会社 车辆的用于处理用户输入的装置和方法
CN111002996A (zh) * 2019-12-10 2020-04-14 广州小鹏汽车科技有限公司 车载语音交互方法、服务器、车辆和存储介质
CN112298059A (zh) * 2020-10-26 2021-02-02 武汉华星光电技术有限公司 车载显示屏调节装置及车辆
CN116061169A (zh) * 2021-11-01 2023-05-05 华人运通(江苏)技术有限公司 脚本序列的处理方法、装置、电子设备及车辆

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
JP3727078B2 (ja) * 1994-12-02 2005-12-14 富士通株式会社 表示装置
CN103218289B (zh) * 2013-03-29 2015-07-08 北京控制工程研究所 一种星载软件测试自动执行方法
US20150088489A1 (en) * 2013-09-20 2015-03-26 Abdelhalim Abbas Systems and methods for providing man-machine communications with etiquette
CN104331547B (zh) * 2014-10-23 2017-05-10 北京控制工程研究所 一种基于可操作性的空间机械臂结构参数优化方法
EP3200719B1 (en) * 2015-11-02 2019-05-22 Brainlab AG Determining a configuration of a medical robotic arm
CN106873765B (zh) * 2016-12-27 2018-09-11 比亚迪股份有限公司 车载终端的屏幕状态的切换方法和装置
CN106803423B (zh) * 2016-12-27 2020-09-04 智车优行科技(北京)有限公司 基于用户情绪状态的人机交互语音控制方法、装置及车辆
CN108327649B (zh) * 2017-01-20 2020-11-20 比亚迪股份有限公司 车载显示终端的支撑装置和车辆
CN107600075A (zh) * 2017-08-23 2018-01-19 深圳市沃特沃德股份有限公司 车载系统的控制方法和装置
CN107598929B (zh) * 2017-10-25 2020-04-21 北京邮电大学 一种单关节故障空间机械臂位姿可达空间求解方法
CN107877517B (zh) * 2017-11-16 2021-03-30 哈尔滨工业大学 基于CyberForce遥操作机械臂的运动映射方法
CN109017602B (zh) * 2018-07-13 2023-08-18 吉林大学 一种基于人体姿态识别的自适应中控台及其控制方法
CN109684223B (zh) * 2018-12-28 2022-03-15 河南思维轨道交通技术研究院有限公司 一种测试脚本自动化链接方法、存储介质
CN109766402A (zh) * 2019-01-16 2019-05-17 广东南方数码科技股份有限公司 空间数据处理方法、装置和计算机设备
DE102019102803B4 (de) * 2019-02-05 2022-02-17 Franka Emika Gmbh Ausrichten zweier Roboterarme zueinander
CN111890373A (zh) * 2020-09-29 2020-11-06 常州唯实智能物联创新中心有限公司 车载机械臂的感知定位方法
WO2022074448A1 (en) * 2020-10-06 2022-04-14 Mark Oleynik Robotic kitchen hub systems and methods for minimanipulation library adjustments and calibrations of multi-functional robotic platforms for commercial and residential environments with artificial intelligence and machine learning
CN113669572A (zh) * 2021-07-01 2021-11-19 华人运通(江苏)技术有限公司 车载屏幕的调节装置、车载显示装置和车辆
CN113752265B (zh) * 2021-10-13 2024-01-05 国网山西省电力公司超高压变电分公司 一种机械臂避障路径规划方法、系统及装置
CN113778416A (zh) * 2021-11-11 2021-12-10 深圳市越疆科技有限公司 基于图形化编程的机械臂搬运脚本生成方法和装置

Patent Citations (10)

Publication number Priority date Publication date Assignee Title
US20100312547A1 (en) * 2009-06-05 2010-12-09 Apple Inc. Contextual voice commands
US20150066479A1 (en) * 2012-04-20 2015-03-05 Maluuba Inc. Conversational agent
JP2016053966A (ja) * 2015-10-15 2016-04-14 クラリオン株式会社 情報処理装置、音声操作システム、および、情報処理装置の音声操作方法
CN107499251A (zh) * 2017-04-01 2017-12-22 宝沃汽车(中国)有限公司 用于车载显示屏显示的方法、装置和车辆
CN109658922A (zh) * 2017-10-12 2019-04-19 现代自动车株式会社 车辆的用于处理用户输入的装置和方法
CN111002996A (zh) * 2019-12-10 2020-04-14 广州小鹏汽车科技有限公司 车载语音交互方法、服务器、车辆和存储介质
CN112298059A (zh) * 2020-10-26 2021-02-02 武汉华星光电技术有限公司 车载显示屏调节装置及车辆
CN116061169A (zh) * 2021-11-01 2023-05-05 华人运通(江苏)技术有限公司 脚本序列的处理方法、装置、电子设备及车辆
CN116061170A (zh) * 2021-11-01 2023-05-05 华人运通(江苏)技术有限公司 脚本序列的确定方法、装置、电子设备及车辆
CN116061167A (zh) * 2021-11-01 2023-05-05 华人运通(江苏)技术有限公司 车载机械臂,及其控制的方法、装置

Also Published As

Publication number Publication date
CN116061819A (zh) 2023-05-05
WO2024002303A1 (zh) 2024-01-04
CN116061829A (zh) 2023-05-05
CN116061827A (zh) 2023-05-05
CN116061828A (zh) 2023-05-05
CN116061170A (zh) 2023-05-05
CN116061820A (zh) 2023-05-05
CN116061823A (zh) 2023-05-05
CN116061821A (zh) 2023-05-05
CN116061169A (zh) 2023-05-05
CN116061167A (zh) 2023-05-05
CN116061168A (zh) 2023-05-05
CN116061826A (zh) 2023-05-05
CN116061825A (zh) 2023-05-05
WO2024002273A1 (zh) 2024-01-04
CN116061824A (zh) 2023-05-05
CN116061822A (zh) 2023-05-05
WO2024002276A1 (zh) 2024-01-04

Similar Documents

Publication Publication Date Title
WO2024002297A1 (zh) 车载机械臂的控制的方法、装置、车载显示设备及车辆
US9738158B2 (en) Motor vehicle control interface with gesture recognition
US10764536B2 (en) System and method for a dynamic human machine interface for video conferencing in a vehicle
KR20210011416A (ko) 차량 탑승자 및 원격 사용자를 위한 공유 환경
US9613459B2 (en) System and method for in-vehicle interaction
KR101199852B1 (ko) 이미지 조작 장치 및 방법
JP7174030B2 (ja) 超音波レーダアレイ、障害物検出方法及びシステム
US20070174416A1 (en) Spatially articulable interface and associated method of controlling an application framework
EP2726961A2 (en) Systems and methods for controlling a cursor on a display using a trackpad input device
US20210072831A1 (en) Systems and methods for gaze to confirm gesture commands in a vehicle
EP4344255A1 (en) Method for controlling sound production apparatuses, and sound production system and vehicle
US20190171296A1 (en) Gesture determination apparatus and program
CN111638786B (zh) 车载后排投影显示系统的显示控制方法、装置、设备及存储介质
TW202414030A (zh) 場景展示方法、裝置、電子設備及儲存介質
WO2024002307A1 (zh) 车辆控制系统、车载显示屏控制方法、机械臂及车辆
US20210276575A1 (en) Vehicle component identification system
WO2024002255A1 (zh) 对象的控制方法、装置、设备、存储介质及车辆
CN113361361B (zh) 与乘员交互的方法及装置、车辆、电子设备和存储介质
JP2022088089A (ja) 制御装置、車両、およびプログラム
KR20210130054A (ko) 제스처 인식 장치 및 그 방법
CN115891838A (zh) 基于流媒体后视镜的显示方法、装置、设备、介质及车辆
KR20210130055A (ko) 제스처 인식 장치 및 그 방법
CN117880467A (zh) 一种视频显示控制方法、舱驾融合芯片和车辆
KR20190052434A (ko) 대화 시스템, 이를 포함하는 차량 및 대화 처리 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23830461

Country of ref document: EP

Kind code of ref document: A1