CN116198525A - Vehicle-mounted system control method, vehicle and storage medium
- Publication number: CN116198525A
- Application number: CN202310149004.5A
- Authority: CN (China)
- Legal status: Granted
Classifications
- B60W50/08 - Interaction between the driver and the control system (B60W50/00: details of control systems for road vehicle drive control not related to the control of a particular sub-unit)
- B60W50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146 - Display means
- B60R16/02 - Electric circuits specially adapted for vehicles; arrangement of electric constitutive elements
- Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application discloses a vehicle-mounted system control method comprising the following steps: acquiring a first target action event and a second target action event that occurs during the occurrence of the first target action event; determining a first occurrence area of the first target action event on a display part in the vehicle cabin and a second occurrence area of the second target action event on the display part, wherein the display part comprises at least two areas; and, when the first occurrence area and the second occurrence area are different, controlling the vehicle-mounted system according to the first target action event and the second target action event respectively. The method determines the occurrence area corresponding to each target action event and processes the target action events independently in their different occurrence areas, thereby avoiding the problem of different target action events being recognized, owing to logical conflicts between occurrence areas, as multiple parts of a single target action event and consequently failing to be recognized and executed.
Description
Technical Field
The application relates to the technical field of intelligent vehicles, in particular to a vehicle-mounted system control method, a vehicle and a computer-readable storage medium.
Background
With the development of the vehicle cabin, a plurality of touch display parts can be disposed in the cabin in the related art, forming a plurality of display areas, so that users in different parts of the cabin can interact with the display part in the corresponding area through gesture actions, for example operating the content shown in a display area or transferring display content between different display areas. However, the vehicle-mounted system's processing logic for gesture interaction across different areas is limited: when action events occur on different display areas at the same time, the system generally judges the action events invalid and does not respond. For example, different users may each slide a single finger down from the top of the screen on the first display area and the second display area simultaneously, each intending to open the drop-down shortcut menu of their own area.
Disclosure of Invention
The application provides a vehicle-mounted system control method, a vehicle and a computer readable storage medium.
The vehicle-mounted system control method according to the embodiment of the application comprises the following steps:
acquiring a first target action event and a second target action event in the occurrence process of the first target action event;
determining a first occurrence area of the first target action event on a display part in a vehicle cabin and a second occurrence area of the second target action event on the display part, wherein the display part comprises at least two areas;
and controlling the vehicle-mounted system according to the first target action event and the second target action event respectively when the first occurrence area and the second occurrence area are different.
In this way, the application determines the occurrence area corresponding to each target action event; when the occurrence areas corresponding to different target action events are different, the target action events are partitioned and processed independently in their respective occurrence areas. This avoids the problem of different target action events being recognized, owing to logical conflicts between occurrence areas, as multiple parts of one and the same target action event and therefore failing to be recognized and executed, and it allows a user to control the corresponding display-part area through gesture actions independently in each occurrence area.
The step of acquiring a first target action event and a second target action event in the occurrence process of the first target action event further comprises the following steps:
determining a plurality of parameter information based on preset action parameters according to preset action recognition conditions;
the determining of the first occurrence area of the first target action event on the display part in the vehicle cabin and the second occurrence area of the second target action event on the display part includes:
and determining the first occurrence area and the second occurrence area according to the parameter information, the first target action event and the second target action event.
In this way, the conditions preset in the vehicle-mounted system for recognizing action events occurring on the display part can be parsed into parameter information, and the occurrence area corresponding to each action event can be identified according to the parsed parameter information, so that different action events can be split and then recognized and executed separately in their different occurrence areas.
The determining a plurality of parameter information based on the preset action parameters according to the preset action recognition conditions comprises the following steps:
determining first parameter information according to coordinate information of a position where the action occurs on the display part so as to determine an occurrence area of the action;
Determining second parameter information according to displacement information of the action on the display part so as to determine the displacement of the action;
and determining third parameter information according to the duration of the action so as to limit the speed of the action.
In this way, a plurality of different parameter information is obtained by parsing the conditions preset in the vehicle-mounted system for recognizing action events occurring on the display part, so that action events of different categories can be distinguished and processed according to the different parameter information.
The determining the first occurrence area and the second occurrence area according to the parameter information, the first target action event and the second target action event includes:
acquiring the position of the first target action event and the coordinate information of the position of the second target action event on the display part;
and determining the first occurrence area and the second occurrence area according to the coordinate information and the first parameter information.
In this way, the occurrence area corresponding to each target action event can be determined based on the parsed parameter information relating to occurrence areas and the acquired target action events, preparing for the split processing of the target action events.
The determining the first occurrence area and the second occurrence area according to the coordinate information and the first parameter information includes:
acquiring the area information of each area of the display part;
and determining the area information of the first occurrence area and the area information of the second occurrence area according to the coordinate information, the area information of each area and the first parameter information.
In this way, the first occurrence area, the second occurrence area and their corresponding area information can be associated based on the parsed parameter information relating to occurrence areas and the area information of each area of the display part, preparing the data for the split processing of the target action events.
When the first occurrence area is different from the second occurrence area, controlling the vehicle-mounted system according to the first target action event and the second target action event, respectively, including:
acquiring event categories of the first target action event and the second target action event;
and controlling the vehicle-mounted system according to the event category, the first target action event and the second target action event.
In this way, when the occurrence areas corresponding to different target action events are different, the different target action events can be executed separately in their different occurrence areas according to the event category, achieving split execution of different target action events to control the vehicle-mounted system.
The controlling of the vehicle-mounted system according to the event category, the first target action event and the second target action event includes:
under the condition that the event category is a first event category, acquiring the occurrence time length and the coordinate information of the occurrence position of the first target action event on the display part, and the occurrence time length and the coordinate information of the occurrence position of the second target action event on the display part;
determining an action category of the first target action event according to the area information of the first occurrence area, the occurrence time of the first target action event and the coordinate information of the occurrence position on the display part, wherein the action category comprises a first action category or a second action category;
determining the action category of the second target action event according to the area information of the second occurrence area, the occurrence time of the second target action event and the coordinate information of the occurrence position on the display part;
and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event and the second target action event.
In this way, when the event category is the first event category, the action category of the target action event can be determined, and the target action event can be executed in different occurrence areas based on the action category, thereby realizing control of the in-vehicle system.
The controlling of the vehicle-mounted system according to the event category, the first target action event and the second target action event includes:
when the event category is a second event category, determining an action category of the first target action event and an action category of the second target action event according to the area information of the first occurrence area, the area information of the second occurrence area, the second parameter information and the third parameter information;
and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event and the second target action event.
In this way, when the event category is the second event category, the action category of the target action event can be determined, and the target action event can be executed in different occurrence areas based on the action category, thereby realizing control of the in-vehicle system.
The controlling of the vehicle-mounted system according to the event category, the first target action event and the second target action event includes:
when the event category is a third event category, deleting, according to the area information of the first occurrence area and the area information of the second occurrence area, the occurrence time and the coordinate information of the occurrence position of the first target action event on the display part, and deleting the occurrence time and the position coordinates of the second target action event.
In this way, when the event category is the third event category, the relevant parameter information can be deleted based on the area information of the occurrence areas, so that an action event of the third event category cancels the execution of the target action events.
The present application also provides a vehicle comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the in-vehicle system control method as described above.
The present application also provides a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by one or more processors, implements the in-vehicle system control method as described above.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of the vehicle-mounted system control method provided by the present application;
FIG. 2 is a schematic flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 3 is a schematic flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 4 is a schematic flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 5 is a schematic flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 6 is a flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 7 is a schematic flow chart of the control method of the vehicle-mounted system provided by the application;
FIG. 8 is a flow chart of the control method of the vehicle-mounted system provided by the application;
fig. 9 is a schematic application scenario diagram of the vehicle-mounted system control method provided in the present application;
fig. 10 is a schematic application scenario diagram of the vehicle-mounted system control method provided in the present application;
fig. 11 is an application scenario schematic diagram of the vehicle-mounted system control method provided by the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
With the development of the vehicle cabin, a plurality of touch display parts can be disposed in the cabin in the related art, forming a plurality of display areas, so that users in different parts of the cabin can interact with the display part in the corresponding area through gesture actions, for example operating the content shown in a display area or transferring display content between different display areas. However, the vehicle-mounted system's processing logic for gesture interaction across different areas is limited: when action events occur on different display areas at the same time, the system generally judges the action events invalid and does not respond. For example, different users may each slide a single finger down from the top of the screen on the first display area and the second display area simultaneously, each intending to open the drop-down shortcut menu of their own area. Failing to respond to such operations cannot meet users' operation requirements, and users may mistake the unresponsive parts for malfunctioning ones, resulting in a poor user experience.
To this end, as shown in fig. 1, the present application provides a vehicle-mounted system control method, including:
01: acquiring a first target action event and a second target action event in the occurrence process of the first target action event;
02: determining a first occurrence area of the first target action event on a display part in the vehicle cabin and a second occurrence area of the second target action event on the display part;
03: when the first occurrence area and the second occurrence area are different, the vehicle-mounted system is controlled according to the first target action event and the second target action event respectively.
The application further provides a vehicle comprising a memory and a processor, the memory storing a computer program. The processor is used for acquiring a first target action event and a second target action event that occurs during the occurrence of the first target action event, determining a first occurrence area of the first target action event on a display part in the vehicle cabin and a second occurrence area of the second target action event on the display part, and, when the first occurrence area and the second occurrence area are different, controlling the vehicle-mounted system according to the first target action event and the second target action event respectively.
Specifically, display parts for carrying and displaying the user interface of the vehicle-mounted system are disposed in the vehicle cabin; each may specifically be a touch screen or another display device capable of recognizing gesture or touch actions, and each display device corresponds to a display area. For example, in some intelligent vehicle cabins, the display device is arranged with two areas, a driver screen and a front-passenger screen. A target action event is an action event with a control function that the vehicle-mounted system can recognize, such as a gesture action or touch action occurring on the display part.
In the related art, no matter how many areas the display parts are divided into, the vehicle-mounted system merges these areas and treats them as one and the same display part, owing to the connection manner between the devices. For example, the driver screen and the front-passenger screen in an intelligent vehicle cabin are still regarded by the vehicle-mounted system as the same screen, merely physically divided into two split screens. In this case, if several action events occur simultaneously in different areas, the vehicle-mounted system recognizes them as a single action event, so misrecognition occurs and, ultimately, the action events fail to be executed.
Based on this, in some examples, taking the case where two target action events occur synchronously as an example, a first target action event is acquired first, and a second target action event occurring synchronously during the occurrence of the first target action event is acquired at the same time. After the target action events are acquired, the occurrence area corresponding to each target action event is determined according to the parameters at the time each event occurs: the first target action event corresponds to the first occurrence area, and the second target action event corresponds to the second occurrence area. For example, with a first screen and a second screen arranged in the cabin of an intelligent vehicle, a user synchronously performs a finger-slide operation on both screens; the first target action event is the finger slide whose occurrence area is the first screen, and the second target action event is the finger slide whose occurrence area is the second screen.
Finally, if the first occurrence area and the second occurrence area are different, i.e., the two target action events occur in different areas, the user generally intends to perform different operations in the two areas. In that case the two target action events are recognized and executed separately according to their occurrence areas, avoiding the control failure caused by the vehicle-mounted system misrecognizing the target action events. For example, suppose the user performs a three-finger slide on the first screen and the second screen simultaneously. Without split execution, the vehicle-mounted system would recognize the combined event as a six-finger slide, i.e., misrecognize the action events; and if no control item is set for a six-finger slide, the user's actions have no effect and control fails. With the method provided by the application, the actions occurring on each screen are split according to the first screen and the second screen, then recognized and executed respectively: the control item corresponding to the three-finger slide is executed on the first screen and the first screen is controlled to respond, the control item corresponding to the three-finger slide is likewise executed on the second screen and the second screen is controlled to respond, and the vehicle-mounted system is thereby controlled separately.
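The split-execution idea can be illustrated with a short sketch. This is not the patent's actual implementation; the class and method names (TouchEvent, Region, SplitDispatcher, handleInRegion) are illustrative assumptions:

```java
// Illustrative sketch only: TouchEvent, Region and SplitDispatcher are
// assumed names, not the patent's actual classes.
import java.util.List;

final class TouchEvent {
    final float x, y;    // coordinates on the merged logical display surface
    final long downTime; // time at which the action started, in milliseconds
    TouchEvent(float x, float y, long downTime) {
        this.x = x; this.y = y; this.downTime = downTime;
    }
}

final class Region {
    final int screenId;      // e.g. 1 = first screen, 2 = second screen
    final float left, right; // horizontal extent of this area
    Region(int screenId, float left, float right) {
        this.screenId = screenId; this.left = left; this.right = right;
    }
    boolean contains(TouchEvent e) { return e.x >= left && e.x < right; }
}

final class SplitDispatcher {
    private final List<Region> regions;
    SplitDispatcher(List<Region> regions) { this.regions = regions; }

    // Core idea: resolve each target action event to its own occurrence area;
    // if the areas differ, process the events independently instead of merging
    // them into one (mis-recognized) gesture.
    void dispatch(TouchEvent first, TouchEvent second) {
        Region r1 = locate(first);
        Region r2 = locate(second);
        if (r1 != null && r2 != null && r1.screenId != r2.screenId) {
            handleInRegion(r1, first);   // e.g. three-finger slide on screen 1
            handleInRegion(r2, second);  // e.g. three-finger slide on screen 2
        } else {
            handleCombined(first, second); // same area: ordinary multi-touch path
        }
    }

    private Region locate(TouchEvent e) {
        for (Region r : regions) if (r.contains(e)) return r;
        return null;
    }

    private void handleInRegion(Region r, TouchEvent e) { /* per-area gesture recognition */ }
    private void handleCombined(TouchEvent a, TouchEvent b) { /* single-area path */ }
}
```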
In this way, the application determines the occurrence area corresponding to each target action event; when the occurrence areas corresponding to different target action events are different, the target action events are partitioned and processed independently in their respective occurrence areas. This avoids the problem of different target action events being recognized, owing to logical conflicts between occurrence areas, as multiple parts of one and the same target action event and therefore failing to be recognized and executed, and it allows a user to control the corresponding display-part area through gesture actions independently in each occurrence area.
As shown in fig. 2, step 01 further includes:
001: determining a plurality of parameter information based on preset action parameters according to preset action recognition conditions;
under the above conditions, step 02 includes:
021: and determining a first occurrence area and a second occurrence area according to the parameter information, the first target action event and the second target action event.
The processor is further configured to determine a plurality of parameter information based on the preset motion parameters according to the preset motion recognition condition, and determine a first occurrence area and a second occurrence area according to the parameter information, the first target motion event and the second target motion event.
Specifically, in order for the vehicle-mounted system to distinguish, at the data level, between different target action events and different occurrence areas, a basis for data comparison needs to be prepared in the system in advance, against which the acquired data of the target action events and occurrence areas can be judged, in preparation for the split processing of different target action events. In some examples, the vehicle-mounted system's recognition of an action event, such as a gesture action or a touch action, is based on its preset action recognition conditions. Generally, recognizing an action involves conditions formed from information such as the occurrence position, the displacement, and the occurrence duration. To meet the need of determining target action events and their corresponding occurrence areas by comparison, these recognition conditions are parsed, and the parsed properties of the action events are recombined with a plurality of preset action parameters, divided according to recognition needs, to determine a plurality of parameter information. Finally, the occurrence area corresponding to a target action event is determined, from the perspective of the vehicle-mounted system, based on the determined parameter information.
In this way, the conditions preset in the vehicle-mounted system for recognizing action events occurring on the display part can be parsed into parameter information, and the occurrence area corresponding to each action event can be identified according to the parsed parameter information, so that different action events can be split and then recognized and executed separately in their different occurrence areas.
As shown in fig. 3, step 001 includes:
0011: determining first parameter information according to coordinate information of a position where the action occurs on the display part so as to determine an occurrence area of the action;
0012: determining second parameter information according to displacement information of the action on the display part so as to determine the displacement of the action;
0013: and determining third parameter information according to the duration of the action so as to limit the speed of the action.
The processor is used for determining first parameter information according to the coordinate information of the position where an action occurs on the display part so as to determine the occurrence area of the action, determining second parameter information according to the displacement information of the action on the display part so as to determine the displacement of the action, and determining third parameter information according to the duration of the action so as to limit the speed of the action.
Specifically, the parameter information can be combined in different ways according to the different preset action parameters and the different action events to be recognized, so as to generate a plurality of parameter information meeting the requirements. An example is provided next:
For example, in some examples, consider an action event based on the vehicle-mounted system's native swipe gesture: sliding from the top of an area downward to open the drop-down shortcut panel. For the vehicle-mounted system to recognize the action as valid, the following three conditions need to be satisfied:
(1) the ordinate y at which the finger presses down satisfies y <= mSwipeStartThreshold;
(2) the ordinate y of the finger's current position satisfies y > (press-down ordinate y + mSwipeDistanceThreshold);
(3) elapsed < SWIPE_TIMEOUT_MS, i.e., the time from the finger pressing down to sliding to the current position must be within SWIPE_TIMEOUT_MS.
Parsing the above conditions: condition (1) can be converted into parameter information verifying the area where the finger presses down; condition (2) can be rewritten as (ordinate y of the finger's current position - press-down ordinate y) > mSwipeDistanceThreshold, i.e., converted into parameter information about the finger's sliding distance; condition (3) requires the finger's sliding time to be within a certain range and, combined with the limit on sliding distance, can be converted into parameter information about the finger's sliding speed. Moreover, the parameter information is not limited to this native top-down swipe gesture; by adjusting the specific parameter ranges it can be adapted to all action events. Three pieces of parameter information for determining the first occurrence area and the second occurrence area are thus obtained.
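As a minimal sketch, the three conditions can be checked as follows; the threshold names mirror those in the text, while the concrete values and the detector class are assumptions for illustration:

```java
// Sketch of the top-down swipe recognition conditions; threshold values are
// assumed, not taken from the patent.
final class SwipeFromTopDetector {
    private static final float SWIPE_START_THRESHOLD_PX = 48f;     // mSwipeStartThreshold (assumed value)
    private static final float SWIPE_DISTANCE_THRESHOLD_PX = 150f; // mSwipeDistanceThreshold (assumed value)
    private static final long SWIPE_TIMEOUT_MS = 500L;             // SWIPE_TIMEOUT_MS (assumed value)

    /**
     * @param downY     ordinate where the finger pressed down          -> condition (1), position
     * @param currentY  current ordinate of the moving finger           -> condition (2), displacement
     * @param elapsedMs milliseconds from press to the current position -> condition (3), speed
     */
    static boolean isTopDownSwipe(float downY, float currentY, long elapsedMs) {
        boolean startsAtTop    = downY <= SWIPE_START_THRESHOLD_PX;                // first parameter information
        boolean movedFarEnough = (currentY - downY) > SWIPE_DISTANCE_THRESHOLD_PX; // second parameter information
        boolean fastEnough     = elapsedMs < SWIPE_TIMEOUT_MS;                     // third parameter information
        return startsAtTop && movedFarEnough && fastEnough;
    }
}
```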
In this way, a plurality of different parameter information is obtained by parsing the conditions preset in the vehicle-mounted system for recognizing action events occurring on the display part, so that action events of different categories can be distinguished and processed according to the different parameter information.
As shown in fig. 4, step 021 includes:
0211: acquiring the position of the first target action event and the coordinate information of the position of the second target action event on the display part;
0212: and determining the first occurrence area and the second occurrence area according to the coordinate information and the first parameter information.
The processor is used for acquiring the coordinate information of the position of the first target action event and the position of the second target action event on the display part, and determining a first occurrence area and a second occurrence area according to the coordinate information and the first parameter information.
Specifically, after the plurality of parameter information is acquired, the acquired parameter information serves as the reference, and the occurrence area corresponding to a target action event can be determined by comparing the event's relevant action parameters with the parameter information. In some examples, different areas on the display part correspond to different coordinate ranges; when the coordinates of the position where an action event occurs fall within the coordinate range of a certain area, that area of the display part is determined to be the event's occurrence area. On this basis, after the coordinate information of the position of the first target action event on the display part is obtained, it can be compared with the first parameter information; if the coordinates fall within the coordinate range of the first occurrence area, the vehicle-mounted system determines that the first target action event corresponds to the first occurrence area.
In this way, the occurrence area corresponding to each target action event can be determined based on the parsed parameter information relating to occurrence areas and the acquired target action events, preparing for the split processing of the target action events.
As shown in fig. 5, step 0212 includes:
02121: acquiring area information of each area of the display part;
02122: and determining the area information of the first occurrence area and the area information of the second occurrence area according to the coordinate information, the area information of each area and the first parameter information.
The processor is used for acquiring the area information of each area of the display part and determining the area information of the first occurrence area and the area information of the second occurrence area according to the coordinate information, the area information of each area and the first parameter information.
Specifically, in order to make the confirmation of occurrence areas data-driven, so that action events can be distinguished and variables set during split execution, the area information of each area of the display part must first be edited and acquired; generally, the area information takes the form of a screen number. For example, in some examples, a first screen and a second screen are set in the cabin of an intelligent vehicle, the screen number (screenId) of the first screen defaulting to 1 and that of the second screen to 2. In this case, the number of the occurrence area corresponding to a target action event can be determined from the coordinate information of the event's occurrence position on the display part and the first parameter information. For example, in some embodiments, if the first target action event is determined to correspond to the first occurrence area, the number of its occurrence area is 1. The screen number screenId is then used as a variable to further split, recognize and execute the action events.
In this way, the first occurrence area, the second occurrence area and their corresponding area information can be associated based on the parsed parameter information relating to occurrence areas and the area information of each area of the display part, preparing the data for the split processing of the target action events.
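A minimal sketch of this coordinate-to-screen-number lookup follows; the coordinate ranges and the class name are assumptions for illustration:

```java
// Sketch only: the coordinate ranges below assume two 1920-px-wide screens
// merged side by side into one logical display surface.
import java.util.LinkedHashMap;
import java.util.Map;

final class RegionResolver {
    // screenId -> [minX, maxX) on the merged logical display surface
    private final Map<Integer, float[]> regionRanges = new LinkedHashMap<>();

    RegionResolver() {
        regionRanges.put(1, new float[]{0f, 1920f});    // first screen, screenId = 1
        regionRanges.put(2, new float[]{1920f, 3840f}); // second screen, screenId = 2
    }

    // Compare the event's press-down abscissa against the per-area coordinate
    // ranges (the first parameter information) to obtain its screenId.
    int resolveScreenId(float downX) {
        for (Map.Entry<Integer, float[]> entry : regionRanges.entrySet()) {
            float[] range = entry.getValue();
            if (downX >= range[0] && downX < range[1]) return entry.getKey();
        }
        return -1; // outside any configured area
    }
}
```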
As shown in fig. 6, step 03 includes:
031: acquiring event categories of a first target action event and a second target action event;
032: and controlling the vehicle-mounted system according to the event category, the first target action event and the second target action event.
The processor is used for acquiring event categories of the first target action event and the second target action event and controlling the vehicle-mounted system according to the event categories, the first target action event and the second target action event.
Specifically, one action event can have multiple event categories, and one action event may comprise a plurality of sub-events, each sub-event also having an event category. Event categories describe the motion characteristics of action events and are generally divided into press-down (touch), general sliding, lift-up and the like; an action event may have only one event category, as a single touch sub-event, or it may comprise a plurality of sub-events of different event categories such as press-down, general sliding and lift-up. When the event categories differ, however, the control principle and effect at the actual execution level differ greatly, so different processing is needed for different event categories. The event categories therefore need to be distinguished in advance, before different target action events are executed separately, to avoid confused control.
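In an Android-style input pipeline these three event categories correspond to the MotionEvent action constants, as the following sketch shows; only the MotionEvent constants are real API, the dispatcher and handler names are assumptions:

```java
// Sketch of dispatching by event category; handler bodies are placeholders.
import android.view.MotionEvent;

final class PerScreenGestureDispatcher {
    void onPointerEvent(MotionEvent ev, int screenId) {
        switch (ev.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_POINTER_DOWN:
                onDown(ev, screenId);       // first event category: press-down
                break;
            case MotionEvent.ACTION_MOVE:
                onMove(ev, screenId);       // second event category: sliding
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                onUpOrCancel(screenId);     // third event category: lift-up / cancel
                break;
        }
    }
    private void onDown(MotionEvent ev, int screenId) { /* record downTime, downX, downY per screen */ }
    private void onMove(MotionEvent ev, int screenId) { /* classify multi-finger vs swipe per screen */ }
    private void onUpOrCancel(int screenId) { /* clear this screen's gesture state */ }
}
```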
Therefore, when the occurrence areas corresponding to different target action events are different, the different target action events are executed separately in their different occurrence areas according to the event category, achieving split execution of different target action events to control the vehicle-mounted system while avoiding confused control effects.
As shown in fig. 7, step 032 includes:
0321: under the condition that the event category is the first event category, acquiring the occurrence time length and the coordinate information of the occurrence position of the first target action event on the display part, and the occurrence time length and the coordinate information of the occurrence position of the second target action event on the display part;
0322: determining the action category of the first target action event according to the area information of the first occurrence area, the occurrence time of the first target action event and the coordinate information of the occurrence position on the display part;
0323: determining the action category of the second target action event according to the area information of the second occurrence area, the occurrence time of the second target action event and the coordinate information of the occurrence position on the display part;
0324: and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event and the second target action event.
Under the condition that the event category is the first event category, the processor is used for acquiring the occurrence time and the coordinate information of the occurrence position of the first target action event on the display part and the occurrence time and the coordinate information of the occurrence position of the second target action event on the display part; determining the action category of the first target action event according to the area information of the first occurrence area and the occurrence time and occurrence-position coordinates of the first target action event; determining the action category of the second target action event according to the area information of the second occurrence area and the occurrence time and occurrence-position coordinates of the second target action event; and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event and the two target action events.
Specifically, the method of splitting and executing different target action events to control the vehicle-mounted system is described below, taking the case where the event category is press-down (touch) as an example:
In some examples, when the vehicle-mounted system acquires a target ACTION event or sub-event of type ACTION_DOWN (or ACTION_POINTER_DOWN, or another action event whose event category is press-down), the number screenId of the screen where the target action event occurred is queried by invoking the pointerIndex function. The number screenId has been determined by the foregoing method; in some examples the screenId corresponding to the first target action event is 1 and the screenId corresponding to the second target action event is 2.
Then, using the screen number screenId as a variable, the xpSystemGesturesListener.captureDown method is created via the method SystemGesturesPointerEventListener.captureDown(screenId), so that the variable screenId is passed on to other methods. Next, the xpSystemGesturesListener.dispatchDownEvent(screenId) method is invoked to acquire data related to the target action event, such as its occurrence time downTime and the coordinates downX and downY of its occurrence position.
Then, based on downTime, downX and downY together with screenId, the action category of the target action event is determined by the corresponding xpSystemGesturesListener method. The action category describes the nature of the action and generally includes two categories: multi-finger gesture actions and swipe gesture actions. In the above example, since the event category of the target action event is press-down and a swipe gesture necessarily involves sliding, it is only necessary to judge whether the action category of the target action event is a multi-finger gesture action. If it is, the multi-finger gesture action is taken as the action category, the multi-finger gesture response logic is executed with screenId as the parameter, and the target action event is executed in the corresponding area. If the action category of the target action event is not a multi-finger gesture action, the target action event is considered not to meet the conditions, and the vehicle-mounted system executes no action.
For instance, in some examples, if the first target action event is determined to be a multi-finger gesture action, the multi-finger gesture response logic is executed with screenId = 1, so the first target action event takes effect only in the first occurrence area.
As another example, in some examples, if the second target action event is determined not to be a multi-finger gesture action, the second target action event is regarded as not meeting the conditions, and the vehicle-mounted system executes no action for it.
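A sketch of this press-down path follows. The method names captureDown and dispatchDownEvent follow the text; the per-screen state container and the pointer-count criterion for a multi-finger gesture are illustrative assumptions:

```java
// Per-screen press-down handling, keyed by screenId so two screens never mix.
import java.util.HashMap;
import java.util.Map;

final class DownState {
    long downTime;       // occurrence time of the target action event
    float downX, downY;  // coordinates of the occurrence position
    int pointerCount;    // fingers currently down in this area
}

final class GestureListenerSketch {
    private final Map<Integer, DownState> stateByScreen = new HashMap<>();

    void captureDown(int screenId, long downTime, float downX, float downY, int pointerCount) {
        DownState s = new DownState();
        s.downTime = downTime;
        s.downX = downX;
        s.downY = downY;
        s.pointerCount = pointerCount;
        stateByScreen.put(screenId, s);
        dispatchDownEvent(screenId);
    }

    private void dispatchDownEvent(int screenId) {
        DownState s = stateByScreen.get(screenId);
        if (s == null) return;
        if (s.pointerCount >= 3) { // assumed criterion, e.g. a three-finger press
            // Multi-finger gesture: run its response logic only in this area.
            executeMultiFingerResponse(screenId);
        }
        // Otherwise: a plain press with no matching control item -> no action.
    }

    private void executeMultiFingerResponse(int screenId) { /* per-area response */ }
}
```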
In this way, when the event category is the first event category, the action category of the target action event can be determined, and the target action event can be executed in different occurrence areas based on the action category, thereby realizing control of the in-vehicle system.
As shown in fig. 8, step 032 includes:
0325: when the event category is the second event category, determining the action category of the first target action event and the action category of the second target action event according to the area information of the first occurrence area, the area information of the second occurrence area, the second parameter information and the third parameter information;
0326: and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event and the second target action event.
The processor is used for determining the action category of the first target action event and the action category of the second target action event according to the area information of the first occurrence area, the area information of the second occurrence area, the second parameter information and the third parameter information, and for controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event and the two target action events.
Specifically, the method of splitting and executing different target action events to control the vehicle-mounted system is described below, taking the case where the event category is general sliding as an example:
In some examples, when the vehicle-mounted system acquires a target ACTION event or sub-event of type ACTION_MOVE, the number screenId of the screen where the target action event occurred is queried by invoking the pointerIndex function; the number screenId has been determined by the foregoing method, and in some examples the screenId corresponding to the first target action event is 1 and the screenId corresponding to the second target action event is 2. Then, using the screen number screenId as a variable and taking the duration of the target action event or sub-event and its sliding speed during occurrence as parameters, the action category of the target action event or sub-event is judged by the corresponding xpSystemGesturesListener method in combination with the second parameter information and the third parameter information.
More specifically, if a multi-finger gesture action is identified, the multi-finger gesture action is taken as the action category and the multi-finger gesture response logic is executed by the corresponding xpSystemGesturesListener method in the corresponding area. If a swipe gesture action is identified, the swipe gesture action is taken as the action category and the swipe gesture response logic is executed by the corresponding xpSystemGesturesListener method in the corresponding area.
For example, in some examples, the first target action event is divided into two sub-events, the event category of the first sub-event being press-down and that of the second being general sliding. For the second sub-event, with screen number screenId = 1 as the variable and with the duration of the second sub-event and its sliding speed during occurrence as parameters, the action category of the second sub-event is judged in combination with the second parameter information and the third parameter information, and the corresponding response logic is then executed on the first screen.
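A sketch of this sliding-category classification follows: the second parameter information bounds the displacement and the third bounds the time, which together limit the speed. The thresholds, the pointer-count criterion and the class name are assumptions:

```java
// Classifying a sliding (ACTION_MOVE) event within one occurrence area.
final class MoveClassifier {
    private static final float SWIPE_DISTANCE_THRESHOLD_PX = 150f; // second parameter information (assumed)
    private static final long SWIPE_TIMEOUT_MS = 500L;             // third parameter information (assumed)

    enum ActionCategory { MULTI_FINGER_GESTURE, SWIPE_GESTURE, NONE }

    static ActionCategory classify(int pointerCount, float displacementPx, long elapsedMs) {
        if (pointerCount >= 3) {
            return ActionCategory.MULTI_FINGER_GESTURE;  // e.g. a three-finger slide
        }
        boolean farEnough  = displacementPx > SWIPE_DISTANCE_THRESHOLD_PX;
        boolean fastEnough = elapsedMs < SWIPE_TIMEOUT_MS; // distance within a time bound = speed limit
        return (farEnough && fastEnough) ? ActionCategory.SWIPE_GESTURE : ActionCategory.NONE;
    }
}
```

For the two-sub-event example above, classify(1, 180f, 300L) on screenId 1 would yield SWIPE_GESTURE, so the swipe response logic would run only on the first screen.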
In this way, when the event category is the second event category, the action category of the target action event can be determined, and the target action event can be executed in different occurrence areas based on the action category, thereby realizing control of the in-vehicle system.
Step 032 includes:
and, when the event category is a third event category, deleting, according to the area information of the first occurrence area and the area information of the second occurrence area, the occurrence time and the coordinate information of the occurrence position of the first target action event on the display part, and deleting the occurrence time and the position coordinates of the second target action event.
The processor is used for, when the event category is the third event category, deleting, according to the area information of the first occurrence area and the area information of the second occurrence area, the occurrence time and the coordinate information of the occurrence position of the first target action event on the display part, and deleting the occurrence time and the position coordinates of the second target action event.
Specifically, the method of splitting and executing different target action events to control the vehicle-mounted system is described below, taking the case where the event category is lift-up as an example:
In the related art, an action whose event category is lift-up generally represents a cancel operation; thus, when the event category of a target action event or sub-event is lift-up, the user's aim is to cancel the current target action event.
Thus, in some examples, when the vehicle-mounted system acquires a target ACTION event or sub-event of type ACTION_UP or ACTION_CANCEL, the number screenId of the screen where the target action event occurred is queried by invoking the pointerIndex function; the number screenId has been determined by the foregoing method, and in some examples the screenId corresponding to the first target action event is 1 and the screenId corresponding to the second target action event is 2. Then, using the screen number screenId as a variable, the stored data related to the target action event or sub-event, including its occurrence time and the coordinate information of its occurrence position, is completely cleared by the xpSystemGesturesListener.captureUpOrCancel(screenId) method. If the lift-up event is a sub-event, the vehicle-mounted system then no longer executes the other sub-events that occurred before it. To ensure that the display part and the vehicle-mounted system operate normally after the data is deleted, the functions or variable values corresponding to the deleted data are restored to their states before the target action event or sub-event occurred.
For example, in some examples, the first target action event is divided into three sub-events: the event category of the first sub-event is press-down, that of the second is general sliding, and that of the third is lift-up. In this case, after the third sub-event occurs, the stored data related to the first target action event is completely cleared by the xpSystemGesturesListener.captureUpOrCancel(screenId) method with screen number screenId = 1 as the variable; the first and second sub-events are no longer executed, and the vehicle-mounted system is restored to its state before the first target action event occurred.
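A sketch of this lift-up/cancel path follows: all stored data for that screen's gesture is cleared so that earlier sub-events are never executed. The state class and the restore step are illustrative assumptions:

```java
// Clearing per-screen gesture state on ACTION_UP / ACTION_CANCEL.
import java.util.HashMap;
import java.util.Map;

final class GestureStateStore {
    static final class PerScreenState {
        long downTime;       // occurrence time of the target action event
        float downX, downY;  // coordinates of the occurrence position
    }

    private final Map<Integer, PerScreenState> stateByScreen = new HashMap<>();

    void captureUpOrCancel(int screenId) {
        // Drop the occurrence time and position coordinates recorded for this
        // screen, so pending sub-events of the same gesture are abandoned.
        stateByScreen.remove(screenId);
        restoreDefaults(screenId);
    }

    // Restore related functions/variables to their pre-event values so the
    // display part and the vehicle-mounted system keep working normally.
    private void restoreDefaults(int screenId) { /* reset per-screen variables */ }
}
```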
In this way, when the event category is the third event category, the relevant parameter information can be deleted based on the area information of the occurrence areas, so that an action event of the third event category cancels the execution of the target action events.
Next, a plurality of practical application scenarios are taken as examples to describe the practical application effects of the present application.
In some examples, as shown in fig. 9, a first screen with screen number 1 and a second screen with screen number 2 are provided in the vehicle cabin. Fig. 9 (a) shows the scenario when the target action events occur: there is a window A on the first screen and a window B on the second screen. The user presses on the first screen, as shown by the black dot in fig. 9 (a), while sliding left on the second screen, as shown by the left arrow in fig. 9 (a), aiming to confirm and close window A while transferring window B to the first screen. In this case, the event category of the target action event on the first screen is press-down, and that of the target action event on the second screen is general sliding. After the two target action events are split and executed according to the method described above, the scenario shown in fig. 9 (b) is obtained. Without the method provided in the present application, i.e., retaining the vehicle-mounted system's original recognition mode, the system would recognize the combined action as one finger pressing while another finger slides, whose usual effect is to enlarge or shrink the pressed interface rather than the split effect the users expect.
In some examples, as shown in the application scenario of fig. 10, a first screen with screen number 1 and a second screen with screen number 2 are provided in the vehicle cabin. Fig. 10 (a) shows the scenario when the target action events occur: users slide a single finger down from the top edge of the screen on the first screen and the second screen simultaneously, as shown by the black arrows in fig. 10 (a), aiming to open the drop-down shortcut menus on the first screen and the second screen at the same time. In this case, the event category of the target action events on both screens is general sliding, and after the two target action events are split and executed according to the method described above, the scenario shown in fig. 10 (b) is obtained. Without the method provided by the application, i.e., retaining the vehicle-mounted system's original recognition mode, the system would recognize the action in fig. 10 (a) as two fingers sliding down from the top of the screen simultaneously; since this differs from a single finger sliding down from the top of the screen, the users could not achieve the aim of opening the shortcut menu on both screens.
In some examples, as shown in fig. 11, a first screen with screen number 1 and a second screen with screen number 2 are provided in the vehicle cabin. Fig. 11 (a) shows the scenario when the target action events occur: there is a window C on the first screen and a window D on the second screen, and users slide three fingers down on the first screen and the second screen simultaneously, as shown by the three side-by-side black arrows in fig. 11 (a), aiming to close window C and window D at the same time. In this case, the event category of the target action events on both screens is general sliding; after the two target action events are split and executed according to the method described above, window C and window D are closed and the scenario shown in fig. 11 (b) is obtained. Without the method provided by the application, i.e., retaining the vehicle-mounted system's original recognition mode, the system would recognize the action in fig. 11 (a) as six fingers sliding down simultaneously; since this differs from three fingers sliding down simultaneously, the users could not close window C and window D on the two screens at the same time.
The present application also provides a computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by one or more processors, implements the in-vehicle system control method as described above.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict one another.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also covers implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art of the embodiments of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the present application, and that those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (11)
1. A vehicle-mounted system control method, characterized in that the method comprises:
acquiring a first target action event and a second target action event during the occurrence of the first target action event;
determining a first occurrence area of the first target action event on a display part in a vehicle cabin and a second occurrence area of the second target action event on the display part, wherein the display part comprises at least two areas;
and controlling the vehicle-mounted system according to the first target action event and the second target action event respectively when the first occurrence area and the second occurrence area are different.
2. The method of claim 1, wherein the acquiring a first target action event and a second target action event during the occurrence of the first target action event further comprises:
determining a plurality of parameter information based on preset action parameters according to preset action recognition conditions;
the determining a first occurrence area of the first target action event on the display part in the vehicle cabin and a second occurrence area of the second target action event on the display part comprises:
and determining the first occurrence area and the second occurrence area according to the parameter information, the first target action event and the second target action event.
3. The method according to claim 2, wherein the determining a plurality of parameter information based on preset action parameters according to preset action recognition conditions comprises:
determining first parameter information according to coordinate information of a position where the action occurs on the display part so as to determine an occurrence area of the action;
determining second parameter information according to displacement information of the action on the display part so as to determine the displacement of the action;
and determining third parameter information according to the duration of the action so as to limit the speed of the action.
4. The method of claim 3, wherein the determining the first occurrence area and the second occurrence area according to the parameter information, the first target action event, and the second target action event comprises:
acquiring coordinate information of the occurrence position of the first target action event and of the occurrence position of the second target action event on the display part;
and determining the first occurrence area and the second occurrence area according to the coordinate information and the first parameter information.
5. The method of claim 4, wherein determining the first occurrence area and the second occurrence area based on the coordinate information and the first parameter information comprises:
acquiring the area information of each area of the display part;
and determining the area information of the first occurrence area and the area information of the second occurrence area according to the coordinate information, the area information of each area, and the first parameter information.
6. The method of claim 5, wherein the controlling the vehicle-mounted system according to the first target action event and the second target action event, respectively, when the first occurrence area and the second occurrence area are different comprises:
acquiring event categories of the first target action event and the second target action event;
and controlling the vehicle-mounted system according to the event category, the first target action event, and the second target action event.
7. The method of claim 6, wherein the controlling the vehicle-mounted system according to the event category, the first target action event, and the second target action event comprises:
when the event category is a first event category, acquiring the occurrence duration and the coordinate information of the occurrence position of the first target action event on the display part, and the occurrence duration and the coordinate information of the occurrence position of the second target action event on the display part;
determining an action category of the first target action event according to the area information of the first occurrence area and the occurrence duration and coordinate information of the occurrence position of the first target action event on the display part, wherein the action category comprises a first action category or a second action category;
determining an action category of the second target action event according to the area information of the second occurrence area and the occurrence duration and coordinate information of the occurrence position of the second target action event on the display part;
and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event, and the second target action event.
8. The method of claim 7, wherein the controlling the vehicle-mounted system according to the event category, the first target action event, and the second target action event comprises:
when the event category is a second event category, determining an action category of the first target action event and an action category of the second target action event according to the area information of the first occurrence area, the area information of the second occurrence area, the second parameter information and the third parameter information;
and controlling the vehicle-mounted system according to the action category of the first target action event, the action category of the second target action event, the first target action event, and the second target action event.
9. The method of claim 8, wherein the controlling the vehicle-mounted system according to the event category, the first target action event, and the second target action event comprises:
when the event category is a third event category, deleting, according to the area information of the first occurrence area and the area information of the second occurrence area, the occurrence duration and the coordinate information of the occurrence position of the first target action event on the display part, and the occurrence duration and the position coordinates of the action of the second target action event.
10. A vehicle comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the method of any of claims 1-9.
11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by one or more processors, implements the method according to any of claims 1-9.
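As an informal illustration of the parameter information named in claim 3 (and not part of the claimed method itself), the sketch below derives the three parameter types for one action: an occurrence area from coordinate information, a displacement, and a speed. It reuses the hypothetical Contact, AREAS, and occurrence_area helpers from the earlier sketch; all names and numbers are assumptions.

```python
import math

def action_parameters(start: Contact, end: Contact,
                      t_start: float, t_end: float):
    """Derive claim 3's three parameter types for one action
    (start/end positions on the display part, times in seconds)."""
    area = occurrence_area(start)                                # first: where the action occurs
    displacement = math.hypot(end.x - start.x, end.y - start.y)  # second: movement on the display part
    duration = t_end - t_start
    speed = displacement / duration if duration > 0 else 0.0     # third: bounds the action's speed
    return area, displacement, speed

# The leftward slide of fig. 9: starts on screen 2, moves 500 px in 0.25 s.
print(action_parameters(Contact(2000, 300), Contact(1500, 300), 0.0, 0.25))
# -> (2, 500.0, 2000.0)
```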
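Similarly, the per-category handling of claims 7 to 9 can be read as a dispatch on the event category. The sketch below is an assumption-laden reading, not the claimed implementation: the category labels, the thresholds, and the record store are all invented for illustration.

```python
LONG_PRESS_S = 0.5   # assumed duration threshold separating the two action categories
MIN_SLIDE_PX = 30.0  # assumed displacement threshold for a general slide

# hypothetical per-area store of occurrence durations and position coordinates
records: dict[int, dict] = {}

def handle_event(category: str, area: int, duration: float,
                 displacement: float, speed: float):
    if category == "first":
        # claim 7: classify a press-type event by its occurrence duration
        return "long_press" if duration >= LONG_PRESS_S else "tap"
    if category == "second":
        # claim 8: classify a slide-type event by displacement (second
        # parameter information) and speed (third parameter information)
        return "general_slide" if displacement >= MIN_SLIDE_PX and speed > 0 else "ignored"
    # claim 9 (third event category): delete the recorded occurrence
    # duration and position coordinates for the given occurrence area
    records.pop(area, None)
    return None
```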
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310149004.5A | 2023-02-21 | 2023-02-21 | Vehicle-mounted system control method, vehicle and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310149004.5A | 2023-02-21 | 2023-02-21 | Vehicle-mounted system control method, vehicle and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116198525A | 2023-06-02 |
CN116198525B | 2024-08-09 |
Family
ID=86509032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310149004.5A (Active) | Vehicle-mounted system control method, vehicle and storage medium | 2023-02-21 | 2023-02-21 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116198525B |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105612476A * | 2013-10-08 | 2016-05-25 | TK Holdings Inc. | Self-calibrating tactile haptic multi-touch, multifunction switch panel |
CN106573627A * | 2014-08-20 | 2017-04-19 | Harman International Industries, Inc. | Multitouch chording language |
CN110262690A * | 2019-06-18 | 2019-09-20 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Double-screen display method and device, mobile terminal, computer readable storage medium |
CN110633044A * | 2019-08-27 | 2019-12-31 | Lenovo (Beijing) Co., Ltd. | Control method, control device, electronic equipment and storage medium |
CN113448451A * | 2020-03-24 | 2021-09-28 | Gaochuang (Suzhou) Electronics Co., Ltd. | Touch display device, touch display method and storage medium |
CN115033163A * | 2022-06-06 | 2022-09-09 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Control method of in-vehicle system, vehicle, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116198525B (en) | 2024-08-09 |
Similar Documents
Publication | Title |
---|---|
CN109062479B | Split screen application switching method and device, storage medium and electronic equipment |
US20170003868A1 | Method and terminal for activating application based on handwriting input |
CN110928409B | Vehicle-mounted scene mode control method and device, vehicle and storage medium |
CN110147256B | Multi-screen interaction method and device |
WO2020043064A1 | Page switching method, apparatus, storage medium, and computer device |
CN109933388B | Vehicle-mounted terminal equipment and display processing method of application components thereof |
CN107704184B | Method for operating a device and operating device |
CN111201501A | Method for providing haptic feedback to an operator of a touch sensitive display device |
EP3726360B1 | Device and method for controlling vehicle component |
CN114564102A | Automobile cabin interaction method and device and vehicle |
KR102377998B1 | Means of transportation, user interface and method for defining a tile on a display device |
US9213435B2 | Method and system for selecting items using touchscreen |
US10078443B2 | Control system for virtual mouse and control method thereof |
US10082902B1 | Display changes via discrete multi-touch gestures |
CN111831204A | Device control method, device, storage medium and electronic device |
CN116198525B | Vehicle-mounted system control method, vehicle and storage medium |
CN114415886A | Application icon management method and electronic equipment |
CN106095303B | Application program operation method and device |
CN111433735A | Method, apparatus and computer readable medium for implementing a generic hardware-software interface |
US10908813B2 | Method, computer program product and device for determining input regions on a graphical user interface |
CN112383826A | Control method and device of vehicle-mounted entertainment terminal, storage medium, terminal and automobile |
CN115917488A | Display interface processing method and device and storage medium |
CN112776720A | Display method of vehicle-mounted display screen and vehicle-mounted system |
CN107656668B | Sideslip menu loading method and device |
CN107562260B | A kind of method and device of touch control |
Legal Events
Code | Title | Date | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
TA01 | Transfer of patent application right | 2023-10-16 | Applicant after: Zhaoqing Xiaopeng New Energy Investment Co., Ltd., No. 48, Room 1507, 15th Floor, Fumin Building, 18 Beijiang Avenue, Zhaoqing High-tech Zone, Zhaoqing City, Guangdong Province, 526238. Applicant before: Guangzhou Xiaopeng Motors Technology Co., Ltd., No. 8 Songgang Street, Cencun, Tianhe District, Guangzhou City, Guangdong Province, 510000. |
GR01 | Patent grant | | |