CN117908723A - Vehicle-computer interaction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117908723A
Authority
CN
China
Prior art keywords
virtual key
vehicle
target
user
key image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410013232.4A
Other languages
Chinese (zh)
Inventor
周祥
周霞
邓小成
牟胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202410013232.4A
Publication of CN117908723A
Legal status: Pending


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a vehicle-machine interaction method, device, equipment, and storage medium in the technical field of automobiles. The method comprises: in response to a touch operation by a user in a preset area on a vehicle center console, projecting a plurality of virtual key images onto a first area of the center console, where each of the plurality of virtual key images is used to control one function of the vehicle; and in response to a touch operation by the user on a target virtual key image among the plurality of virtual key images, controlling the vehicle to execute the target function corresponding to the target virtual key image. This solves the technical problem of low interaction efficiency with the vehicle machine while preserving the integrity of the vehicle's interior styling.

Description

Vehicle-computer interaction method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automobiles, and in particular to a vehicle-machine interaction method, device, equipment, and storage medium.
Background
As automobiles continue to develop, their level of intelligence keeps rising, and users' expectations for interior styling rise with it. Because physical buttons inside the automobile break up the overall interior styling, button functions are instead integrated into the central control screen, which is used to switch vehicle functions on or off and to adjust their settings, giving the interior a simpler look and greater stylistic unity.
However, because so many functions are packed into the central control screen, a user who wants to operate even a common function must search through layered menus in the screen's software. The operation takes more steps and more time, the driver cannot keep watching the vehicle's surroundings while driving, and in serious cases this can cause a traffic accident. Interaction with the vehicle machine is therefore inefficient when the integrity of the vehicle styling is preserved.
Disclosure of Invention
The application aims to provide a vehicle-machine interaction method, device, equipment, and storage medium that solve the technical problem of low interaction efficiency with the vehicle machine while preserving the integrity of the vehicle styling. The technical scheme of the application is as follows:
According to a first aspect of the present application, there is provided a vehicle-machine interaction method, comprising: in response to a touch operation by a user in a preset area on a vehicle center console, projecting a plurality of virtual key images onto a first area of the center console, where each of the plurality of virtual key images is used to control one function of the vehicle; and in response to a touch operation by the user on a target virtual key image among the plurality of virtual key images, controlling the vehicle to execute the target function corresponding to the target virtual key image.
By this technical means, the application responds to the user's touch in the preset area of the vehicle center console by projecting, onto the first area of the console, a plurality of virtual key images that control vehicle functions, and then controls the vehicle to execute the target function corresponding to the target virtual key image the user touches. Because the virtual keys are projected by a projection lamp, they replace the physical keys of existing vehicles and the integrity of the vehicle styling is preserved. Moreover, the virtual keys replace the deeply layered software of the central control screen, so interaction between the user and the vehicle machine becomes more efficient.
In one possible implementation, the preset area and the first area on the center console include a touch film, and the touch film determines, through a touch circuit, the position information corresponding to the user's touch operation. In this implementation, responding to the user's touch on the target virtual key image and controlling the vehicle to execute the corresponding target function comprises: receiving the user's touch operation at a target position in the first area, and determining the position information of the target position based on the touch film; determining the virtual key image projected at the target position in the first area and taking it as the target virtual key image; and controlling the vehicle to execute the target function corresponding to the target virtual key image.
By this technical means, the touch film determines the position information of the touched target position, and the virtual key image projected at that position triggers the vehicle to execute the corresponding target function. The touch film thus detects which object the user's touch operation is aimed at, the vehicle executes the matching function, and the interaction between the user and the vehicle machine is completed.
In one possible embodiment, the vehicle includes a target sensor for detecting the position information corresponding to the user's touch operation. In this embodiment, responding to the user's touch on the target virtual key image and controlling the vehicle to execute the corresponding target function comprises: receiving the user's touch operation at a target position in the first area, and acquiring, through the target sensor, the touch shadow image produced at the target position by the touch operation; fusing the touch shadow image with the plurality of virtual key images projected in the first area, determining the virtual key image projected at the target position, and taking it as the target virtual key image; and controlling the vehicle to execute the target function corresponding to the target virtual key image.
By this technical means, the target sensor captures the touch shadow image that the user's touch forms at the target position, and fusing that image with the projected virtual key images identifies the target virtual key image, which triggers the vehicle to execute the corresponding target function. The target sensor thus detects which object the user touched, and the interaction between the user and the vehicle machine is completed.
In one possible embodiment, the method further comprises: receiving a touch operation by the user on a first virtual key image among the plurality of virtual key images projected in the first area; and, in response to that touch operation, controlling the vehicle to project a target picture onto a second area of the center console and to play music.
By this technical means, a touch on the first virtual key image projects a target picture onto the center console and triggers the vehicle's music-playing function, providing the user with personalized projection on the vehicle machine.
In one possible embodiment, the method further comprises: receiving a touch operation by the user on a second virtual key image among the plurality of virtual key images projected in the first area; and, in response to that touch operation, controlling the vehicle to project ambient lighting onto a third area of the center console.
By this technical means, a touch on the second virtual key image makes the vehicle project ambient lighting onto the center console, giving the user control over the vehicle machine's ambient-light projection.
According to a second aspect of the present application, there is provided a vehicle-machine interaction device comprising a processing module. The processing module is configured to respond to a touch operation by a user in a preset area on the vehicle center console by projecting a plurality of virtual key images onto a first area of the center console, where each of the plurality of virtual key images is used to control one function of the vehicle; the processing module is further configured to respond to a touch operation by the user on a target virtual key image among the plurality of virtual key images by controlling the vehicle to execute the target function corresponding to the target virtual key image.
In one possible implementation, the preset area and the first area on the center console include a touch film, and the touch film determines, through a touch circuit, the position information corresponding to the user's touch operation. The vehicle-machine interaction device further comprises a receiving module. The receiving module is configured to receive the user's touch operation at a target position in the first area and to determine the position information of the target position based on the touch film; the processing module is specifically configured to determine the virtual key image projected at the target position in the first area, take it as the target virtual key image, and control the vehicle to execute the corresponding target function.
In one possible embodiment, the vehicle includes a target sensor for detecting the position information corresponding to the user's touch operation. The receiving module is further configured to receive the user's touch operation at a target position in the first area and to acquire, through the target sensor, the touch shadow image produced at the target position; the processing module is specifically configured to fuse the touch shadow image with the plurality of virtual key images projected in the first area, determine the virtual key image projected at the target position, take it as the target virtual key image, and control the vehicle to execute the corresponding target function.
In a possible implementation, the receiving module is further configured to receive a touch operation by the user on a first virtual key image among the plurality of virtual key images projected in the first area; the processing module is further configured to respond to that touch operation by controlling the vehicle to project a target picture onto a second area of the center console and to play music.
In a possible implementation, the receiving module is further configured to receive a touch operation by the user on a second virtual key image among the plurality of virtual key images projected in the first area; the processing module is further configured to respond to that touch operation by controlling the vehicle to project ambient lighting onto a third area of the center console.
According to a third aspect of the present application, there is provided an electronic apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute instructions to implement the method of the first aspect and any of its possible embodiments described above.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method of the first aspect and any of its possible embodiments.
According to a fifth aspect of the present application, there is provided a vehicle comprising the vehicle-machine interaction device described above, the device being used to implement the method of the first aspect and any of its possible implementations.
According to a sixth aspect of the present application there is provided a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect and any of its possible embodiments.
The technical features of the application therefore have the following beneficial effects:
(1) In response to the user's touch in the preset area of the vehicle center console, a plurality of virtual key images for controlling vehicle functions are projected onto the first area of the console, and the vehicle is controlled to execute the target function corresponding to the target virtual key image the user touches. Because the virtual keys are projected by a projection lamp, they replace the physical keys of existing vehicles while preserving the integrity of the vehicle styling; and because they replace the deeply layered software of the central control screen, interaction between the user and the vehicle machine becomes more efficient.
(2) The touch film determines the position information of the touched target position, and the virtual key image projected at that position triggers the vehicle to execute the corresponding target function; the touch film detects which object the user touched, the vehicle executes the matching function, and the interaction between the user and the vehicle machine is completed.
(3) The target sensor captures the touch shadow image that the user's touch forms at the target position; fusing this image with the plurality of projected virtual key images identifies the target virtual key image, which triggers the vehicle to execute the corresponding target function. The target sensor detects which object the user touched, and the interaction between the user and the vehicle machine is completed.
(4) A touch on the first virtual key image projects a target picture onto the center console and triggers the vehicle's music playing, providing the user with personalized projection on the vehicle machine.
(5) A touch on the second virtual key image makes the vehicle project ambient lighting onto the center console, giving the user control over the vehicle machine's ambient-light projection.
It should be noted that, for the technical effects of any implementation of the second to sixth aspects, reference may be made to the technical effects of the corresponding implementation of the first aspect, which are not repeated here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the principles of the application; they do not constitute an undue limitation on the application.
FIG. 1 is a schematic diagram of a vehicle-to-machine interaction system, according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of vehicle-to-machine interaction according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a placement position of a projection lamp according to an exemplary embodiment;
FIG. 4 is a schematic illustration of the location of a projection area shown according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating yet another vehicle-to-machine interaction method according to an example embodiment;
FIG. 6 is a diagram of a touch diaphragm structure shown in accordance with an exemplary embodiment;
FIG. 7 is a flowchart illustrating yet another method of vehicle-to-machine interaction, according to an example embodiment;
FIG. 8 is a flowchart illustrating yet another method of vehicle-to-machine interaction, according to an example embodiment;
FIG. 9 is a flowchart illustrating yet another vehicle-to-machine interaction method according to an example embodiment;
FIG. 10 is a block diagram of a vehicle-to-machine interaction device, according to an example embodiment;
Fig. 11 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may also be practiced or applied through other, different embodiments, and the details of this description may be modified or varied without departing from the spirit and scope of the invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the invention. The drawings show only the components related to the invention rather than the number, shape, and size of components in an actual implementation; in practice the form, number, and proportion of the components may vary arbitrarily, and their layout may be more complex.
As automotive intelligence continues to develop, the number of functions a vehicle can perform keeps growing, and users' expectations for the overall interior styling grow with it; physical keys inside the vehicle seriously disrupt that styling. More and more physical keys are therefore integrated into the central control screen, but the screen's many control levels make operation inconvenient and easily distract the driver while the vehicle is moving, which can cause traffic accidents. What is needed is a method that replaces physical keys, implements common, basic automotive functions, and preserves the integrity of the vehicle styling.
At present, to reconcile the demand for clean, unified vehicle styling with the demand for physical keys, a common approach is a hidden-touch scheme (light-transmitting skin + backlight + pressure sensing). However, this scheme must be built as a separate trim piece, so it cannot fully satisfy the styling-integrity requirement, its manufacturing cost is high, and its functions cannot be extended.
For ease of understanding, the vehicle-machine interaction method provided by the application is described below with reference to the accompanying drawings.
The vehicle-machine interaction method provided by the embodiments of the application can be applied to a vehicle-machine interaction system. Fig. 1 is a schematic structural diagram of such a system according to an exemplary embodiment. As shown in Fig. 1, the vehicle-machine interaction system 10 includes a target vehicle 11 and a user 12. The target vehicle 11 responds to a touch operation by the user 12 by projecting a plurality of virtual key images, and responds to a touch operation by the user 12 on a target virtual key image by executing the corresponding target function.
FIG. 2 is a flow chart illustrating a vehicle-to-machine interaction method, as shown in FIG. 2, according to an exemplary embodiment, comprising the steps of:
s201, responding to touch operation of a user on a preset area on a vehicle center console, and projecting a plurality of virtual key images on a first area on the center console.
Wherein each virtual key image of the plurality of virtual key images is used to control one function of the vehicle.
Optionally, Fig. 3 shows a schematic view of the installation position of the projection lamp. The lamp that projects onto the center console can be a low-cost fixed projection lamp mounted in front of the sun visor and hidden inside the headliner trim panel; hiding the lamp preserves the integrity of the vehicle styling while keeping cost low.
Optionally, Fig. 4 shows a schematic view of the positions of the projection areas. The area covered by the projection lamp may be divided into a first area (functional projection area A or functional projection area D), a second area (picture projection area C), and a third area (ambient-light projection area B).
It should be noted that the pattern cast by the projection lamp must be distortion-corrected according to the curved surface of the projected center console and the installation position of the lamp, so as to improve the user experience.
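As a rough illustration of the distortion-correction step (the function and matrix here are hypothetical, not taken from the patent), the pre-warp can be modeled as mapping each key outline through a 3×3 homography fitted during calibration of the lamp against the console surface:

```python
import numpy as np

def apply_homography(h: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Map N x 2 image points through a 3x3 homography (pre-warp for projection)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    warped = pts_h @ h.T
    return warped[:, :2] / warped[:, 2:3]  # back to Cartesian coords

# An identity homography leaves the key outline unchanged; a calibrated matrix
# (measured from the console curvature and lamp position) would counteract the
# keystone/curvature distortion of the projected pattern.
identity = np.eye(3)
key_outline = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 4.0], [0.0, 4.0]])
assert np.allclose(apply_homography(identity, key_outline), key_outline)
```

A real system would fit the homography (or a denser warp mesh) from calibration images rather than hand-writing it.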
S202, responding to touch operation of a user on a target virtual key image in the plurality of virtual key images, and controlling the vehicle to execute a target function corresponding to the target virtual key image.
It can be understood that, because each virtual key image is equivalent to one physical key and controls one function of the vehicle, the user touches the target virtual key image among the plurality of virtual key images according to his or her own needs, thereby controlling the vehicle to execute the corresponding target function and completing the interaction between the user and the vehicle machine.
Optionally, combining the projection lamp with a touch film or a target sensor lets virtual keys take over the control functions of physical keys, and also enables personalized scene-picture projection and ambient-light projection. This keeps the vehicle interior clean and smooth, still satisfies users who want certain dedicated keys, reduces traffic accidents caused by distraction from cumbersome central-screen operation, and, by increasing human-vehicle interaction, improves the user's driving experience.
FIG. 5 is a flowchart of yet another vehicle-machine interaction method according to an exemplary embodiment, in which the preset area and the first area on the console include a touch film, and the touch film determines, through a touch circuit, the position information corresponding to the user's touch operation. As shown in Fig. 5, step S202 specifically includes the following steps S301 to S303:
S301, receiving the user's touch operation at a target position in the first area, and determining the position information of the target position based on the touch film.
Optionally, Fig. 6 shows the structure of the touch film. The touch film is a capacitive film arranged between the interior trim surface of the center console and the vehicle frame; it can be one continuous film or several separate films placed at the projection positions of the virtual key images.
Optionally, the pattern cast by the projection lamp is first distortion-corrected according to the curved surface of the center console and the installation position of the lamp, and the touch film and touch circuit are then pre-embedded at the positions of the virtual key images. The projection lamp is switched on to project the virtual key images, the position of the touch film is calibrated on the complete vehicle, and the calibration result is written into the control host.
After the user touches the first area, the touch film receives the touch operation and feeds the touch-coordinate signal back to the control host through the touch circuit, so that the position information of the target position is determined.
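A minimal sketch of the lookup the control host might perform once the touch film reports a coordinate (the key names and zone rectangles are illustrative assumptions, not from the patent; the calibration result is modeled as per-key rectangles):

```python
from typing import Dict, Optional, Tuple

Rect = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

# Assumed calibration result written into the control host: each virtual key's
# projected zone in touch-film coordinates.
KEY_ZONES: Dict[str, Rect] = {
    "ac_power":  (0.0, 0.0, 30.0, 20.0),
    "temp_up":   (35.0, 0.0, 65.0, 20.0),
    "temp_down": (70.0, 0.0, 100.0, 20.0),
}

def resolve_key(x: float, y: float) -> Optional[str]:
    """Return the virtual key whose calibrated zone contains the touch point."""
    for key, (x0, y0, x1, y1) in KEY_ZONES.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return key
    return None  # touch landed outside every key zone

assert resolve_key(40.0, 10.0) == "temp_up"
```

The hit test is deliberately simple; a production system would use the distortion-corrected projection geometry from the calibration step rather than axis-aligned rectangles.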
S302, determining the virtual key image projected at the target position in the first area, and taking that virtual key image as the target virtual key image.
When the projection lamp projects the virtual key images and the touch film is installed, the position information corresponding to each of the virtual key images is determined and stored in the control host. The virtual key image projected at the target position can therefore be identified from the target position determined by the touch film and taken as the target virtual key image.
S303, controlling the vehicle to execute the target function corresponding to the target virtual key image.
It should be noted that, since each of the plurality of virtual key images controls one function of the vehicle, when the user touches the target virtual key image, the vehicle is controlled to execute the corresponding target function.
Alternatively, the target function performed may be to turn on an air conditioner, raise or lower the temperature, or the like.
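Once the target key is known, step S303 amounts to dispatching it to a vehicle function. A hypothetical sketch (key names and state fields are illustrative, not from the patent) using a dispatch table keyed by virtual key:

```python
# Map each virtual key to the state change it performs on the vehicle.
ACTIONS = {
    "ac_power":  lambda state: {**state, "ac_on": not state["ac_on"]},
    "temp_up":   lambda state: {**state, "temp": state["temp"] + 1},
    "temp_down": lambda state: {**state, "temp": state["temp"] - 1},
}

def execute(key: str, state: dict) -> dict:
    """Apply the function bound to the touched key; unknown keys change nothing."""
    return ACTIONS[key](state) if key in ACTIONS else state

state = {"ac_on": False, "temp": 22}
state = execute("ac_power", state)   # switch the air conditioner on
state = execute("temp_up", state)    # raise the temperature by one step
assert state == {"ac_on": True, "temp": 23}
```

In a vehicle, each action would issue a command on the vehicle bus instead of mutating a dictionary; the table structure stays the same.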
FIG. 7 is a flowchart of yet another vehicle-machine interaction method according to an exemplary embodiment, in which the vehicle includes a target sensor for detecting the position information corresponding to the user's touch operation. As shown in Fig. 7, step S202 specifically includes the following steps S401 to S403:
S401, receiving touch operation of a user on a target position in a first area, and acquiring a touch shadow image generated at the target position by the touch operation through a target sensor.
Optionally, the target sensor may be a depth camera, such as a structured-light camera, a Time-of-Flight (TOF) camera, or a binocular multi-angle stereoscopic camera.
Optionally, the target sensor may be co-located with the projection lamp, both mounted in front of the sun visor and hidden inside the headliner trim panel, so that the entire pattern the lamp projects onto the center console falls within the sensor's view. The capture range of the target sensor is larger than the projection range of the lamp.
It can be appreciated that the target sensor captures an image of the user in the first area to determine whether a touch operation has occurred there. On the first touch, the projection lamp is switched on; on subsequent touches, the touch shadow image the user produces in the first area is acquired.
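The two-stage touch handling just described can be sketched as a small state machine (a hypothetical illustration; the class and return values are not from the patent):

```python
class TouchController:
    """First detected touch switches the projection lamp on; subsequent
    touches are forwarded for shadow-image capture."""

    def __init__(self) -> None:
        self.lamp_on = False

    def handle_touch(self) -> str:
        if not self.lamp_on:
            self.lamp_on = True
            return "lamp_on"        # first touch: project the virtual keys
        return "capture_shadow"     # later touches: acquire the shadow image

ctrl = TouchController()
assert ctrl.handle_touch() == "lamp_on"
assert ctrl.handle_touch() == "capture_shadow"
```

A real controller would also time out and switch the lamp off after a period of inactivity, which this sketch omits.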
Optionally, when the user touches the first area, the TOF camera captures a 3D image of the user in that area, obtains the position of the user's fingertip as the touch shadow position, and feeds that position back to the control host.
S402, fusing the touch shadow image with the plurality of virtual key images projected in the first area, determining the virtual key image projected at the target position, and taking it as the target virtual key image.
It can be understood that the acquired touch shadow image is fused with the plurality of virtual key images the lamp projects in the first area to determine whether the shadow coincides with any one of the virtual key images; if it does, that virtual key image is the target virtual key image.
Optionally, the touch shadow position stored in the control host is compared with the position information of the projected virtual key images; when the shadow position matches the position information of one of the virtual key images, that image is taken as the target virtual key image, and the corresponding target function is executed.
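The position-comparison part of the fusion step can be sketched as a nearest-neighbor match with a tolerance, since sensor coordinates never align exactly with the projected key centers (key names, centers, and the tolerance value are illustrative assumptions, not from the patent):

```python
import math
from typing import Dict, Optional, Tuple

# Assumed projected key centres in the sensor's coordinate frame.
KEY_CENTRES: Dict[str, Tuple[float, float]] = {
    "music": (20.0, 50.0),
    "ambient_light": (60.0, 50.0),
}

def match_shadow(pos: Tuple[float, float], tolerance: float = 8.0) -> Optional[str]:
    """Return the key whose centre is nearest the fingertip/shadow position,
    or None if no key centre lies within the tolerance."""
    best, best_d = None, tolerance
    for key, centre in KEY_CENTRES.items():
        d = math.dist(pos, centre)
        if d <= best_d:
            best, best_d = key, d
    return best

assert match_shadow((21.0, 51.0)) == "music"
```

Matching against full key outlines (or the fused image overlap described above) would be more robust than centre distance, but the tolerance idea carries over.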
S403, controlling the vehicle to execute the target function corresponding to the target virtual key image.
It can be understood that the projection lamp first projects onto the curved surface of the center console, and the target sensor then detects the curved surface to obtain a detection result, based on which the pattern projected by the projection lamp is automatically corrected for the curved surface. After the user performs a touch operation in the first area of the center console, the target sensor can capture that touch operation, obtain the touch shadow image generated by the user at the target position, and send it to the control host. The control host fuses the touch shadow image with the plurality of virtual key images, determines the target virtual key image corresponding to the touch operation, and then controls the vehicle to execute the target function, completing the interaction between the user and the vehicle.
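The patent does not specify how the projected pattern is corrected for the console surface. One common approach, sketched below under the assumption that the projection region is close to planar, is to fit a projective transform (homography) from the ideal key-grid corners to the corners the target sensor actually detects; a genuinely curved surface would need a denser mesh of correspondences, but the per-patch math is the same. All coordinates here are hypothetical:

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 projective transform mapping src points to dst points
    (direct linear transform on four or more correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def warp_point(h, point):
    """Apply the homography to one 2-D point."""
    x, y, w = h @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)

# Ideal key-grid corners vs. where the sensor actually observed them
# on the console (hypothetical values). To correct the projection, the
# lamp would pre-warp its pattern with the inverse, np.linalg.inv(H).
ideal = [(0, 0), (1, 0), (1, 1), (0, 1)]
observed = [(10, 20), (30, 22), (32, 61), (11, 60)]
H = fit_homography(ideal, observed)
```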
Fig. 8 is a flowchart of yet another vehicle-computer interaction method according to an exemplary embodiment, as shown in fig. 8, after the step S202, the method specifically further includes the following steps S501-S502:
S501, receiving touch operation of a user on a first virtual key image in a plurality of virtual key images projected in a first area.
It should be noted that the first virtual key image among the plurality of virtual key images projected by the projection lamp may be used to trigger projection of a personalized scene picture.
Optionally, the user may match a corresponding picture to the first virtual key image according to his or her needs, so that the target picture is projected in the second area when the first virtual key image is touched. Each first virtual key image corresponds to a picture of a different scene, such as a picture related to a birthday, an anniversary, a wedding, or entertainment; each scene may also correspond to specific music, and the user may also customize the projected picture.
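The scene-to-picture-and-music matching described above can be sketched as a simple lookup; all scene names and file names below are hypothetical, not values from the patent:

```python
# Hypothetical mapping from first-type virtual keys to a scene picture
# and its accompanying music track.
SCENES = {
    "birthday":    {"picture": "birthday.png",    "music": "happy_birthday.mp3"},
    "anniversary": {"picture": "anniversary.png", "music": "our_song.mp3"},
    "wedding":     {"picture": "wedding.png",     "music": "wedding_march.mp3"},
}

def on_first_key_touch(scene, scenes=SCENES):
    """Resolve the picture to project in the second area and the music
    to play when the first virtual key for `scene` is touched."""
    entry = scenes.get(scene)
    if entry is None:
        return None   # unknown scene: nothing to project
    return (entry["picture"], entry["music"])

print(on_first_key_touch("birthday"))
```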
Specifically, after the user performs the touch operation on the first virtual key image, the touch operation may be received through the touch film or the target sensor, so as to respond to the touch operation on the first virtual key image.
S502, responding to the touch operation of the user on the first virtual key image, controlling the vehicle to project a target picture in a second area on the center console, and controlling the vehicle to play music.
It can be understood that, because the second area of the center console is wider and flatter, the personalized scene picture can be projected onto it, satisfying the user's need to project with the projection lamp in specific scenes.
It can be understood that, according to the first virtual key image selected by the user, the vehicle projection lamp is controlled to project a target picture corresponding to the first virtual key image, and corresponding music is played.
Optionally, the user may also trigger projection through the central control screen or a voice function of the vehicle system. The user issues a projection instruction for the personalized scene picture to the control host through the central control screen or the voice function; based on the instruction, the control host finds the target picture of the corresponding scene, projects it in the second area, and plays the corresponding music, thereby meeting the user's personalized projection needs.
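A hedged sketch of the voice-initiated path: the control host matches a scene keyword in the utterance against the stored scenes and then issues projection and playback commands. The keyword matching and the command names are simplified assumptions, not the patent's actual interface:

```python
# Hypothetical scene keywords mapped to (picture, music) pairs.
SCENE_KEYWORDS = {
    "birthday":    ("birthday.png",    "happy_birthday.mp3"),
    "anniversary": ("anniversary.png", "our_song.mp3"),
}

def handle_voice_command(utterance):
    """Return the command list for a recognized scene keyword:
    project the picture in the second area, then play the music."""
    text = utterance.lower()
    for keyword, (picture, music) in SCENE_KEYWORDS.items():
        if keyword in text:
            return [("project_second_area", picture), ("play_music", music)]
    return []   # no scene keyword recognized: do nothing

print(handle_voice_command("Please project the birthday scene"))
```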
Fig. 9 is a flowchart of yet another vehicle-computer interaction method according to an exemplary embodiment, as shown in fig. 9, after the step S202, the method specifically further includes the following steps S601-S602:
S601, receiving touch operation of a user on a second virtual key image in the plurality of virtual key images projected in the first area.
The second virtual key image of the plurality of virtual key images projected by the projection lamp may be used to trigger projection of atmosphere lighting.
Specifically, after the user performs the touch operation on the second virtual key image, the touch operation may be received through the touch film or the target sensor, so as to respond to the touch operation on the second virtual key image.
S602, responding to the touch operation of the user on the second virtual key image, controlling the vehicle to project atmosphere lighting in a third area on the center console.
It can be understood that, according to the second virtual key image selected by the user, the vehicle projection lamp is controlled to project the lighting corresponding to the second virtual key image, thereby enhancing the atmosphere in the vehicle and improving the user experience.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. In order to realize the functions, the vehicle-computer interaction device or the electronic equipment comprises corresponding hardware structures and/or software modules for executing the functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
According to the method, the functional modules of the vehicle-computer interaction device or the electronic device can be divided, for example, the vehicle-computer interaction device or the electronic device can comprise each functional module corresponding to each functional division, and two or more functions can be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 10 is a block diagram illustrating a vehicle-computer interaction device, according to an exemplary embodiment. Referring to fig. 10, the vehicle-computer interaction device 100 includes a processing module 1001 and a receiving module 1002. The processing module 1001 is configured to, in response to a touch operation of a user on a preset area on a vehicle center console, project a plurality of virtual key images in a first area on the center console, where each virtual key image in the plurality of virtual key images is used to control one function of the vehicle. The processing module 1001 is further configured to control, in response to a touch operation of a user on a target virtual key image of the plurality of virtual key images, the vehicle to execute a target function corresponding to the target virtual key image.
In one possible implementation manner, the preset area and the first area on the center console comprise a touch film, and the touch film is used for determining position information corresponding to a touch operation of a user through a touch circuit. The receiving module 1002 is configured to receive a touch operation of a user on a target position in the first area, and determine position information of the target position based on the touch film. The processing module 1001 is specifically configured to determine the virtual key image projected from the target position in the first area, determine the virtual key image projected from the target position as the target virtual key image, and control the vehicle to execute the target function corresponding to the target virtual key image.
In one possible embodiment, the vehicle includes a target sensor for detecting position information corresponding to a touch operation of a user; the receiving module 1002 is further configured to receive a touch operation of a user on a target position in the first area, and obtain, by using a target sensor, a touch shadow image generated by the touch operation at the target position; the processing module 1001 is specifically configured to perform fusion processing on the touch shadow image and the plurality of virtual key images projected from the first area, determine a virtual key image projected from the target position, and determine the virtual key image projected from the target position as a target virtual key image; the processing module 1001 is specifically configured to control the vehicle to execute a target function corresponding to the target virtual key image.
In a possible implementation manner, the receiving module 1002 is further configured to receive a touch operation of the user on a first virtual key image of the plurality of virtual key images projected in the first area; the processing module 1001 is further configured to control, in response to a touch operation of the user on the first virtual key image, the vehicle to project a target picture in a second area on the console, and control the vehicle to play music.
In a possible implementation manner, the receiving module 1002 is further configured to receive a touch operation of the user on a second virtual key image of the plurality of virtual key images projected in the first area; the processing module 1001 is further configured to control, in response to the touch operation of the user on the second virtual key image, the vehicle to project atmosphere lighting in a third area on the center console.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
Fig. 11 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 11, electronic device 110 includes, but is not limited to: a processor 1101 and a memory 1102.
The memory 1102 is used for storing executable instructions of the processor 1101. It is understood that the processor 1101 is configured to execute the instructions to implement the vehicle-computer interaction method in the above embodiments.
It should be noted that the electronic device structure shown in fig. 11 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in fig. 11, combine some components, or arrange the components differently, as will be appreciated by those skilled in the art.
The processor 1101 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1102, and invoking data stored in the memory 1102, thereby performing overall monitoring of the electronic device. The processor 1101 may include one or more processing modules. Alternatively, the processor 1101 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1101.
Memory 1102 may be used to store software programs as well as various data. The memory 1102 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs (such as an acquisition unit, a determination unit, a processing unit, etc.) required for at least one functional module, and the like. In addition, memory 1102 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
In an exemplary embodiment, a computer readable storage medium is also provided, such as the memory 1102, comprising instructions executable by the processor 1101 of the electronic device 110 to implement the vehicle-computer interaction method of the above embodiments.
In actual implementation, the functions of the processing module 1001 and the receiving module 1002 in fig. 10 may be implemented by the processor 1101 in fig. 11 calling a computer program stored in the memory 1102. The specific implementation process of the method may refer to the description of the vehicle-computer interaction method in the above embodiment, and will not be repeated here.
Alternatively, the computer readable storage medium may be a non-transitory computer readable storage medium, for example, a read-only memory (ROM), a random access memory (random access memory, RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a vehicle including a vehicle-computer interaction device is also provided, where the vehicle may complete the vehicle-computer interaction method in the foregoing embodiment through the vehicle-computer interaction device.
In an exemplary embodiment, embodiments of the application also provide a computer program product comprising one or more instructions executable by the processor 1101 of the electronic device to perform the vehicle-computer interaction method of the above embodiments.
It should be noted that, when the instructions in the computer readable storage medium or one or more instructions in the computer program product are executed by the processor of the electronic device, the processes of the embodiments of the vehicle-computer interaction method are implemented, and the technical effects same as those of the vehicle-computer interaction method can be achieved, so that repetition is avoided, and no further description is provided herein.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional modules is illustrated; in practical application, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and a part shown as a unit may be one physical unit or a plurality of physical units, which may be located in one place or distributed over a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, where the software product includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the method of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The present application is not limited to the above embodiments, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. A vehicle-computer interaction method, characterized by comprising the following steps:
responding to touch operation of a user in a preset area on a vehicle center console, projecting a plurality of virtual key images in a first area on the center console, wherein each virtual key image in the plurality of virtual key images is used for controlling one function of the vehicle;
And responding to the touch operation of a user on a target virtual key image in the plurality of virtual key images, and controlling the vehicle to execute a target function corresponding to the target virtual key image.
2. The method of claim 1, wherein the preset area and the first area on the center console comprise a touch film, and the touch film is used for determining position information corresponding to a touch operation of a user through a touch circuit;
the controlling, in response to a touch operation of a user on a target virtual key image in the plurality of virtual key images, the vehicle to execute a target function corresponding to the target virtual key image includes:
receiving a touch operation of a user on a target position in the first area, and determining position information of the target position based on the touch film;
determining a virtual key image projected from the target position in the first area, and determining the virtual key image projected from the target position as the target virtual key image;
And controlling the vehicle to execute the target function corresponding to the target virtual key image.
3. The method of claim 1, wherein the vehicle includes a target sensor for detecting position information corresponding to a touch operation of a user;
the controlling, in response to a touch operation of a user on a target virtual key image in the plurality of virtual key images, the vehicle to execute a target function corresponding to the target virtual key image includes:
Receiving touch operation of a user on a target position in the first area, and acquiring a touch shadow image generated at the target position by the touch operation through the target sensor;
Performing fusion processing on the touch shadow image and the virtual key images projected from the first area, determining the virtual key image projected from the target position, and determining the virtual key image projected from the target position as the target virtual key image;
And controlling the vehicle to execute the target function corresponding to the target virtual key image.
4. A method according to any one of claims 1-3, characterized in that the method further comprises:
receiving touch operation of a user on a first virtual key image in the plurality of virtual key images projected in the first area;
And responding to the touch operation of the user on the first virtual key image, controlling the vehicle to project a target picture in a second area on the center console, and controlling the vehicle to play music.
5. A method according to any one of claims 1-3, characterized in that the method further comprises:
Receiving touch operation of a user on a second virtual key image in the plurality of virtual key images projected in the first area;
and responding to the touch operation of the user on the second virtual key image, controlling the vehicle to project atmosphere lighting in a third area on the center console.
6. The vehicle-machine interaction device is characterized by comprising a processing module;
the processing module is used for responding to touch operation of a user on a preset area on a vehicle center console, projecting a plurality of virtual key images on a first area on the center console, wherein each virtual key image in the plurality of virtual key images is used for controlling one function of the vehicle;
The processing module is further configured to control, in response to a touch operation of a user on a target virtual key image in the plurality of virtual key images, the vehicle to execute a target function corresponding to the target virtual key image.
7. The vehicle-computer interaction device according to claim 6, wherein the preset area and the first area on the center console comprise a touch film, and the touch film is used for determining position information corresponding to a touch operation of a user through a touch circuit; the vehicle-computer interaction device further comprises: a receiving module;
the receiving module is used for receiving a touch operation of a user on a target position in the first area and determining position information of the target position based on the touch film;
The processing module is specifically configured to determine a virtual key image projected from the target position in the first area, and determine the virtual key image projected from the target position as the target virtual key image;
The processing module is specifically configured to control a vehicle to execute the target function corresponding to the target virtual key image.
8. An electronic device, comprising: a processor;
A memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 5.
9. A computer readable storage medium, characterized in that, when computer-executable instructions stored in the computer readable storage medium are executed by a processor of an electronic device, the electronic device is capable of performing the method of any one of claims 1 to 5.
10. A vehicle comprising the vehicle-computer interaction device according to any one of claims 6 to 7, the vehicle being adapted to implement the method according to any one of claims 1 to 5.
CN202410013232.4A — Vehicle-computer interaction method, device, equipment and storage medium. Application number: CN202410013232.4A; priority and filing date: 2024-01-02; publication number: CN117908723A; publication date: 2024-04-19; country: CN; family ID: 90681210; status: pending.


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination