CN114936000B - Vehicle-machine interaction method, system, medium and equipment based on picture framework - Google Patents
- Publication number
- CN114936000B CN114936000B CN202210706972.7A CN202210706972A CN114936000B CN 114936000 B CN114936000 B CN 114936000B CN 202210706972 A CN202210706972 A CN 202210706972A CN 114936000 B CN114936000 B CN 114936000B
- Authority
- CN
- China
- Prior art keywords
- picture
- user
- pictures
- vehicle
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Abstract
The invention provides a vehicle-machine interaction method, system, medium and device based on a picture framework. The vehicle-machine interaction method based on the picture framework comprises the following steps: after the car machine is started, it establishes a communication connection with the mobile device and receives the screen-projection content of the mobile device in order to display the picture framework, wherein the pictures are edited and classified in advance on the mobile device so that the categories of the pictures can be matched against application scenes; displaying the picture matched with the application scene on the screen of the vehicle-mounted terminal and generating a picture operation interface; and receiving and executing the picture operation instruction sent by the user through the picture operation interface. The invention expands the interaction modes available on top of the picture framework: pictures are synchronized to the vehicle-mounted terminal over the communication connection, and the user's scene experience is enriched through the user's interactive operations on the vehicle-mounted terminal.
Description
Technical Field
The invention belongs to the field of intelligent interaction, relates to an interaction method based on a picture framework, and particularly relates to a vehicle-machine interaction method, system, medium and device based on the picture framework.
Background
Every screen — television, PC, mobile phone, tablet — has created a tremendous market. Of course, it is not the screen itself that is so powerful; rather, each screen retains the vast majority of its users through content delivered within a single system on a stable ecosystem. The long-ignored central control display of the car, whether as a vehicle-networking interaction terminal or as one of the future end-side entrances, is becoming increasingly important. The in-car display developed alongside wireless broadcasting: first a radio was added, which then evolved into cassette, CD, MP3, MP4 and GPS units with multimedia functions, and nowadays functions such as 360-degree surround view and thermal imaging have been added. Clearly the multimedia functions of the vehicle are becoming richer and richer, but rich content must be based on strong network communication.
At present, the most common mobile-phone interconnection applications are EasyConnection (Yilian) and Baidu CarLife. After EasyConnection is connected to the mobile phone, the phone screen is projected onto the car display, and the dual-screen interaction gives a good experience in portrait orientation; after Baidu CarLife is connected to the mobile phone, built-in functions become available — mainly online navigation and online music — with a dedicated user interface, and the experience is likewise good.
However, the prior art still does not improve the tedious application environment the user faces inside the vehicle, especially with respect to the display content and interaction modes of the vehicle display screen.
Therefore, how to provide a vehicle-machine interaction method, system, medium and device based on a picture framework, so as to overcome the defect that the prior art cannot provide rich display content and interaction modes on the vehicle display screen, is a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to provide a vehicle-machine interaction method, system, medium and device based on a picture framework, which are used for solving the problem that the prior art cannot provide rich display content and interaction modes on the vehicle display screen.
To achieve the above and other related objects, one aspect of the present invention provides a vehicle-machine interaction method based on a picture framework, comprising: displaying the picture matched with the application scene on a screen of the vehicle-mounted terminal and generating a picture operation interface; and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
In an embodiment of the present invention, the step of receiving and executing the picture operation instruction sent by the user through the picture operation interface includes: receiving a picture operation instruction sent by the user through a corresponding touch position in the picture operation interface; and displaying the information the user needs to browse according to the picture operation instruction, wherein the browsed information includes navigation information, music information, audio playing information, communication information, associated vehicle information, setting information and mall information matched with the picture; the mall information includes status information of the pictures being traded.
In an embodiment of the present invention, the step of displaying information to be browsed by the user according to the picture operation instruction includes: analyzing user interaction requirements contained in the picture operation instruction; and displaying corresponding browsed information according to the user interaction requirement.
In an embodiment of the present invention, the picture operation instruction includes a preset-direction movement instruction for the current picture displayed on the screen of the vehicle-mounted terminal and a preset-direction movement instruction for the screen of the vehicle-mounted terminal itself.
In an embodiment of the present invention, sliding the current picture up and down indicates that the user needs to browse pictures recommended by others; sliding the current picture left and right indicates that the user needs to browse pictures of the same category; a slide in from the left side of the screen indicates that the user needs to make a selection among the picture categories; a slide in from the right side of the screen indicates that the user needs to receive recommendation information from a third party; a slide in from the upper side of the screen indicates that the user needs to select and switch between different applications; a slide in from the lower side of the screen indicates that the user needs to perform a switch within the currently used application.
In an embodiment of the present invention, the pictures are edited and classified in advance by the mobile device, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by others in the application program.
In an embodiment of the present invention, before the step of displaying the picture matched with the application scene on the screen of the vehicle-to-machine end and generating the picture operation interface, the vehicle-to-machine interaction method based on the picture framework further includes: according to the behavior habit data of the user and combining the current time and place information, presuming the behavior dynamics of the user; the behavior habit data of the user refers to historical data of corresponding behavior activities made by the user at different times and places, and the behavior dynamics of the user refers to a possible next-step plan made by the user at the current time and place; according to the user behavior, dynamically judging a required application scene, extracting feature words of the application scene, wherein the feature words are consistent with classification words used by the synchronized pictures in classification; and searching the pictures of which the classification words are the same as the characteristic words, and taking the pictures as pictures matched with the application scene.
Another aspect of the present invention provides a vehicle-computer interaction system based on a picture frame, where the vehicle-computer interaction system based on the picture frame includes: the display module is used for displaying the pictures matched with the application scene on a screen of the vehicle-mounted terminal and generating a picture operation interface; and the operation module is used for receiving and executing the picture operation instruction sent by the user through the picture operation interface.
In yet another aspect, the present invention provides a medium on which a computer program is stored, which when executed by a processor implements the picture-framework-based vehicle-to-machine interaction method.
In a final aspect the invention provides an apparatus comprising: a processor and a memory; the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the equipment executes the vehicle-machine interaction method based on the picture framework.
As described above, the vehicle-computer interaction method, system, medium and device based on the picture framework provided by the invention have the following beneficial effects:
By synchronizing the pictures edited in the application program of the mobile device to the vehicle-mounted terminal, more content related to the user's life can be displayed on the vehicle display screen as its first page; this provides the user with more flexible and humanized interaction modes, improves the user's emotional interaction experience in the vehicle, and relieves the monotonous and boring in-vehicle driving environment.
Drawings
Fig. 1 is a schematic diagram of an application scenario in an embodiment of a vehicle-to-machine interaction method based on a picture framework according to the present invention.
Fig. 2 is a schematic flow chart of a vehicle-to-machine interaction method based on a picture framework according to an embodiment of the invention.
Fig. 3 is a flow chart illustrating command execution in an embodiment of the vehicle-to-machine interaction method based on the picture framework according to the present invention.
Fig. 4 is a flowchart illustrating a vehicle-to-machine interaction method based on a picture framework according to an embodiment of the invention.
Fig. 5 is a schematic diagram illustrating command classification in an embodiment of a vehicle-to-machine interaction method based on a picture framework according to the present invention.
Fig. 6 is a schematic diagram of a command interface of an embodiment of a vehicle-to-machine interaction method based on a picture framework according to the present invention.
Fig. 7 is a flowchart of a family picture interaction in an embodiment of a car-to-machine interaction method based on a picture framework according to the present invention.
Fig. 8 is a schematic structural diagram of a vehicle-computer interaction system based on a picture framework according to an embodiment of the invention.
Description of element reference numerals
8. Car machine interaction system based on picture framework
81. Display module
82. Operation module
S21-S22 vehicle-machine interaction method steps based on picture framework
S221-S222 Instruction execution flow steps
S222A-S222B Information display flow steps
S71-S76 interaction flow steps of family pictures
Detailed Description
Other advantages and effects of the present invention will readily become apparent to those skilled in the art from the disclosure in this specification, which describes embodiments of the present invention by way of specific examples. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit and scope of the present invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other without conflict.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention schematically; the drawings show only the components related to the present invention rather than being drawn according to the number, shape and size of the components in actual implementation, and in actual implementation the form, quantity and proportion of the components may be changed arbitrarily, and the component layout may also be more complicated.
The technical principles of the vehicle-computer interaction method, system, medium and equipment based on the picture framework are as follows: displaying the picture matched with the application scene on a screen of a vehicle machine end, and generating a picture operation interface; and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
Example 1
The embodiment provides a vehicle-computer interaction method based on a picture framework, which comprises the following steps:
displaying the picture matched with the application scene on a screen of a vehicle machine end, and generating a picture operation interface;
and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
The following describes in detail the vehicle-machine interaction method based on the picture framework provided in this embodiment, with reference to the drawings. Referring to fig. 1, an application scene architecture diagram of an embodiment of the vehicle-machine interaction method based on the picture framework according to the present invention is shown. As shown in fig. 1, a user uses a mobile device with a built-in application program and carries the mobile device into the vehicle; after the vehicle is started, the mobile device and the car machine establish a communication connection via USB or wirelessly. In this embodiment, the mobile device includes a smart phone, a notebook computer, a PDA (Personal Digital Assistant) or another mobile device capable of picture editing and interaction. The car machine includes an intelligent car machine with the intelligent application program embedded, or an existing general car machine with a vehicle-mounted display function. After the intelligent application program is executed, content synchronization and function fusion between the mobile device and the car machine can be completed; a conventional general car machine with a vehicle-mounted display function can receive the screen-projection content of the mobile device to display the picture framework, thereby realizing interaction.
Referring to fig. 2, a schematic flow chart of a vehicle-computer interaction method based on a picture framework according to an embodiment of the invention is shown. As shown in fig. 2, the vehicle-machine interaction method based on the picture framework specifically includes the following steps:
s21, displaying the picture matched with the application scene on a screen of the vehicle machine end, and generating a picture operation interface.
In this embodiment, the pictures are edited and classified in advance by the mobile device, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by others in the application program.
Specifically, the matching process of the category of the picture and the application scene is as follows:
firstly, according to behavior habit data of a user, presuming behavior dynamics of the user by combining current time and place information; the behavior habit data of the user refers to historical data of corresponding behavior activities of the user at different times and places, and the behavior dynamics of the user refers to the possible next plan of the user at the current time and place.
And then, dynamically judging the required application scene according to the user behaviors, and extracting feature words of the application scene, wherein the feature words are consistent with classification words used in classification of the synchronized pictures.
And finally, searching the pictures of which the classification words are the same as the characteristic words, and taking the pictures as pictures matched with the application scene.
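The three-stage matching flow above (habit-based inference, feature-word extraction, category lookup) can be sketched as follows. All names and data structures here — `HabitRecord`, `infer_activity`, the scene-to-feature-word table — are illustrative assumptions; the patent specifies only the behaviour, not an implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class HabitRecord:
    weekday: int      # 0 = Monday
    hour: int
    place: str
    activity: str     # e.g. "go_home", "commute_to_work"

def infer_activity(habits, now: datetime, place: str) -> str:
    """Guess the user's likely next plan from historical behaviour
    at similar times and the same place (the 'behavior dynamics')."""
    candidates = [h.activity for h in habits
                  if h.weekday == now.weekday()
                  and abs(h.hour - now.hour) <= 1
                  and h.place == place]
    # Most frequent matching activity wins; fall back to a neutral scene.
    return max(set(candidates), key=candidates.count) if candidates else "default"

# The feature word of the scene must be consistent with the classification
# word used when the pictures were tagged on the mobile device.
SCENE_FEATURE = {"go_home": "home", "commute_to_work": "work", "default": "favorites"}

def pictures_for_scene(activity: str, tagged_pictures: dict) -> list:
    """Return the pictures whose classification word equals the feature word."""
    feature = SCENE_FEATURE.get(activity, "favorites")
    return tagged_pictures.get(feature, [])
```

In this sketch the vehicle terminal would call `infer_activity` once at start-up, then feed the result to `pictures_for_scene` to pick the first-page pictures.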
S22, receiving and executing a picture operation instruction sent by a user through the picture operation interface.
Referring to fig. 3, a command execution flow chart of an embodiment of the picture-based vehicle-to-machine interaction method according to the present invention is shown. As shown in fig. 3, S22 includes:
s221, receiving a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface.
Specifically, the corresponding relation between the touch position and the picture operation instruction is predefined at different positions in the picture operation interface, and when the touch operation of a user on a certain touch position is received, the picture operation instruction corresponding to the action of the touch position is found out.
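A minimal sketch of the predefined touch-position-to-instruction mapping described above; the region names and instruction names are hypothetical, since the patent does not name them.

```python
# Hypothetical mapping from touch regions of the picture operation
# interface to picture operation instructions.
TOUCH_REGIONS = {
    "swipe_from_top":    "switch_application",
    "swipe_from_bottom": "switch_within_application",
    "swipe_from_left":   "choose_picture_category",
    "swipe_from_right":  "show_third_party_recommendations",
    "tap_edit_icon":     "edit_picture_attributes",
}

def resolve_instruction(touch_event: str) -> str:
    """Look up the predefined instruction for the touched position."""
    try:
        return TOUCH_REGIONS[touch_event]
    except KeyError:
        return "ignore"  # touches outside the defined regions do nothing
```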
And S222, displaying information required to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, audio playing information, communication information, associated vehicle information, setting information and mall information matched with the picture.
In this embodiment, the mall information includes status information of the picture of the transaction.
Referring to fig. 4, a flowchart of a vehicle-computer interaction method based on a picture frame according to an embodiment of the invention is shown.
As shown in fig. 4, S222 includes:
S222A, analyzing the user interaction requirement contained in the picture operation instruction.
Specifically, while browsing pictures the user may need to invoke other functions or switch the category of the browsed pictures; at this moment a user interaction requirement is generated.
And S222B, displaying corresponding browsed information according to the user interaction requirement.
Specifically, when the user needs to browse the pictures disclosed by others, browsing information corresponding to the interaction requirement of browsing the pictures disclosed by others is correspondingly displayed on the interface.
Referring to fig. 5, a schematic diagram of command classification in an embodiment of a vehicle-computer interaction method based on a picture frame according to the present invention is shown. As shown in fig. 5, the picture operation instruction includes a preset direction movement instruction of a current picture displayed on a screen of the vehicle side and a preset direction movement instruction of the screen of the vehicle side, and the picture operation instruction further includes a preset instruction for editing a property of a displayed picture, that is, an editing instruction of a picture property.
Referring to fig. 6, a schematic diagram of a command interface of an embodiment of a vehicle-computer interaction method based on a picture frame according to the present invention is shown. As shown in fig. 6, letters A, B, C, D, E, G, H represent different picture manipulation instructions, respectively. The method specifically comprises the following commands:
letter a: and the corresponding interaction gesture sliding on the screen represents that the user needs to execute the selection and switching of different applications.
Specifically, after the corresponding picture operation instruction is executed on the upper side of the screen, different applications are displayed on the upper side of the screen, including: navigation, music, sound, communication, vehicle, setup, mall. For example, when the user currently uses the navigation function and wants to play music after the navigation setting is completed, application switching is required, and the executed navigation function can be switched to the music function through the interaction gesture sliding in on the upper side of the corresponding screen.
Letter B: and corresponding to the interaction gesture sliding in from the lower side of the screen, indicating that the user needs to execute the switching of the currently used application.
Specifically, when the user plays music, and does not want to listen to the currently played song, the song can be switched through the interaction gesture sliding in from the lower side of the screen.
Letter C: and corresponding to the interaction gesture sliding in on the right side of the screen, indicating that the user needs to receive the recommendation information of the third party.
Specifically, the user can publish pictures of the same type on a third-party social networking site or in third-party software, and likewise other people can publish their pictures there; after the third-party website collates the data, it can recommend pictures of the user or of others from pictures of the same type according to the user's preferences, for the user to browse and select.
Letter D: and corresponding to an interaction gesture sliding in from the left side of the screen, indicating that the user needs to make a selection on the classification of the pictures.
Specifically, when a user synchronizes photos through a mobile device, a plurality of different types of photos can be simultaneously imported into a vehicle terminal, and through an interaction gesture of sliding in on the left side of a screen, the user can determine a type from the synchronized different types of photos, and then browse the type of photos.
Letter E: and the corresponding editing icon displayed at the corresponding position of the screen is clicked, and the interactive gesture of clicking the editing icon indicates that the user needs to perform attribute editing on the displayed picture.
Specifically, the editing icon is pen-shaped, the category to which the current picture belongs can be modified through the interaction gesture of the editing icon, the current picture is set as a cover, winning is shared in a third-party social networking site or software through the current picture, the current display picture is modified into a privacy mode, the current picture is deleted, and if the current picture browsed by a user is a picture disclosed by other people, the current picture disclosed by other people can be reported when the picture is considered to be improper.
Letter G: and corresponding to the interaction gesture of sliding left and right of the current picture, indicating that the user needs to browse the pictures of the same category.
Specifically, through the interaction gesture of sliding the currently displayed picture left and right, the user can browse his or her own pictures that have been synchronized to the car machine, including pictures shot and edited by the user and pictures purchased from other people. It should be noted that, because a picture carries attribute information, after the user purchases it in the mall the corresponding application function of the picture can be triggered, for example navigating to the location of the building in the picture or dialing the ordering telephone of the restaurant involved in the picture.
Letter H: and corresponding to the interaction gesture of sliding up and down the current picture, indicating that the user needs to browse pictures recommended by other people.
Specifically, through the interaction gesture that the current picture slides up and down, the user can browse the pictures disclosed by other people in the third-party social networking site or software, and interact with other people or purchase the pictures.
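One possible way to wire the lettered gestures A–H of fig. 6 to their behaviours is a simple dispatch table; the handler names and the `state` dictionary are assumptions made for illustration, not part of the claims.

```python
# Dispatch for the lettered gestures (A-H) of Fig. 6.
def switch_application(state):      # gesture A: slide in from top edge
    state["view"] = "application_bar"

def switch_in_application(state):   # gesture B: slide in from bottom edge
    state["view"] = "in_app_switch"

def third_party_recs(state):        # gesture C: slide in from right edge
    state["view"] = "third_party_recommendations"

def select_category(state):         # gesture D: slide in from left edge
    state["view"] = "category_selector"

def edit_attributes(state):         # gesture E: tap the pen-shaped edit icon
    state["view"] = "attribute_editor"

def browse_same_category(state):    # gesture G: horizontal slide on picture
    state["view"] = "own_pictures_same_category"

def browse_recommended(state):      # gesture H: vertical slide on picture
    state["view"] = "recommended_pictures"

GESTURES = {
    "A": switch_application,
    "B": switch_in_application,
    "C": third_party_recs,
    "D": select_category,
    "E": edit_attributes,
    "G": browse_same_category,
    "H": browse_recommended,
}

def handle_gesture(letter: str, state: dict) -> dict:
    """Apply the handler for the given gesture; unknown letters are ignored."""
    GESTURES.get(letter, lambda s: None)(state)
    return state
```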
The present embodiment provides a computer storage medium on which a computer program is stored; when the program is executed by a processor, the picture-framework-based vehicle-machine interaction method is implemented.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be performed by hardware related to a computer program. The aforementioned computer program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned computer-readable storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
In practical application, the picture-framework-based vehicle-machine interaction method provided by this embodiment is described below with family pictures as a concrete implementation case.
Referring to fig. 7, a flowchart of a family picture interaction in an embodiment of a car-to-machine interaction method based on a picture framework according to the present invention is shown.
S71, according to the user's behavior habit data, combined with the current time and place information, it is presumed that the user is leaving work and going home.
Specifically, when the user brings the mobile device into the car machine on a Wednesday afternoon, it is judged that the user has just finished work and needs to drive home.
S72, the required application scene is judged to be home, and the feature word 'home' of the application scene is extracted; this feature word is consistent with the classification word 'home' used in classifying the synchronized pictures.
S73, searching for a picture classified as 'home', and taking the picture as a picture matched with the application scene.
And S74, displaying the picture classified as 'home' on a screen of a vehicle machine side, and generating a picture operation interface.
S75, a picture operation instruction of sliding the current page left or right, sent by the user through a corresponding touch position in the picture operation interface, is received.
Specifically, when the user starts the car machine in winter, the car machine needs to preheat for a period of time; at this moment, the user can browse pictures of family members through the picture operation instruction of sliding the current page left or right, so as to satisfy the user's longing for family members at that moment. Alternatively, if the user has not decided what to eat for dinner on the way home, a previously taken picture of a family dinner may provide a prompt.
S76, the user interaction requirement contained in the picture operation instruction, namely browsing pictures of the same category, is analyzed, and picture information of the same category is displayed according to the interaction requirement for the user to browse.
Specifically, according to the interaction gesture of sliding the current page left or right, sent repeatedly by the user, multiple pieces of the user's picture information of the same category are displayed in sequence.
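The S71–S76 flow above can be sketched end to end. This is a minimal illustration assuming simple dict-based stores; the habit records, field names, and matching rule (same weekday and place, hour within one of the habitual hour) are all illustrative assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the S71-S76 family-picture flow.
def infer_scene(habits, weekday, hour, place):
    """S71-S72: presume the user's next plan from habit history plus the
    current time and place, and return the scene's feature word."""
    for record in habits:
        if (record["weekday"] == weekday
                and abs(record["hour"] - hour) <= 1
                and record["place"] == place):
            return record["scene"]  # e.g. "home" for the after-work commute
    return None

def match_pictures(pictures, feature_word):
    """S73: select pictures whose classification word equals the feature word."""
    return [p for p in pictures if p["category"] == feature_word]

habits = [{"weekday": 3, "hour": 18, "place": "office", "scene": "home"}]
pictures = [{"name": "dinner.jpg", "category": "home"},
            {"name": "beach.jpg", "category": "travel"}]

scene = infer_scene(habits, weekday=3, hour=18, place="office")
matched = match_pictures(pictures, scene)  # S74 would display these on screen
```

Sliding gestures (S75–S76) would then page through `matched` in sequence.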
The picture-framework-based vehicle-machine interaction method of this embodiment can improve the user's experience of the in-vehicle environment, better connect the user's emotions, life and memories through rich interaction functions, and satisfy the user's emotional and cultural needs.
Example two
This embodiment provides a picture-framework-based vehicle-machine interaction system, which includes:
the display module is used for displaying the pictures matched with the application scene on a screen of the vehicle-mounted terminal and generating a picture operation interface;
and the operation module is used for receiving and executing the picture operation instruction sent by the user through the picture operation interface.
The picture-framework-based vehicle-machine interaction system provided in this embodiment is described in detail below with reference to the drawings. It should be noted that the division of the modules of the following system is merely a division of logical functions; in actual implementation, the modules may be fully or partially integrated into one physical entity, or may be physically separated. The modules may all be implemented in the form of software called by a processing element, all in the form of hardware, or partly in software called by a processing element and partly in hardware. For example, the x module may be a separately established processing element, or may be integrated in a chip of the system described below; the x module may also be stored in the memory of the following system in the form of program code, with its function called and executed by a processing element of the system. The implementation of the other modules is similar. All or part of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal-processing capability. In implementation, each step of the above method, or each module below, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
The following modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), one or more digital signal processors (Digital Signal Processor, DSP for short), one or more field-programmable gate arrays (Field Programmable Gate Array, FPGA for short), and the like. When a module is implemented in the form of a processing element calling program code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU for short) or another processor that can call program code. These modules may also be integrated together and implemented in the form of a system-on-a-chip (System-on-a-Chip, SoC for short).
Referring to fig. 8, a schematic diagram of a vehicle-computer interaction system based on a picture frame according to an embodiment of the invention is shown. As shown in fig. 8, the vehicle-computer interaction system 8 based on the picture framework includes: a display module 81 and an operation module 82.
The display module 81 is configured to display a picture matched with the application scene on a screen of the vehicle-mounted device, and generate a picture operation interface.
In this embodiment, the pictures are edited and classified in advance by the mobile device, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by others in the application program.
Specifically, when matching the category of a picture with an application scene, a matching module presumes the user's behavior dynamics according to the user's behavior habit data combined with the current time and place information. The user's behavior habit data refers to historical data of the corresponding behavior activities performed by the user at different times and places, and the user's behavior dynamics refers to the user's possible next plan at the current time and place. The required application scene is judged according to the user's behavior dynamics, and the feature words of the application scene are extracted; the feature words are consistent with the classification words used in classifying the synchronized pictures. Pictures whose classification words are the same as the feature words are then searched for and taken as the pictures matched with the application scene.
The operation module 82 is configured to receive and execute a picture operation instruction sent by a user through the picture operation interface.
In this embodiment, the operation module 82 is specifically configured to receive a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface; displaying information to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, audio playing information, communication information, associated vehicle information, setting information and mall information matched with the picture; the mall information includes status information of the picture of the transaction.
Specifically, the operation module 82 analyzes the user interaction requirement contained in the picture operation instruction; and displaying corresponding browsed information according to the user interaction requirement.
In this embodiment, the picture operation instruction includes a preset-direction movement instruction for the current picture displayed on the screen of the vehicle machine and a preset-direction movement instruction for the screen of the vehicle machine. Sliding the current picture up or down indicates that the user needs to browse pictures recommended by others; sliding the current picture left or right indicates that the user needs to browse pictures of the same category; sliding from the left side of the screen indicates that the user needs to make a selection among the picture classifications; sliding from the right side of the screen indicates that the user needs to receive recommendation information from a third party; sliding up on the screen indicates that the user needs to select and switch between different applications; and sliding down on the screen indicates that the user needs to switch the currently used application.
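The gesture-to-requirement mapping above amounts to a lookup table. The sketch below illustrates this; the gesture tokens are hypothetical labels for illustration, not a real touch-event API.

```python
# Illustrative mapping from touch gestures to the interaction
# requirements described in this embodiment.
GESTURE_REQUIREMENTS = {
    "picture_swipe_vertical":   "browse pictures recommended by others",
    "picture_swipe_horizontal": "browse pictures of the same category",
    "screen_left_edge_swipe":   "select a picture classification",
    "screen_right_edge_swipe":  "receive third-party recommendation information",
    "screen_swipe_up":          "select and switch between different applications",
    "screen_swipe_down":        "switch the currently used application",
}

def resolve_requirement(gesture: str) -> str:
    """Map a touch gesture to the interaction requirement it expresses."""
    return GESTURE_REQUIREMENTS.get(gesture, "unrecognized gesture")
```

The operation module would first resolve a gesture to a requirement this way, then display the corresponding browsed information.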
The picture-framework-based vehicle-machine interaction system of this embodiment can improve the user's experience of the in-vehicle environment, better connect the user's emotions, life and memories through rich interaction functions, and satisfy the user's emotional and cultural needs.
Example III
The present embodiment provides an apparatus including: a processor, a memory, a transceiver, a communication interface and/or a system bus. The memory and the communication interface are connected with the processor and the transceiver through the system bus and complete communication with each other; the memory is used for storing a computer program, the communication interface is used for communicating with other devices, and the processor and the transceiver are used for running the computer program so that the apparatus executes the steps of the picture-framework-based vehicle-machine interaction method.
The system bus mentioned above may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The system bus may be classified into an address bus, a data bus, a control bus, and the like. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library and a read-only library). The memory may comprise random access memory (Random Access Memory, RAM) and may also comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The protection scope of the picture-framework-based vehicle-machine interaction method is not limited to the execution sequence of the steps listed in this embodiment; all schemes implemented by adding, removing or replacing steps in the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The invention also provides a picture-framework-based vehicle-machine interaction system, which can implement the picture-framework-based vehicle-machine interaction method; however, the device for implementing the method includes, but is not limited to, the structure of the system listed in this embodiment, and all structural modifications and substitutions of the prior art made according to the principles of the present invention are included in the protection scope of the present invention. It should be noted that the picture-framework-based vehicle-machine interaction method and system are also applicable to content in other multimedia forms, such as videos and friend-circle messages, which are likewise included in the protection scope of the present invention.
In summary, according to the picture-framework-based vehicle-machine interaction method, system, medium and apparatus of the present invention, by synchronizing the pictures edited in the application program of the mobile device to the vehicle machine, more content related to the user's life can be displayed on the vehicle display screen, even as a home page; more flexible and humanized interaction modes are provided for the user, the user's emotional interaction experience in the vehicle is improved, and the monotonous and boring in-vehicle driving environment is improved. The invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present invention.
Claims (11)
1. A car-machine interaction method based on a picture framework is characterized in that,
the vehicle-computer interaction method based on the picture framework comprises the following steps:
after the car machine is started, establishing a communication connection between the car machine and the mobile device to receive screen-projection content of the mobile device for displaying a picture framework, wherein the pictures are edited and classified in advance by the mobile device so that the categories of the pictures are used for matching application scenes;
displaying the picture matched with the application scene on a screen of a vehicle-mounted terminal, and generating a picture operation interface;
receiving and executing a picture operation instruction sent by a user through the picture operation interface;
the matching of the category of the picture with the application scene comprises the following steps:
the method comprises the steps of estimating the behavior dynamics of a user according to behavior habit data of the user and combining current time and place information, wherein the behavior habit data of the user refers to historical data of corresponding behavior activities of the user at different times and places, and the behavior dynamics of the user refers to a possible next plan of the user at the current time and place;
judging a required application scene according to the user's behavior dynamics, and extracting feature words of the application scene, wherein the feature words are consistent with the classification words used in classifying the synchronized pictures;
and searching the pictures of which the classification words are the same as the characteristic words, and taking the pictures as pictures matched with the application scene.
2. The picture-framework-based car-to-machine interaction method according to claim 1, wherein,
the receiving and executing the picture operation instruction sent by the user through the picture operation interface comprises the following steps:
receiving a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface;
and executing the picture operation instruction.
3. The picture-framework-based car-to-machine interaction method according to claim 2, wherein,
and the corresponding relation between the touch position and the picture operation instruction is predefined in different positions in the picture operation interface.
4. The picture-framework-based car-to-machine interaction method according to claim 2, wherein,
the picture operation instruction comprises a preset direction movement instruction of a current picture displayed on a screen of the vehicle side, a preset direction movement instruction of the screen of the vehicle side and/or a preset instruction for editing the attribute of the displayed picture.
5. The picture-framework-based car-to-machine interaction method according to claim 4, wherein,
the preset instruction for editing the attribute of the displayed picture comprises the following steps:
modifying the category to which the current picture belongs, setting the current picture as a cover, sharing the current picture to a third-party social networking site or software, setting the currently displayed picture to a privacy mode, deleting the current picture, and/or reporting a current picture published by another person.
6. The picture-framework-based car-to-machine interaction method according to claim 1, wherein,
the pictures comprise pictures shot by the user and pictures disclosed by others in the application program.
7. The picture-framework-based car-to-machine interaction method according to claim 6, wherein,
the picture carries attribute information, and after the picture is purchased, the corresponding application function of the picture can be triggered, including navigating to the location of a building in the picture or dialing the ordering telephone of a restaurant involved in the picture.
8. The picture-framework-based car-to-machine interaction method according to claim 1 or 2, wherein,
the executing the picture operation instruction comprises:
displaying information to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, audio playing information, communication information, associated vehicle information, setting information and mall information matched with the picture;
the mall information includes status information of the picture of the transaction.
9. The picture-framework-based car-to-machine interaction method according to claim 8, wherein,
the displaying the information to be browsed by the user according to the picture operation instruction comprises the following steps:
analyzing user interaction requirements contained in the picture operation instruction;
and displaying corresponding browsed information according to the user interaction requirement.
10. The picture-framework-based car-to-machine interaction method according to claim 9, wherein,
the user interaction requirement includes:
executing selection and switching of different applications;
executing switching of the current use application;
receiving recommendation information of a third party;
making a selection of a classification of the picture;
editing the attribute of the displayed picture;
browsing pictures of the same category; and/or
And browsing pictures recommended by others.
11. The picture-framework-based car-to-machine interaction method according to claim 2, wherein,
the receiving a picture operation instruction sent by the user through a corresponding touch position in the picture operation interface comprises:
when a touch operation of the user on the touch position is received, searching for the picture operation instruction corresponding to the touch operation at the touch position, wherein: sliding the current picture up or down indicates that the user needs to browse pictures recommended by others; sliding the current picture left or right indicates that the user needs to browse pictures of the same category; sliding from the left side of the screen indicates that the user needs to make a selection among the picture classifications; sliding from the right side of the screen indicates that the user needs to receive recommendation information from a third party; sliding up on the screen indicates that the user needs to select and switch between different applications; and sliding down on the screen indicates that the user needs to switch the currently used application.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210706972.7A CN114936000B (en) | 2019-12-26 | 2019-12-26 | Vehicle-machine interaction method, system, medium and equipment based on picture framework |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911367475.3A CN111158573B (en) | 2019-12-26 | 2019-12-26 | Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework |
CN202210706972.7A CN114936000B (en) | 2019-12-26 | 2019-12-26 | Vehicle-machine interaction method, system, medium and equipment based on picture framework |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911367475.3A Division CN111158573B (en) | 2019-12-26 | 2019-12-26 | Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114936000A CN114936000A (en) | 2022-08-23 |
CN114936000B true CN114936000B (en) | 2024-02-13 |
Family
ID=70558363
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911367475.3A Active CN111158573B (en) | 2019-12-26 | 2019-12-26 | Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework |
CN202210706972.7A Active CN114936000B (en) | 2019-12-26 | 2019-12-26 | Vehicle-machine interaction method, system, medium and equipment based on picture framework |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN111158573B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111782166B (en) * | 2020-06-30 | 2024-02-09 | 深圳赛安特技术服务有限公司 | Multi-screen interaction method, device, equipment and storage medium |
CN115017352B (en) * | 2022-06-22 | 2023-04-07 | 润芯微科技(江苏)有限公司 | Mobile phone car machine interaction system based on image recognition |
CN117971142A (en) * | 2022-10-25 | 2024-05-03 | 蔚来移动科技有限公司 | Screen projection method, device, vehicle and storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103377276A (en) * | 2012-04-16 | 2013-10-30 | 宏达国际电子股份有限公司 | Method for offering suggestion during conversation and electronic device using the same |
CN103780702A (en) * | 2014-02-17 | 2014-05-07 | 重庆长安汽车股份有限公司 | Vehicle-mounted amusement device and mobile phone interactive system and method |
CN104333844A (en) * | 2014-11-12 | 2015-02-04 | 沈阳美行科技有限公司 | Interconnection method of vehicle-mounted terminal and smart phone |
CN105549821A (en) * | 2015-12-18 | 2016-05-04 | 东软集团股份有限公司 | Interconnecting method, device and system of mobile equipment and car-mounted information entertainment product |
CN105868360A (en) * | 2016-03-29 | 2016-08-17 | 乐视控股(北京)有限公司 | Content recommendation method and device based on voice recognition |
CN105867640A (en) * | 2016-05-12 | 2016-08-17 | 上海擎感智能科技有限公司 | Smart glasses and control method and control system of smart glasses |
CN106454689A (en) * | 2015-08-07 | 2017-02-22 | 深圳前海智云谷科技有限公司 | Information synchronization method of mobile terminal and vehicle-mounted electronic device |
CN107391521A (en) * | 2016-05-17 | 2017-11-24 | 谷歌公司 | Expand message exchange topic automatically based on message category |
CN107391522A (en) * | 2016-05-17 | 2017-11-24 | 谷歌公司 | Optional application link is incorporated into message exchange topic |
CN107391524A (en) * | 2016-05-17 | 2017-11-24 | 谷歌公司 | Strengthen message exchange topic |
CN107408125A (en) * | 2015-07-13 | 2017-11-28 | 谷歌公司 | For inquiring about the image of answer |
CN107479818A (en) * | 2017-08-16 | 2017-12-15 | 维沃移动通信有限公司 | A kind of information interacting method and mobile terminal |
CN107977152A (en) * | 2017-11-30 | 2018-05-01 | 努比亚技术有限公司 | A kind of picture sharing method, terminal and storage medium based on dual-screen mobile terminal |
CN108803879A (en) * | 2018-06-19 | 2018-11-13 | 驭势(上海)汽车科技有限公司 | A kind of preprocess method of man-machine interactive system, equipment and storage medium |
CN110225379A (en) * | 2018-03-02 | 2019-09-10 | 上海博泰悦臻电子设备制造有限公司 | Intelligent terminal screen sharing method and system, car-mounted terminal based on car-mounted terminal |
CN110365836A (en) * | 2019-06-06 | 2019-10-22 | 华为技术有限公司 | A kind of reminding method of notice, terminal and system |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7451188B2 (en) * | 2005-01-07 | 2008-11-11 | At&T Corp | System and method for text translations and annotation in an instant messaging session |
WO2012154440A2 (en) * | 2011-05-06 | 2012-11-15 | Evans Michael Shepherd | System and method for including advertisements in electronic communications |
US8819006B1 (en) * | 2013-12-31 | 2014-08-26 | Google Inc. | Rich content for query answers |
CN104129347B (en) * | 2014-08-04 | 2016-08-24 | 京乐驰光电技术(北京)有限公司 | Control method between onboard system and terminal |
US9965559B2 (en) * | 2014-08-21 | 2018-05-08 | Google Llc | Providing automatic actions for mobile onscreen content |
EP2996030A1 (en) * | 2014-09-15 | 2016-03-16 | Quanta Storage Inc. | System and method for interacting screens in a car to perform remote operation |
CN107563826A (en) * | 2016-07-01 | 2018-01-09 | 阿里巴巴集团控股有限公司 | The method and apparatus operated based on object picture to destination object |
CN108382305B (en) * | 2018-02-11 | 2020-04-21 | 北京车和家信息技术有限公司 | Image display method and device and vehicle |
2019
- 2019-12-26 CN CN201911367475.3A patent/CN111158573B/en active Active
- 2019-12-26 CN CN202210706972.7A patent/CN114936000B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114936000A (en) | 2022-08-23 |
CN111158573A (en) | 2020-05-15 |
CN111158573B (en) | 2022-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2784646B1 (en) | Method and Device for Executing Application | |
CN106790120B (en) | Terminal equipment and video stream associated information live broadcast control and interaction method | |
CN114936000B (en) | Vehicle-machine interaction method, system, medium and equipment based on picture framework | |
CN103914502B (en) | The method and its terminal of the intelligent search service of use situation identification | |
WO2021258821A1 (en) | Video editing method and device, terminal, and storage medium | |
CN104571877A (en) | Display processing method and device for pages | |
CN106789551B (en) | Conversation message methods of exhibiting and device | |
CN104077026A (en) | Device and method for displaying execution result of application | |
TW201923630A (en) | Processing method, device, apparatus, and machine-readable medium | |
WO2018113064A1 (en) | Information display method, apparatus and terminal device | |
CN104572853A (en) | Searching method and searching device | |
CN110322305A (en) | Data object information providing method, device and electronic equipment | |
CN109144285A (en) | A kind of input method and device | |
CN105373580A (en) | Method and device for displaying subjects | |
CN103365550A (en) | User information setting method and device and client device | |
CN106815291A (en) | Search result items exhibiting method, device and the device represented for search result items | |
CN107526740A (en) | A kind of method and electronic equipment for showing search result | |
CN109683760B (en) | Recent content display method, device, terminal and storage medium | |
KR20140090114A (en) | Keyword search method and apparatus | |
CN110489635B (en) | Data object search control method, device and system | |
CN113835594A (en) | Interaction method and device, electronic equipment and readable storage medium | |
CN113032163A (en) | Resource collection method and device, electronic equipment and medium | |
CN107194004B (en) | Data processing method and electronic equipment | |
CN111046196A (en) | Voice comment method, system, medium and device based on picture | |
KR101648783B1 (en) | Mobile electronic-display application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||