CN111158573B - Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework

Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework

Info

Publication number
CN111158573B
CN111158573B (application CN201911367475.3A)
Authority
CN
China
Prior art keywords
picture
user
pictures
information
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911367475.3A
Other languages
Chinese (zh)
Other versions
CN111158573A (en)
Inventor
徐婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qwik Smart Technology Co Ltd
Original Assignee
Shanghai Qwik Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qwik Smart Technology Co Ltd filed Critical Shanghai Qwik Smart Technology Co Ltd
Priority to CN202210706972.7A priority Critical patent/CN114936000B/en
Priority to CN201911367475.3A priority patent/CN111158573B/en
Publication of CN111158573A publication Critical patent/CN111158573A/en
Application granted granted Critical
Publication of CN111158573B publication Critical patent/CN111158573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04842 Interaction techniques based on graphical user interfaces [GUI] for the selection of displayed objects or displayed text elements
    • G06F 3/04847 Interaction techniques based on graphical user interfaces [GUI] to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gesture or text
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects

Abstract

The invention provides a vehicle-machine interaction method, system, medium and device based on a picture framework. The picture-framework-based vehicle-machine interaction method comprises the following steps: inferring the user's behavior dynamics from the user's behavior habit data in combination with the current time and place information; determining the required application scene from the behavior dynamics and extracting feature words of the application scene, the feature words being consistent with the classification words used when the synchronized pictures were classified; searching for the pictures whose classification words are the same as the feature words, taking them as the pictures matched with the application scene, displaying them on the vehicle-end screen, and generating a picture operation interface; and receiving and executing the picture operation instructions sent by the user through the picture operation interface. The invention extends the interaction modes available on a picture-framework application, synchronizes pictures to the vehicle end over a communication connection, and enriches the user's scene experience through interactive operation at the vehicle end.

Description

Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
Technical Field
The invention belongs to the field of intelligent interaction, relates to an interaction method based on a picture framework, and particularly relates to a vehicle-mounted machine interaction method, a system, a medium and equipment based on the picture framework.
Background
Each generation of screens, from the television to the PC, the mobile phone and the tablet, has created a huge market. The screen itself is not what matters most; what retains the widest user base is content delivered through a common system within a stable ecosystem. The central control display of the car, long "ignored", is becoming increasingly important, whether as an Internet-of-Vehicles interactive terminal or as one of the user-facing portals of the future. The in-car display has kept gaining functions: the radio that arrived with the rise of wireless broadcasting evolved into cassette players, CDs, MP3, MP4 and GPS units with multimedia capability, and features such as 360-degree surround view and thermal imaging have now been added. The multimedia functions of the car machine are therefore richer and richer, but this richer content must be built on strong network communication.
At present, the most widely used mobile-phone interconnection applications are Yilian and Baidu. After Yilian connects to the mobile phone, the phone screen is projected onto the car display and the two screens interact, which works better on a portrait screen; Baidu's interconnection application provides built-in functions after the phone is connected, mainly online navigation and online music, with a dedicated user interface and a better experience.
However, the prior art still does little to relieve the monotonous and boring environment the user faces in the car machine, especially with regard to the display content and the interaction modes of the car-machine display screen.
Therefore, providing a car-machine interaction method, system, medium and device based on a picture framework, so as to remedy the inability of the prior art to offer rich display content and interaction modes on the car-machine display screen, has become a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a car-mounted device interaction method, system, medium and apparatus based on a picture framework, which are used to solve the problem that the prior art cannot realize rich display contents and interaction modes of a car-mounted device display screen.
In order to achieve the above and other related objects, in one aspect, the present invention provides a picture-frame-based car-machine interaction method, which is characterized in that the picture-frame-based car-machine interaction method includes: displaying the picture matched with the application scene on a screen at the vehicle machine end, and generating a picture operation interface; and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
In an embodiment of the present invention, the step of receiving and executing the picture operation instruction sent by the user through the picture operation interface includes: receiving a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface; displaying information which needs to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, sound playing information, communication information, associated vehicle information, setting information and mall information which are matched with the picture; the mall information includes status information of the picture of the transaction.
In an embodiment of the present invention, the step of displaying the information that the user needs to browse according to the picture operation instruction includes: analyzing user interaction requirements contained in the picture operation instruction; and displaying corresponding browsing information according to the user interaction requirement.
In an embodiment of the invention, the picture operation instruction includes a preset direction moving instruction of a current picture displayed on the vehicle-end screen and a preset direction moving instruction of the vehicle-end screen.
In an embodiment of the invention, sliding the current picture up or down indicates that the user needs to browse pictures recommended by others; sliding the current picture left or right indicates that the user needs to browse pictures of the same category; sliding in from the left side of the screen indicates that the user needs to select a picture classification; sliding in from the right side of the screen indicates that the user needs to receive recommendation information from a third party; sliding in from the upper side of the screen indicates that the user needs to select and switch between different applications; and sliding in from the lower side of the screen indicates that the user needs to perform a switch within the currently used application.
In an embodiment of the present invention, the pictures are edited and classified in advance through a mobile device, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by other people in the application program.
In an embodiment of the present invention, before the step of displaying the picture matched with the application scene on the vehicle-end screen and generating the picture operation interface, the picture-framework-based vehicle-machine interaction method further includes: inferring the user's behavior dynamics from the user's behavior habit data in combination with the current time and place information, where the behavior habit data refer to historical records of the activities the user has performed at different times and places, and the behavior dynamics refer to the next action the user is likely to take at the current time and place; determining the required application scene from the behavior dynamics and extracting the feature words of the application scene, the feature words being consistent with the classification words used when the synchronized pictures were classified; and searching for the pictures whose classification words are the same as the feature words, and taking them as the pictures matched with the application scene.
Another aspect of the invention provides a picture-framework-based vehicle-machine interaction system, which comprises: a display module, configured to display the picture matched with the application scene on the vehicle-end screen and to generate a picture operation interface; and an operation module, configured to receive and execute the picture operation instruction sent by the user through the picture operation interface.
Still another aspect of the present invention provides a medium on which a computer program is stored; the program, when executed by a processor, implements the picture-framework-based vehicle-machine interaction method.
A final aspect of the invention provides an apparatus comprising: a processor and a memory; the memory is used for storing computer programs, and the processor is used for executing the computer programs stored in the memory, so that the equipment can execute the car machine interaction method based on the picture framework.
As described above, the picture framework based vehicle-machine interaction method, system, medium and device of the present invention have the following beneficial effects:
By synchronizing the pictures edited in the application program of the mobile device to the vehicle end, more content related to the user's life can be displayed on the car-machine display screen and used as its home page, which provides the user with more flexible and humanized interaction modes, improves the user's emotional interaction experience in the car, and relieves the monotonous and boring driving environment inside the car.
Drawings
Fig. 1 is a diagram illustrating an application scene architecture of the car-machine interaction method based on a picture architecture in an embodiment of the invention.
Fig. 2 is a schematic flowchart of a car-machine interaction method based on a picture framework according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating command execution of the car-machine interaction method based on the picture frame according to an embodiment of the present invention.
Fig. 4 is a display flowchart of the vehicle-mounted device interaction method based on the picture frame according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating command classification of the car-machine interaction method based on the picture frame according to an embodiment of the invention.
Fig. 6 is a schematic diagram of a command interface of the vehicle-mounted device interaction method based on a picture framework according to an embodiment of the invention.
Fig. 7 is a flowchart illustrating a family picture interaction method of a vehicle-mounted device interaction method based on a picture framework according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a vehicle-mounted device interactive system based on a picture framework according to an embodiment of the invention.
Description of the element reference numerals
8 Vehicle-mounted machine interaction system based on picture framework
81 display module
82 operating module
S21-S22 picture framework-based vehicle-machine interaction method and steps
S221-S222 call flow steps
S222A-S222B trigger the flow steps
Interaction flow steps of S71-S76 family pictures
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The technical principles of the vehicle-machine interaction method, the system, the medium and the equipment based on the picture framework are as follows: displaying the picture matched with the application scene on a screen at the vehicle machine end, and generating a picture operation interface; and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
Example one
The embodiment provides a vehicle-mounted machine interaction method based on a picture framework, which comprises the following steps:
displaying the picture matched with the application scene on a screen at the vehicle machine end, and generating a picture operation interface;
and receiving and executing a picture operation instruction sent by a user through the picture operation interface.
The picture-framework-based car-machine interaction method provided by this embodiment will be described in detail with reference to the drawings. Please refer to fig. 1, which shows an application scenario architecture diagram of the picture-framework-based car-machine interaction method according to an embodiment of the present invention. As shown in fig. 1, a user carries a mobile device with the application program installed into the car; after the car is started, the mobile device and the car machine establish a communication connection over USB or wirelessly. In this embodiment, the mobile device includes a smart phone, a notebook computer, a PDA (Personal Digital Assistant) or another mobile device capable of picture editing and interaction. The car machine includes an intelligent car machine with the intelligent application program embedded, or an existing general-purpose car machine with a vehicle-mounted display function. After the intelligent application program is executed, content synchronization and function fusion between the mobile device and the car machine can be completed; an existing general-purpose car machine with a vehicle-mounted display function can receive screen-projected content from the mobile device to display the picture framework, so that the interaction is realized.
Please refer to fig. 2, which is a schematic flowchart illustrating an exemplary embodiment of a car-machine interaction method based on a picture frame according to the present invention. As shown in fig. 2, the picture framework based vehicle-mounted device interaction method specifically includes the following steps:
and S21, displaying the picture matched with the application scene on a screen of the vehicle end, and generating a picture operation interface.
In this embodiment, the pictures are edited and classified in advance through the mobile device, so that the picture categories can be used to match application scenes; the pictures include pictures taken by the user and pictures made public by others in the application program.
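A minimal sketch of what such a synchronized picture record might look like is given below in Kotlin. The class and field names (SyncedPicture, classificationWord, attributes and so on) are illustrative assumptions; the patent does not prescribe a concrete data schema.

```kotlin
// Illustrative sketch only: one possible shape for a picture synchronized from the
// mobile device, assuming the classification word chosen during editing travels
// with the picture so it can later be matched against a scene's feature word.
data class SyncedPicture(
    val id: String,                                   // identifier assigned at synchronization time
    val uri: String,                                  // location of the image data on the vehicle side
    val classificationWord: String,                   // category label set when editing, e.g. "home"
    val takenByUser: Boolean,                         // true for the user's own photos, false for pictures published by others
    val attributes: Map<String, String> = emptyMap()  // optional metadata, e.g. a place name or a shop phone number
)
```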
Specifically, the matching process of the picture category and the application scene is as follows:
Firstly, the user's behavior dynamics are inferred from the user's behavior habit data in combination with the current time and location information; the behavior habit data refer to historical records of the activities the user has performed at different times and places, and the behavior dynamics refer to the next action the user is likely to take at the current time and place.
Then, the required application scene is determined from the inferred behavior dynamics, and the feature words of the application scene are extracted, the feature words being consistent with the classification words used when the synchronized pictures were classified.
Finally, the pictures whose classification words are the same as the feature words are searched for and taken as the pictures matched with the application scene.
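The three stages just described (infer the behavior dynamics, determine the scene and its feature word, then select the pictures whose classification word equals the feature word) could be sketched as follows. The habit-record shape, the intent strings and the feature-word table are invented for illustration, and the snippet reuses the SyncedPicture sketch above; none of these names come from the patent itself.

```kotlin
// Hypothetical scene-matching sketch: the most frequent past activity in the current
// context stands in for the "behavior dynamics", and a small lookup table stands
// in for feature-word extraction.
data class HabitRecord(val hourOfDay: Int, val dayOfWeek: Int, val place: String, val activity: String)

fun inferUserIntent(habits: List<HabitRecord>, hour: Int, day: Int, place: String): String? =
    habits.filter { it.hourOfDay == hour && it.dayOfWeek == day && it.place == place }
        .groupingBy { it.activity }
        .eachCount()
        .maxByOrNull { it.value }          // most frequent past activity in this context
        ?.key

fun featureWordForScene(intent: String): String = when (intent) {
    "drive home after work" -> "home"      // feature word chosen to match a picture classification word
    "lunch break"           -> "food"
    else                    -> "default"
}

fun picturesForScene(pictures: List<SyncedPicture>, featureWord: String): List<SyncedPicture> =
    pictures.filter { it.classificationWord == featureWord }
```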
And S22, receiving and executing the picture operation instruction sent by the user through the picture operation interface.
Please refer to fig. 3, which is a flowchart illustrating a command execution flow of the car-machine interaction method based on a picture frame according to an embodiment of the present invention. As shown in fig. 3, S22 includes:
and S221, receiving a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface.
Specifically, the correspondence between touch positions and picture operation instructions is predefined for the different positions in the picture operation interface; when a touch operation by the user on a certain touch position is received, the picture operation instruction corresponding to that touch position is looked up.
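A hypothetical sketch of this predefined lookup is shown below; the region bounds, enum values and helper names are assumptions rather than anything fixed by the patent.

```kotlin
// Sketch of mapping predefined touch regions of the picture operation interface
// to picture operation instructions; a received touch point is resolved to the
// instruction whose region contains it.
enum class PictureInstruction {
    SWIPE_PICTURE_VERTICAL, SWIPE_PICTURE_HORIZONTAL,
    EDGE_LEFT, EDGE_RIGHT, EDGE_TOP, EDGE_BOTTOM, EDIT_ICON
}

data class TouchRegion(
    val left: Int, val top: Int, val right: Int, val bottom: Int,
    val instruction: PictureInstruction
) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}

class OperationInterface(private val regions: List<TouchRegion>) {
    // Returns the instruction predefined for the touched position, or null if the
    // touch falls outside every predefined region.
    fun instructionAt(x: Int, y: Int): PictureInstruction? =
        regions.firstOrNull { it.contains(x, y) }?.instruction
}
```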
S222, displaying information which needs to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, sound playing information, communication information, associated vehicle information, setting information and mall information which are matched with the picture.
In this embodiment, the mall information includes status information of the picture of the transaction.
Please refer to fig. 4, which is a flowchart illustrating an exemplary method for vehicle-machine interaction based on a picture frame according to an embodiment of the present invention.
As shown in fig. 4, S222 includes:
S222A, analyzing the user interaction requirements contained in the picture operation instruction.
Specifically, while browsing pictures the user may need to call up other functions or switch the category of the pictures being browsed; at this point a user interaction requirement arises.
S222B, displaying the corresponding browsing information according to the user interaction requirement.
Specifically, when the user needs to browse pictures made public by others, the browsing information corresponding to that interaction requirement is displayed on the interface.
Please refer to fig. 5, which is a schematic diagram illustrating the command classification of the picture-framework-based car-machine interaction method according to an embodiment of the present invention. As shown in fig. 5, the picture operation instruction includes a preset-direction movement instruction for the current picture displayed on the vehicle-end screen and a preset-direction slide-in instruction for the vehicle-end screen; the picture operation instruction further includes an instruction for editing the attributes of the displayed picture, that is, a picture-attribute editing instruction.
Please refer to fig. 6, which is a schematic diagram of the command interface of the picture-framework-based car-machine interaction method according to an embodiment of the present invention. As shown in fig. 6, the letters A, B, C, D, E, G and H represent different picture operation commands, specifically the following:
Letter A: corresponds to the interactive gesture of sliding in from the upper side of the screen, indicating that the user needs to select and switch between different applications.
Specifically, after the picture operation instruction corresponding to the upper-side slide-in is executed, the upper side of the screen displays the different applications, including navigation, music, sound, communication, vehicle, settings and mall. For example, if the user is currently using the navigation function and wants to play music once the navigation has been set, an application switch is required; the upper-side slide-in gesture switches from the running navigation function to the music function.
Letter B: corresponds to the interactive gesture of sliding in from the lower side of the screen, indicating that the user needs to perform a switch within the currently used application.
Specifically, when the user is playing music and does not want to listen to the currently playing song, the song can be switched by the lower-side slide-in gesture.
Letter C: corresponds to the interactive gesture of sliding in from the right side of the screen, indicating that the user needs to receive recommendation information from a third party.
Specifically, the user can make pictures of a given category public on a third-party social network site or in third-party software, and other people can do the same; after the third party has aggregated this data, it can recommend photos of the user or of others from the same category according to the user's preferences, for the user to browse and select.
Letter D: corresponds to the interactive gesture of sliding in from the left side of the screen, indicating that the user needs to select a picture classification.
Specifically, when the user synchronizes photos through the mobile device, pictures of several different classifications can be imported to the vehicle end at the same time; through the left-side slide-in gesture, the user can pick one category from the synchronized classifications and then browse the pictures of that category.
Letter E: corresponds to an editing icon displayed at the corresponding position of the screen; tapping the editing icon indicates that the user needs to edit the attributes of the displayed picture.
Specifically, the editing icon has the shape of a pen. Through the gesture on the editing icon, the user can modify the category of the current picture, set it as a cover, share it on a third-party social network site or in third-party software, switch the currently displayed picture to a privacy mode, or delete it; and if the picture currently being browsed was made public by someone else and the user considers it inappropriate, the user can report it.
Letter G: corresponds to the interactive gesture of sliding the current picture left or right, indicating that the user needs to browse pictures of the same category.
Specifically, by sliding left or right at the position where the current picture is presented, the user can browse the pictures synchronized to the car machine, including the pictures the user has taken and edited and the pictures of others that the user has purchased. It should be noted that, because a picture carries attribute information, after the user purchases a picture in the mall, an application function corresponding to that picture can be triggered, such as navigating to the building shown in the picture or placing a reservation call to the restaurant associated with it.
Letter H: corresponds to the interactive gesture of sliding the current picture up or down, indicating that the user needs to browse pictures recommended by others.
Specifically, by sliding the current picture up or down on the screen, the user can browse pictures made public by others on a third-party social network site or in third-party software, in order to interact with them or purchase their pictures.
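The gestures labelled A to H above could be routed to their browsing actions with a simple dispatcher such as the following sketch. The Actions interface and its method names are illustrative assumptions; only the pairing of gesture and behaviour is taken from the description above.

```kotlin
// Sketch of a gesture dispatcher for commands A to H; the concrete actions would
// be supplied by the display and operation sides of the car-machine application.
enum class Gesture {
    TOP_EDGE_IN, BOTTOM_EDGE_IN, RIGHT_EDGE_IN, LEFT_EDGE_IN,
    TAP_EDIT_ICON, SWIPE_PICTURE_HORIZONTAL, SWIPE_PICTURE_VERTICAL
}

interface Actions {
    fun switchApplication()              // A: choose among navigation, music, sound, communication, vehicle, settings, mall
    fun switchWithinCurrentApp()         // B: e.g. skip to the next song while music is playing
    fun showThirdPartyRecommendations()  // C: recommendations aggregated by a third-party site
    fun choosePictureCategory()          // D: pick one of the synchronized classifications
    fun editCurrentPicture()             // E: change category, set as cover, share, make private, delete or report
    fun browseSameCategory()             // G: pictures of the same classification word
    fun browseRecommendedByOthers()      // H: pictures made public by others
}

fun dispatch(gesture: Gesture, actions: Actions) = when (gesture) {
    Gesture.TOP_EDGE_IN              -> actions.switchApplication()
    Gesture.BOTTOM_EDGE_IN           -> actions.switchWithinCurrentApp()
    Gesture.RIGHT_EDGE_IN            -> actions.showThirdPartyRecommendations()
    Gesture.LEFT_EDGE_IN             -> actions.choosePictureCategory()
    Gesture.TAP_EDIT_ICON            -> actions.editCurrentPicture()
    Gesture.SWIPE_PICTURE_HORIZONTAL -> actions.browseSameCategory()
    Gesture.SWIPE_PICTURE_VERTICAL   -> actions.browseRecommendedByOthers()
}
```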
The embodiment provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the picture framework-based car-machine interaction method is realized.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned computer-readable storage media comprise: various computer storage media that can store program codes, such as ROM, RAM, magnetic or optical disks.
In practical applications, the picture-framework-based car-machine interaction method provided by this embodiment is illustrated below with family pictures as a concrete implementation case.
Please refer to fig. 7, which is a flowchart illustrating a family picture interaction method in an embodiment of a car machine interaction method based on a picture framework according to the present invention.
S71, inferring from the user's behavior habit data and the current time and place information that the user wants to drive home from work.
Specifically, the user gets into the car with the mobile device at five p.m. on a Wednesday, so it is determined that the user is about to drive home after work.
S72, determining that the required application scene is "home", and extracting the feature word "home" of the application scene, the feature word being consistent with the classification word "home" used when the synchronized pictures were classified.
S73, searching for the pictures classified as "home" and taking them as the pictures matched with the application scene.
S74, displaying the picture classified as "home" on the screen of the vehicle side, and generating a picture operation interface.
And S75, receiving a picture operation instruction of sliding left and right of the current page sent by the user through the corresponding touch position in the picture operation interface.
Specifically, when the user starts the car machine in winter, the car needs to warm up for a while; during that time the user can browse pictures of the family through the left-right sliding picture operation instruction on the current page, satisfying the user's longing for family at that moment. Alternatively, if the user is heading home and has not yet decided what to have for dinner, a previously taken picture of a family dinner can provide inspiration.
S76, analyzing the user interaction requirement contained in the picture operation instruction, namely browsing pictures of the same category, and displaying the picture information of that category according to the interaction requirement for the user to browse.
Specifically, according to the left-right sliding interaction gestures repeatedly issued by the user on the current page, the information of several pictures of the same category belonging to the user is displayed in sequence.
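A small usage sketch of this "drive home after work" flow (S71 to S76), reusing the illustrative helpers defined in the earlier sketches, might look as follows; the times, places and picture data are invented for the example.

```kotlin
fun main() {
    val habits = listOf(
        HabitRecord(hourOfDay = 17, dayOfWeek = 3, place = "office parking lot", activity = "drive home after work")
    )
    val pictures = listOf(
        SyncedPicture("p1", "content://pics/p1", classificationWord = "home", takenByUser = true),
        SyncedPicture("p2", "content://pics/p2", classificationWord = "food", takenByUser = true)
    )

    // S71-S72: infer the intent and derive the feature word of the scene.
    val intent = inferUserIntent(habits, hour = 17, day = 3, place = "office parking lot") ?: return
    val featureWord = featureWordForScene(intent)            // "home"

    // S73-S74: select the matching pictures and hand them to the display side.
    val matched = picturesForScene(pictures, featureWord)    // only p1
    println("Display on vehicle screen: " + matched.map { it.id })

    // S75-S76: a left-right swipe browses further pictures of the same category.
    val sameCategory = pictures.filter { it.classificationWord == featureWord }
    println("Browse same category: " + sameCategory.size + " picture(s)")
}
```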
The picture-framework-based car-machine interaction method can improve the user's experience of the in-car environment and, through rich interactive functions, better connect the user's emotions, life and memories, thereby meeting the user's emotional and cultural needs.
Example two
This embodiment provides a car machine interactive system based on picture framework, car machine interactive system based on picture framework includes:
the display module is used for displaying the picture matched with the application scene on a screen at the vehicle end and generating a picture operation interface;
and the operation module is used for receiving and executing the picture operation instruction sent by the user through the picture operation interface.
The picture-framework-based car-machine interaction system provided by this embodiment will be described in detail with reference to the drawings. It should be noted that the division into the following modules is only a logical division; in an actual implementation they may be wholly or partially integrated into one physical entity, or kept physically separate. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the x module may be a separately established processing element, or it may be integrated into one of the chips of the system described below; it may also be stored in the memory of the system in the form of program code and invoked by one of the system's processing elements to execute the functions of the x module. The other modules are implemented similarly. All or some of these modules can be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal-processing capability. In implementation, the steps of the above method or the following modules may be completed by integrated logic circuits of hardware in a processor element or by instructions in the form of software.
The following modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), and the like. When some of the following modules are implemented in the form of a program code called by a Processing element, the Processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or other processor capable of calling the program code. These modules may be integrated together and implemented in the form of a System-on-a-chip (SOC).
Please refer to fig. 8, which is a schematic structural diagram of a car-machine interaction system based on a picture framework according to an embodiment of the present invention. As shown in fig. 8, the car-machine interaction system 8 based on the picture framework includes: a display module 81 and an operation module 82.
The display module 81 is used for displaying the picture matched with the application scene on a screen at the vehicle end and generating a picture operation interface.
In this embodiment, the pictures are edited and classified in advance through a mobile device, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by other people in the application program.
Specifically, when matching a picture category with an application scene, a matching module infers the user's behavior dynamics from the user's behavior habit data in combination with the current time and place information; the behavior habit data refer to historical records of the activities the user has performed at different times and places, and the behavior dynamics refer to the next action the user is likely to take at the current time and place. The required application scene is then determined from the behavior dynamics, and the feature words of the application scene are extracted, the feature words being consistent with the classification words used when the synchronized pictures were classified; the pictures whose classification words are the same as the feature words are searched for and taken as the pictures matched with the application scene.
The operation module 82 is configured to receive and execute a picture operation instruction sent by a user through the picture operation interface.
In this embodiment, the operation module 82 is specifically configured to receive a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface; displaying information which needs to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, sound playing information, communication information, associated vehicle information, setting information and mall information which are matched with the picture; the mall information includes status information of the picture of the transaction.
Specifically, the user interaction requirement included in the picture operation instruction is analyzed through the operation module 82; and displaying corresponding browsing information according to the user interaction requirement.
In this embodiment, the picture operation instruction includes a preset-direction movement instruction for the current picture displayed on the vehicle-end screen and a preset-direction slide-in instruction for the vehicle-end screen. Sliding the current picture up or down indicates that the user needs to browse pictures recommended by others; sliding the current picture left or right indicates that the user needs to browse pictures of the same category; sliding in from the left side of the screen indicates that the user needs to select a picture classification; sliding in from the right side of the screen indicates that the user needs to receive recommendation information from a third party; sliding in from the upper side of the screen indicates that the user needs to select and switch between different applications; and sliding in from the lower side of the screen indicates that the user needs to perform a switch within the currently used application.
The picture-framework-based car-machine interaction system can improve the user's experience of the in-car environment and, through rich interactive functions, better connect the user's emotions, life and memories, thereby meeting the user's emotional and cultural needs.
EXAMPLE III
This embodiment provides a device, the device comprising: a processor, a memory, a transceiver, a communication interface and/or a system bus; the memory and the communication interface are connected to the processor and the transceiver through the system bus and communicate with one another; the memory is used for storing a computer program, the communication interface is used for communicating with other devices, and the processor and the transceiver are used for running the computer program so that the device executes all steps of the picture-framework-based vehicle-machine interaction method.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. The communication interface is used for realizing communication between the database access device and other equipment (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, or discrete hardware components.
The protection scope of the picture framework-based car-machine interaction method is not limited to the execution sequence of the steps listed in the embodiment, and all the schemes of increasing, decreasing and replacing the steps in the prior art according to the principle of the invention are included in the protection scope of the invention.
The invention also provides a picture-framework-based vehicle-machine interaction system that can implement the picture-framework-based vehicle-machine interaction method; however, the apparatus for implementing the method includes, but is not limited to, the structure of the picture-framework-based vehicle-machine interaction system listed in this embodiment, and all structural variations and replacements made in the prior art according to the principles of the invention are included in its protection scope. It should be noted that the picture-framework-based vehicle-machine interaction method and system are also applicable to content in other multimedia forms, such as videos and friend-circle messages, which are likewise included in the protection scope of the present invention.
In summary, by synchronizing the pictures edited in the application program of the mobile device to the vehicle end, the picture-framework-based car-machine interaction method, system, medium and device of the present invention can display more content related to the user's life on the car-machine display screen and use it as the home page, thereby providing more flexible and humanized interaction modes, improving the user's emotional interaction experience in the car, and relieving the monotonous and boring in-car driving environment. The invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (6)

1. The picture framework based vehicle-mounted interaction method is characterized by comprising the following steps:
inferring the user's behavior dynamics according to the behavior habit data of the user in combination with the current time and place information; the behavior habit data of the user refer to historical data of the corresponding activities performed by the user at different times and places, and the user behavior dynamics refer to the next action the user may take at the current time and place;
determining a required application scene according to the user behavior dynamics, and extracting feature words of the application scene, wherein the feature words are consistent with the classification words used when the synchronized pictures were classified;
searching a picture with the classification word same as the feature word, and taking the picture as a picture matched with an application scene;
the pictures are edited and classified in advance through mobile equipment, so that the categories of the pictures are used for matching application scenes; the pictures comprise pictures shot by the user and pictures disclosed by others in the application program;
displaying the picture matched with the application scene on a screen at the vehicle machine end, and generating a picture operation interface;
receiving and executing a picture operation instruction sent by a user through the picture operation interface;
the step of receiving and executing the picture operation instruction sent by the user through the picture operation interface comprises the following steps:
receiving a picture operation instruction sent by a user through a corresponding touch position in the picture operation interface;
displaying information which needs to be browsed by a user according to the picture operation instruction, wherein the browsed information comprises navigation information, music information, sound playing information, communication information, associated vehicle information, setting information and mall information which are matched with the picture, and the mall information comprises state information of the picture for transaction;
the step of displaying the information which needs to be browsed by the user according to the picture operation instruction comprises the following steps:
analyzing user interaction requirements contained in the picture operation instruction;
and displaying corresponding browsing information according to the user interaction requirement.
2. The picture framework-based car machine interaction method according to claim 1,
the picture operation instruction comprises a preset direction moving instruction of a current picture displayed on the screen of the vehicle end and a preset direction moving instruction of the screen of the vehicle end.
3. The picture framework-based car machine interaction method according to claim 2,
sliding the current picture up or down indicates that the user needs to browse pictures recommended by others;
sliding the current picture left or right indicates that the user needs to browse pictures of the same category;
sliding in from the left side of the screen indicates that the user needs to select a picture classification;
sliding in from the right side of the screen indicates that the user needs to receive recommendation information from a third party;
sliding in from the upper side of the screen indicates that the user needs to select and switch between different applications;
sliding in from the lower side of the screen indicates that the user needs to perform a switch within the currently used application.
4. A picture framework-based car machine interaction system, characterized in that the picture framework-based car machine interaction system comprises:
the display module is used for displaying the picture matched with the application scene on a screen at the vehicle end and generating a picture operation interface;
and the operation module is used for receiving and executing the picture operation instruction sent by the user through the picture operation interface.
5. A medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the picture framework based car machine interaction method according to any one of claims 1 to 3.
6. An apparatus, comprising: a processor and a memory;
the memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory, so that the equipment executes the picture framework-based car machine interaction method according to any one of claims 1 to 3.
CN201911367475.3A 2019-12-26 2019-12-26 Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework Active CN111158573B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210706972.7A CN114936000B (en) 2019-12-26 2019-12-26 Vehicle-machine interaction method, system, medium and equipment based on picture framework
CN201911367475.3A CN111158573B (en) 2019-12-26 2019-12-26 Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911367475.3A CN111158573B (en) 2019-12-26 2019-12-26 Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210706972.7A Division CN114936000B (en) 2019-12-26 2019-12-26 Vehicle-machine interaction method, system, medium and equipment based on picture framework

Publications (2)

Publication Number Publication Date
CN111158573A CN111158573A (en) 2020-05-15
CN111158573B (en) 2022-06-24

Family

ID=70558363

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201911367475.3A Active CN111158573B (en) 2019-12-26 2019-12-26 Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
CN202210706972.7A Active CN114936000B (en) 2019-12-26 2019-12-26 Vehicle-machine interaction method, system, medium and equipment based on picture framework

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210706972.7A Active CN114936000B (en) 2019-12-26 2019-12-26 Vehicle-machine interaction method, system, medium and equipment based on picture framework

Country Status (1)

Country Link
CN (2) CN111158573B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111782166B (en) * 2020-06-30 2024-02-09 深圳赛安特技术服务有限公司 Multi-screen interaction method, device, equipment and storage medium
CN115017352B (en) * 2022-06-22 2023-04-07 润芯微科技(江苏)有限公司 Mobile phone car machine interaction system based on image recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104129347A (en) * 2014-08-04 2014-11-05 京乐驰光电技术(北京)有限公司 Control method, device and system for vehicle-mounted system and terminal
EP2996030A1 (en) * 2014-09-15 2016-03-16 Quanta Storage Inc. System and method for interacting screens in a car to perform remote operation
CN107563826A (en) * 2016-07-01 2018-01-09 阿里巴巴集团控股有限公司 The method and apparatus operated based on object picture to destination object
CN108382305A (en) * 2018-02-11 2018-08-10 北京车和家信息技术有限公司 A kind of image display method, device and vehicle

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7451188B2 (en) * 2005-01-07 2008-11-11 At&T Corp System and method for text translations and annotation in an instant messaging session
US20120284093A1 (en) * 2011-05-06 2012-11-08 Michael Shepherd Evans System and Method For Including Advertisements In Electronic Communications
US9685160B2 (en) * 2012-04-16 2017-06-20 Htc Corporation Method for offering suggestion during conversation, electronic device using the same, and non-transitory storage medium
US8819006B1 (en) * 2013-12-31 2014-08-26 Google Inc. Rich content for query answers
CN103780702A (en) * 2014-02-17 2014-05-07 重庆长安汽车股份有限公司 Vehicle-mounted amusement device and mobile phone interactive system and method
US9965559B2 (en) * 2014-08-21 2018-05-08 Google Llc Providing automatic actions for mobile onscreen content
CN104333844A (en) * 2014-11-12 2015-02-04 沈阳美行科技有限公司 Interconnection method of vehicle-mounted terminal and smart phone
WO2017011465A1 (en) * 2015-07-13 2017-01-19 Google Inc. Images for query answers
CN106454689A (en) * 2015-08-07 2017-02-22 深圳前海智云谷科技有限公司 Information synchronization method of mobile terminal and vehicle-mounted electronic device
CN105549821A (en) * 2015-12-18 2016-05-04 东软集团股份有限公司 Interconnecting method, device and system of mobile equipment and car-mounted information entertainment product
CN105868360A (en) * 2016-03-29 2016-08-17 乐视控股(北京)有限公司 Content recommendation method and device based on voice recognition
CN105867640B (en) * 2016-05-12 2020-07-03 上海擎感智能科技有限公司 Intelligent glasses, and control method and control system of intelligent glasses
GB2550448A (en) * 2016-05-17 2017-11-22 Google Inc Augmenting message exchange threads
US10263933B2 (en) * 2016-05-17 2019-04-16 Google Llc Incorporating selectable application links into message exchange threads
US10769155B2 (en) * 2016-05-17 2020-09-08 Google Llc Automatically augmenting message exchange threads based on tone of message
CN107479818B (en) * 2017-08-16 2020-01-07 维沃移动通信有限公司 Information interaction method and mobile terminal
CN107977152A (en) * 2017-11-30 2018-05-01 努比亚技术有限公司 A kind of picture sharing method, terminal and storage medium based on dual-screen mobile terminal
CN110225379A (en) * 2018-03-02 2019-09-10 上海博泰悦臻电子设备制造有限公司 Intelligent terminal screen sharing method and system, car-mounted terminal based on car-mounted terminal
CN108803879A (en) * 2018-06-19 2018-11-13 驭势(上海)汽车科技有限公司 A kind of preprocess method of man-machine interactive system, equipment and storage medium
CN110365836A (en) * 2019-06-06 2019-10-22 华为技术有限公司 A kind of reminding method of notice, terminal and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104129347A (en) * 2014-08-04 2014-11-05 京乐驰光电技术(北京)有限公司 Control method, device and system for vehicle-mounted system and terminal
EP2996030A1 (en) * 2014-09-15 2016-03-16 Quanta Storage Inc. System and method for interacting screens in a car to perform remote operation
CN107563826A (en) * 2016-07-01 2018-01-09 阿里巴巴集团控股有限公司 The method and apparatus operated based on object picture to destination object
CN108382305A (en) * 2018-02-11 2018-08-10 北京车和家信息技术有限公司 A kind of image display method, device and vehicle

Also Published As

Publication number Publication date
CN111158573A (en) 2020-05-15
CN114936000A (en) 2022-08-23
CN114936000B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
CN106790120B (en) Terminal equipment and video stream associated information live broadcast control and interaction method
US10739958B2 (en) Method and device for executing application using icon associated with application metadata
CN102428462B (en) Server apparatus, electronic apparatus, electronic book providing system, electronic book providing method, electronic book displaying method, and program
CN103733197A (en) Management of local and remote media items
CN105190486A (en) Display apparatus and user interface screen providing method thereof
US11604580B2 (en) Configuration of application execution spaces and sub-spaces for sharing data on a mobile touch screen device
CN104571877A (en) Display processing method and device for pages
WO2018113064A1 (en) Information display method, apparatus and terminal device
CN104756484A (en) Information processing device, reproduction state control method, and program
EP2884377B1 (en) Information processing device and recording medium
TW201923630A (en) Processing method, device, apparatus, and machine-readable medium
CN107015979B (en) Data processing method and device and intelligent terminal
KR20190084725A (en) Display apparatus and Method for providing a content thereof
CN104145265A (en) Systems and methods involving features of seach and/or search integration
CN104572853A (en) Searching method and searching device
CN111158573B (en) Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
CN110322305A (en) Data object information providing method, device and electronic equipment
GB2519312A (en) An apparatus for associating images with electronic text and associated methods
CN104350455A (en) Causing elements to be displayed
KR102208361B1 (en) Keyword search method and apparatus
TWI505205B (en) Feedback system, feedback method and recording media thereof
WO2023134568A1 (en) Display method and apparatus, electronic device, and storage medium
CN104123112A (en) Image processing method and electronic equipment
CN110489635B (en) Data object search control method, device and system
WO2023217122A1 (en) Video clipping template search method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant