CN111752538A - Vehicle end scene generation method and device, cloud end, vehicle end and storage medium - Google Patents

Vehicle end scene generation method and device, cloud end, vehicle end and storage medium

Info

Publication number
CN111752538A
CN111752538A CN202010581238.3A
Authority
CN
China
Prior art keywords
scene
vehicle
execution
target
strategy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010581238.3A
Other languages
Chinese (zh)
Other versions
CN111752538B (en)
Inventor
丁磊
徐超
谢建亮
余一波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Internet Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Internet Technology Co Ltd filed Critical Human Horizons Shanghai Internet Technology Co Ltd
Priority to CN202010581238.3A priority Critical patent/CN111752538B/en
Publication of CN111752538A publication Critical patent/CN111752538A/en
Application granted granted Critical
Publication of CN111752538B publication Critical patent/CN111752538B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/20: Software design
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Abstract

The application provides a method and a device for generating a vehicle-end scene, a cloud end, a vehicle end and a storage medium. The method comprises: receiving a scene generation request, wherein the scene generation request comprises scene creation parameters and the state of each scene execution component at the vehicle end; determining, according to the scene creation parameters and the states of the scene execution components, at least one target scene execution component corresponding to the scene creation parameters; generating a vehicle-end scene execution strategy, wherein the strategy comprises execution information corresponding to the target scene execution component; and returning the vehicle-end scene execution strategy to the vehicle end, so that the vehicle end triggers the target scene execution component to work according to the execution information and generates the vehicle-end scene corresponding to the scene creation parameters. Based on the scene creation parameters provided by a user, a vehicle-end scene execution strategy can be constructed, and through the synchronized cooperation of multiple in-vehicle devices the user can enjoy a 5D immersive scene experience in the vehicle that combines 3D, touch and smell.

Description

Vehicle end scene generation method and device, cloud end, vehicle end and storage medium
Technical Field
The application relates to an artificial intelligence technology, in particular to a method and a device for generating a vehicle-side scene, a cloud, a vehicle side and a storage medium.
Background
A vehicle is equipped with various output devices, such as a display screen, an atmosphere light, seats, an audio system, an air conditioner, and the like. These output devices typically each perform a single function in isolation and cannot cooperate with one another to realize a combined scene.
Disclosure of Invention
The embodiments of the application provide a method and a device for generating a vehicle-end scene, a cloud end, a vehicle end and a storage medium, so as to solve the problems in the related art. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for generating a vehicle-side scene, which is applied to a cloud, and includes:
receiving a scene generation request, wherein the scene generation request comprises scene creation parameters and the state of each scene execution component at a vehicle end;
determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and returning a vehicle end scene execution strategy to the vehicle end so that the vehicle end triggers the target scene execution component to work according to the execution information to generate a vehicle end scene corresponding to the scene creation parameters.
In a second aspect, an embodiment of the present application provides a method for generating a vehicle-end scene, which is applied to a vehicle end, and includes:
sending a scene generation request to a cloud end, wherein the scene generation request comprises scene creation parameters and the states of all scene execution components of a vehicle end, so that the cloud end determines a vehicle end scene execution strategy according to the scene creation parameters and the states of all the scene execution components, and the vehicle end scene execution strategy comprises execution information corresponding to a target scene execution component;
and triggering the target scene execution component to work according to the execution information according to the vehicle end scene execution strategy returned by the cloud end so as to generate a vehicle end scene corresponding to the scene creation parameters.
In a third aspect, an embodiment of the present application provides a method for generating a vehicle-end scene, which is applied to a vehicle end, and includes:
determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and triggering the target scene execution component to work according to the execution information so as to generate the vehicle end scene corresponding to the scene creation parameters.
In a fourth aspect, an embodiment of the present application provides a generating device for a car-side scene, which is applied to a cloud, and includes:
the request receiving module is used for receiving a scene generation request, and the scene generation request comprises scene creation parameters and the state of each scene execution component at the vehicle end;
the execution component determining module is used for determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
the execution strategy generation module is used for generating a vehicle-end scene execution strategy, and the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and the execution strategy returning module is used for returning a vehicle end scene execution strategy to the vehicle end so that the vehicle end triggers the target scene execution component to work according to the execution information to generate a vehicle end scene corresponding to the scene creation parameters.
In a fifth aspect, an embodiment of the present application provides a device for generating a vehicle-end scene, which is applied to a vehicle end, and includes:
the request sending module is used for sending a scene generation request to the cloud end, wherein the scene generation request comprises scene creation parameters and the states of all scene execution components of the vehicle end, so that the cloud end determines a vehicle end scene execution strategy according to the scene creation parameters and the states of all the scene execution components, and the vehicle end scene execution strategy comprises execution information corresponding to a target scene execution component;
and the triggering module is used for triggering the target scene execution component to work according to the execution information according to the vehicle end scene execution strategy returned by the cloud end so as to generate a vehicle end scene corresponding to the scene creation parameters.
In a sixth aspect, an embodiment of the present application provides a device for generating a vehicle-end scene, which is applied to a vehicle end, and includes:
the execution component determining module is used for determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
the execution strategy generation module is used for generating a vehicle-end scene execution strategy, and the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and the triggering module is used for triggering the target scene execution component to work according to the execution information so as to generate the vehicle-end scene corresponding to the scene creation parameters.
In a seventh aspect, an embodiment of the present application provides a cloud, including:
at least one first processor; and
a first memory communicatively coupled to the at least one first processor; wherein
the first memory stores instructions executable by the at least one first processor, and the instructions are executed by the at least one first processor, so that the at least one first processor can execute any one of the above cloud-side vehicle-end scene generation methods.
In an eighth aspect, an embodiment of the present application provides a vehicle end, including:
at least one second processor; and
a second memory communicatively coupled to the at least one second processor; wherein
the second memory stores instructions executable by the at least one second processor, and the instructions are executed by the at least one second processor, so that the at least one second processor can execute the method for generating the vehicle-end scene of the vehicle end in any aspect.
In a ninth aspect, the present application provides a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium, and when executed by a processor, the computer instructions implement the method for generating the vehicle-end scene in any one of the above aspects.
The advantages or beneficial effects of the above technical scheme include: according to scene creation parameters provided by a user, a whole set of scene execution strategies covering dimensions such as video, music, fragrance and seat vibration can be constructed, and a scene can be reproduced or fictionally constructed through the synchronized cooperation of multiple in-vehicle devices, so that the user can enjoy a 5D immersive scene experience in the vehicle that combines 3D, touch and smell.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a schematic diagram of a method for generating a cloud-side vehicle-side scene according to an embodiment of the present application;
fig. 2 is a schematic diagram of an application example of a method for generating a vehicle-end scene according to a first embodiment of the present application;
fig. 3 is a schematic diagram of an application example of a method for generating a vehicle-end scene according to a first embodiment of the present application;
fig. 4 is a schematic diagram of a method for generating a vehicle-end scene of a vehicle end according to a first embodiment of the present application;
fig. 5 is a schematic diagram of a method for generating a vehicle-end scene of a vehicle end according to a second embodiment of the present application;
fig. 6 is a schematic diagram of a cloud-side vehicle-end scene generation device according to a third embodiment of the present application;
fig. 7 is a schematic diagram of a device for generating a vehicle-end scene of a vehicle end according to a third embodiment of the present application;
fig. 8 is a schematic diagram of a device for generating a vehicle-end scene of a vehicle end according to a fourth embodiment of the present application;
fig. 9 is a schematic view of a cloud end or a vehicle end according to a fifth embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Example one
The embodiment of the application provides a generation method of a vehicle-side scene, which can be applied to a cloud, namely can be realized by the cloud. As shown in fig. 1, the generating method may include:
step S101, receiving a scene generation request, wherein the scene generation request comprises scene creation parameters and states of scene execution components at the vehicle end.
The scene execution components at the vehicle end include, but are not limited to, the output devices of the vehicle, such as a central control screen, a secondary screen, a second-row screen, a seat, a steering wheel, an air conditioner, a fragrance releasing device, an audio device or a fragrance lamp. That is to say, the vehicle-end scene execution components are the execution bodies that realize a vehicle-end scene; triggering a plurality of scene execution components can realize a 5D scene at the vehicle end engaging multiple senses such as hearing, vision, touch and smell.
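As an illustrative sketch only (not part of the patent; the class and field names are hypothetical), the per-component states carried in the scene generation request could be modeled as follows:

```python
from dataclasses import dataclass

# Hypothetical model of a vehicle-end scene execution component and its state.
# Component names follow the examples in the text (seat, steering wheel, etc.).
@dataclass
class ComponentState:
    name: str          # e.g. "seat", "steering_wheel", "fragrance_device"
    available: bool    # whether the component can take part in a scene right now

def report_states(states):
    """Build the state payload a vehicle end might attach to a scene request."""
    return {s.name: s.available for s in states}

states = [
    ComponentState("seat", True),
    ComponentState("steering_wheel", False),  # e.g. unavailable while driving
    ComponentState("fragrance_device", True),
]
payload = report_states(states)
```

The payload shape (a name-to-availability map) is an assumption; the patent only requires that the request carry the state of each scene execution component.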
A user can actively initiate a scene creation request in ways including, but not limited to, interface operations at the vehicle end or on a mobile terminal, voice, and the like. Specifically, the user can perform the corresponding interface operation through the interface of an application (APP) at the vehicle end or on the mobile terminal, thereby actively initiating the scene creation request by interface operation. The user can also issue a voice instruction to a microphone at the vehicle end or on the mobile terminal to actively initiate the scene creation request. After the user initiates the scene creation request, the vehicle end or the mobile terminal sends a scene generation request to the cloud. The mobile terminal can be an intelligent device such as a mobile phone or a tablet computer.
Further, the scene generation request includes the scene creation parameters, such as keywords of the scene theme or the user's scene preference settings. The user can input the scene creation parameters through the APP at the vehicle end or the APP on the mobile terminal.
And S102, determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components.
For example, the cloud may perform mood analysis on the scene creation parameters to determine a scene scheme; executing the scene scheme requires one or more scene execution components, and the cloud may further determine the available target scene execution components according to the states of the scene execution components. For example, the scene scheme may require the steering wheel, but since the steering wheel is unavailable while the vehicle is driving, it cannot serve as a target scene execution component. In one example, the cloud may store a plurality of scene schemes and match the corresponding scene scheme according to the scene creation parameters.
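A minimal sketch of step S102, under the assumption that scene schemes are stored as keyword-to-component tables (the scheme names and component names are hypothetical, not from the patent): a required but currently unavailable component, such as the steering wheel while driving, is simply dropped from the target set.

```python
# Hypothetical table of stored scene schemes: keyword -> required components.
SCENE_SCHEMES = {
    "ocean": ["screen", "audio", "seat", "fragrance_device"],
    "race":  ["screen", "audio", "steering_wheel", "seat"],
}

def determine_targets(keyword, component_states):
    """Match a scene scheme, then keep only components whose state is available."""
    required = SCENE_SCHEMES.get(keyword, [])
    return [c for c in required if component_states.get(c, False)]

targets = determine_targets("race", {"screen": True, "audio": True,
                                     "steering_wheel": False, "seat": True})
```

In this sketch an unmatched keyword simply yields no targets; the patent leaves that case open.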
And S103, generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component.
The vehicle-end scene execution strategy comprises execution information corresponding to each target scene execution component, such as a movement direction, a movement angle, a fragrance release ratio, a display strategy or display content, and the like.
And step S104, returning the vehicle-end scene execution strategy to the vehicle end, so that the vehicle end triggers the target scene execution component to work according to the execution information and generates the vehicle-end scene corresponding to the scene creation parameters.
The cloud returns the vehicle-end scene execution strategy to the vehicle end, so that the vehicle end, through the vehicle's central control, can trigger the corresponding target scene execution components to work according to the execution information and thus realize the scene corresponding to the scene creation parameters. For example: trigger the seat to move forward by 10 cm and then rotate by 90 degrees.
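The dispatch step can be sketched as follows (an illustration only; the strategy layout and action names are assumptions, and real actuation would go through the vehicle's central control rather than a log). The seat example mirrors the text: move forward 10 cm, then rotate 90 degrees.

```python
# Hypothetical sketch of step S104 on the vehicle side: walk the execution
# strategy and trigger each target component's actions in order.
def dispatch(strategy, log):
    for component, actions in strategy.items():
        for action, value in actions:
            # Stand-in for sending a command to the component via central control.
            log.append(f"{component}: {action} {value}")

strategy = {
    "seat": [("move_forward_cm", 10), ("rotate_deg", 90)],
    "fragrance_device": [("release_ratio", 0.3)],
}
log = []
dispatch(strategy, log)
```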
In one embodiment, the execution information includes a creative work, and step S103 may include: authoring the corresponding creative work according to the scene creation parameters and the state of the target scene execution component. In another embodiment, the execution information includes the storage address of the creative work, and step S103 may include: authoring the corresponding creative work according to the scene creation parameters and the state of the target scene execution component; and storing the creative work and acquiring its storage address.
For example: the cloud, according to the state of the target scene execution components, such as the user's seat viewing angle and the position angle of each display device (such as the central control screen, the secondary screen, the second-row screen, and the like), combines the scene creation parameters to author video works with different viewing angles for display on each display device.
A creative work such as music, poetry or a painting can be returned directly to the vehicle end, and the vehicle end triggers the corresponding target scene execution component to present it. A creative work such as a video can instead be stored in the cloud; the cloud returns the storage address of the work to the vehicle end, and the vehicle end downloads the work according to that address and triggers the corresponding target scene execution component to present it.
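The two delivery modes above can be sketched as follows. This is an assumption-laden illustration: the patent does not specify how the cloud chooses between inline delivery and address delivery, so a size threshold (`INLINE_LIMIT`), the payload keys, and the store callback are all hypothetical.

```python
# Hypothetical sketch: small works (a poem, a painting) are returned inline;
# large works (video) are stored and only their storage address is returned.
INLINE_LIMIT = 1024  # assumed size threshold, not from the patent

def package_work(work_bytes, store):
    if len(work_bytes) <= INLINE_LIMIT:
        return {"work": work_bytes}
    url = store(work_bytes)  # e.g. upload to cloud storage / a CDN
    return {"work_url": url}

uploaded = {}
def fake_store(data):
    uploaded["blob"] = data
    return "https://cdn.example/works/1"

small = package_work(b"a short poem", fake_store)
large = package_work(b"x" * 4096, fake_store)
```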
In one embodiment, the scene creation parameters include creation material, and step S103 may further include: authoring the corresponding creative work according to the creation material. In another embodiment, the scene creation parameters include the storage address of the creation material, and step S103 may further include: retrieving the corresponding creation material according to its storage address; and authoring the corresponding creative work according to the creation material.
The creation material may include images, audio or video of the vehicle's interior and exterior environment captured by the vehicle-mounted multimedia component. The vehicle end can send the creation material, or its storage address, to the cloud as a scene creation parameter. For example: the vehicle end can upload the creation material to the cloud, and after the cloud stores it, the cloud returns the corresponding storage address to the vehicle end.
An example of a method for generating a vehicle-end scene according to an embodiment of the present application is described below with reference to fig. 2. As shown in fig. 2, a user may input scene creation parameters, such as creation keywords and some preference settings, through an in-vehicle authoring program (vehicle-end APP), and the vehicle end sends a scene generation request comprising the scene creation parameters to the cloud. The request processing service module at the cloud processes the scene generation request sent by the vehicle end and forwards it to an Artificial Intelligence (AI) scene service module at the cloud. The AI scene service module analyzes the scene creation parameters, such as the keywords, matches them against the scene schemes available in the vehicle to confirm a scene scheme, and further constructs a virtual scene (the vehicle-end scene execution strategy). The virtual scene includes the creative works to be presented by the output devices. A creative work can be generated by the work authoring module, which integrates the authoring information. The authoring information may be the scene creation parameters, such as keywords, the creation material, or the address of the creation material.
The work authoring module may include an AI authoring model, which may be obtained by training a deep learning neural network with a large amount of sample data. There may be multiple AI authoring models, such as an AI poetry model, an AI painting model, an AI music model and an AI video editing model, so the cloud can select the corresponding model according to the authoring information and author the corresponding AI work. After the work authoring module completes a work, it may store the work in the cloud, such as on a Content Delivery Network (CDN). Further, the work authoring module may return the Uniform Resource Locator (URL) address of the authored work to the AI scene service module. After obtaining the creative work, the AI scene service module can pair it with the corresponding target scene execution components to generate the vehicle-end scene execution strategy, which the message push center returns to the vehicle end; the vehicle end downloads the creative work according to its URL address, schedules each in-vehicle output device in a centrally controlled manner, and presents the final result.
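Model selection by authoring information can be sketched as follows. This is an illustration only: the "models" here are stand-in functions, not the trained deep-learning networks the text describes, and the kind names are assumptions.

```python
# Hypothetical dispatch among several AI authoring models (poetry, painting,
# music, video editing) based on the kind of work the authoring info calls for.
MODELS = {
    "poetry":   lambda info: f"poem about {info}",
    "painting": lambda info: f"painting of {info}",
    "music":    lambda info: f"melody for {info}",
    "video":    lambda info: f"video clip of {info}",
}

def create_work(kind, authoring_info):
    model = MODELS.get(kind)
    if model is None:
        raise ValueError(f"no authoring model for {kind!r}")
    return model(authoring_info)

work = create_work("poetry", "a sunset drive")
```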
An example of a method for generating a vehicle-end scene according to an embodiment of the present application is described below with reference to fig. 3. As shown in fig. 3, a user may select scene reproduction through the vehicle-end authoring program (vehicle-end APP), thereby actively issuing a scene generation request, and the vehicle end then sends a scene generation request to the cloud. The user may select video captured by an in-vehicle Digital Video Recorder (DVR) as the creation material. For example: the vehicle end can upload the creation material collected by the vehicle-mounted DVR to the cloud's Object Storage Service (OBS) and send the storage address (such as a URL address) of the material to the cloud as a scene creation parameter. The cloud downloads the creation material according to the storage address, performs mood analysis, matches the scene schemes available in the vehicle, and further determines the target scene execution components; it then splices and synthesizes the video work according to the state of the target scene execution components, such as the user's seat viewing angle and the position angle of each display device, and generates the execution information of each in-vehicle device, i.e., the vehicle-end scene execution strategy. Further, the vehicle-end scene execution strategy and the related video materials are synchronized to the CDN, and the cloud returns the vehicle-end scene execution strategy to the vehicle end. After the vehicle end downloads the creative work, the central control unit uniformly schedules each output device in the vehicle to present the final result.
The embodiment of the application provides a generation method of a vehicle end scene, which can be applied to a vehicle end, namely can be realized by the vehicle end. As shown in fig. 4, the generating method may include:
step S401, sending a scene generation request to a cloud end, wherein the scene generation request comprises scene creation parameters and states of each scene execution component of a vehicle end, so that the cloud end determines a vehicle end scene execution strategy according to the scene creation parameters and the states of each scene execution component, and the vehicle end scene execution strategy comprises execution information corresponding to a target scene execution component;
and S402, triggering the target scene execution component to work according to the execution information according to the execution strategy returned by the cloud end so as to generate the vehicle end scene corresponding to the scene creation parameters.
In one embodiment, the execution information includes the creative work or the storage address of the creative work, and step S402 may include: triggering the target scene execution component to present the creative work according to the creative work or its storage address.
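The vehicle-end side of this embodiment can be sketched as follows (hypothetical payload keys, and `download` is a stand-in for a real HTTP fetch): if the execution information carries the work itself, present it directly; if it carries a storage address, download the work first.

```python
# Hypothetical sketch of obtaining the creative work in step S402.
def obtain_work(execution_info, download):
    if "work" in execution_info:
        return execution_info["work"]          # work delivered inline
    return download(execution_info["work_url"])  # work delivered by address

fake_cdn = {"https://cdn.example/works/7": b"video-bytes"}
got_inline = obtain_work({"work": b"poem"}, fake_cdn.get)
got_remote = obtain_work({"work_url": "https://cdn.example/works/7"}, fake_cdn.get)
```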
In one embodiment, the scene authoring parameters include a storage address of the authoring material, and before step S401, the method may further include: uploading an creation material collected by a vehicle-end multimedia assembly to a cloud end; and receiving a storage address corresponding to the creation material returned by the cloud.
The method for generating the vehicle-end scene of the vehicle end may refer to the corresponding description in the method for generating the vehicle-end scene of the cloud end, and is not described herein again.
According to the technical scheme of this embodiment, by relying on the strong computing power of the AI cloud, a whole set of scene execution strategies covering dimensions such as video, music, fragrance and seat vibration can be constructed according to the scene mood provided by the user or the real creation material collected by the vehicle-mounted multimedia component. After the vehicle end executes the strategy, a scene is reproduced or fictionally constructed through the synchronized cooperation of multiple in-vehicle devices, so that the user can enjoy a 5D immersive scene experience in the vehicle that combines 3D, touch and smell.
Example two
The embodiment of the application provides a generation method of a vehicle end scene, which can be applied to a vehicle end. As shown in fig. 5, the generating method may include:
step S501, determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
step S502, generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to a target scene execution component;
and S503, triggering the target scene execution component to work according to the execution information so as to generate the vehicle end scene corresponding to the scene creation parameters.
In one embodiment, the execution information includes a creative work, and step S502 may include: authoring a creative work for the target scene execution component to present, according to the scene creation parameters.
In one embodiment, the execution information includes a creative work, and step S502 may further include: acquiring the creation material collected by the vehicle-end multimedia component; and authoring a creative work for the target scene execution component to present, according to the creation material.
That is to say, the vehicle-end scene generation method previously executed by the cloud may instead be executed by the vehicle-end processor; reference may be made to the corresponding description in the cloud-side vehicle-end scene generation method, which is not repeated here.
According to the technical scheme of this embodiment, the vehicle end itself can construct a whole set of scene execution strategies covering dimensions such as video, music, fragrance and seat vibration, according to the scene mood provided by the user or the real creation material collected by the vehicle-mounted multimedia component, and a scene is reproduced or fictionally constructed through the synchronized cooperation of multiple in-vehicle devices, so that the user can enjoy a 5D immersive scene experience in the vehicle that combines 3D, touch and smell.
EXAMPLE III
Fig. 6 is a block diagram illustrating a structure of a device for generating a vehicle-end scene according to an embodiment of the present application, where the device may be applied to a cloud end, and as shown in fig. 6, the device may include:
a request receiving module 601, configured to receive a scene generation request, where the scene generation request includes a scene creation parameter and a state of each scene execution component at a vehicle end;
an execution component determining module 602, configured to determine, according to the scene authoring parameter and a state of each of the scene execution components, at least one target scene execution component corresponding to the scene authoring parameter;
an execution strategy generation module 603, configured to generate a vehicle-end scene execution strategy, where the vehicle-end scene execution strategy includes execution information corresponding to the target scene execution component;
and an execution strategy returning module 604, configured to return the vehicle-end scene execution strategy to the vehicle end, so that the vehicle end triggers the target scene execution component to work according to the execution information, so as to generate the vehicle-end scene corresponding to the scene creation parameter.
Fig. 7 is a block diagram showing a configuration of a vehicle-end scene generation device according to an embodiment of the present application. The apparatus may be applied to a vehicle end, and as shown in fig. 7, the apparatus may include:
a request sending module 701, configured to send a scene generation request to a cloud, where the scene generation request includes scene creation parameters and states of scene execution components at a vehicle end, so that the cloud determines a vehicle-end scene execution policy according to the scene creation parameters and the states of the scene execution components, where the vehicle-end scene execution policy includes execution information corresponding to a target scene execution component;
and the triggering module 702 is configured to trigger the target scene execution component to work according to the execution information according to the vehicle-end scene execution policy returned by the cloud, so as to generate a vehicle-end scene corresponding to the scene creation parameter.
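The interplay of modules 701 and 702 can be sketched in the same hypothetical style; `post_to_cloud` stands in for whatever transport the vehicle end actually uses, and all other names are illustrative assumptions:

```python
# Hypothetical sketch of the vehicle-side flow (modules 701-702).
# `post_to_cloud` is a placeholder for the real transport layer.

def gather_component_states(components):
    """Collect the state of each scene execution component on the vehicle."""
    return {name: comp["state"] for name, comp in components.items()}

def send_scene_request(post_to_cloud, authoring_params, components):
    """Module 701: ship authoring parameters plus component states to the cloud."""
    return post_to_cloud({"authoring_params": authoring_params,
                          "component_states": gather_component_states(components)})

def trigger_components(policy, components):
    """Module 702: drive each target component with its execution information."""
    triggered = []
    for name, info in policy["targets"].items():
        components[name]["playing"] = info["work"]  # stand-in for real actuation
        triggered.append(name)
    return triggered

components = {"video": {"state": "available"}, "music": {"state": "available"}}
# A fake cloud used only for this sketch: it targets every reported component.
fake_cloud = lambda req: {"targets": {c: {"work": "sunset"}
                                      for c in req["component_states"]}}
policy = send_scene_request(fake_cloud, {"mood": "sunset"}, components)
triggered = trigger_components(policy, components)
```

The point of the sketch is only the division of labor: module 701 bundles parameters and states into one request, and module 702 consumes whatever policy comes back.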
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
Example four
Fig. 8 is a block diagram showing a configuration of a vehicle-end scene generation device according to an embodiment of the present application. The device may be applied to a vehicle end, as shown in fig. 8, and may include:
an execution component determining module 801, configured to determine, according to a scene authoring parameter and a state of each of the scene execution components, at least one target scene execution component corresponding to the scene authoring parameter;
an execution strategy generation module 802, configured to generate a vehicle-end scene execution strategy, where the vehicle-end scene execution strategy includes execution information corresponding to the target scene execution component;
and the triggering module 803 is configured to trigger the target scene execution component to operate according to the execution information, so as to generate a vehicle-end scene corresponding to the scene creation parameter.
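In this purely vehicle-side variant, modules 801 to 803 perform selection, policy generation, and triggering locally, without a cloud round trip. A minimal hypothetical sketch, with all names assumed for illustration:

```python
# Hypothetical sketch of the purely vehicle-side variant (modules 801-803).
def generate_scene_locally(authoring_params, components):
    # Module 801: choose target components from the reported states.
    targets = [c for c, state in components.items() if state == "available"]
    # Module 802: build the execution policy with per-component execution info.
    policy = {c: {"work": authoring_params["mood"]} for c in targets}
    # Module 803: trigger each component (represented here by a returned plan).
    return {c: f"play {info['work']}" for c, info in policy.items()}

plan = generate_scene_locally({"mood": "rainforest"},
                              {"video": "available", "fragrance": "offline"})
```

Compared with the cloud-assisted embodiment, the only structural difference is that the request/response boundary disappears; the three responsibilities remain the same.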
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
EXAMPLE five
Fig. 9 shows a block diagram of a vehicle end or a cloud end according to an embodiment of the present application. As shown in fig. 9, the vehicle end or the cloud end includes: a memory 901 and a processor 902, the memory 901 having stored therein instructions executable on the processor 902. The processor 902, when executing the instructions, implements any of the vehicle-end scene generation methods in the embodiments described above. There may be one or more memories 901 and one or more processors 902. The terminal or server is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The terminal or server may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
The vehicle end or the cloud end may further include a communication interface 903 for communicating with an external device to perform interactive data transmission. The various devices are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 902 may process instructions for execution within the terminal or server, including instructions stored in or on a memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple terminals or servers may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 9, but this does not indicate only one bus or only one type of bus.
Optionally, in a specific implementation, if the memory 901, the processor 902, and the communication interface 903 are integrated on a chip, the memory 901, the processor 902, and the communication interface 903 may complete mutual communication through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor or the like. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 901) storing computer instructions that, when executed by a processor, implement the methods provided in the embodiments of the present application.
Alternatively, the memory 901 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of a terminal or a server, and the like. Further, the memory 901 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 901 may optionally include memory located remotely from the processor 902, which may be connected to a terminal or server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Here, the processor 902 may be the first processor or the second processor, and the memory 901 may be the first memory or the second memory.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method for generating a vehicle-end scene, applied to a cloud end, characterized by comprising the following steps:
receiving a scene generation request, wherein the scene generation request comprises scene creation parameters and the state of each scene execution component at a vehicle end;
determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and returning the vehicle end scene execution strategy to the vehicle end so that the vehicle end triggers the target scene execution component to work according to the execution information to generate the vehicle end scene corresponding to the scene creation parameters.
2. The generation method according to claim 1, wherein the execution information includes an authored work, and generating a vehicle-end scene execution policy includes:
and according to the scene authoring parameters and the state of the target scene execution component, authoring a corresponding authored work.
3. The generation method according to claim 1, wherein the execution information includes a storage address of the creative work, and generating a vehicle-end scene execution policy includes:
according to the scene authoring parameters and the state of the target scene execution component, authoring a corresponding authored work;
and storing the creative work and acquiring a storage address of the creative work.
4. The generation method according to claim 2 or 3, wherein the scene creation parameters include a storage address of creation materials, and the generation of the vehicle-end scene execution policy further includes:
calling the corresponding authoring material according to the storage address of the authoring material;
and creating a corresponding creative work according to the creative material.
5. A method for generating a vehicle-end scene, applied to a vehicle end, characterized by comprising the following steps:
sending a scene generation request to a cloud end, wherein the scene generation request comprises scene creation parameters and the states of all scene execution components of a vehicle end, so that the cloud end determines a vehicle end scene execution strategy according to the scene creation parameters and the states of all the scene execution components, and the vehicle end scene execution strategy comprises execution information corresponding to a target scene execution component;
and triggering the target scene execution component to work according to the execution information according to the vehicle end scene execution strategy returned by the cloud end so as to generate a vehicle end scene corresponding to the scene creation parameters.
6. The generation method according to claim 5, wherein the execution information includes a creative work or a storage address of the creative work, and triggering the target scene execution component to operate according to the execution information includes:
and triggering the target scene execution component to display the creative work according to the creative work or the storage address of the creative work.
7. The generation method according to claim 6, wherein the scene creation parameters include a storage address of creation materials, and sending the scene generation request to the cloud further comprises:
uploading an creation material collected by a vehicle-end multimedia assembly to the cloud end;
and receiving a storage address which is returned by the cloud and corresponds to the creation material.
8. A method for generating a vehicle-end scene, applied to a vehicle end, characterized by comprising the following steps:
determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
generating a vehicle-end scene execution strategy, wherein the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and triggering the target scene execution component to work according to the execution information so as to generate the vehicle-end scene corresponding to the scene creation parameters.
9. The generation method according to claim 8, wherein the execution information includes an authored work, and generating a vehicle-end scene execution policy includes:
and creating an creative work for the target scene execution component to display according to the scene creation parameters.
10. The generation method according to claim 8, wherein the execution information includes an authored work, and the generating of the vehicle-end scene execution policy further includes:
acquiring an creation material collected by a vehicle-end multimedia assembly;
and creating an creative work for the target scene execution component to display according to the creative material.
11. A device for generating a vehicle-end scene, applied to a cloud end, characterized by comprising:
the request receiving module is used for receiving a scene generation request, wherein the scene generation request comprises scene creation parameters and the state of each scene execution component at the vehicle end;
the execution component determining module is used for determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
the execution strategy generation module is used for generating a vehicle-end scene execution strategy, and the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and the execution strategy returning module is used for returning the vehicle end scene execution strategy to the vehicle end so that the vehicle end triggers the target scene execution component to work according to the execution information to generate the vehicle end scene corresponding to the scene creation parameters.
12. A device for generating a vehicle-end scene, applied to a vehicle end, characterized by comprising:
the request sending module is used for sending a scene generation request to a cloud end, wherein the scene generation request comprises scene creation parameters and the states of all scene execution components of a vehicle end, so that the cloud end determines a vehicle end scene execution strategy according to the scene creation parameters and the states of all the scene execution components, and the vehicle end scene execution strategy comprises execution information corresponding to a target scene execution component;
and the triggering module is used for triggering the target scene execution component to work according to the execution information according to the vehicle end scene execution strategy returned by the cloud end so as to generate the vehicle end scene corresponding to the scene creation parameters.
13. A device for generating a vehicle-end scene, applied to a vehicle end, characterized by comprising:
the execution component determining module is used for determining at least one target scene execution component corresponding to the scene creation parameters according to the scene creation parameters and the states of the scene execution components;
the execution strategy generation module is used for generating a vehicle-end scene execution strategy, and the vehicle-end scene execution strategy comprises execution information corresponding to the target scene execution component;
and the triggering module is used for triggering the target scene execution component to work according to the execution information so as to generate the vehicle-end scene corresponding to the scene creation parameters.
14. A cloud, comprising:
at least one first processor; and
a first memory communicatively coupled to the at least one first processor; wherein
the first memory stores instructions executable by the at least one first processor to enable the at least one first processor to perform the method of any one of claims 1 to 4.
15. A vehicle end, comprising:
at least one second processor; and
a second memory communicatively coupled to the at least one second processor; wherein
the second memory stores instructions executable by the at least one second processor to enable the at least one second processor to perform the method of any one of claims 5 to 10.
16. A computer readable storage medium having stored therein computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 10.
CN202010581238.3A 2020-06-23 2020-06-23 Method and device for generating vehicle end scene, cloud end, vehicle end and storage medium Active CN111752538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010581238.3A CN111752538B (en) 2020-06-23 2020-06-23 Method and device for generating vehicle end scene, cloud end, vehicle end and storage medium


Publications (2)

Publication Number Publication Date
CN111752538A true CN111752538A (en) 2020-10-09
CN111752538B CN111752538B (en) 2024-03-15

Family

ID=72677035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010581238.3A Active CN111752538B (en) 2020-06-23 2020-06-23 Method and device for generating vehicle end scene, cloud end, vehicle end and storage medium

Country Status (1)

Country Link
CN (1) CN111752538B (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589680A (en) * 2014-10-20 2016-05-18 阿里巴巴集团控股有限公司 Information display method, providing method and device
CN106773875A (en) * 2017-01-23 2017-05-31 上海蔚来汽车有限公司 User's scene adjusting method and system
CN107479970A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Resource allocation method and Related product
CN107776516A (en) * 2016-08-24 2018-03-09 法乐第(北京)网络科技有限公司 The adjusting method and device of in-vehicle device, the store method and device of shaping modes
CN108376061A (en) * 2016-10-13 2018-08-07 北京百度网讯科技有限公司 Method and apparatus for developing automatic driving vehicle application
CN108551665A (en) * 2018-05-16 2018-09-18 大连毅无链信息技术有限公司 A kind of system and method for realizing vehicle personalization electric function
CN108694073A (en) * 2018-05-11 2018-10-23 腾讯科技(深圳)有限公司 Control method, device, equipment and the storage medium of virtual scene
CN109131355A (en) * 2018-07-31 2019-01-04 上海博泰悦臻电子设备制造有限公司 Vehicle, vehicle device equipment and its vehicle-mounted scene interactive approach based on user's identification
CN109343902A (en) * 2018-09-26 2019-02-15 Oppo广东移动通信有限公司 Operation method, device, terminal and the storage medium of audio processing components
DE102017216891A1 (en) * 2017-09-25 2019-03-28 Bayerische Motoren Werke Aktiengesellschaft A method of determining objects in an environment of an ego vehicle, computer readable medium, system, and vehicle comprising the system
CN110155169A (en) * 2019-07-16 2019-08-23 华人运通(上海)新能源驱动技术有限公司 Control method for vehicle, device and vehicle
CN110308961A (en) * 2019-07-02 2019-10-08 广州小鹏汽车科技有限公司 A kind of the subject scenes switching method and device of car-mounted terminal
CN110871684A (en) * 2018-09-04 2020-03-10 比亚迪股份有限公司 In-vehicle projection method, device, equipment and storage medium
US20200081611A1 (en) * 2018-09-10 2020-03-12 Here Global B.V. Method and apparatus for providing a user reaction user interface for generating a passenger-based driving profile
CN110928409A (en) * 2019-11-12 2020-03-27 中国第一汽车股份有限公司 Vehicle-mounted scene mode control method and device, vehicle and storage medium
EP3644284A1 (en) * 2018-10-26 2020-04-29 Pegatron Corporation Vehicle simulation device and method
DE102019115676A1 (en) * 2018-10-30 2020-04-30 GM Global Technology Operations LLC METHOD AND SYSTEM FOR RECONSTRUCTING A VEHICLE SCENE IN A CLOUD LEVEL

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BENTLEY COMMUNITIES: "Scenario Services", Retrieved from the Internet <URL:https://communities.bentley.com/products/ram-staad/w/structural_analysis_and_design_wiki/23813/scenario-services> *
CRISTIAN OLARIU: "Cloud-support for collaborative services in connected cars scenarios", 2017 IEEE VEHICULAR NETWORKING CONFERENCE (VNC), pages 255 - 258 *
Intelligent Transportation Technology: "Summary of experience from peer technical discussions on designing a PaaS platform based on Internet-of-Vehicles application scenario architecture to implement DevOps", Retrieved from the Internet <URL:https://blog.csdn.net/weixin_55366265/article/details/122197187> *
LIN Shaomei et al.: "Analysis of Internet Plus scene operation in the Internet-of-Vehicles industry cloud service platform", Contemporary Economy, no. 2016, pages 23 - 25 *
Huanqiu.com: "Beijing Auto Show fully displays the new all-scenario automotive cloud ecosystem", Retrieved from the Internet <URL:jingji.cctv.com/2016/04/27/ARTIvPrtAwYbcNkmPi9D0KPH160427.shtml> *
HAO Huaqi: "Research and design of the user experience of in-vehicle infotainment systems", Industrial Design Research, no. 2014, pages 7 - 15 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628618A (en) * 2021-07-29 2021-11-09 中汽创智科技有限公司 Multimedia file generation method and device based on intelligent cabin and terminal
CN113923245A (en) * 2021-10-16 2022-01-11 安徽江淮汽车集团股份有限公司 A self-defined scene control system for intelligent networking vehicle
CN113923245B (en) * 2021-10-16 2022-07-05 安徽江淮汽车集团股份有限公司 A self-defined scene control system for intelligent networking vehicle

Also Published As

Publication number Publication date
CN111752538B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US10866562B2 (en) Vehicle onboard holographic communication system
WO2021244110A1 (en) Work generation and edition methods and apparatuses, terminals, server and systems
CN107223270B (en) Display data processing method and device
CN111752538B (en) Method and device for generating vehicle end scene, cloud end, vehicle end and storage medium
CN113038264B (en) Live video processing method, device, equipment and storage medium
CN111601161A (en) Video work generation method, device, terminal, server and system
CN110567466A (en) map generation method and device, electronic equipment and readable storage medium
CN104918112A (en) Camera resource application method and device
CN111651231A (en) Work generation method and device, vehicle end and mobile terminal
CN108076357B (en) Media content pushing method, device and system
Wagner et al. SODA: Service-oriented architecture for runtime adaptive driver assistance systems
CN115243107B (en) Method, device, system, electronic equipment and medium for playing short video
CN111703278A (en) Fragrance release method, device, vehicle end, cloud end, system and storage medium
CN114529690A (en) Augmented reality scene presenting method and device, terminal equipment and storage medium
CN114077368B (en) Vehicle-mounted applet running method and device, computer equipment and storage medium
CN114064946A (en) Method and device for generating travel creative work, vehicle terminal and storage medium
CN104516774A (en) Operation method of remote application, terminal and server
WO2018107913A1 (en) Media information processing method, apparatus and system
CN114064944A (en) Method and device for generating travel creative work, vehicle terminal and storage medium
CN109392191A (en) Automobile data recorder and the communication means of terminal, automobile data recorder and storage medium
CN115119047A (en) Vehicle-based multimedia work generation method and system, storage medium and electronic equipment
US20230267775A1 (en) Collaborative data sharing for data anomalies
CN114089826A (en) Vehicle end scene generation method and device, vehicle end and storage medium
CN116684312A (en) Visualization method and device for domain controller DDS communication network architecture and vehicle
CN116801008A (en) Media information material processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant