CN112699257A - Method, device, terminal, server and system for generating and editing works - Google Patents

Info

Publication number
CN112699257A
Authority
CN
China
Prior art keywords: work, creation, cloud, module, authoring
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110044146.6A
Other languages
Chinese (zh)
Inventor
丁磊
王超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Internet Technology Co Ltd
Human Horizons Shanghai New Energy Drive Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Human Horizons Shanghai New Energy Drive Technology Co Ltd
Application filed by Human Horizons Shanghai Internet Technology Co Ltd and Human Horizons Shanghai New Energy Drive Technology Co Ltd
Priority to CN202110044146.6A
Publication of CN112699257A

Classifications

    • G06F16/48: Information retrieval of multimedia data (e.g. slideshows comprising image and additional audio data); retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/438: Information retrieval of multimedia data; querying; presentation of query results
    • G06F16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9554: Retrieval from the web using information identifiers by using bar codes

Abstract

The application provides a method, a device, a terminal, a server and a system for generating and editing works, wherein the method for generating the works comprises the following steps: detecting a scene condition of a vehicle end, wherein the scene condition comprises a facial image of a target user; determining an emotional state of the target user according to the facial image of the target user; controlling the vehicle-mounted multimedia component to collect corresponding multimedia resources according to the emotional state of the target user, and sending a work creation request to the cloud, wherein the work creation request comprises creation parameters corresponding to the multimedia resources, so that the cloud requests a work creation module to create a corresponding created work according to the creation parameters; and receiving the created work returned by the cloud and displaying it at the vehicle end. According to the embodiment of the application, through communication between the vehicle end and the cloud end, a work creation service is provided to the user at the vehicle end, and creation materials such as photos and videos shot by the vehicle-end camera, as well as music, can be used to provide created works such as poetry, paintings, music and video clips for the user.

Description

Method, device, terminal, server and system for generating and editing works
The application is a divisional application of a prior application (title of the invention: Method, device, terminal, server and system for generating and editing works; filing date: June 4, 2020; application number: 202010497041.1).
Technical Field
The application relates to an artificial intelligence technology, in particular to a method, a device, a terminal, a server and a system for generating and editing works.
Background
The vehicle end refers to the in-vehicle infotainment product installed in a vehicle; functionally, it enables information communication between people and the vehicle and between the vehicle and the outside (for example, between vehicles). At present, most vehicle ends provide only multimedia and navigation services for users, so the forms of service are limited. However, as vehicles become more widespread, users expect the vehicle end to provide more diversified services.
Disclosure of Invention
The embodiment of the application provides a method, a device, a terminal, a server and a system for generating and editing works, which are used for addressing the problems in the related art. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a work generation method, including:
detecting a scene condition of a vehicle end, wherein the scene condition comprises a face image of a target user;
determining an emotional state of the target user according to the facial image of the target user;
controlling a vehicle-mounted multimedia assembly to collect corresponding multimedia resources according to the emotional state of the target user, and sending a work creation request to a cloud end, wherein the work creation request comprises creation parameters corresponding to the multimedia resources, so that the cloud end requests a work creation module to create corresponding created works according to the creation parameters;
and receiving the creative work returned by the cloud end and displaying the creative work at the vehicle end.
In a second aspect, an embodiment of the present application provides a work generation method, applied to a cloud, including:
receiving a work creation request sent by a vehicle end, wherein the work creation request comprises creation parameters corresponding to multimedia resources; the multimedia resource is acquired by controlling the vehicle-mounted multimedia component according to the emotion state of the target user determined by the facial image of the target user;
requesting a work creation module to create a corresponding created work according to the creation parameters;
and sending the corresponding creative work to the vehicle end.
In a third aspect, an embodiment of the present application provides a work generation method, applied to a vehicle end, including:
performing semantic recognition on the voice command to obtain a first recognition result;
and triggering the vehicle end to execute the work generation method of any one of the above aspects according to a target recognition result, wherein the target recognition result comprises the first recognition result.
In one embodiment, the method further comprises:
sending the voice instruction to a cloud end so that the cloud end generates a second recognition result according to the voice instruction;
receiving the second recognition result returned by the cloud end;
and determining the target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
In a fourth aspect, an embodiment of the present application provides a method for editing a work, which is applied to a mobile terminal, and the method includes:
identifying a work editing identifier displayed at a vehicle end;
determining the address of the creative work at the cloud end according to the work editing identification, wherein the creative work is generated according to any one of the work generation methods;
acquiring an authored work from a cloud according to an address;
and displaying the creative works at the mobile terminal.
In a fifth aspect, an embodiment of the present application provides a work generation apparatus, including:
a work creation request sending module, used for sending a work creation request to a cloud end, wherein the work creation request comprises creation parameters corresponding to multimedia resources, so that the cloud end requests the work creation module to create a corresponding created work according to the creation parameters; the multimedia resources are collected by controlling the vehicle-mounted multimedia component according to the emotional state of the target user determined from the facial image of the target user;
and the work display module is used for receiving and displaying the creative work at the vehicle end.
In a sixth aspect, an embodiment of the present application provides a work generation apparatus, including:
a work creation request receiving module, used for receiving a work creation request sent by a vehicle end, wherein the work creation request comprises creation parameters corresponding to multimedia resources; the multimedia resources are collected by controlling the vehicle-mounted multimedia component according to the emotional state of the target user determined from the facial image of the target user;
the request module is used for requesting the work creation module to create the corresponding created work according to the creation parameters;
and the work sending module is used for sending the creative work to the vehicle end.
In a seventh aspect, an embodiment of the present application provides a work generation apparatus, including:
the semantic recognition module is used for performing semantic recognition on the voice command to obtain a first recognition result;
and the triggering execution module is used for triggering the vehicle end to execute the work generation method of any aspect above according to the target recognition result, and the target recognition result comprises the first recognition result.
In one embodiment, the apparatus further comprises:
the voice instruction sending module is used for sending the voice instruction to a cloud end so that the cloud end can generate a second recognition result according to the voice instruction;
the identification result receiving module is used for receiving the second identification result returned by the cloud end;
and the arbitration module is used for determining the target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
In an eighth aspect, an embodiment of the present application provides a work editing apparatus, including:
the identification recognition module is used for recognizing the editing identification of the works displayed at the vehicle end;
the address determining module is used for determining the address of the creative work at the cloud end according to the work editing identification, and the creative work is generated according to any one of the work generating methods;
and the work acquisition and display module is used for acquiring the creative work from the cloud according to the address and displaying the creative work on the mobile terminal.
In a ninth aspect, an embodiment of the present application provides a vehicle end terminal, including:
at least one first processor; and
a first memory communicatively coupled to the at least one first processor; wherein
the first memory stores instructions executable by the at least one first processor, the instructions being executable by the at least one first processor to enable the at least one first processor to perform any of the above vehicle-end work generation methods.
In a tenth aspect, an embodiment of the present application provides a server, including:
at least one second processor; and
a second memory communicatively coupled to the at least one second processor; wherein
the second memory stores instructions executable by the at least one second processor to enable the at least one second processor to perform the work generation method of any of the above aspects.
In an eleventh aspect, an embodiment of the present application provides a mobile terminal, including:
at least one second processor; and
a second memory communicatively coupled to the at least one second processor; wherein
the second memory stores instructions executable by the at least one second processor to enable the at least one second processor to perform the work editing method of any of the above aspects.
In a twelfth aspect, an embodiment of the present application provides a work generation system, including the vehicle end in any one of the above aspects and the cloud server in any one of the above aspects.
In a thirteenth aspect, an embodiment of the present application provides a work generation system, including a vehicle end and a cloud end, where the vehicle end includes a work generation module, and the cloud end includes an authoring service module, a work creation module and an API gateway; the work generation module includes: a vehicle-mounted multimedia component, used for detecting scene conditions of the vehicle end, the scene conditions including a facial image of a target user, determining the emotional state of the target user according to the facial image of the target user, and controlling the vehicle-mounted multimedia component to collect corresponding multimedia resources according to the emotional state of the target user; and a creation interface, used for sending a work creation request to the cloud end, wherein the work creation request comprises creation parameters corresponding to the multimedia resources; the authoring service module includes: a creation interface, used for requesting the work creation module to create a corresponding created work according to the creation parameters; and a display interface, used for returning the created work to the vehicle end; the cloud end parses the work creation request through the API gateway so as to call the creation interface of the authoring service module.
In a fourteenth aspect, the present application provides a computer-readable storage medium, in which computer instructions are stored, and when executed by a processor, the computer instructions implement the work generation method of any one of the above aspects.
The advantages or beneficial effects of the above technical solution at least include: through communication between the vehicle end and the cloud end, a work creation service is provided to the user at the vehicle end, and creation materials such as photos or videos shot by the vehicle-end camera, as well as music, can be used to provide created works such as poems, paintings, music and video clips for the user.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
FIG. 1 is a schematic diagram of a work generation method according to one implementation of an embodiment of the application;
FIG. 2 is a schematic diagram of a work generation method according to another implementation of an embodiment of the application;
FIG. 3 is a schematic diagram of a work generation method according to yet another implementation of an embodiment of the present application;
FIG. 4 is a schematic diagram of a work generation method according to yet another implementation of an embodiment of the present application;
fig. 5 is a schematic communication diagram of a vehicle end and a cloud end according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a work generation method according to one implementation of an embodiment of the present application;
fig. 7 is a schematic communication diagram of a vehicle end and a cloud end according to another embodiment of the present application;
FIG. 8 is a schematic diagram of a work generation method according to one implementation of an embodiment of the application;
fig. 9 is a schematic communication diagram of a vehicle end and a cloud end according to another embodiment of the present application;
FIGS. 10, 11-1 through 11-4 are various examples of a work creation system according to an embodiment of the present application;
FIG. 12 is a diagram illustrating a method for generating a work according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a work generation method according to another implementation of an embodiment of the present application;
FIG. 14 is an example of a work creation system of an embodiment of the present application;
FIG. 15 is a schematic view of a work production device at a vehicle end according to an embodiment of the present application;
fig. 16 is a schematic diagram of a cloud-based work generation apparatus according to an embodiment of the present application;
FIG. 17 is a schematic view of a work production apparatus at a vehicle end according to another embodiment of the present application;
fig. 18 is a block diagram of a terminal or server that can implement embodiments of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The embodiment of the application provides a work generation system including a vehicle end and a cloud end; artificial-intelligence creation of works can be realized through communication between the vehicle end and the cloud end. In one example, the vehicle end may provide a work generation module, such as an Artificial Intelligence Creation (AIC) Application (APP), which is used to implement the vehicle-end work generation method.
Example one
Fig. 1 shows a flowchart of a work generation method according to a first embodiment of the present application. The work generation method can be applied to a vehicle end, namely can be realized by the vehicle end.
As shown in fig. 1, the work generation method may include:
step S101, sending a work creation request to a cloud end, wherein the work creation request comprises creation parameters, so that the cloud end requests a work creation module to create corresponding works according to the creation parameters.
For example, a user may input an authoring instruction through the interface provided by the vehicle end and select appropriate authoring material, such as pictures, video or audio. The vehicle end then sends a work creation request to the cloud end according to the authoring instruction. The authoring material can be multimedia resources collected by the vehicle-mounted multimedia component, such as audio collected by the vehicle-mounted microphone component and image data collected by the vehicle-mounted camera component, where the image data includes pictures and videos.
In one example, if the authoring material file is small, such as a picture, the authoring material may be sent to the cloud along with the work creation request. However, if the authoring material is large, transmitting it directly with the request would affect communication efficiency. Therefore, the authoring material can first be uploaded to the cloud end; after receiving it, the cloud end stores the material and returns its address to the vehicle end. In one example, the uploaded authoring material may be saved to a web disk at the cloud, such as an Object Storage Service (OBS). Further, the address of the authoring material may be a Uniform Resource Locator (URL). The vehicle end receives the address returned by the cloud end and sends it, as a creation parameter, to the cloud end along with the work creation request.
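As an illustration of the upload-then-request flow just described, the following sketch (in Python, using the requests library) uploads a large authoring material, receives the URL assigned by the cloud web disk, and then sends it as a creation parameter; the endpoint paths and JSON field names are assumptions, since the patent does not define a concrete API.

    import requests

    CLOUD_BASE = "https://cloud.example.com"  # hypothetical cloud gateway address

    def upload_authoring_material(file_path: str) -> str:
        # Upload a large authoring material (e.g. a video) to the cloud web disk
        # and return the URL address the cloud assigns to it.
        with open(file_path, "rb") as f:
            resp = requests.post(f"{CLOUD_BASE}/material/upload", files={"file": f})
        resp.raise_for_status()
        return resp.json()["url"]

    def send_work_creation_request(material_url: str, theme: str, style: str) -> str:
        # Send a work creation request whose creation parameters are the material
        # address plus the user's expectations; the cloud returns a creation task ID.
        payload = {"material_url": material_url, "theme": theme, "style": style}
        resp = requests.post(f"{CLOUD_BASE}/aic/create", json=payload)
        resp.raise_for_status()
        return resp.json()["task_id"]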
In one example, authoring parameters may include user expectations, such as theme, style, prosody, etc., that a user may input through the AIC APP.
Step S102, receiving a creation task identifier (ID) corresponding to the work creation request returned by the cloud end.
The cloud end can request the work creation module to create the corresponding work according to the creation parameters after receiving the work creation request sent by the vehicle end. In one example, an Artificial Intelligence (AI) authoring model may be included in the work creation module, which may be obtained by training a deep-learning neural network with a large amount of sample data. There may be multiple AI authoring models, such as an AI poetry creation model, an AI painting creation model, an AI music creation model and an AI video clip model, and the work creation module selects the corresponding model according to the creation material.
In one embodiment, the authoring parameter is an address of the authoring material in the cloud, such as a URL address. The work creation module can download corresponding creation materials from the network disk according to the address.
In one example, the authoring parameters may include the user desires described above, which are input as input parameters into the corresponding AI authoring model by the work authoring module to conform the final authored work to the user desires.
Further, since work creation is time-consuming and the cloud end is unlikely to immediately return the work to the vehicle end, the cloud end may generate a creation task ID corresponding to the work creation request and send the creation task ID to the vehicle end.
Step S103, sending a first query request to the cloud, wherein the first query request comprises the creation task ID, so that the cloud queries the works corresponding to the creation task ID.
After the vehicle end receives the creation task ID, it can query the cloud end according to the creation task ID as to whether the corresponding work is finished. After receiving the first query request, the cloud end queries the corresponding work according to the creation task ID. In one example, when the work creation is completed, the work is stored in a database (DB) of the cloud end, and the cloud end may search for the corresponding work in the work database according to the creation task ID and send the found work to the vehicle end.
In one embodiment, step S103 may include: sending the first query request to the cloud end according to a first preset time interval. That is, the vehicle end polls the cloud end repeatedly to ask whether the created work is completed. The cloud end checks whether the corresponding created work exists in the work database according to the first query request, and if it exists, sends the created work to the vehicle end.
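A minimal sketch of this vehicle-end polling loop follows; the /aic/result endpoint, the response fields and the value of the first preset time interval are assumptions, since the patent leaves them unspecified.

    import time
    import requests

    CLOUD_BASE = "https://cloud.example.com"   # hypothetical cloud gateway address
    FIRST_PRESET_INTERVAL_S = 5                # assumed first preset time interval

    def poll_created_work(task_id: str, timeout_s: int = 300):
        # Resend the first query request with the creation task ID until the
        # created work is returned or the timeout expires.
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            resp = requests.get(f"{CLOUD_BASE}/aic/result", params={"task_id": task_id})
            resp.raise_for_status()
            body = resp.json()
            if body.get("status") == "done":
                return body["work"]            # creative work to display at the vehicle end
            time.sleep(FIRST_PRESET_INTERVAL_S)
        return None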
And step S104, receiving and displaying the creative work corresponding to the creative task ID returned by the cloud end at the vehicle end.
After the vehicle end receives the creative works returned by the cloud end, the creative works can be displayed through the vehicle-mounted multimedia assembly.
According to the work generation method provided by the embodiment of the application, intelligent creation can be performed according to the creation materials provided by a user, and the created works are displayed to the user at the vehicle end. For example, photos and videos shot by the vehicle-mounted camera are used as creation materials to create personalized works such as AI poetry, AI paintings, AI music and AI videos for the user.
In one embodiment, the authoring material includes multimedia assets collected by the in-vehicle multimedia component, and the step S101 may include: detecting scene conditions of a vehicle end; and controlling the corresponding vehicle-mounted multimedia assembly to collect the corresponding multimedia resource according to the scene condition, and sending a work creation request to the cloud, wherein the creation request comprises creation parameters corresponding to the multimedia resource.
That is, the sending of the work creation request from the vehicle end to the cloud end may be initiated by the user, or may be initiated automatically according to the scene condition of the vehicle end (vehicle). When it is detected that the scene condition of the vehicle end meets a preset condition, the vehicle end automatically triggers the corresponding vehicle-mounted multimedia component to collect the corresponding multimedia resource and sends a work creation request to the cloud end.
In one example, the multimedia asset may be uploaded to the cloud as an authoring material in advance, such that an address returned by the cloud (e.g., an address of the multimedia asset on a web disk of the cloud) is received, and the authoring parameter may include the address, and be sent to the cloud along with the authoring request. When the creation service module at the cloud end requests the work creation module to create the works, the corresponding multimedia resources can be downloaded from the network disk through the address.
In one embodiment, the scene condition includes positioning information, and the controlling the corresponding vehicle-mounted multimedia component to acquire the corresponding multimedia resource according to the scene condition includes: and controlling the vehicle-mounted multimedia assembly to acquire multimedia resources under the condition that the positioning information corresponds to the target journey information.
For example, if the target journey information is a certain scenic spot and the current positioning information of the vehicle is detected to match that scenic spot, the vehicle end can automatically trigger a vehicle-mounted multimedia component, such as the camera component or the microphone component, to collect the corresponding audio, images or video, so that the sights and sounds of the journey are recorded for the user, corresponding AI created works are generated based on them, and the works are displayed at the vehicle end.
In one embodiment, the scene condition includes an environmental audio, and the controlling the corresponding vehicle-mounted multimedia component to acquire the corresponding multimedia resource according to the scene condition includes: controlling a microphone multimedia assembly to collect environmental audio; controlling a camera multimedia assembly to acquire image data; the multimedia assets include ambient audio and image data.
For example, the vehicle end can automatically trigger the microphone component to collect environmental audio and trigger the camera component to collect image data, so that a corresponding AI created work can be generated based on the environmental audio and the image data and displayed at the vehicle end. The environmental audio may be audio inside or around the vehicle.
In one example, the vehicle end detects whether the environmental audio includes a preset trigger word; for example, semantic recognition may be performed on the environmental audio to obtain a corresponding recognition result, and it is detected whether the recognition result matches the preset trigger word. If it matches, the microphone component is triggered to continue collecting environmental audio, and the camera component is triggered to collect image data.
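A sketch of this trigger-word check is shown below; the speech recognizer, the camera and microphone objects, and the trigger words themselves are placeholders, since the patent does not fix any of them.

    PRESET_TRIGGER_WORDS = {"record this moment", "start creating"}   # assumed examples

    def check_trigger(audio_chunk, recognize_speech, camera, microphone) -> bool:
        # recognize_speech stands for the semantic recognition of the environmental audio.
        text = recognize_speech(audio_chunk).lower()
        if any(word in text for word in PRESET_TRIGGER_WORDS):
            microphone.start_capture()         # keep collecting environmental audio
            camera.start_capture()             # collect image data
            return True
        return False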
In one embodiment, the scene condition includes a facial image of a target user, and the controlling of the corresponding vehicle-mounted multimedia component to acquire the corresponding multimedia resource according to the scene condition includes: determining an emotional state of the target user according to the facial image of the target user; and controlling the vehicle-mounted multimedia assembly to acquire the multimedia resources under the condition that the emotional state of the target user corresponds to the preset emotional state.
The facial image of the user contains the user's facial features, and the emotional state of the user can be recognized from these features. For example, the facial features may be input into a trained emotion recognition model to obtain the emotional state of the user. If the recognized emotional state matches the preset emotional state (such as happiness), collection of multimedia resources can be triggered automatically to record audio, images or video of the user in that emotional state, and a corresponding AI created work is generated based on these multimedia resources and displayed at the vehicle end.
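The following sketch illustrates this trigger logic; the emotion recognition model and the camera/microphone objects are placeholders, and "happy" as the preset emotional state is only the example named above.

    PRESET_EMOTIONS = {"happy"}                # assumed preset emotional state

    def on_face_frame(face_image, recognize_emotion, camera, microphone) -> bool:
        # recognize_emotion stands for the trained emotion recognition model.
        emotion = recognize_emotion(face_image)        # e.g. "happy", "neutral", ...
        if emotion in PRESET_EMOTIONS:
            camera.start_capture()                     # collect image/video material
            microphone.start_capture()                 # collect audio material
            return True                                # capture triggered for this frame
        return False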
According to the method of this embodiment, during the use of the vehicle, for example during a journey, the vehicle end can automatically trigger the vehicle-mounted multimedia component to collect multimedia resources according to the scene conditions, record the user's behavior and what the user sees and hears, and generate corresponding AI created works based on these materials for display at the vehicle end.
In one implementation, as shown in fig. 2, the method for generating a work at a vehicle end according to an embodiment of the present application may further include: step S201, sending a second query request to the cloud end so that the cloud end queries a plurality of works from a work database, wherein the works comprise the creative works; step S202, receiving a plurality of works returned by the cloud; and step S203, displaying a plurality of works returned by the cloud at the vehicle end.
Based on this method, the user can query the full set of AI created works through the AIC APP at the vehicle end. For example, the vehicle end sends a second query request to the cloud end; after receiving it, the cloud end queries a plurality of works from its work database and returns them to the vehicle end. After receiving the plurality of works, the vehicle end can display them.
For example, the plurality of works can be displayed in a waterfall-flow layout using the multimedia component at the vehicle end. The display form is not limited in the embodiment of the present application and may be set according to the user's needs. The works in the work database may be generated by the work creation module, that is, obtained by the work generation method of any of the above embodiments, or may be pre-stored.
In one implementation, as shown in fig. 3, the method for generating a work at a vehicle end according to an embodiment of the present application may further include: step S301, sending a sharing request to a cloud end, wherein the sharing request comprises a work ID of the created work, so that the cloud end queries the address of the created work in a work database according to the work ID; step S302, receiving an address of the creative work returned by the cloud; step S303, generating a corresponding work sharing identifier according to the address of the created work; and S304, displaying the work sharing identification at the vehicle end.
Based on this method, the user can share the created work to social media such as WeChat and Weibo. For example, the user can trigger a sharing instruction for the created work through the sharing entrance provided by the AIC APP at the vehicle end. After receiving the sharing instruction, the vehicle end sends a sharing request to the cloud end; after receiving the sharing request, the cloud end queries the address of the created work, such as a URL address, from its work database according to the work ID of the created work and sends the address to the vehicle end. The vehicle end generates a corresponding work sharing identifier, such as a two-dimensional code, from the address and displays it at the vehicle end. The user can scan the two-dimensional code with a mobile terminal such as a mobile phone and share a card of the work to social media. Each work stored in the work database corresponds to a work ID, which may be associated with the creation task ID during the work creation phase.
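A sketch of this sharing flow at the vehicle end is given below; the /aic/share endpoint is an assumption, and the two-dimensional code is rendered with the third-party qrcode package as one possible implementation.

    import requests
    import qrcode

    CLOUD_BASE = "https://cloud.example.com"   # hypothetical cloud gateway address

    def show_share_code(work_id: str, out_path: str = "share_code.png") -> str:
        # Ask the cloud for the work's URL by work ID (steps S301-S302).
        resp = requests.post(f"{CLOUD_BASE}/aic/share", json={"work_id": work_id})
        resp.raise_for_status()
        work_url = resp.json()["url"]          # address of the created work in the work database
        # Generate the work sharing identifier (two-dimensional code) and save it
        # so it can be shown on the vehicle-end screen (steps S303-S304).
        qrcode.make(work_url).save(out_path)
        return out_path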
In one embodiment, step S104 may include: editing the creative work according to the editing instruction of the user; and displaying the edited creative works at the vehicle end.
That is to say, after the AI created work is returned by the cloud end, it can be edited at the vehicle end, for example by adding text or emoticons, so that secondary creation is realized at the vehicle end, and the edited (secondarily created) work is displayed at the vehicle end.
In one implementation, the work generation method of this embodiment may further include: sending the edited created work to the cloud end; sending a sharing request to the cloud end, wherein the sharing request comprises the work ID of the edited created work, so that the cloud end queries the address of the edited created work at the cloud end according to the work ID; receiving the address of the edited created work returned by the cloud end; generating a corresponding work sharing identifier according to the address of the edited created work; and displaying the work sharing identifier at the vehicle end.
That is, the edited creative work can be shared at the vehicle end, and the sharing method can refer to steps S301 to S304. Specifically, the edited creative work can be uploaded to the cloud end from the vehicle end, the cloud end can store the edited creative work at the vehicle end in the work database, and the edited creative work also corresponds to the work ID. When the vehicle end shares the edited creative work, the sharing request sent by the vehicle end to the cloud end comprises the work ID of the edited creative work, so that the cloud end inquires the corresponding edited creative work from the cloud end (such as in a work database) according to the work ID to obtain the address of the edited creative work and returns the address to the vehicle end. And after the vehicle end receives the address, generating a corresponding sharing identifier.
In one implementation, the method of the embodiment of the present application may include: generating a work editing identifier according to the address of the created work at the cloud end; and displaying the work editing identification at the vehicle end so that the mobile terminal determines the address of the created work at the cloud end according to the work editing identification, acquires the created work from the cloud end according to the address and displays the created work on the mobile terminal.
The vehicle end can generate a work editing identification, such as a two-dimensional code, wherein the work editing identification corresponds to the address of the work to be edited at the cloud end. The user can scan the two-dimensional code at the vehicle end through the mobile terminal, the creative work can be displayed at the mobile terminal, and a corresponding editing page is provided at the mobile terminal for the user to perform secondary creation on the creative work. The mobile terminal can be an intelligent device such as a mobile phone and a tablet personal computer. Furthermore, the mobile terminal can obtain corresponding editing materials according to the editing instructions of the user, and edits the creative works according to the editing materials, so that secondary creation of the mobile terminal is realized. The user may share the edited (secondarily authored) creative work to social media at the mobile terminal.
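The mobile-terminal side can be sketched as follows; decode_qr is a placeholder for whatever QR decoding the mobile terminal uses, since the patent does not specify one.

    import requests

    def open_work_for_editing(qr_image_path: str, decode_qr) -> bytes:
        # The work editing identifier encodes the address of the created work at the cloud end.
        work_url = decode_qr(qr_image_path)
        resp = requests.get(work_url)          # acquire the created work from the cloud
        resp.raise_for_status()
        return resp.content                    # content to display in the mobile editing page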
FIG. 4 shows a flow diagram of a work generation method according to one embodiment of the present application. The work generation method can be applied to the cloud. As shown in fig. 4, the method may include:
step S401, receiving a work creation request sent by a vehicle end, wherein the work creation request comprises creation parameters;
step S402, generating an authoring task ID corresponding to the work authoring request;
step S403, sending the authoring task ID to a vehicle end;
s404, requesting a work creation module to create a corresponding created work according to the creation parameters;
step S405, receiving a first query request sent by a vehicle end, wherein the first query request comprises an authoring task ID;
s406, inquiring an authored work corresponding to the authored task ID;
step S407, the creative work corresponding to the creative task ID is sent to the vehicle end.
Some specific implementation manners of the cloud work generation method implemented by the application may refer to corresponding descriptions in the above embodiments in combination with fig. 5, and are not described herein again.
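As a rough cloud-side illustration of steps S401 through S407, the sketch below keeps task state in memory and represents the work creation module by a callable; a real deployment would queue the creation call rather than run it inline, and all names here are assumptions.

    import uuid

    _tasks = {}                                # creation task ID -> {"status", "work"}

    def handle_creation_request(creation_params: dict, work_creation_module) -> str:
        # S401-S404: accept the request, issue a creation task ID and hand the
        # creation parameters to the work creation module (synchronous in this sketch).
        task_id = uuid.uuid4().hex
        _tasks[task_id] = {"status": "authoring", "work": None}
        _tasks[task_id]["work"] = work_creation_module(creation_params)
        _tasks[task_id]["status"] = "done"
        return task_id

    def handle_first_query(task_id: str) -> dict:
        # S405-S407: look up the work corresponding to the creation task ID.
        task = _tasks.get(task_id, {"status": "unknown", "work": None})
        return {"status": task["status"], "work": task["work"]}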
In one embodiment, as shown in fig. 6, the cloud product generation method may include: step S601, receiving a second query request sent by a vehicle end; step S602, inquiring a plurality of works from a work database according to a second inquiry request, wherein the works comprise the creative works; and step S603, sending the plurality of works to the vehicle end. Specifically, refer to fig. 7 and the corresponding description in the above embodiments, which are not repeated herein.
In one embodiment, as shown in fig. 8, the cloud product generation method may include: step S801, receiving a sharing request sent by a vehicle end, wherein the sharing request comprises a work ID of a creative work; step S802, inquiring the address of the creative work in a work database according to the ID of the work; step S803, the address of the creative work is sent to the vehicle end. Specifically, refer to fig. 9 and the corresponding description in the above embodiments, and are not repeated herein.
The embodiment of the application further provides a method for editing a work of a mobile terminal, which includes: identifying a work editing identifier displayed at a vehicle end; determining the address of the creative work at the cloud end according to the work editing identification, wherein the creative work can be generated according to the work generation method in any one of the above embodiments; acquiring an authored work from a cloud according to an address; and displaying the creative works at the mobile terminal.
In one embodiment, the method for editing a work may further include: acquiring a corresponding editing material according to an editing instruction of a user; editing the creative work according to the editing materials; and sharing the edited creative work.
The specific implementation manner may refer to the corresponding description of the work editing method at the vehicle end, and is not described herein again.
Example two
Fig. 10 shows an architecture diagram of a work generation system in the embodiment of the present application. It should be noted that this architecture is merely an example of one implementation. The vehicle end comprises a work generation module (AIC APP), and the cloud end comprises an authoring service module, a work creation module and an API (Application Programming Interface) gateway. Further, the work generation module and the authoring service module each include an authoring (create) interface and a display (pull) interface.
In one implementation, according to the architecture of the second embodiment, step S101 may include: sending the work creation request to the cloud end by calling the creation interface at the vehicle end, so that the cloud end calls the creation interface of the authoring service module through the API gateway. Step S103 may include: sending the first query request to the cloud end by calling the display interface at the vehicle end. Accordingly, step S401 (receiving the work creation request sent by the vehicle end) may include: parsing the work creation request through the API gateway so as to call the creation interface of the authoring service module. Steps S405 and S406 may include: parsing the first query request through the API gateway so as to call the display interface of the authoring service module.
In an implementation mode, the display interface of the work generation module is called by the vehicle end according to a first preset time interval, and the display interface of the creation service module is called by the cloud end according to a second preset time interval, so that asynchronous query of the vehicle end and the cloud end on the created works is achieved.
In one embodiment, the creation parameter includes the address of the creation material at the cloud end, the vehicle end further includes an upload interface, the cloud end includes a web disk, the vehicle end sends the creation material to the web disk by calling the upload interface, and the address is the address of the creation material on the web disk. Further, the cloud end comprises a work database in which a plurality of works are stored.
In one embodiment, the authoring service module includes a Data Processing (Data Processing) sub-module. The data processing sub-module may perform data processing on data (e.g., authoring material corresponding to authoring parameters) received from the vehicle end, such as establishing a matching relationship of account information of the vehicle end or the user or filtering some sensitive words. The data processing module can also store the AI creative work into a work database after processing the AI creative work. The data processing module can be an auxiliary sub-module of the authoring service module. The authoring service module may be managed through an operations center.
In one embodiment, the work generation module and the authoring service module may each include a list interface and a sharing interface.
Step S201 may include: sending a second query request to the cloud end by calling the list interface at the vehicle end, so that the cloud end calls the list interface of the authoring service module through the API gateway. Step S301 may include: sending a sharing request to the cloud end by calling the sharing (share) interface at the vehicle end, so that the cloud end calls the sharing interface of the authoring service module through the API gateway.
Accordingly, step S601 may include: parsing the second query request through the API gateway so as to call the list interface of the authoring service module to query the plurality of works from the work database. Step S801 may include: parsing the sharing request through the API gateway so as to call the sharing interface of the authoring service module.
Specifically, after the user selects creation through the AIC APP, the AIC APP calls the creation interface, thereby sending a work creation request to the cloud end. The cloud end calls the creation interface of the authoring service module through the API gateway, and this interface calls the corresponding creation model in the work creation module to carry out AI work creation. Because the authoring process takes time, the authoring service module generates an authoring task ID and returns it to the AIC APP.
After receiving the authoring task ID, the AIC APP calls the display interface with it, thereby sending a first query request carrying the authoring task ID to the cloud end. Correspondingly, the cloud end asynchronously calls the display interface of the authoring service module through the API gateway; the display interface queries the work corresponding to the authoring task ID from the work database and returns the work to the AIC APP if it exists.
In one example, a user may query a full amount of works, including the works obtained by the work generation method of any of the above embodiments, i.e., AI authoring results, through the AIC APP. Specifically, a user inputs a full query instruction through the AIC APP, and the AIC APP calls the list interface, so that a second query request is sent to the cloud, and the second query request is used for requesting to query a plurality of works. Correspondingly, the cloud calls a list interface of the creation service module through the API gateway, a plurality of works are inquired from the work database, and the result is returned to the AIC APP. After receiving the plurality of works, the AIC APP can display the plurality of works in a waterfall flow mode.
In one example, through the sharing entrance of the AIC APP, the user may scan a two-dimensional code with a mobile phone and share a card of the created work to social media. The created work may be a work obtained by the work generation method of any of the above embodiments, that is, an AI creation result. Specifically, the AIC APP sends a sharing request carrying the work ID of the created work to the cloud end by calling the sharing interface. Correspondingly, the cloud end calls the sharing interface of the authoring service module through the API gateway, queries the address of the created work in the work database according to the work ID, and sends the address to the vehicle end. After receiving the address, the AIC APP generates a corresponding work sharing identifier, such as a two-dimensional code, and displays it in the AIC APP.
Through the work generation method of this embodiment, corresponding AI created works, such as AI poetry, AI painting, AI music and AI video, can be created based on the creation materials at the vehicle end, for example the vehicle-end album (which may contain photos shot by the camera), and the AI created works can be displayed and shared at the vehicle end.
EXAMPLE III
In this embodiment, the cloud product generation method may further include: receiving a result inquiry ID returned by the work creation module according to the creation parameters; inquiring whether the works corresponding to the result inquiry ID are finished from the work creation module according to a second preset time interval; in the completed case, the work corresponding to the result query ID is obtained from the work authoring module.
Because the work creation module takes time to create the work, it may first return a result query ID to the authoring service module. The result query ID is associated with the creation parameters provided by the vehicle end, that is, with the creation task ID returned by the cloud end to the vehicle end. The authoring service module may continuously poll the work creation module with the result query ID at a second preset time interval until the work corresponding to the result query ID is obtained. That is, the vehicle end polling the cloud end for the work result and the cloud end polling the work creation module are performed asynchronously.
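The cloud-side polling with the result query ID can be sketched as follows; the query method on the work creation module and the value of the second preset time interval are assumptions.

    import time

    SECOND_PRESET_INTERVAL_S = 10              # assumed second preset time interval

    def poll_work_creation_module(result_query_id: str, work_creation_module, max_attempts: int = 60):
        # Poll until the work corresponding to the result query ID is finished,
        # then return it so the authoring service module can store and forward it.
        for _ in range(max_attempts):
            result = work_creation_module.query(result_query_id)   # placeholder interface
            if result is not None:
                return result
            time.sleep(SECOND_PRESET_INTERVAL_S)
        return None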
In one example, as shown in fig. 11-1, the work generation method of the embodiment of the present application may be applied to AI poetry creation. Specifically, the example method may include:
(1) An AI poetry creation model is trained according to sample data.
(2) A user selects material from the album through the poetry creation application of the vehicle-end AIC APP. The picture material is uploaded to the web disk through the upload interface, and the URL address of the material is obtained. The user selects creation, which calls the creation interface; this interface calls the poetry creation interface of the authoring service module through the API gateway, and that implementation calls the AI poetry creation model of the work creation module. While the AI poetry is being created, the authoring service module generates a creation task ID and returns it to the vehicle-end AIC APP.
(3) After obtaining the creation task ID, the poetry creation application of the vehicle-end AIC APP calls the display interface with it; the cloud end asynchronously calls the display interface of the authoring service module through the API gateway and polls the work creation module for the AI poetry creation result (work). After several rounds of polling, the AI poetry creation result (work) is returned to the vehicle-end AIC APP and displayed.
(4) The user can query all AI poetry creation results (works) through the poetry creation application of the vehicle-end AIC APP. The vehicle-end AIC APP calls the list interface, which in turn calls the list interface of the authoring service module through the API gateway; the implementation queries a plurality of AI poetry creation results (works) from the work database and returns them to the vehicle-end AIC APP for display in a waterfall-flow layout.
(5) The user can use the mobile device to scan the two-dimensional code through a sharing entrance provided by the poetry creation application of the AIC APP at the vehicle end, and share cards of AI poetry creation results (works) to social media.
In one example, as shown in fig. 11-2, the work generation method of the embodiment of the present application may be applied to AI painting creation. Specifically, the differences from the AI poetry creation method are that AI painting creation uses creation materials suited to painting, the vehicle-end AIC APP calls the painting creation application, and the underlying AI creation model is the painting creation model.
In one example, as shown in fig. 11-3, the work generation method of the embodiment of the present application may be applied to AI music creation. Specifically, the differences from the AI poetry creation method are that the creation material used in AI music creation consists of music parameters, such as the music style and whether to generate vocals, so the creation material does not need to be uploaded to the web disk in advance. In addition, the vehicle-end AIC APP calls the music creation application, and the underlying AI creation model is the music creation model.
Through the work generation method of the embodiment, the corresponding creation of AI poetry works, AI painting and AI music can be realized based on the creation materials at the vehicle end, and the AI creative works can be displayed and shared at the vehicle end.
Example four
In this embodiment, the creation material is a video, and the cloud-based work generation method may further include: when the work creation module completes the work corresponding to the creation parameters, the work creation module calls a callback interface of the authoring service module through the API gateway so as to store the work corresponding to the creation parameters in the work database.
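A minimal sketch of such a callback interface is shown below; the signature and the in-memory dictionary standing in for the work database are assumptions.

    _work_database = {}                        # work ID -> stored work record (stands in for the DB)

    def callback(creation_task_id: str, work_id: str, work_url: str) -> None:
        # Called by the work creation module (via the API gateway) once the video
        # clip task is complete; stores the work so the display interface can find it.
        _work_database[work_id] = {
            "task_id": creation_task_id,
            "url": work_url,                   # address of the clipped video
        }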
In one example, as shown in fig. 11-4, the work generation method of the embodiment of the present application may be applied to an AI video clip. Specifically, the example method may include:
(1) An AI video creation model is trained according to sample data.
(2) A user selects video material from the album through the video clip application of the vehicle-end AIC APP, uploads the video material to the web disk through the upload interface, and obtains the addresses of the video material, such as URL addresses.
(3) The user selects creation, and the vehicle-end AIC APP calls the creation interface, sending a creation request carrying the list of video material addresses to the cloud end. The cloud end calls the video clip creation interface of the authoring service module through the API gateway; this interface, carrying the list of video material URL addresses, requests the video clip service of the work creation module, which downloads all the video materials from the web disk and starts a video clip task based on the AI video creation model. Because the video clip service is time-consuming, the authoring service module returns a creation task ID to the video clip application of the vehicle-end AIC APP.
(4) After the video clipping task is completed, namely the AI video is created, the work creation module calls a callback interface of the creation service module through the API gateway and stores the video clipping result (work) into the work database.
(5) After the video clip application of the vehicle-end AIC APP obtains the creation task ID, it calls the display interface with that ID; the cloud end calls the display interface of the authoring service module through the API gateway and polls for the video clip result (work) corresponding to the creation task ID. Polling is repeated until the video clip result (work) is obtained and returned to the video clip application of the vehicle-end AIC APP for display.
(6) The user can query all video clip results (works) through the video clip application of the vehicle-end AIC APP. The video clip application calls the list interface, which calls the list interface of the authoring service module through the API gateway; the implementation queries a plurality of video clip results (works) from the work database and returns them to the video clip application of the vehicle-end AIC APP for waterfall-flow display.
(7) The user can scan the two-dimensional code through the sharing entrance of the video clip application of the vehicle-end AIC APP and share a card of the video clip result (work) to social media.
Through the work generation method of the embodiment, the creation of the corresponding AI video clip work can be realized based on the video creation material (which can be shot by a camera) at the vehicle end, and the AI video clip work can be displayed and shared at the vehicle end.
Example Five
As shown in fig. 12, an embodiment of the present application provides a work generation method on a vehicle end, which may include:
step S1201, performing semantic recognition on the voice command to obtain a first recognition result;
step S1202, triggering the vehicle end to execute the work generation method according to the target recognition result, where the target recognition result includes the first recognition result.
The vehicle end may provide a vehicle-mounted voice assistant for receiving voice instructions such as: "help me compose a poem", "draw me a painting with sunset as the theme", and so on. After obtaining the audio file of the voice instruction, the vehicle-mounted voice assistant performs local semantic recognition to obtain a first recognition result. According to the first recognition result, the vehicle-end AIC APP is called to create, display and share the corresponding AI creative work.
In one embodiment, as shown in fig. 13, step S1201 may include:
step S1301, sending the voice instruction to a cloud end so that the cloud end can generate a second recognition result according to the voice instruction;
step S1302, receiving a second recognition result returned by the cloud end;
step S1303, determining a target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
After the vehicle-mounted voice assistant obtains the audio of the voice instruction, the audio is sent to the cloud. The cloud transmits the audio to the voice service module through the voice gateway for ASR (Automatic Speech Recognition) and NLU (Natural Language Understanding) recognition. The voice gateway obtains the recognition result, and the arbitration module at the cloud routes the NLU result to the dialogue management module, which generates a second recognition result from this input. The voice gateway obtains the second recognition result and returns it to the vehicle-mounted voice assistant. The vehicle-mounted voice assistant takes the second recognition result from the cloud and the locally returned first recognition result, and decides on one of them as the target recognition result based on a preset arbitration rule.
The arbitration rules can be set according to different scenes. For example: if the application scene is a local session scene, the first recognition result is used preferentially; if the application scene is a cloud session scene, the second recognition result is used preferentially; and if the application scene is a mixed scene, whether to use the first recognition result or the second recognition result is determined according to the order in which the recognition results arrive.
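As a minimal illustration of such an arbitration rule, the sketch below picks the target recognition result from the local (first) and cloud (second) results. The scene labels, the RecognitionResult structure, and the interpretation of the mixed scene as "the earlier-arriving result wins" are assumptions made for this example only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecognitionResult:
    text: str           # recognized intent / NLU output
    source: str         # "local" for the first result, "cloud" for the second result
    arrived_at: float   # timestamp of arrival

def arbitrate(scene: str,
              first: Optional[RecognitionResult],
              second: Optional[RecognitionResult]) -> Optional[RecognitionResult]:
    """Choose the target recognition result according to a preset arbitration rule."""
    if scene == "local":     # local session scene: prefer the first recognition result
        return first or second
    if scene == "cloud":     # cloud session scene: prefer the second recognition result
        return second or first
    # mixed scene: decide by arrival order (here: the earlier-arriving result wins)
    candidates = [r for r in (first, second) if r is not None]
    return min(candidates, key=lambda r: r.arrived_at) if candidates else None
```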
Through the work generation method of this embodiment, a corresponding AI creative work can be created from the creation material at the vehicle end on the basis of the voice dialogue between the user and the vehicle end, and the AI creative work can be displayed and shared at the vehicle end.
In an example, as shown in fig. 14, the vehicle-mounted terminal further includes a first dialogue management module (the vehicle-mounted voice assistant) for performing semantic recognition on the voice command to obtain a first recognition result; the cloud end comprises a second dialogue management module for receiving the voice command from the first dialogue management module and generating a second recognition result according to the voice command; the first dialogue management module is further used for determining a target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule, and for triggering the work generation module according to the target recognition result.
In one embodiment, the first dialog management module comprises a first protocol encoder, a first protocol decoder, and a first dialog engine; the first protocol encoder is used for encoding the voice command and then sending the voice command to the second dialogue management module; the first protocol decoder is used for calling a first recognition result callback interface (an ASR callback and an NLU callback), and sending the decoded first recognition result to the first dialogue engine; the first protocol decoder is also used for calling a second recognition result callback interface and sending the decoded second recognition result to the first dialogue engine; the first dialogue engine is used for determining a target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule and triggering the work generation module.
In one embodiment, the second dialogue management module comprises a second protocol encoder, a voice gateway, a voice service module and a second dialogue engine; the voice gateway receives the voice command, decodes the voice command through the second protocol decoder and then sends the voice command to the second protocol encoder; the second protocol encoder is used for encoding the voice command output by the second protocol decoder; the voice service module carries out semantic recognition on the voice instruction output by the second protocol encoder to obtain an initial recognition result; the second dialogue engine generates a second recognition result according to the initial recognition result; and the voice gateway encodes the second recognition result through the second protocol encoder and then sends the second recognition result to the first dialogue management module.
In one example, as shown in fig. 14, the work generation method of the present embodiment may include:
(1) The user inputs voice instructions through the vehicle-mounted voice assistant, such as: "help me compose a poem", "draw me a painting with sunset as the theme", and so on. The vehicle-mounted voice assistant obtains the audio of the voice instruction and performs local recognition, that is, it accesses the voice capability SDK (software development kit) to perform semantic recognition and generate a first recognition result, while simultaneously uploading the audio to the voice gateway at the cloud.
(2) The voice gateway decodes the audio through the second protocol decoder, encodes it into the format expected by the voice service module through the second protocol encoder, and transmits the encoded audio to the voice service module for ASR recognition and NLU recognition. The voice gateway obtains the ASR and NLU recognition results from the voice service module in a streaming manner.
(3) The arbitration module of the voice gateway routes the NLU recognition result to the second dialogue management module, so that the second dialogue management module generates a second recognition result based on the NLU recognition result. The voice gateway obtains the second recognition result, encodes it with the second protocol encoder and returns it to the vehicle-mounted voice assistant.
(4) The vehicle-mounted voice assistant receives the second recognition result returned by the voice gateway through the second recognition result callback interface, and selects either the second recognition result or the first recognition result as the target recognition result according to a preset arbitration strategy.
(5) Using the first dialogue engine (legA), the vehicle-mounted voice assistant calls the vehicle-end AIC APP to carry out the corresponding service according to the target recognition result. Meanwhile, the target recognition result is broadcast by voice at the vehicle end through a TTS (Text To Speech) engine.
(6) The vehicle-end AIC APP can expose a click-to-speak interface to the vehicle-mounted voice assistant and, driven by the atomic operation (target recognition result) output by the first dialogue engine (legA), reuse the AI poetry creation, AI painting creation, AI music creation and AI video creation methods of any of the above embodiments (a dispatch sketch follows these steps).
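The sketch below illustrates how such an atomic operation might be mapped onto the creation methods reused from the earlier embodiments. The intent names, slot structure and the request_creation stub are assumptions introduced for illustration only.

```python
# Hypothetical mapping from the atomic operation (target recognition result)
# to a creation type reused from the earlier embodiments.
INTENT_TO_CREATION = {
    "compose_poem": "ai_poetry",
    "paint_picture": "ai_painting",
    "compose_music": "ai_music",
    "clip_video": "ai_video_clip",
}

def request_creation(creation_type: str, slots: dict) -> str:
    """Placeholder for sending a work creation request to the cloud (see Example Four)."""
    return f"task-{creation_type}"

def dispatch(atomic_operation: dict) -> str:
    """Route the target recognition result to the corresponding AI creation method."""
    creation_type = INTENT_TO_CREATION.get(atomic_operation["intent"])
    if creation_type is None:
        raise ValueError(f"unsupported intent: {atomic_operation['intent']}")
    # Creation parameters (e.g. the theme "sunset") come from the recognized slots.
    return request_creation(creation_type, atomic_operation.get("slots", {}))

# Usage: dispatch({"intent": "paint_picture", "slots": {"theme": "sunset"}})
```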
It should be noted that, although the work generation method of the work generation system according to the embodiment of the present application is described in the above examples, those skilled in the art will understand that the present application should not be limited thereto. In fact, the user can flexibly set the implementation method of each module or each step according to personal preference and/or actual application scene.
Example Six
Fig. 15 is a block diagram showing a structure of a work generation device on a vehicle end according to an embodiment of the present application. As shown in fig. 15, the apparatus may include:
a work creation request sending module 1501, configured to send a work creation request to the cloud, where the work creation request includes creation parameters, so that the cloud requests the work creation module to create a corresponding work according to the creation parameters;
an authoring task ID receiving module 1502, configured to receive an authoring task ID corresponding to the authoring request returned by the cloud;
a first query request sending module 1503, configured to send a first query request to the cloud, where the first query request includes an authoring task ID, so that the cloud queries an authoring work corresponding to the authoring task ID;
and the work display module 1504 is used for receiving and displaying the creative work corresponding to the creative task ID returned by the cloud end at the vehicle end.
In an embodiment, the first query request sending module 1503 is configured to send the first query request to the cloud according to a first preset time interval.
In one embodiment, the authoring parameter is an address of authoring material at the cloud, and the composition generating apparatus further comprises:
the authoring material sending module is used for sending the authoring material to the cloud end;
and the authoring material receiving module is used for receiving the address which is returned by the cloud and corresponds to the authoring material.
In one embodiment, the work creation request transmission module includes:
the detection unit is used for detecting scene conditions of the vehicle end;
and the control unit is used for controlling the corresponding vehicle-mounted multimedia assembly to collect the corresponding multimedia resource according to the scene condition and sending a work creation request to the cloud, wherein the creation request comprises creation parameters corresponding to the multimedia resource.
In one embodiment, the scene condition includes positioning information, the control unit is further configured to: and controlling the vehicle-mounted multimedia assembly to acquire multimedia resources under the condition that the positioning information corresponds to the target journey information.
In one embodiment, the scene condition comprises ambient audio, the control unit is further configured to: controlling a microphone multimedia assembly to collect environmental audio; controlling a camera multimedia assembly to acquire image data; the multimedia assets include ambient audio and image data.
In one embodiment, the scene condition comprises an image of a face of the target user, the control unit is further configured to: determining an emotional state of the target user according to the facial image of the target user; and controlling the vehicle-mounted multimedia assembly to acquire the multimedia resources under the condition that the emotional state of the target user corresponds to the preset emotional state.
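A minimal sketch of how the detection unit and control unit of the embodiments above could cooperate is given below. The destination value, the set of preset emotional states and the emotion-estimation callable are assumptions standing in for the real sensors and models.

```python
from typing import Callable

TARGET_TRIP_DESTINATION = "lakeside"        # assumed target journey information
PRESET_EMOTIONS = {"happy", "excited"}      # assumed preset emotional states

def should_capture(scene: dict, estimate_emotion: Callable[[bytes], str]) -> bool:
    """Control-unit decision: capture multimedia only when the scene conditions match.

    `scene` is assumed to hold the positioning information and the facial image
    gathered by the detection unit.
    """
    if scene.get("destination") != TARGET_TRIP_DESTINATION:
        return False                        # positioning does not match the target journey
    emotion = estimate_emotion(scene["face_image"])
    return emotion in PRESET_EMOTIONS       # emotional state matches a preset state

# Usage sketch with a dummy emotion estimator standing in for the face-analysis model.
if __name__ == "__main__":
    scene = {"destination": "lakeside", "face_image": b"<jpeg bytes>"}
    print(should_capture(scene, estimate_emotion=lambda img: "happy"))
```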
In one embodiment, the work generation apparatus further comprises:
the second query request sending module is used for sending a second query request to the cloud so that the cloud queries a plurality of works from the work database;
the work display module 1504 is further configured to receive the plurality of works returned by the cloud;
and the work display module 1504 is further configured to display the plurality of works at the vehicle end.
In one embodiment, the work generation apparatus further comprises:
the sharing request sending module is used for sending a sharing request to the cloud end, wherein the sharing request comprises a work ID of the creative work, so that the cloud end queries the address of the creative work in a work database according to the work ID;
the address receiving module is used for receiving the address of the creative work returned by the cloud end;
the identification generation module is used for generating a corresponding work sharing identification according to the address of the creative work;
and the identification display module is used for displaying the work sharing identification at the vehicle end.
In one embodiment, the work generation apparatus further comprises: a creative work uploading module for sending the edited creative work to the cloud. Further, the sharing request sending module is used for sending a sharing request to the cloud, wherein the sharing request comprises the work ID of the edited creative work, so that the cloud queries the address of the edited creative work in the work database according to the work ID; the address receiving module is also used for receiving the address of the edited creative work returned by the cloud; and the identification generation module is also used for generating a corresponding work sharing identification according to the address of the edited creative work.
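As one possible realization of the identification generation module, the work sharing identification can be rendered as a two-dimensional code that encodes the address of the creative work, for example with the third-party qrcode package. The address and file name below are placeholders.

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

def make_sharing_identification(work_address: str, out_path: str = "share.png") -> str:
    """Generate a work sharing identification (two-dimensional code) from the work address."""
    img = qrcode.make(work_address)   # encode the cloud address of the creative work
    img.save(out_path)                # the vehicle end can then display this image
    return out_path

# Usage: the address is the one returned by the cloud for the work to be shared.
make_sharing_identification("https://cloud.example.com/works/12345")
```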
In one embodiment, the work creation request sending module 1501 sends the work creation request to the cloud end by calling a creation interface on the vehicle end, so that the cloud end calls a creation interface of the creation service module through the API gateway.
In one embodiment, the first query request sending module 1503 sends the first query request to the cloud by calling a display interface at the vehicle end.
In one embodiment, the second query request sending module sends the second query request to the cloud end by calling a list interface at the vehicle end, so that the cloud end calls the list interface of the creation service module through the API gateway.
In one embodiment, the sharing request sending module sends the sharing request to the cloud end by calling a sharing interface at the vehicle end, so that the cloud end calls the sharing interface of the creation service module through the API gateway.
In one embodiment, the work generation apparatus may further include:
the editing identification generation module is used for generating a work editing identification according to the address of the creative work at the cloud end;
and the editing identification display module is used for displaying the work editing identification at the vehicle end, so that the mobile terminal can determine the address of the creative work at the cloud end according to the work editing identification, acquire the creative work from the cloud end according to the address, and display the creative work at the mobile terminal.
In one embodiment, the edit identification presentation module may include:
the editing unit is used for editing the creative work according to an editing instruction of a user;
and the display unit is used for displaying the edited creative work at the vehicle end.
Fig. 16 is a block diagram illustrating a structure of a cloud product generation apparatus according to an embodiment of the present application. As shown in fig. 16, the apparatus may include:
a work creation request receiving module 1601, configured to receive a work creation request sent by a vehicle end, where the work creation request includes creation parameters;
an authoring task ID generating and sending module 1602, configured to generate an authoring task ID corresponding to the authoring request, and send the authoring task ID to the vehicle end;
a request module 1603 for requesting the work creation module to create a corresponding created work according to the creation parameters;
a first query request receiving module 1604, configured to receive a first query request sent by a vehicle end, where the first query request includes an authoring task ID;
a work query module 1605 for querying the creative work corresponding to the creative task ID;
a work sending module 1606, configured to send the work corresponding to the creation task ID to the vehicle end.
In one embodiment, the work generation apparatus further comprises:
a result inquiry ID receiving module for receiving the result inquiry ID returned by the composition creation module according to the creation parameters;
a work state query module for querying whether the creative work corresponding to the result query ID is completed from the work creative module according to a second preset time interval;
and the work acquisition module is used for acquiring the creative work corresponding to the result inquiry ID from the work creative module under the condition of completion.
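The cooperation of the three modules above can be sketched as follows; the WorkCreationClient interface, its method names and the timing values are assumptions standing in for the work creation module, not an API defined by this application.

```python
import time
from typing import Optional, Protocol

class WorkCreationClient(Protocol):
    """Assumed interface of the work creation module as seen by the creation service."""
    def submit(self, creation_parameters: dict) -> str: ...      # returns a result query ID
    def is_finished(self, result_query_id: str) -> bool: ...
    def fetch_work(self, result_query_id: str) -> dict: ...

def create_work(client: WorkCreationClient,
                creation_parameters: dict,
                interval_s: float = 5.0,        # second preset time interval
                timeout_s: float = 600.0) -> Optional[dict]:
    """Submit the creation parameters and poll until the creative work is completed."""
    result_query_id = client.submit(creation_parameters)     # result query ID receiving module
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if client.is_finished(result_query_id):               # work state query module
            return client.fetch_work(result_query_id)         # work acquisition module
        time.sleep(interval_s)
    return None
```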
In one embodiment, the authoring material corresponding to the authoring parameter is a video, and the composition generating apparatus further includes:
and the work storage module is used for calling a callback interface of the creation service module through the API gateway under the condition that the work creation module completes the works corresponding to the creation parameters so as to store the works corresponding to the creation parameters to the work database.
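A minimal sketch of this callback path follows: when the video work is finished, the work creation module posts it back through the API gateway and the creation service module writes it into the work database. The route path, payload fields and the SQLite storage are illustrative assumptions.

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)
db = sqlite3.connect("works.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS works (task_id TEXT PRIMARY KEY, url TEXT)")

@app.post("/authoring/callback")        # callback interface of the creation service module
def authoring_callback():
    payload = request.get_json()        # e.g. {"task_id": "...", "work_url": "..."}
    db.execute("INSERT OR REPLACE INTO works (task_id, url) VALUES (?, ?)",
               (payload["task_id"], payload["work_url"]))
    db.commit()
    return {"status": "saved"}          # acknowledged to the work creation module

if __name__ == "__main__":
    app.run(port=8080)
```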
In one embodiment, the authoring parameter is an address of authoring material at the cloud, and the composition generating apparatus further comprises:
the authoring material receiving module is used for receiving and storing the authoring material sent by the vehicle end;
and the creation material sending module is used for sending the address corresponding to the creation material to the vehicle end.
In one embodiment, the work generation apparatus further comprises:
the second query request receiving module is used for receiving a second query request sent by the vehicle end;
the work inquiry module is also used for inquiring a plurality of works from the work database according to the second inquiry request;
the work sending module is also used for sending the works to the vehicle end.
In one embodiment, the work generation apparatus further comprises:
the sharing request receiving module is used for receiving a sharing request sent by the vehicle end, and the sharing request comprises a work ID of the creative work;
the address query module is used for querying the address of the creative work in the work database according to the ID of the work;
and the address sending module is used for sending the address of the creative work to the vehicle end.
In one embodiment, the work generation apparatus further includes a work receiving module, configured to receive and store the creative work edited at the vehicle end, where the edited creative work corresponds to a work ID. Further, the sharing request receiving module is used for receiving a sharing request sent by the vehicle end, wherein the sharing request comprises the work ID, so that the cloud end queries the address of the edited creative work at the cloud end according to the work ID; the address query module is used for querying the address of the edited creative work in the work database according to the work ID; and the address sending module is used for sending the address of the edited creative work at the cloud end to the vehicle end.
In one embodiment, the composition creation request receiving module 1601 parses the composition creation request through the API gateway to invoke the creation interface of the creation service module.
In one embodiment, the first query request receiving module 1604 parses the first query request through the API gateway to asynchronously invoke the display interface of the authoring service module.
In one embodiment, the second query request receiving module parses the second query request through the API gateway to invoke the listing interface of the authoring service module to query the plurality of works from the database of works.
In one embodiment, the sharing request receiving module parses the sharing request through the API gateway to call the sharing interface of the authoring service module.
Fig. 17 is a block diagram showing a structure of a work generation device on a vehicle end according to an embodiment of the present application. As shown in fig. 17, the apparatus may include:
a semantic recognition module 1701 for performing semantic recognition on the voice command to obtain a first recognition result;
a trigger execution module 1702 for triggering the vehicle end to execute the work generation method of any of the above embodiments according to the target recognition result, the target recognition result including the first recognition result.
In one embodiment, as shown in FIG. 17, semantic recognition module 1701 may include:
the voice instruction sending module 1703 is configured to send the voice instruction to the cloud, so that the cloud generates a second recognition result according to the voice instruction;
an identification result receiving module 1704, configured to receive a second identification result returned by the cloud;
an arbitration module 1705, configured to determine a target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
An embodiment of the present application provides a work editing apparatus, including:
the identification recognition module is used for recognizing the editing identification of the works displayed at the vehicle end;
the address determining module is used for determining the address of the creative work at the cloud end according to the work editing identification, and the creative work is generated according to any one of the work generating methods;
and the work acquisition and display module is used for acquiring the creative work from the cloud according to the address and displaying the creative work on the mobile terminal.
In one embodiment, the work editing apparatus further comprises:
the editing material acquisition module is used for acquiring corresponding editing materials according to the editing instructions of the user;
the editing module is used for editing the creative works according to the editing materials;
and the sharing module is used for sharing the edited creative works.
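The mobile-terminal side of this flow can be sketched as follows. The assumption that the work editing identification decodes directly to a URL, as well as the address and output file name, are placeholders for illustration; in practice the identification would be obtained by scanning the code shown at the vehicle end.

```python
import requests

def fetch_creative_work(edit_identification: str) -> bytes:
    """Resolve the work editing identification to the cloud address and download the work."""
    work_address = edit_identification          # assumed: the identification encodes the URL
    resp = requests.get(work_address, timeout=10)
    resp.raise_for_status()
    return resp.content                         # the creative work, ready to display and edit

# Usage sketch: save the downloaded work so the mobile app can display and edit it.
if __name__ == "__main__":
    data = fetch_creative_work("https://cloud.example.com/works/12345/original")
    with open("creative_work.mp4", "wb") as f:
        f.write(data)
```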
Fig. 18 is a block diagram showing the structure of a terminal or a server according to an embodiment of the present application. As shown in fig. 18, the terminal or server includes: a memory 1801 and a processor 1802, the memory 1801 having stored therein instructions executable on the processor 1802. When executing the instructions, the processor 1802 implements any of the work generation methods in the embodiments described above. There may be one or more memories 1801 and one or more processors 1802. The terminal or server is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The terminal or server may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit the implementations of the present application described and/or claimed herein.
The terminal or the server may further include a communication interface 1803 for communicating with external devices for interactive data transmission. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 1802 may process instructions for execution within the terminal or server, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple terminals or servers may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 18, but this does not mean there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 1801, the processor 1802, and the communication interface 1803 are integrated on a chip, the memory 1801, the processor 1802, and the communication interface 1803 may complete mutual communication through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 1801) storing computer instructions, which when executed by a processor implement the methods provided in embodiments of the present application.
Optionally, the memory 1801 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of a terminal or a server, and the like. Further, the memory 1801 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1801 may optionally include memory located remotely from the processor 1802 and such remote memory may be coupled to a terminal or server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 1802 may be the first processor, the second processor or the third processor described above; the memory 1801 may be the first memory, the second memory or the third memory.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more (two or more) executable instructions for implementing specific logical functions or steps in the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (61)

1. A work generation method is applied to a vehicle end and is characterized by comprising the following steps:
detecting a scene condition at the vehicle end, wherein the scene condition comprises a facial image of a target user;
determining the emotional state of the target user according to the facial image of the target user;
controlling a vehicle-mounted multimedia assembly to collect corresponding multimedia resources according to the emotional state of the target user, and sending a work creation request to a cloud end, wherein the work creation request comprises creation parameters corresponding to the multimedia resources, so that the cloud end requests a work creation module to create corresponding created works according to the creation parameters;
and receiving the creative works returned by the cloud end and displaying the creative works at the vehicle end.
2. The method of claim 1, wherein the receiving and displaying the creative work returned by the cloud to the vehicle end comprises:
receiving an authoring task ID corresponding to the work authoring request returned by the cloud;
sending a first query request to the cloud, wherein the first query request comprises the creation task ID, so that the cloud queries the creation work corresponding to the creation task ID;
and receiving and displaying the creative work corresponding to the creative task ID at the vehicle end.
3. The method of claim 1, wherein determining the emotional state of the target user from the facial image of the target user comprises:
and determining the emotional state of the target user according to the facial features of the user in the facial image of the target user.
4. The method of claim 1, wherein the controlling the vehicle-mounted multimedia component to collect corresponding multimedia resources according to the emotional state of the target user and send a work creation request to a cloud, wherein the work creation request includes creation parameters corresponding to the multimedia resources, so that the cloud requests a work creation module to create corresponding created works according to the creation parameters, comprises:
controlling the vehicle-mounted multimedia assembly to acquire corresponding multimedia resources according to the emotional state of the target user;
sending a work creation request to a cloud end, wherein the work creation request comprises creation parameters corresponding to the multimedia resources, so that the cloud end requests a work creation module to create corresponding created works according to the creation parameters by utilizing an AI creation model;
wherein the AI authoring model comprises at least one of an AI poem authoring model, an AI painting authoring model, an AI music authoring model and an AI video clip model.
5. The method of claim 2, wherein sending the first query request to the cloud comprises:
and sending the first query request to the cloud according to a first preset time interval.
6. The method of claim 2, wherein the authoring parameters include an address of authoring material at the cloud, and wherein sending a request to the cloud for authoring a work further comprises:
sending the creation material to the cloud;
and receiving an address which is returned by the cloud and corresponds to the creation material.
7. The method of claim 1, wherein the scene condition further comprises positioning information, the method further comprising:
and controlling the vehicle-mounted multimedia assembly to acquire multimedia resources under the condition that the positioning information corresponds to the target journey information.
8. The method of claim 1, wherein the multimedia assets comprise environmental audio and image data, and wherein controlling the corresponding in-vehicle multimedia component to capture the corresponding multimedia asset comprises:
controlling a microphone multimedia component to collect the environmental audio;
and controlling a camera multimedia assembly to collect the image data.
9. The method according to claim 1, wherein the controlling the vehicle-mounted multimedia component to collect the corresponding multimedia resource according to the emotional state of the target user comprises:
and controlling the vehicle-mounted multimedia assembly to acquire multimedia resources under the condition that the emotional state of the target user corresponds to a preset emotional state.
10. The method of claim 1, further comprising:
sending a sharing request to the cloud end, wherein the sharing request comprises a work ID of the creative work, so that the cloud end queries the address of the creative work in a work database according to the work ID;
receiving the address of the creative work returned by the cloud end;
generating a corresponding work sharing identification according to the address of the creative work;
and displaying the work sharing identification at the vehicle end.
11. The method of claim 1, further comprising:
generating a work editing identifier according to the address of the creative work at the cloud end;
and displaying the work editing identification at the vehicle end so that the mobile terminal can determine the address of the creative work at the cloud end according to the work editing identification, and acquire the creative work from the cloud end according to the address and display the creative work at the mobile terminal.
12. The method of claim 1, wherein presenting the creative work corresponding to the creative task ID at the vehicle end comprises:
editing the creative work according to an editing instruction of a user;
and displaying the edited creative work at the vehicle end.
13. The method of claim 12, further comprising:
sending the edited creative work to the cloud end;
sending a sharing request to the cloud end, wherein the sharing request comprises a work ID of the edited creative work, so that the cloud end inquires the address of the edited creative work at the cloud end according to the work ID;
receiving an address of the edited creative work returned by the cloud;
generating a corresponding work sharing identification according to the edited address of the creative work;
and displaying the work sharing identification at the vehicle end.
14. The method of claim 10 or 13, wherein sending a sharing request to the cloud comprises:
and sending the sharing request to the cloud end by calling a sharing interface at the vehicle end so that the cloud end calls the sharing interface of the creation service module through the API gateway.
15. The method of claim 1, wherein sending a work creation request to a cloud comprises:
and sending the work creation request to the cloud end by calling the creation interface at the vehicle end so as to enable the cloud end to call the creation interface of the creation service module through the API gateway.
16. The method of claim 2, wherein sending the first query request to the cloud comprises:
and sending the first query request to the cloud terminal by calling a display interface at the vehicle terminal.
17. The method of claim 1, further comprising:
sending a second query request to the cloud end so as to enable the cloud end to query a plurality of works from a work database, wherein the works comprise the creative works;
receiving a plurality of works returned by the cloud;
and displaying the plurality of works returned by the cloud end at the vehicle end.
18. The method of claim 17, wherein sending a second query request to the cloud comprises:
and sending the second query request to the cloud end by calling a list interface of the vehicle end so that the cloud end calls the list interface of the creation service module through the API gateway.
19. A work generation method is applied to a cloud end and is characterized by comprising the following steps:
receiving a work creation request sent by a vehicle end, wherein the work creation request comprises creation parameters corresponding to multimedia resources; the multimedia resource is acquired by controlling a vehicle-mounted multimedia component according to the emotion state of the target user determined by the facial image of the target user;
requesting a work creation module to create a corresponding created work according to the creation parameters;
and sending the corresponding creative work to the vehicle end.
20. The method of claim 19, further comprising, after receiving a work composition request transmitted from a vehicle, the step of:
generating an authoring task ID corresponding to the work authoring request;
sending the authoring task ID to the vehicle end;
receiving a first query request sent by the vehicle end, wherein the first query request comprises the creation task ID;
and inquiring the creative work corresponding to the creative task ID.
21. The method of claim 19, wherein the emotional state of the target user is determined based on facial features of the user in the facial image of the target user.
22. The method of claim 19, wherein authoring a corresponding creative work according to the authoring parameters comprises:
utilizing an AI authoring model to author a corresponding creative work according to the authoring parameters; wherein the AI authoring model comprises at least one of an AI poem authoring model, an AI painting authoring model, an AI music authoring model and an AI video clip model.
23. The method of claim 20, further comprising:
receiving a result inquiry ID returned by the composition creation module according to the creation parameters;
inquiring whether the creative work corresponding to the result inquiry ID is finished from the work creative module according to a second preset time interval;
in the completed case, a creative work corresponding to the result query ID is obtained from the work creative module.
24. The method of claim 20, wherein the authoring material corresponding to the authoring parameters is a video, the method further comprising:
under the condition that the work creation module completes the work creation corresponding to the creation parameters, the work creation module calls a callback interface of the creation service module through an API gateway so as to store the work creation corresponding to the creation parameters to a work database.
25. The method of claim 20, wherein the authoring parameter is an address of authoring material at the cloud, the method further comprising:
receiving and storing the creation material sent by the vehicle end;
and sending the address corresponding to the creation material to the vehicle end.
26. The method of claim 20, further comprising:
receiving a second query request sent by the vehicle end;
querying a plurality of works from a work database according to the second query request, the works including the creative works;
and sending the plurality of inquired works to the vehicle end.
27. The method of claim 20, further comprising:
receiving a sharing request sent by the vehicle end, wherein the sharing request comprises a work ID of the creative work;
inquiring the address of the creative work in a work database according to the ID of the work;
and sending the address of the creative work to the vehicle end.
28. The method of claim 20, further comprising:
receiving and storing the edited creative work at the vehicle end, wherein the edited creative work corresponds to a work ID;
receiving a sharing request sent by the vehicle end, wherein the sharing request comprises the work ID, so that the cloud end queries the address of the edited creative work at the cloud end according to the work ID;
and sending the address of the edited creative work at the cloud end to the vehicle end.
29. The method of claim 20, wherein receiving a work composition request from a vehicle end comprises:
and analyzing the work creation request through the API gateway so as to call an creation interface of the creation service module.
30. The method of claim 20, wherein receiving a first query request from the vehicle end comprises:
and analyzing the first query request through the API gateway so as to call a display interface of the authoring service module.
31. The method of claim 26, wherein receiving a second query request from the vehicle end to query a plurality of works from a database of works, comprises:
and analyzing the second query request through an API gateway to call a list interface of an authoring service module to query a plurality of works from the work database.
32. The method according to claim 27 or 28, wherein receiving the sharing request sent by the vehicle end comprises:
and analyzing the sharing request through the API gateway so as to call a sharing interface of the authoring service module.
33. A work generation method is applied to a vehicle end and is characterized by comprising the following steps:
performing semantic recognition on the voice command to obtain a first recognition result;
triggering the vehicle end to perform the method of any one of claims 1 to 18 according to a target recognition result, the target recognition result including the first recognition result.
34. The method of claim 33, further comprising:
sending the voice instruction to a cloud end so that the cloud end generates a second recognition result according to the voice instruction;
receiving the second recognition result returned by the cloud end;
and determining the target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
35. A method for editing works is applied to a mobile terminal and is characterized by comprising the following steps:
identifying a work editing identifier displayed at a vehicle end;
determining an address of a creative work at a cloud according to the work editing identifier, the creative work being generated according to the method of any one of claims 1-34;
acquiring the creative work from the cloud according to the address;
and displaying the creative work at the mobile terminal.
36. The method of claim 35, further comprising:
acquiring a corresponding editing material according to an editing instruction of a user;
editing the creative work according to the editing materials;
and sharing the edited creative work.
37. A work generation apparatus, comprising:
the system comprises a work creation request sending module, a work creation request sending module and a cloud end, wherein the work creation request sending module is used for sending a work creation request to the cloud end, and the work creation request comprises creation parameters corresponding to multimedia resources so that the cloud end requests the work creation module to create a corresponding created work according to the creation parameters; the multimedia resource is acquired by controlling a vehicle-mounted multimedia assembly according to the emotion state of the target user determined by the facial image of the target user;
and the work display module is used for receiving and displaying the creative work at the vehicle end.
38. The apparatus of claim 37, further comprising:
the creation task ID receiving module is used for receiving a creation task ID which is returned by the cloud end and corresponds to the creation request of the work;
the first query request sending module is used for sending a first query request to the cloud, wherein the first query request comprises the creation task ID, so that the cloud queries the creation works corresponding to the creation task ID;
the work display module is further used for receiving and displaying the creative work corresponding to the creative task ID returned by the cloud end at the vehicle end.
39. A work generation apparatus, comprising:
the system comprises a work creation request receiving module, a work creation request processing module and a work creation request processing module, wherein the work creation request receiving module is used for receiving a work creation request sent by a vehicle end, and the work creation request comprises creation parameters corresponding to multimedia resources; the multimedia resource is acquired by controlling a vehicle-mounted multimedia component according to the emotion state of the target user determined by the facial image of the target user;
the request module is used for requesting the work creation module to create the corresponding created work according to the creation parameters;
and the work sending module is used for sending the creative work to the vehicle end.
40. The apparatus of claim 39, further comprising:
the creation task ID generating and sending module is used for generating a creation task ID corresponding to the creation request of the work and sending the creation task ID to the vehicle end;
a first query request receiving module, configured to receive a first query request sent by the vehicle end, where the first query request includes the creation task ID;
the work inquiry module is used for inquiring the creative work corresponding to the creative task ID;
and the work sending module is also used for sending the creative work corresponding to the creative task ID to the vehicle end.
41. A work generation apparatus, comprising:
the semantic recognition module is used for performing semantic recognition on the voice command to obtain a first recognition result;
a trigger execution module, configured to trigger the vehicle end to execute the method according to a target recognition result, where the target recognition result includes the first recognition result.
42. The apparatus of claim 41, further comprising:
the voice instruction sending module is used for sending the voice instruction to a cloud end so that the cloud end can generate a second recognition result according to the voice instruction;
the identification result receiving module is used for receiving the second identification result returned by the cloud end;
and the arbitration module is used for determining the target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule.
43. A work editing apparatus, comprising:
the identification recognition module is used for recognizing the editing identification of the works displayed at the vehicle end;
an address determination module, configured to determine, according to the work editing identifier, an address of a creative work at a cloud, where the creative work is generated according to the method of any one of claims 1 to 34;
and the work acquisition and display module is used for acquiring the creative work from the cloud end according to the address and displaying the creative work on the mobile terminal.
44. A vehicle end terminal, comprising:
at least one first processor; and
a first memory communicatively coupled to the at least one first processor; wherein,
the first memory stores instructions executable by the at least one first processor to enable the at least one first processor to perform the method of any one of claims 1 to 18, claim 33 and claim 34.
45. A server, comprising:
at least one second processor; and
a second memory communicatively coupled to the at least one second processor; wherein,
the second memory stores instructions executable by the at least one second processor to enable the at least one second processor to perform the method of any one of claims 19 to 32.
46. A mobile terminal, comprising:
at least one third processor; and
a third memory communicatively coupled to the at least one third processor; wherein,
the third memory stores instructions executable by the at least one third processor to enable the at least one third processor to perform the method of claim 35 or 36.
47. A work generation system comprising the vehicle-end terminal of claim 37 and the server of claim 45.
48. The system according to claim 47, further comprising a mobile terminal according to claim 46.
49. A work generation system is characterized by comprising a vehicle end and a cloud end, wherein the vehicle end comprises a work generation module, and the cloud end comprises an authoring service module, a work authoring module and an API gateway;
the work generation module includes a vehicle-mounted multimedia assembly and a creation interface, the creation interface being used for sending a work creation request from the vehicle end to the cloud; the work creation request comprises creation parameters corresponding to the multimedia resources;
the authoring service module comprises: an authoring interface for requesting the work authoring module to author a corresponding authored work according to the authoring parameters; a display interface for returning the creative work to the vehicle end;
the cloud end analyzes the work creation request through the API gateway so as to call the creation interface of the creation service module.
50. The system of claim 49, wherein the work generation module further comprises a presentation interface configured to send a first query request to the cloud, wherein the first query request comprises an authoring task ID corresponding to the work authoring request returned by the cloud;
the display interface of the creation service module is also used for inquiring and returning the creation corresponding to the creation task ID to the vehicle end;
the cloud end analyzes the first query request through the API gateway so as to call the display interface of the authoring service module.
51. The system of claim 49, wherein the vehicle end invokes the display interface of the work generation module at a first predetermined time interval, and the cloud end invokes the display interface of the creation service module at a second predetermined time interval.
52. The system of claim 49, wherein the authoring parameters comprise an address of the authoring material in the cloud, the vehicle end further comprises an uploading interface, the cloud end comprises a network disk, the vehicle end sends the authoring material to the network disk by calling the uploading interface, and the address is the address of the authoring material in the network disk.
53. The system of claim 49, wherein the cloud comprises a work database having a plurality of works stored therein, the works including the work of creation.
54. The system of claim 53, wherein the work generation module comprises a list interface for sending a second query request to the cloud, and the authoring service module comprises a list interface for querying the plurality of works from the work database, wherein the cloud end parses the second query request through the API gateway to invoke the list interface of the authoring service module.
55. The system of claim 53, wherein the work generation module comprises a sharing interface for sending a sharing request to the cloud, the sharing request comprises a work ID of the creative work, the authoring service module comprises a sharing interface for querying, according to the work ID, the address of the creative work in the work database and returning it to the vehicle end, and the cloud end parses the sharing request through the API gateway to call the sharing interface of the authoring service module.
56. The system of claim 53, wherein the authoring material corresponding to the authoring parameter is a video, wherein the authoring service module further comprises a callback interface, and wherein the composition authoring module calls the callback interface of the authoring service module via the API gateway to save the composition corresponding to the authoring parameter to the composition database when the composition authoring module completes the composition corresponding to the authoring parameter.
57. The system of claim 49, wherein the authoring service module further comprises a data processing sub-module for performing data processing on authoring material corresponding to the authoring parameters, the data processing including sensitive word filtering.
58. The system according to any one of claims 49 to 57, wherein the vehicle end further comprises a first dialogue management module for performing semantic recognition on the voice command to obtain a first recognition result; the cloud end comprises a second dialogue management module which is used for receiving the voice command from the first dialogue management module and generating a second recognition result according to the voice command; the first dialogue management module is further used for determining a target recognition result from the first recognition result and the second recognition result according to a preset arbitration rule, and triggering the work generation module according to the target recognition result.
59. The system according to claim 58, wherein the first dialogue management module comprises a first protocol encoder, a first protocol decoder, and a first dialogue engine;
the first protocol encoder is configured to encode the voice command and send the encoded voice command to the second dialogue management module;
the first protocol decoder is configured to call a first recognition result callback interface to send the decoded first recognition result to the first dialogue engine, and to call a second recognition result callback interface to send the decoded second recognition result to the first dialogue engine;
the first dialogue engine is configured to determine the target recognition result from the first recognition result and the second recognition result according to the preset arbitration rule, and to trigger the work generation module.
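Claim 59 describes the vehicle-end dialogue manager in terms of a protocol encoder, a protocol decoder with two result callbacks, and a dialogue engine. A bare-bones sketch under assumed JSON framing (the actual protocol is not disclosed) might wire them together like this:

    # Skeleton of the first dialogue management module of claim 59, using JSON bytes
    # as a stand-in protocol. Message fields and framing are assumptions.
    import json

    class FirstProtocolEncoder:
        def encode(self, voice_command: bytes) -> bytes:
            # Wrap the raw command for transport to the second dialogue management module.
            return json.dumps({"type": "voice_command",
                               "payload": voice_command.hex()}).encode("utf-8")

    class FirstProtocolDecoder:
        def __init__(self, dialogue_engine):
            self.engine = dialogue_engine

        def on_first_recognition_result(self, raw: bytes) -> None:
            # first recognition result callback interface
            self.engine.receive("first", json.loads(raw.decode("utf-8")))

        def on_second_recognition_result(self, raw: bytes) -> None:
            # second recognition result callback interface
            self.engine.receive("second", json.loads(raw.decode("utf-8")))

    class FirstDialogueEngine:
        def __init__(self):
            self.results = {}

        def receive(self, source: str, result: dict) -> None:
            self.results[source] = result
            if "first" in self.results and "second" in self.results:
                target = self.arbitrate()
                print("trigger work generation module with:", target)

        def arbitrate(self) -> dict:
            # preset arbitration rule (placeholder: prefer the higher confidence)
            return max(self.results.values(), key=lambda r: r.get("confidence", 0.0))

    engine = FirstDialogueEngine()
    decoder = FirstProtocolDecoder(engine)
    decoder.on_first_recognition_result(b'{"intent": "make_video", "confidence": 0.9}')
    decoder.on_second_recognition_result(b'{"intent": "make_video", "confidence": 0.7}')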
60. The system according to claim 58, wherein the second dialogue management module comprises a second protocol decoder, a second protocol encoder, a voice gateway, a voice service module, and a second dialogue engine;
the voice gateway receives the voice command, decodes it through the second protocol decoder, and sends the decoded voice command to the second protocol encoder;
the second protocol encoder is configured to encode the voice command output by the second protocol decoder;
the voice service module performs semantic recognition on the voice command output by the second protocol encoder to obtain an initial recognition result;
the second dialogue engine generates the second recognition result according to the initial recognition result;
and the voice gateway encodes the second recognition result through the second protocol encoder and sends it to the first dialogue management module.
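Claim 60's cloud-side flow (gateway decode, re-encode, semantic recognition, dialogue engine, encode for return) can be summarised as a simple pipeline. The sketch below mirrors that ordering with placeholder codecs and a dummy recognizer; none of it is the disclosed implementation.

    # Illustrative cloud pipeline for the second dialogue management module (claim 60).
    # Codecs, recognizer, and message shapes are placeholders.
    import json

    def second_protocol_decode(frame: bytes) -> dict:
        return json.loads(frame.decode("utf-8"))

    def second_protocol_encode(message: dict) -> bytes:
        return json.dumps(message).encode("utf-8")

    def voice_service_recognize(encoded_command: bytes) -> dict:
        # Stand-in semantic recognition: a real service would run ASR/NLU here.
        command = json.loads(encoded_command.decode("utf-8"))
        return {"intent": "generate_work", "slots": {"material": command.get("payload")}}

    def second_dialogue_engine(initial_result: dict) -> dict:
        # Turn the initial recognition into the second recognition result.
        return {"intent": initial_result["intent"],
                "slots": initial_result["slots"],
                "confidence": 0.85}

    def voice_gateway_handle(frame_from_vehicle: bytes) -> bytes:
        decoded = second_protocol_decode(frame_from_vehicle)   # decode incoming frame
        re_encoded = second_protocol_encode(decoded)           # re-encode for the voice service
        initial = voice_service_recognize(re_encoded)          # semantic recognition
        second_result = second_dialogue_engine(initial)        # dialogue engine
        return second_protocol_encode(second_result)           # encode result for the vehicle end

    if __name__ == "__main__":
        frame = json.dumps({"type": "voice_command", "payload": "..."}).encode("utf-8")
        print(voice_gateway_handle(frame))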
61. A computer readable storage medium having stored therein computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 36.
CN202110044146.6A 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works Pending CN112699257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110044146.6A CN112699257A (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010497041.1A CN111400518B (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works
CN202110044146.6A CN112699257A (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010497041.1A Division CN111400518B (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works

Publications (1)

Publication Number Publication Date
CN112699257A (en) 2021-04-23

Family

ID=71433839

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110044146.6A Pending CN112699257A (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works
CN202110045091.0A Pending CN112699258A (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works
CN202010497041.1A Active CN111400518B (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202110045091.0A Pending CN112699258A (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works
CN202010497041.1A Active CN111400518B (en) 2020-06-04 2020-06-04 Method, device, terminal, server and system for generating and editing works

Country Status (2)

Country Link
CN (3) CN112699257A (en)
WO (1) WO2021244110A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699257A (en) * 2020-06-04 2021-04-23 华人运通(上海)新能源驱动技术有限公司 Method, device, terminal, server and system for generating and editing works
CN114089826A (en) * 2020-07-30 2022-02-25 华人运通(上海)云计算科技有限公司 Vehicle end scene generation method and device, vehicle end and storage medium
CN112061058B (en) * 2020-09-07 2022-05-27 华人运通(上海)云计算科技有限公司 Scene triggering method, device, equipment and storage medium
CN113067854B (en) * 2021-03-12 2023-08-25 斑马网络技术有限公司 Method, device, equipment and storage medium for acquiring content resources of vehicle-mounted equipment
CN115410579B (en) * 2022-10-28 2023-03-31 广州小鹏汽车科技有限公司 Voice interaction method, voice interaction device, vehicle and readable storage medium
CN115830171B (en) * 2023-02-17 2023-05-09 深圳前海深蕾半导体有限公司 Image generation method based on artificial intelligence drawing, display equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038548A (en) * 2016-01-29 2017-08-11 深圳市艾特大师网络科技有限公司 A kind of message treatment method, device and terminal
JP2018207385A (en) * 2017-06-08 2018-12-27 株式会社Jvcケンウッド Display control device, display control system, display control method, and display control program
CN109922290A (en) * 2018-12-27 2019-06-21 蔚来汽车有限公司 Audio-video synthetic method, device, system, equipment and vehicle for vehicle
CN112699257A (en) * 2020-06-04 2021-04-23 华人运通(上海)新能源驱动技术有限公司 Method, device, terminal, server and system for generating and editing works

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140004507A (en) * 2012-07-03 2014-01-13 (주)뉴인 Authoring system for multimedia contents and computer readable storing medium providing authoring tool
CN105637887A (en) * 2013-08-15 2016-06-01 真实眼私人有限公司 Method in support of video impression analysis including interactive collection of computer user data
CN104468847A (en) * 2014-12-31 2015-03-25 北京赛维安讯科技发展有限公司 Journey recorded information sharing method, equipment, server and system of vehicle
CN110389676A (en) * 2018-04-20 2019-10-29 比亚迪股份有限公司 The vehicle-mounted middle multimedia operation interface of control determines method
CN108965397A (en) * 2018-06-22 2018-12-07 中央电视台 Cloud video editing method and device, editing equipment and storage medium
CN110113540A (en) * 2019-06-13 2019-08-09 广州小鹏汽车科技有限公司 A kind of vehicle image pickup method, device, vehicle and readable medium
CN110298934A (en) * 2019-06-18 2019-10-01 重庆长安汽车股份有限公司 Driving video capture and sharing method
CN111083155A (en) * 2019-12-25 2020-04-28 斑马网络技术有限公司 Vehicle machine and cloud interaction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
科技湃: "The seventh-generation Microsoft XiaoIce officially unveiled: full-duplex voice interaction sensory technology, with a newly added in-vehicle scenario", pages 2 - 5, Retrieved from the Internet <URL:https://www.sohu.com/a/334478885_356153> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628618A (en) * 2021-07-29 2021-11-09 中汽创智科技有限公司 Multimedia file generation method and device based on intelligent cabin and terminal
CN114157917A (en) * 2021-11-29 2022-03-08 北京百度网讯科技有限公司 Video editing method and device and terminal equipment
CN114157917B (en) * 2021-11-29 2024-04-16 北京百度网讯科技有限公司 Video editing method and device and terminal equipment
CN114584839A (en) * 2022-02-25 2022-06-03 智己汽车科技有限公司 Clipping method and device for shooting vehicle-mounted video, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2021244110A1 (en) 2021-12-09
CN112699258A (en) 2021-04-23
CN111400518A (en) 2020-07-10
CN111400518B (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN111400518B (en) Method, device, terminal, server and system for generating and editing works
CN108345692B (en) Automatic question answering method and system
US9530415B2 (en) System and method of providing speech processing in user interface
CN107463700B (en) Method, device and equipment for acquiring information
US10311877B2 (en) Performing tasks and returning audio and visual answers based on voice command
CN111651231B (en) Work generation method and device, vehicle end and mobile terminal
CN109564530A (en) The personal supplementary module for having the selectable state machine traversed is provided
CN109474843A (en) The method of speech control terminal, client, server
CN110866179A (en) Recommendation method based on voice assistant, terminal and computer storage medium
CN112165647B (en) Audio data processing method, device, equipment and storage medium
CN111601161A (en) Video work generation method, device, terminal, server and system
CN110968362B (en) Application running method, device and storage medium
CN116450202A (en) Page configuration method, page configuration device, computer equipment and computer readable storage medium
CN111703278B (en) Fragrance release method, device, vehicle end, cloud end, system and storage medium
EP3823270A1 (en) Video processing method and device, and terminal and storage medium
WO2016107278A1 (en) Method, device, and system for labeling user information
CN112818654A (en) Message storage method, message generation method, message storage device, electronic equipment and computer readable medium
CN112330534A (en) Animal face style image generation method, model training method, device and equipment
US11769504B2 (en) Virtual meeting content enhancement triggered by audio tracking
KR102530669B1 (en) Method, system, and computer readable record medium to write memo for audio file through linkage between app and web
JP7185712B2 (en) Method, computer apparatus, and computer program for managing audio recordings in conjunction with an artificial intelligence device
JP6944920B2 (en) Smart interactive processing methods, equipment, equipment and computer storage media
CN111625508A (en) Information processing method and device
KR102448356B1 (en) Method, system, and computer readable record medium to record conversations in connection with video communication service
CN113934946A (en) Scenic spot introduction audio broadcasting method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination