CN114900734B - Vehicle type comparison video generation method and device, storage medium and computer equipment - Google Patents

Vehicle type comparison video generation method and device, storage medium and computer equipment

Info

Publication number
CN114900734B
CN114900734B (application CN202210542387.8A)
Authority
CN
China
Prior art keywords
video
target
picture
vehicle type
comparison
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210542387.8A
Other languages
Chinese (zh)
Other versions
CN114900734A (en)
Inventor
付禄山
林里鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Pacific Computer Information Consulting Co ltd
Original Assignee
Guangzhou Pacific Computer Information Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Pacific Computer Information Consulting Co ltd
Priority to CN202210542387.8A
Publication of CN114900734A
Application granted
Publication of CN114900734B
Status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a vehicle type comparison video generation method and device, a storage medium, and computer equipment. The method comprises the following steps: in response to a video generation instruction, acquiring a material picture of each target vehicle type corresponding to each target view, where the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction; synthesizing the material pictures of the target vehicle types corresponding to each target view into a comparison picture for that target view; splicing the comparison pictures of the target views to generate a video to be processed; and performing audio and video synthesis on preset audio material and the video to be processed to generate a comparison video. The method and device can quickly generate a vehicle type comparison video based on the selected target vehicle types.

Description

Vehicle type comparison video generation method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of video technologies, and in particular to a vehicle type comparison video generation method and device, a storage medium, and computer equipment.
Background
In the automobile industry, to help consumers choose a car, comparison information for different vehicle types can be generated according to the consumer's selection, so that the consumer can intuitively see the differences between the vehicle types.
At present, comparison information is mostly displayed as a comparison table generated on a web page, so some automobile manufacturers and consumer platforms have chosen to present it as short videos instead, which are easier to view and promote. However, vehicle comparison videos can currently only be produced by manual editing, which is inefficient and makes it difficult to generate customized comparison videos quickly.
Disclosure of Invention
The embodiments of the application provide a vehicle type comparison video generation method and device, a storage medium, and computer equipment, which can quickly generate a vehicle type comparison video based on the selected target vehicle types.
In a first aspect, the present application provides a vehicle type comparison video generation method, where the method includes:
in response to a video generation instruction, acquiring a material picture of each target vehicle type corresponding to each target view, where the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction;
synthesizing the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view;
splicing the comparison pictures of the target views to generate a video to be processed; and
performing audio and video synthesis on preset audio material and the video to be processed to generate a comparison video.
In one embodiment, acquiring the material picture of each target vehicle type corresponding to the target view includes:
acquiring an original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type;
performing matting analysis on the original picture to identify the vehicle region in the original picture;
extracting the vehicle region from the original picture to generate a picture to be processed;
generating a parameter text from the configuration parameters of each target vehicle type; and
adding the parameter text of each target vehicle type into the picture to be processed of the corresponding target vehicle type to generate a material picture.
In one embodiment, extracting the vehicle region from the original picture to generate a picture to be processed includes:
if a license plate exists in the vehicle region, blurring the area where the license plate is located; and
extracting the vehicle region after the blurring to generate the picture to be processed.
In one embodiment, extracting the vehicle region from the original picture to generate a picture to be processed includes:
extracting the vehicle region from the original picture to generate a vehicle picture; and
stacking a preset shadow map corresponding to each target view onto the vehicle picture of the same target view to generate the picture to be processed.
In one embodiment, acquiring the original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type includes:
determining each target vehicle type according to the video generation instruction; and
acquiring, from a preset database and for each target vehicle type, the original pictures corresponding to the target views and the configuration parameters of that target vehicle type.
In one embodiment, splicing the comparison pictures of the target views to generate the video to be processed includes:
splicing the comparison pictures of the target views according to a preset switching time to create a preliminary synthesized video; and
splicing preset opening and ending videos with the preliminary synthesized video to generate the video to be processed.
In one embodiment, the method further comprises:
sending the comparison video to a preset target address for storage.
In a second aspect, the present application provides a vehicle type comparison video generation device, including:
a material acquisition module, configured to acquire, in response to a video generation instruction, a material picture of each target vehicle type corresponding to each target view, where the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction;
a comparison picture synthesis module, configured to synthesize the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view;
a video stitching module, configured to splice the comparison pictures of the target views to generate a video to be processed; and
an audio and video synthesis module, configured to perform audio and video synthesis on preset audio material and the video to be processed to generate a comparison video.
In a third aspect, the present application provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the vehicle type comparison video generation method according to any one of the embodiments described above.
In a fourth aspect, the present application provides computer equipment comprising one or more processors and a memory;
the memory stores computer readable instructions which, when executed by the one or more processors, perform the steps of the vehicle type comparison video generation method according to any one of the embodiments described above.
From the above technical solutions, the embodiments of the present application have the following advantages:
In the vehicle type comparison video generation method and device, the storage medium, and the computer equipment described above, video generation is performed automatically in response to a video generation instruction entered by a consumer or video editor. The target vehicle types to be compared and the video template to be used can be determined from the video generation instruction; the material pictures of the target views contained in the video template are then obtained for each target vehicle type, and the material pictures of all target vehicle types for the same target view are synthesized into a comparison picture for that view. The comparison pictures of the template views are spliced into a video format to generate a video to be processed, and audio and video synthesis is performed on preset audio material and the video to be processed, completing the automatic generation of the vehicle type comparison video. The consumer or video editor only needs to select the target vehicle types and the video template, so a customized comparison video can be generated quickly.
Drawings
To illustrate the embodiments of the application or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a vehicle type comparison video generation method in one embodiment;
FIG. 2 is a flowchart of the step of acquiring a material picture corresponding to a target view for each target vehicle type in one embodiment;
FIG. 3 is a flowchart of the step of extracting a vehicle region from an original picture to generate a picture to be processed in one embodiment;
FIG. 4 is a flowchart of the step of extracting a vehicle region from an original picture to generate a picture to be processed in another embodiment;
FIG. 5 is a structural block diagram of a vehicle type comparison video generation device in one embodiment;
FIG. 6 is an internal block diagram of computer equipment in one embodiment.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The embodiment of the application provides a vehicle type comparison video generation method which, as shown in FIG. 1, comprises steps S101 to S104:
Step S101: in response to a video generation instruction, acquire a material picture of each target vehicle type corresponding to each target view.
The target views are determined based on the video template selected by the video generation instruction. A video template may include one target view or several target views, for example five target views: an oblique front view, a front view, a front side view, a front rear view, and a central control foreground view. The target vehicle types are determined based on the video generation instruction.
A consumer or video editor (hereinafter simply referred to as the "user") can select the target vehicle types and a video template as required on an operation page of the terminal, thereby completing the input of a video generation instruction. Based on the video generation instruction, the material picture of each target view of each target vehicle type selected by the user is acquired.
Step S102: synthesize the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view.
That is, the material pictures of all target vehicle types for the same target view are synthesized into one comparison picture, so the number of comparison pictures equals the number of target views and each target view has its own comparison picture. For example, with four target vehicle types, the material pictures of the oblique front view of each target vehicle type are synthesized into a comparison picture of the oblique front view, the material pictures of the front view into a comparison picture of the front view, the material pictures of the front side view into a comparison picture of the front side view, the material pictures of the front rear view into a comparison picture of the front rear view, and the material pictures of the central control foreground into a comparison picture of the central control foreground.
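As an illustration of step S102, the following minimal Python sketch (not part of the patent) composes the material pictures of several target vehicle types into one side-by-side comparison picture; the file names, cell size, and layout are assumptions made for the example.

```python
# Minimal sketch of step S102: composing the material pictures of all target
# vehicle types for one target view into a single comparison picture.
from PIL import Image

def compose_comparison_picture(material_paths, out_path, cell_size=(480, 360)):
    """Place each vehicle type's material picture side by side on one canvas."""
    cells = []
    for path in material_paths:
        img = Image.open(path).convert("RGB")
        img.thumbnail(cell_size)            # keep aspect ratio inside the cell
        cells.append(img)

    canvas = Image.new("RGB", (cell_size[0] * len(cells), cell_size[1]), "white")
    for i, img in enumerate(cells):
        # centre each picture inside its cell
        x = i * cell_size[0] + (cell_size[0] - img.width) // 2
        y = (cell_size[1] - img.height) // 2
        canvas.paste(img, (x, y))
    canvas.save(out_path)

# e.g. one comparison picture for the oblique front view of four target types
compose_comparison_picture(
    ["model_a_oblique.png", "model_b_oblique.png",
     "model_c_oblique.png", "model_d_oblique.png"],
    "comparison_oblique_front.png",
)
```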
Step S103: splice the comparison pictures of the target views to generate a video to be processed.
Splicing here means converting the comparison pictures of the target views into a video format: the comparison pictures are displayed one after another, switched in the display order defined by the video template, to generate the video to be processed.
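As an illustration of step S103, the sketch below converts an ordered list of comparison pictures into a video in which each picture is held for a preset switching time, using OpenCV's VideoWriter; the frame size, frame rate, and hold duration are assumed values, not values specified by the patent.

```python
# Minimal sketch of step S103: turning the ordered comparison pictures into a
# video in which each picture is shown for a preset switching time.
import cv2

def pictures_to_video(picture_paths, out_path, fps=25, hold_seconds=3,
                      frame_size=(1280, 720)):
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, frame_size)
    for path in picture_paths:
        frame = cv2.resize(cv2.imread(path), frame_size)
        for _ in range(int(fps * hold_seconds)):   # repeat the frame to "hold" it
            writer.write(frame)
    writer.release()

pictures_to_video(
    ["comparison_oblique_front.png", "comparison_front.png",
     "comparison_side.png", "comparison_rear.png", "comparison_interior.png"],
    "to_be_processed.mp4",
)
```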
Step S104: perform audio and video synthesis on preset audio material and the video to be processed to generate a comparison video.
The audio material may be preset for all videos, selected in advance by the user for the current video, or a preset audio material associated with the video template.
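One common way to perform the audio and video synthesis of step S104 is to mux a preset audio track with the video to be processed using the ffmpeg command line; the sketch below assumes ffmpeg is installed and uses placeholder file names.

```python
# Minimal sketch of step S104: muxing a preset audio material with the video
# to be processed via the ffmpeg command line (assumed to be installed).
import subprocess

def add_audio(video_path, audio_path, out_path):
    subprocess.run([
        "ffmpeg", "-y",
        "-i", video_path,          # video to be processed
        "-i", audio_path,          # preset audio material
        "-c:v", "copy",            # keep the video stream untouched
        "-c:a", "aac",
        "-shortest",               # stop at the shorter of the two streams
        out_path,
    ], check=True)

add_audio("to_be_processed.mp4", "preset_audio.mp3", "comparison_video.mp4")
```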
In this embodiment, the video is generated automatically in response to a video generation instruction entered by the user. The target vehicle types to be compared and the video template to be used can be determined from the video generation instruction; the material pictures of the target views contained in the video template are then obtained for each target vehicle type, and the material pictures of all target vehicle types for the same target view are synthesized into a comparison picture for that view. The comparison pictures of the template views are spliced into a video format to generate a video to be processed, and audio and video synthesis is performed on preset audio material and the video to be processed, completing the automatic generation of the vehicle type comparison video. The user only needs to select the target vehicle types and a video template, so a customized vehicle type comparison video can be generated quickly.
In one embodiment, splicing the comparison pictures of the target views to generate the video to be processed includes the following steps (a command-line sketch follows this list):
splicing the comparison pictures of the target views according to a preset switching time to create a preliminary synthesized video; and
splicing preset opening and ending videos with the preliminary synthesized video to generate the video to be processed.
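A possible realization of this embodiment is to concatenate a preset opening clip, the preliminary synthesized video, and a preset ending clip with ffmpeg's concat demuxer, as in the sketch below; the clip names are placeholders and all clips are assumed to share the same codec and resolution.

```python
# Minimal sketch of the opening/ending splicing with ffmpeg's concat demuxer.
import subprocess

clips = ["intro.mp4", "preliminary.mp4", "outro.mp4"]
with open("concat_list.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

subprocess.run([
    "ffmpeg", "-y", "-f", "concat", "-safe", "0",
    "-i", "concat_list.txt",
    "-c", "copy",                  # assumes all clips share codec and resolution
    "to_be_processed.mp4",
], check=True)
```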
In one embodiment, as shown in FIG. 2, the step of acquiring the material picture corresponding to the target view for each target vehicle type includes steps S201 to S205:
Step S201: acquire an original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type.
The original pictures and the configuration parameters may be stored in a database of the terminal or obtained from a database on the server.
In one embodiment, step S201 includes:
determining each target vehicle type according to the video generation instruction; and
acquiring, from a preset database and for each target vehicle type, the original pictures corresponding to the target views and the configuration parameters of that target vehicle type.
Step S202: perform matting analysis on the original picture and identify the vehicle region in the original picture.
The matting analysis can be implemented with a pre-trained matting model, which identifies the region of the target view in the original picture; for example, if the original picture shows the central control foreground, the central control foreground region in the original picture is identified.
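The patent does not specify the pre-trained matting model, so the following sketch uses OpenCV's GrabCut as an illustrative stand-in for identifying the vehicle region from a rough bounding box; the box coordinates and file name are assumptions.

```python
# Illustrative stand-in for the matting analysis of step S202, using GrabCut.
import cv2
import numpy as np

def identify_vehicle_region(original_path, rough_box):
    img = cv2.imread(original_path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rough_box, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # pixels marked (probably) foreground form the vehicle region
    vehicle_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                            255, 0).astype(np.uint8)
    return img, vehicle_mask

img, vehicle_mask = identify_vehicle_region("original_oblique.png",
                                            rough_box=(50, 80, 1100, 600))
```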
Step S203: extract the vehicle region from the original picture to generate a picture to be processed.
Step S204: generate a parameter text from the configuration parameters of each target vehicle type.
The configuration parameters are data obtained from the database; they are converted into a parameter text so that they can be added to the picture to be processed for display.
Step S205: add the parameter text of each target vehicle type into the picture to be processed of the corresponding target vehicle type to generate a material picture.
In this embodiment, the original picture of each target view of a target vehicle type and the configuration parameters of each target vehicle type are acquired, the original picture is matted to extract the vehicle region and generate a picture to be processed, and the picture to be processed is combined with the parameter text generated from the configuration parameters to produce the material picture. The original picture may have a complex background, such as a real photograph; the vehicle part is extracted as material through matting analysis, and the configuration parameters are inserted into the picture to be processed to obtain a vehicle picture annotated with its configuration parameters, so that the finally generated video displays the relevant information of the target vehicle types more clearly.
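Steps S204 and S205 can be illustrated with the following sketch, which converts a dictionary of configuration parameters into a parameter text and draws it onto the picture to be processed with Pillow; the parameter names, text position, and default font are assumptions.

```python
# Minimal sketch of steps S204/S205: rendering the configuration parameters as
# a parameter text on the picture to be processed to obtain the material picture.
from PIL import Image, ImageDraw

def make_material_picture(pending_path, config_params, out_path):
    text = "\n".join(f"{name}: {value}" for name, value in config_params.items())
    img = Image.open(pending_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.multiline_text((20, img.height - 120), text, fill="black")
    img.save(out_path)

make_material_picture(
    "model_a_oblique_pending.png",
    {"Engine": "2.0T", "Power": "185 kW", "Wheelbase": "2890 mm"},
    "model_a_oblique_material.png",
)
```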
In one embodiment, as shown in FIG. 3, extracting the vehicle region from the original picture to generate a picture to be processed includes steps S301 to S302:
Step S301: if a license plate exists in the vehicle region, blur the area where the license plate is located.
Step S302: extract the blurred vehicle region to generate the picture to be processed.
The purpose of the blurring is to hide the license plate information. For example, the blurring may apply a mosaic to the license plate, or a custom picture may be used to cover it, so that the generated picture to be processed does not display any license plate information.
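As an illustration of the blurring, the sketch below pixelates an assumed license-plate rectangle by downscaling and upscaling that region (a simple mosaic); detecting the plate rectangle itself is outside the scope of the example.

```python
# Minimal sketch of license-plate blurring via a mosaic effect.
import cv2

def mosaic_region(img, box, block=12):
    x, y, w, h = box
    roi = img[y:y + h, x:x + w]
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)
    return img

img = cv2.imread("model_a_front.png")
img = mosaic_region(img, box=(420, 510, 180, 45))   # assumed plate rectangle
cv2.imwrite("model_a_front_blurred.png", img)
```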
In one embodiment, as shown in FIG. 4, extracting the vehicle region from the original picture to generate a picture to be processed includes steps S401 to S402:
Step S401: extract the vehicle region from the original picture to generate a vehicle picture.
Step S402: stack the preset shadow map corresponding to each target view onto the vehicle picture of the same target view to generate the picture to be processed.
To improve the look and feel, shadow layers corresponding to the different target views can be preset and overlaid onto the vehicle pictures of the corresponding target views, so that the vehicle shown in the picture to be processed has a shadow.
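The shadow stacking can be illustrated as follows: the extracted vehicle picture (with transparency) is composited over a preset shadow layer for the same target view using Pillow; the file names are placeholders and both images are assumed to be RGBA.

```python
# Minimal sketch of step S402: compositing the vehicle picture over a preset
# shadow layer for the same target view.
from PIL import Image

vehicle = Image.open("model_a_oblique_vehicle.png").convert("RGBA")
shadow = Image.open("shadow_oblique_front.png").convert("RGBA")
shadow = shadow.resize(vehicle.size)

pending = Image.alpha_composite(shadow, vehicle)   # shadow below, vehicle on top
pending.save("model_a_oblique_pending.png")
```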
In one embodiment, the shadow layer may not fit the vehicle region well, in which case the stacked picture may look poor and problems such as shadow misalignment may occur.
In one embodiment, the method further comprises:
sending the comparison video to a preset target address for storage.
The user can preset the target address where the comparison video is to be stored; when the generation of the comparison video is completed, it is automatically sent to the target address, which simplifies the operation.
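If the preset target address is an HTTP upload endpoint, sending the comparison video could look like the sketch below; the URL and form field name are placeholders, and other transports (object storage, FTP, a shared drive) would work equally well.

```python
# Minimal sketch of sending the finished comparison video to a preset target
# address, assuming an HTTP endpoint that accepts file uploads.
import requests

with open("comparison_video.mp4", "rb") as f:
    resp = requests.post("https://example.com/upload",    # preset target address
                         files={"file": ("comparison_video.mp4", f, "video/mp4")})
resp.raise_for_status()
```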
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include several sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps, sub-steps or stages.
The vehicle type comparison video generation device provided by the embodiments of the application is described below; the device described below and the method described above may be referred to in correspondence with each other.
As shown in FIG. 5, an embodiment of the present application provides a vehicle type comparison video generation device 500, including:
a material acquisition module 501, configured to acquire, in response to a video generation instruction, a material picture of each target vehicle type corresponding to each target view, where the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction;
a comparison picture synthesis module 502, configured to synthesize the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view;
a video stitching module 503, configured to splice the comparison pictures of the target views to generate a video to be processed; and
an audio and video synthesis module 504, configured to perform audio and video synthesis on preset audio material and the video to be processed to generate a comparison video.
In one embodiment, the material acquisition module includes:
a data acquisition unit, configured to acquire an original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type;
a matting analysis unit, configured to perform matting analysis on the original picture and identify the vehicle region in the original picture;
a matting unit, configured to extract the vehicle region from the original picture to generate a picture to be processed;
a text conversion unit, configured to generate a parameter text from the configuration parameters of each target vehicle type; and
a material picture generation unit, configured to add the parameter text of each target vehicle type into the picture to be processed of the corresponding target vehicle type to generate a material picture.
In one embodiment, the matting unit is configured to blur the area where the license plate is located when a license plate exists in the vehicle region, and to extract the blurred vehicle region to generate the picture to be processed.
In one embodiment, the matting unit is further configured to extract the vehicle region from the original picture to generate a vehicle picture, and to stack the preset shadow map corresponding to each target view onto the vehicle picture of the same target view to generate the picture to be processed.
In one embodiment, the data acquisition unit is configured to determine each target vehicle type according to the video generation instruction, and to acquire, from a preset database and for each target vehicle type, the original pictures corresponding to the target views and the configuration parameters of that target vehicle type.
In one embodiment, the video stitching module includes:
a video creation unit, configured to splice the comparison pictures of the target views according to a preset switching time to create a preliminary synthesized video; and
a video processing unit, configured to splice preset opening and ending videos with the preliminary synthesized video to generate the video to be processed.
In one embodiment, the vehicle type comparison video generation device further includes:
a video storage module, configured to send the comparison video to a preset target address for storage.
The division of the modules in the vehicle type comparison video generation device described above is only for illustration; in other embodiments, the device may be divided into different modules as required to complete all or part of its functions. All or part of the modules in the vehicle type comparison video generation device may be implemented by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer equipment in the form of hardware, or stored in a memory of the computer equipment in the form of software, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, the present application further provides a storage medium having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the vehicle type comparison video generation method of any of the above embodiments.
In one embodiment, the present application further provides computer equipment having stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the vehicle type comparison video generation method of any of the above embodiments.
In one embodiment, computer equipment is provided, which may be a terminal whose internal structure may be as shown in FIG. 6. The computer equipment includes a processor, a memory, a communication interface, a display screen and an input device connected by a system bus. The processor of the computer equipment provides computing and control capabilities. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer readable instructions. The internal memory provides an environment for running the operating system and the computer readable instructions in the non-volatile storage medium. The communication interface of the computer equipment is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer readable instructions, when executed by the processor, implement a vehicle type comparison video generation method. The display screen of the computer equipment may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball or a touch pad arranged on the housing of the computer equipment, or an external keyboard, touch pad, mouse or the like.
Those skilled in the art will appreciate that the structure shown in FIG. 6 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer equipment to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by computer readable instructions that instruct the relevant hardware and are stored on a non-transitory computer readable storage medium; when executed, these instructions may include the processes of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In this specification, the embodiments are described in a progressive manner, with each embodiment focusing on its differences from the others; the embodiments may be combined as needed, and for the same or similar parts reference may be made between them.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A vehicle type comparison video generation method, characterized by comprising the following steps:
in response to a video generation instruction, acquiring a material picture of each target vehicle type corresponding to each target view, wherein the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction;
synthesizing the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view;
splicing the comparison pictures of the target views to generate a video to be processed; and
performing audio and video synthesis on preset audio material and the video to be processed to generate a comparison video;
wherein acquiring the material picture of each target vehicle type corresponding to the target view comprises:
acquiring an original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type;
performing matting analysis on the original picture to identify the vehicle region in the original picture;
extracting the vehicle region from the original picture to generate a picture to be processed;
generating a parameter text from the configuration parameters of each target vehicle type; and
adding the parameter text of each target vehicle type into the picture to be processed of the corresponding target vehicle type to generate the material picture.
2. The vehicle type comparison video generation method according to claim 1, wherein extracting the vehicle region from the original picture to generate a picture to be processed comprises:
if a license plate exists in the vehicle region, blurring the area where the license plate is located; and
extracting the vehicle region after the blurring to generate the picture to be processed.
3. The vehicle type comparison video generation method according to claim 1, wherein extracting the vehicle region from the original picture to generate a picture to be processed comprises:
extracting the vehicle region from the original picture to generate a vehicle picture; and
stacking a preset shadow map corresponding to each target view onto the vehicle picture of the same target view to generate the picture to be processed.
4. The vehicle type comparison video generation method according to claim 1, wherein acquiring the original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type comprises:
determining each target vehicle type according to the video generation instruction; and
acquiring, from a preset database and for each target vehicle type, the original pictures corresponding to the target views and the configuration parameters of that target vehicle type.
5. The vehicle type comparison video generation method according to claim 1, wherein splicing the comparison pictures of the target views to generate the video to be processed comprises:
splicing the comparison pictures of the target views according to a preset switching time to create a preliminary synthesized video; and
splicing preset opening and ending videos with the preliminary synthesized video to generate the video to be processed.
6. The vehicle type comparison video generation method according to any one of claims 1 to 5, characterized by further comprising:
sending the comparison video to a preset target address for storage.
7. A vehicle type comparison video generation device, characterized by comprising:
a material acquisition module, configured to acquire, in response to a video generation instruction, a material picture of each target vehicle type corresponding to each target view, wherein the target views are determined based on the video template selected by the video generation instruction and the target vehicle types are determined based on the video generation instruction;
a comparison picture synthesis module, configured to synthesize the material pictures of the target vehicle types corresponding to each target view into a comparison picture corresponding to that target view;
a video stitching module, configured to splice the comparison pictures of the target views to generate a video to be processed; and
an audio and video synthesis module, configured to perform audio and video synthesis on preset audio material and the video to be processed to generate a comparison video;
wherein the material acquisition module comprises:
a data acquisition unit, configured to acquire an original picture of each target vehicle type corresponding to the target view and the configuration parameters of each target vehicle type;
a matting analysis unit, configured to perform matting analysis on the original picture and identify the vehicle region in the original picture;
a matting unit, configured to extract the vehicle region from the original picture to generate a picture to be processed;
a text conversion unit, configured to generate a parameter text from the configuration parameters of each target vehicle type; and
a material picture generation unit, configured to add the parameter text of each target vehicle type into the picture to be processed of the corresponding target vehicle type to generate the material picture.
8. A storage medium, characterized in that the storage medium has stored therein computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the vehicle type comparison video generation method of any one of claims 1 to 6.
9. Computer equipment, characterized by comprising: one or more processors, and a memory;
wherein the memory has stored therein computer readable instructions which, when executed by the one or more processors, perform the steps of the vehicle type comparison video generation method of any one of claims 1 to 6.
CN202210542387.8A 2022-05-18 2022-05-18 Vehicle type comparison video generation method and device, storage medium and computer equipment Active CN114900734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210542387.8A CN114900734B (en) 2022-05-18 2022-05-18 Vehicle type comparison video generation method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210542387.8A CN114900734B (en) 2022-05-18 2022-05-18 Vehicle type comparison video generation method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN114900734A CN114900734A (en) 2022-08-12
CN114900734B 2024-05-03

Family

ID=82724355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210542387.8A Active CN114900734B (en) 2022-05-18 2022-05-18 Vehicle type comparison video generation method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN114900734B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021073368A1 (en) * 2019-10-14 2021-04-22 北京字节跳动网络技术有限公司 Video file generating method and device, terminal, and storage medium
CN111935504A (en) * 2020-07-29 2020-11-13 广州华多网络科技有限公司 Video production method, device, equipment and storage medium
CN112040273A (en) * 2020-09-11 2020-12-04 腾讯科技(深圳)有限公司 Video synthesis method and device
CN113379882A (en) * 2021-05-27 2021-09-10 车智互联(北京)科技有限公司 Network vehicle exhibition configuration method, computing device and storage medium
CN113938620A (en) * 2021-11-12 2022-01-14 深圳传音控股股份有限公司 Image processing method, mobile terminal and storage medium
CN114390218A (en) * 2022-01-17 2022-04-22 腾讯科技(深圳)有限公司 Video generation method and device, computer equipment and storage medium
CN114466222A (en) * 2022-01-29 2022-05-10 北京百度网讯科技有限公司 Video synthesis method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114900734A (en) 2022-08-12


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant