CN115052198A - Image synthesis method, device and system for intelligent farm - Google Patents

Image synthesis method, device and system for intelligent farm

Info

Publication number
CN115052198A
CN115052198A
Authority
CN
China
Prior art keywords: video, ith, edited, format, access area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210585352.2A
Other languages
Chinese (zh)
Other versions
CN115052198B (en)
Inventor
范新民
付新平
陈建洋
钟志强
彭金祥
邓泗洲
陈广兴
谈昊
张双杰
谭嘉杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Vocational and Technical College
Original Assignee
Guangdong Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Vocational and Technical College filed Critical Guangdong Vocational and Technical College
Priority to CN202210585352.2A
Publication of CN115052198A
Application granted
Publication of CN115052198B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/25Greenhouse technology, e.g. cooling systems therefor

Abstract

The invention discloses an image synthesis method, device and system for an intelligent farm, wherein the method comprises the following steps: acquiring a video to be edited of the crop growth process; calling a video editing template from a video template library; obtaining the ith video access area and the ith video duration; evenly dividing the video to be edited into N unit videos to be edited based on the time sequence to obtain the ith unit video to be edited; compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited; accessing the ith target unit video to be edited into the ith video access area; and integrating the video editing template to generate a target video. An apparatus and a system for performing the above method are also provided. The invention realizes automatic editing of videos of the crop growth process into corresponding short videos.

Description

Image synthesis method, device and system for intelligent farm
Technical Field
The invention relates to the technical field of intelligent agriculture, in particular to an image synthesis method, device and system of an intelligent farm.
Background
With the development of information technology, the Internet is penetrating ever more deeply into agriculture. Integrating Internet technology with agricultural practice is an important means of transforming traditional agriculture into modern agriculture, so it is very important to strengthen the construction of agricultural information service systems and thereby deepen the integration of science and technology with agriculture.
To draw consumers' attention to the growth of crops, the growth process is increasingly recorded in the form of short videos. Existing crop short videos are generally synthesized manually, which is time-consuming and labor-intensive, so how to synthesize crop growth videos automatically is a technical direction that deserves the industry's attention.
Disclosure of Invention
The invention provides an image synthesis method, device and system for an intelligent farm, which are intended to solve one or more technical problems in the prior art and at least to provide a beneficial alternative or improvement.
In a first aspect, an image synthesis method for an intelligent farm is provided, comprising:
step 1, acquiring a video of a growth process of crops in a set time, and recording the video as a video to be edited;
step 2, calling a video editing template from a video template library, wherein the video editing template is sequentially provided with N video access areas according to the time sequence;
step 3, evenly dividing the video to be edited into N unit videos to be edited based on the time sequence;
step 4, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited;
step 5, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited;
step 6, accessing the ith target unit video to be edited into the ith video access area;
step 7, i = i + 1, judging whether i is larger than or equal to N, if not, returning to step 3, and if yes, entering step 8;
step 8, integrating the video editing template to generate a target video;
wherein the initial value of i is 1; N is a positive integer.
Furthermore, the video editing template is also provided with a material access area between adjacent video access areas, and animation materials are accessed into the material access area before the video editing template is integrated.
Further, the format of the animation material includes a GIF format, a FLIC format, an AWF format, or an AVI format.
Further, in step 8, integrating the video editing template to generate the target video specifically includes: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
In a second aspect, an image synthesis apparatus for an intelligent farm is provided, including:
a processor;
a memory for storing a computer readable program;
when the computer readable program is executed by the processor, the processor is enabled to implement the image synthesis method for the intelligent farm according to any one of the above technical solutions.
In a third aspect, an image synthesis system for an intelligent farm is provided, comprising:
an acquisition module, configured to acquire a video of a growth process of crops in a set time and record the video as a video to be edited;
a calling module, configured to call a video editing template from a video template library, the video editing template being sequentially provided with N video access areas according to the time sequence, to call the video to be edited from the acquisition module, and to evenly divide the video to be edited into N unit videos to be edited based on the time sequence;
a processing module, configured to perform: S100, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited; S200, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited; S300, accessing the ith target unit video to be edited into the ith video access area; S400, i = i + 1, judging whether i is larger than or equal to N, if not, returning to S100, and if so, integrating the video editing template to generate a target video;
wherein the initial value of i is 1; n is a positive integer.
Furthermore, the video editing template is also provided with a material access area between adjacent video access areas, and the processing module accesses animation materials into the material access area before integrating the video editing template.
Further, the format of the animation material includes a GIF format, a FLIC format, an AWF format, or an AVI format.
Further, in S400, integrating the video editing template to generate the target video specifically includes: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
The invention has at least the following beneficial effects: in the method of the invention, the video editing template is set to have N video access areas, and the video to be edited is divided into N unit videos to be edited according to the number of video access areas. Each unit video to be edited is matched to its video access area to obtain the ith target unit video to be edited, and the ith target unit video to be edited is then accessed into the ith video access area. When all the target unit videos to be edited have been accessed into their video access areas, the video editing template is integrated to generate the target video. The invention thus realizes automatic editing of videos of the crop growth process into corresponding short videos. In further aspects, the invention provides an apparatus and a system for performing the image synthesis method for an intelligent farm; their beneficial effects are similar to those of the method and are not repeated here.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and together with the description serve to explain the principles of the invention, and do not limit the invention.
FIG. 1 is a flowchart of the steps of an image synthesis method for an intelligent farm;
FIG. 2 is a schematic diagram of the module structure of an image synthesis system for an intelligent farm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that although functional modules are divided in the system drawings and a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the module division or from the order in the flowchart. The terms first, second and the like in the description, the claims and the drawings are used to distinguish similar elements and are not necessarily intended to describe a particular sequential or chronological order.
In a first aspect, referring to fig. 1, fig. 1 is a flowchart illustrating steps of an image synthesis method for an intelligent farm.
An image synthesis method for an intelligent farm is provided, which includes:
Step 1, acquiring a video of a growth process of crops in a set time, and recording the video as a video to be edited.
Step 1 is used to acquire a video of the crop growth process. The camera position can be chosen according to the crop. For example, to film the growth of tomato plants, a camera can be mounted in the greenhouse and aimed at the plants. The shooting period may be set to, say, three months, and continuous shooting is not necessary: the camera can simply be turned on periodically. The main objective is to document the growth of the tomatoes. The resulting video of the tomato growth process is the original video; because the story line of this original video still needs to be edited, it is recorded as the video to be edited.
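As a purely illustrative aid (not part of the patent disclosure), the periodic acquisition described above could be sketched as follows; the camera index, capture interval and file names are assumptions.

```python
# Illustrative time-lapse acquisition sketch (not part of the patent text).
# Camera index, interval and output pattern are assumptions.
import time
import cv2

CAPTURE_INTERVAL_S = 6 * 60 * 60            # e.g. one frame every six hours
OUTPUT_PATTERN = "frames/tomato_{:06d}.jpg"

def capture_time_lapse(num_frames: int, device_index: int = 0) -> None:
    for idx in range(num_frames):
        cap = cv2.VideoCapture(device_index)   # turn the camera on only when needed
        ok, frame = cap.read()
        cap.release()                          # and off again between shots
        if ok:
            cv2.imwrite(OUTPUT_PATTERN.format(idx), frame)
        time.sleep(CAPTURE_INTERVAL_S)
```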
Step 2, calling a video editing template from the video template library, wherein the video editing template is sequentially provided with N video access areas according to the time sequence.
The video template library is a preset database that stores video editing templates. A video editing template is a preset template and can be classified by crop type, for example a template for tomato growth, which is aimed mainly at the growth process of tomatoes. A number of video access areas are arranged on the video editing template for accessing videos, and a story flow is set so that the video access areas link up well. By substituting the videos into the video access areas, the story of the crop's growth can be told well. The number of video access areas is defined as N.
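For illustration only, one possible in-memory representation of such a video editing template is sketched below; the patent does not prescribe any data format, so all field names are hypothetical.

```python
# Hypothetical template structure; the patent does not define a storage format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VideoAccessArea:
    index: int                        # position in the time sequence (1..N)
    duration_s: float                 # ith video duration required by this area
    clip_path: Optional[str] = None   # filled when a target unit video is accessed

@dataclass
class MaterialAccessArea:
    after_area: int                   # sits between access area `after_area` and the next
    material_path: Optional[str] = None   # e.g. a GIF or AVI animation

@dataclass
class VideoEditingTemplate:
    crop_type: str                    # e.g. "tomato"
    access_areas: List[VideoAccessArea] = field(default_factory=list)
    material_areas: List[MaterialAccessArea] = field(default_factory=list)
```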
Step 3, evenly dividing the video to be edited into N unit videos to be edited based on the time sequence.
Step 4, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited.
Because the durations of the unit videos to be edited differ from those required by the video access areas, the videos to be edited need to be adjusted. Before the adjustment, information about each video access area in the video editing template needs to be obtained. Specifically, the ith video access area can be obtained by querying the video editing template; since all video access areas need to be queried in turn, the queried video access area is defined as the ith video access area for ease of understanding. After the ith video access area is obtained, the ith video duration can be obtained from it; the ith video duration is the duration of the video required by the ith video access area.
After the parameter information of the video editing template is obtained, the video to be edited can be processed. Specifically, according to the number N of video access areas of the video editing template, the video to be edited is first divided into N unit videos to be edited based on time. Because the video to be edited reflects the basic growth process of the crop, dividing it into N unit videos divides that growth process into stages, and the ith unit video to be edited can then be obtained from the N unit videos to be edited.
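A minimal sketch of this even division, assuming the ffmpeg and ffprobe command-line tools are available; the file names and the keyframe-based cutting strategy are illustrative choices, not requirements of the patent.

```python
# Sketch of step 3: evenly divide the video to be edited into N unit videos.
# Requires the ffprobe/ffmpeg command-line tools; paths are illustrative.
import subprocess
from typing import List

def video_duration_s(path: str) -> float:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def split_into_units(path: str, n: int) -> List[str]:
    unit_len = video_duration_s(path) / n
    unit_paths = []
    for i in range(n):
        unit_path = f"unit_{i + 1:02d}.mp4"
        # Stream copy is fast but cuts on keyframes, so boundaries are approximate.
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(i * unit_len), "-t", str(unit_len),
             "-i", path, "-c", "copy", unit_path],
            check=True)
        unit_paths.append(unit_path)
    return unit_paths
```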
Step 5, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited.
The duration of the ith unit video to be edited differs from the duration required by the ith video access area of the video editing template. The ith video duration is shorter than the duration of the ith unit video to be edited, so the ith unit video to be edited needs to be compressed according to the ith video duration so that its length meets the requirement of the ith video duration. The compressed ith unit video to be edited is recorded as the ith target unit video to be edited.
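One way to read "compressing according to the ith video duration" is as a uniform time compression (speed-up). The sketch below shows that interpretation with ffmpeg's setpts filter; this reading is an assumption, audio is simply dropped, and the hypothetical video_duration_s helper from the previous sketch is reused.

```python
# Sketch of step 5 under the speed-up interpretation (an assumption).
# Audio is dropped (-an); video_duration_s comes from the previous sketch.
import subprocess

def compress_to_duration(unit_path: str, target_s: float, out_path: str) -> str:
    factor = target_s / video_duration_s(unit_path)   # < 1.0 shortens the clip
    subprocess.run(
        ["ffmpeg", "-y", "-i", unit_path,
         "-filter:v", f"setpts={factor}*PTS", "-an", out_path],
        check=True)
    return out_path
```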
Step 6, accessing the ith target unit video to be edited into the ith video access area.
Step 7, i = i + 1, judging whether i is larger than or equal to N, if not, returning to step 3, and if yes, entering step 8;
Step 8, integrating the video editing template to generate a target video;
wherein the initial value of i is 1; n is a positive integer and is the number of preset video access areas of the video editing template.
The video editing template is set to have N video access areas, and the video to be edited is divided into N unit videos to be edited according to the number of video access areas. Each unit video to be edited is matched with its video access area to obtain the ith target unit video to be edited, which is then accessed into the ith video access area. When all the target unit videos to be edited have been accessed into their video access areas, the video editing template is integrated to generate the target video. In this way, a short video of the crop growth process is formed automatically.
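Tying the earlier sketches together, the loop of steps 3 to 7 could look like the following; fill_template and all helper names are hypothetical and only illustrate the control flow, not the patented implementation.

```python
# Illustrative control flow for steps 3-7; all names are hypothetical helpers
# defined in the sketches above, not the patented implementation.
def fill_template(template: "VideoEditingTemplate", source_video: str) -> "VideoEditingTemplate":
    n = len(template.access_areas)
    units = split_into_units(source_video, n)                   # step 3
    for i, area in enumerate(template.access_areas, start=1):   # steps 4 and 7
        target_unit = compress_to_duration(                     # step 5
            units[i - 1], area.duration_s, f"target_unit_{i:02d}.mp4")
        area.clip_path = target_unit                            # step 6
    return template
```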
In order to make the obtained target video more colorful, in some preferred embodiments the video editing template is further provided with a material access area between adjacent video access areas, and the method further comprises accessing animation material into the material access area before the video editing template is integrated.
The animation material refers to some small video files desired by the user, wherein the format of the animation material comprises a GIF format, a FLIC format, an AWF format or an AVI format.
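Since the unit videos and the animation material arrive in different formats, a practical system would normally normalise the material before splicing. The sketch below shows one possible normalisation with ffmpeg, applicable to material formats the local ffmpeg build can decode; it is not mandated by the patent.

```python
# Possible normalisation of animation material before splicing (not mandated by
# the patent); works for material formats that the local ffmpeg build can decode.
import subprocess

def material_to_mp4(material_path: str, out_path: str) -> str:
    subprocess.run(
        ["ffmpeg", "-y", "-i", material_path,
         "-pix_fmt", "yuv420p", "-movflags", "+faststart", out_path],
        check=True)
    return out_path
```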
In some preferred embodiments, in step 8, integrating the video editing template to generate the target video specifically includes: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
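A minimal sketch of this integration step using ffmpeg's concat demuxer; it assumes all segments were already encoded with the same codec, resolution and frame rate, and it reuses the hypothetical template structure from the earlier sketches.

```python
# Sketch of step 8: splice the filled template into one MP4 target video with
# ffmpeg's concat demuxer.  Assumes all segments share codec, resolution and
# frame rate; the template structure is the hypothetical one sketched earlier.
import subprocess

def integrate_template(template: "VideoEditingTemplate",
                       out_path: str = "target_video.mp4") -> str:
    segments = []
    for area in template.access_areas:
        segments.append(area.clip_path)
        for mat in template.material_areas:        # optional animation material
            if mat.after_area == area.index and mat.material_path:
                segments.append(mat.material_path)
    with open("concat_list.txt", "w") as f:
        for seg in segments:
            f.write(f"file '{seg}'\n")
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", "concat_list.txt", "-c", "copy", out_path],
        check=True)
    return out_path
```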
In a second aspect, an image synthesis apparatus for an intelligent farm is provided, including: a processor and a memory for storing a computer readable program.
When executed by the processor, the computer readable program causes the processor to implement the image synthesis method for an intelligent farm according to any one of the embodiments.
One of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media known to those skilled in the art.
In a third aspect, referring to fig. 2, fig. 2 is a schematic diagram of a system module structure of an image synthesis system of an intelligent farm.
Provided is an image synthesis system for an intelligent farm, comprising an acquisition module, a calling module and a processing module. The acquisition module is used to acquire a video of the crop growth process within a set time and record it as a video to be edited. The calling module is used to call a video editing template from a video template library, the video editing template being sequentially provided with N video access areas according to the time sequence, to call the video to be edited from the acquisition module, and to evenly divide the video to be edited into N unit videos to be edited based on the time sequence.
The processing module is used to perform: S100, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited; S200, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited; S300, accessing the ith target unit video to be edited into the ith video access area; and S400, i = i + 1, judging whether i is larger than or equal to N, if not, returning to S100, and if so, integrating the video editing template to generate the target video. Wherein the initial value of i is 1; N is a positive integer.
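For orientation only, the three modules could be organised in code roughly as below; the class and method names are invented here and simply delegate to the hypothetical helpers from the earlier sketches.

```python
# Rough, invented organisation of the three modules; each simply delegates to
# the hypothetical helpers from the earlier sketches.
class AcquisitionModule:
    def acquire(self) -> str:
        # Recording itself is omitted here; see the acquisition sketch above.
        return "video_to_be_edited.mp4"            # illustrative path

class CallingModule:
    def __init__(self, template_library: dict):
        self.template_library = template_library   # e.g. {"tomato": VideoEditingTemplate(...)}

    def call_template(self, crop_type: str) -> "VideoEditingTemplate":
        return self.template_library[crop_type]

class ProcessingModule:
    def run(self, template: "VideoEditingTemplate", source_video: str) -> str:
        filled = fill_template(template, source_video)   # S100-S300
        return integrate_template(filled)                # S400: target video path
```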
The acquisition module is arranged to acquire the video of the crop growth process. The camera position can be chosen according to the crop. For example, to film the growth of tomato plants, a camera can be mounted in the greenhouse and aimed at the plants. The shooting period may be set to, say, three months, and continuous shooting is not necessary: the camera can simply be turned on periodically. The main objective is to document the growth of the tomatoes. The resulting video of the tomato growth process is the original video; because the story line of this original video still needs to be edited, it is recorded as the video to be edited.
The video template library is a preset database that stores video editing templates, and the calling module is used to call a video editing template from it. A video editing template is a preset template and can be classified by crop type, for example a template for tomato growth, which is aimed mainly at the growth process of tomatoes. A number of video access areas are arranged on the video editing template for accessing videos, and a story flow is set so that the video access areas link up well. By substituting the videos into the video access areas, the story of the crop's growth can be told well. The number of video access areas is defined as N.
In the processing module, because the durations of the unit videos to be edited differ from those required by the video access areas, the videos to be edited need to be adjusted. Before the adjustment, information about each video access area in the video editing template needs to be obtained. Specifically, the ith video access area can be obtained by querying the video editing template; since all video access areas need to be queried in turn, the queried video access area is defined as the ith video access area for ease of understanding. After the ith video access area is obtained, the ith video duration, i.e. the duration of the video required by the ith video access area, can be obtained from it.
After the parameter information of the video editing template is obtained, the video to be edited can be processed. Specifically, the calling module first divides the video to be edited into N unit videos to be edited based on time, according to the number N of video access areas of the video editing template. Because the video to be edited reflects the basic growth process of the crop, dividing it into N unit videos divides that growth process into stages, and the ith unit video to be edited can then be obtained from the N unit videos to be edited.
The duration of the ith unit video to be edited differs from the duration required by the ith video access area of the video editing template. The ith video duration is shorter than the duration of the ith unit video to be edited, so the ith unit video to be edited needs to be compressed according to the ith video duration so that its length meets the requirement of the ith video duration; the compressed ith unit video to be edited is recorded as the ith target unit video to be edited. After the ith target unit video to be edited is obtained, it can be accessed into the ith video access area. When the other target unit videos to be edited have also been accessed into their video access areas, the video editing template can be integrated to form the target video.
In some preferred embodiments, the video editing template further has a material access area between adjacent video access areas, and the processing module accesses animation material into the material access area before integrating the video editing template.
In some preferred embodiments, the format of the animation material includes a GIF format, a FLIC format, an AWF format, or an AVI format.
In some preferred embodiments, in S400, integrating the video editing template to generate the target video specifically includes: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that various changes, substitutions and alterations in form and detail may be made without departing from the scope of this invention.

Claims (9)

1. An image synthesis method for an intelligent farm, comprising:
step 1, acquiring a video of a growth process of crops in a set time, and recording the video as a video to be edited;
step 2, calling a video editing template from a video template library, wherein the video editing template is sequentially provided with N video access areas according to the time sequence;
step 3, evenly dividing the video to be edited into N unit videos to be edited based on the time sequence;
step 4, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited;
step 5, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited;
step 6, accessing the ith target unit video to be edited into the ith video access area;
step 7, i = i + 1, judging whether i is larger than or equal to N, if not, returning to step 3, and if yes, entering step 8;
step 8, integrating the video editing template to generate a target video;
wherein the initial value of i is 1; n is a positive integer.
2. The image synthesis method for an intelligent farm according to claim 1, wherein the video editing template is further provided with a material access area between adjacent video access areas, and the method further comprises accessing animation material into the material access area before the video editing template is integrated.
3. The image synthesis method for an intelligent farm according to claim 2, wherein the format of the animation material includes a GIF format, a FLIC format, an AWF format or an AVI format.
4. The image synthesis method for an intelligent farm according to claim 1, wherein in step 8, integrating the video editing template to generate the target video specifically comprises: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
5. An image synthesizing apparatus for an intelligent farm, comprising:
a processor;
a memory for storing a computer readable program;
the computer readable program, when executed by the processor, causes the processor to implement the image synthesis method for an intelligent farm according to any one of claims 1 to 4.
6. An image synthesis system for an intelligent farm, comprising:
an acquisition module, configured to acquire a video of a growth process of crops in a set time and record the video as a video to be edited;
a calling module, configured to call a video editing template from a video template library, the video editing template being sequentially provided with N video access areas according to the time sequence, to call the video to be edited from the acquisition module, and to evenly divide the video to be edited into N unit videos to be edited based on the time sequence;
a processing module, configured to perform: S100, obtaining the ith video access area from the video editing template, obtaining the video duration required by the ith video access area as the ith video duration, and obtaining the ith unit video to be edited from the N unit videos to be edited; S200, compressing the ith unit video to be edited according to the ith video duration to obtain the ith target unit video to be edited; S300, accessing the ith target unit video to be edited into the ith video access area; S400, i = i + 1, judging whether i is larger than or equal to N, if not, returning to S100, and if so, integrating the video editing template to generate a target video;
wherein the initial value of i is 1; n is a positive integer.
7. The image synthesis system for an intelligent farm according to claim 6, wherein the video editing template further has material access areas between adjacent video access areas, and the processing module accesses animation material into the material access areas before integrating the video editing template.
8. The image synthesis system for an intelligent farm according to claim 7, wherein the format of the animation material includes a GIF format, a FLIC format, an AWF format or an AVI format.
9. The image synthesis system for an intelligent farm according to claim 6, wherein in S400, integrating the video editing template to generate the target video specifically comprises: converting the video editing template into an MP4 format file to obtain an MP4 format video file, and taking the MP4 format video file as the target video.
CN202210585352.2A 2022-05-27 2022-05-27 Image synthesis method, device and system for intelligent farm Active CN115052198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585352.2A CN115052198B (en) 2022-05-27 2022-05-27 Image synthesis method, device and system for intelligent farm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210585352.2A CN115052198B (en) 2022-05-27 2022-05-27 Image synthesis method, device and system for intelligent farm

Publications (2)

Publication Number Publication Date
CN115052198A true CN115052198A (en) 2022-09-13
CN115052198B CN115052198B (en) 2023-07-04

Family

ID=83158879

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585352.2A Active CN115052198B (en) 2022-05-27 2022-05-27 Image synthesis method, device and system for intelligent farm

Country Status (1)

Country Link
CN (1) CN115052198B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10117322A (en) * 1996-10-11 1998-05-06 Matsushita Electric Ind Co Ltd Non-linear video editor
US20070074115A1 (en) * 2005-09-23 2007-03-29 Microsoft Corporation Automatic capturing and editing of a video
US7362946B1 (en) * 1999-04-12 2008-04-22 Canon Kabushiki Kaisha Automated visual image editing system
CN102819528A (en) * 2011-06-10 2012-12-12 中国电信股份有限公司 Method and device for generating video abstraction
CN106303290A (en) * 2016-09-29 2017-01-04 努比亚技术有限公司 A kind of terminal and the method obtaining video
CN110121104A (en) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 Video clipping method and device
CN110139159A (en) * 2019-06-21 2019-08-16 上海摩象网络科技有限公司 Processing method, device and the storage medium of video material
CN111357277A (en) * 2018-11-28 2020-06-30 深圳市大疆创新科技有限公司 Video clip control method, terminal device and system
CN111597186A (en) * 2020-04-24 2020-08-28 广东职业技术学院 Block chain agricultural product management system
US20200293784A1 (en) * 2017-11-30 2020-09-17 Guangzhou Baiguoyuan Information Technology Co., Ltd. Method of pushing video editing materials and intelligent mobile terminal
CN112449231A (en) * 2019-08-30 2021-03-05 腾讯科技(深圳)有限公司 Multimedia file material processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
董擎辉: "Establishment and application of an agricultural high-definition image database", 北方园艺 (Northern Horticulture), no. 08, pages 207-208 *
高凡, 李超, 皇甫诗男, 孙亮, 张立强, 张美萍: "Production of an identification and appreciation video of wild plants in Daqing", 河南科技 (Henan Science and Technology), no. 23, pages 4-5 *

Also Published As

Publication number Publication date
CN115052198B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
US8375302B2 (en) Example based video editing
US7286723B2 (en) System and method for organizing images
JP4591982B2 (en) Audio signal and / or video signal generating apparatus and audio signal and / or video signal generating method
CN110139159A (en) Processing method, device and the storage medium of video material
US20080085053A1 (en) Sampling image records from a collection based on a change metric
WO2011123802A1 (en) Web platform for interactive design, synthesis and delivery of 3d character motion data
JP2007094762A (en) Information processor, information processing method, and program
CN101154419B (en) Recording-and-reproducing apparatus and content-managing method
JP2010246050A (en) Data management device, method of controlling the same, and program
CN102084641A (en) Method to control image processing apparatus, image processing apparatus, and image file
CN113841417A (en) Film generation method, terminal device, shooting device and film generation system
CN110569379A (en) Method for manufacturing picture data set of automobile parts
CN111835985A (en) Video editing method, device, apparatus and storage medium
CN115052198B (en) Image synthesis method, device and system for intelligent farm
CN116821647B (en) Optimization method, device and equipment for data annotation based on sample deviation evaluation
CN101506890A (en) Operating system shell management of video files
WO2018196173A1 (en) Method and system for producing plant growth video
CN111784816B (en) High-frequency material rendering method and system based on micro-surface theory
CN106649728B (en) Film and video media asset management system and method
RU2009134541A (en) METHOD FOR CREATING MOSAIC
CN111277915B (en) Video conversion method and device
CN1606864A (en) Description generation in the form of metadata
US11778167B1 (en) Method and system for preprocessing optimization of streaming video data
CN100438600C (en) Video check system and method
CN102572293A (en) Field recording-based retrieval system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant