WO2022048204A1 - Image generation method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Image generation method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2022048204A1
WO2022048204A1 PCT/CN2021/096536 CN2021096536W WO2022048204A1 WO 2022048204 A1 WO2022048204 A1 WO 2022048204A1 CN 2021096536 W CN2021096536 W CN 2021096536W WO 2022048204 A1 WO2022048204 A1 WO 2022048204A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
video
video image
image set
images
Prior art date
Application number
PCT/CN2021/096536
Other languages
English (en)
French (fr)
Inventor
夏倩
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2022048204A1

Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06F  ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00  Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70  Information retrieval of video data
    • G06F 16/74  Browsing; Visualisation therefor
    • G06F 16/78  Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783  Retrieval using metadata automatically derived from the content
    • G06F 16/7867  Retrieval using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image generation method, apparatus, electronic device, and computer-readable storage medium.
  • The mainstream method on the market for converting video into images is to manually screen the video and the frames it contains, selectively converting one or more frames of the video into images.
  • This method relies too heavily on manual operation, is inefficient, and the screened images often fail to meet users' personalized needs, so it cannot generate images from video in a way that is both efficient and personalized.
  • An image generation method provided by this application includes: obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector; performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature; obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set; selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set; and pushing the to-be-pushed video image set by using a push queue task.
  • the present application also provides an image generation device, the device comprising:
  • the demand vector generation module is used to obtain the image demand, and transform the image demand into the word vector to obtain the demand vector;
  • a feature extraction module configured to perform feature extraction on the demand vector to obtain a demand feature, wherein the demand feature includes a definition feature, a time feature, and a feature of the number of extracted images;
  • a video image acquisition module for acquiring a target video, extracting images contained in the target video according to the definition feature, and obtaining a video image set
  • a video image screening module configured to select images from the video image set according to the time feature and the extracted image number feature to obtain a video image set to be pushed;
  • a video image push module is configured to push the to-be-pushed video image set by using a push queue task.
  • the present application also provides an electronic device, the electronic device comprising:
  • the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the steps of:
  • obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector; performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature; obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set; selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set; and pushing the to-be-pushed video image set by using a push queue task.
  • The present application also provides a computer-readable storage medium comprising a storage data area and a storage program area, where the storage data area stores created data and the storage program area stores a computer program; when executed by a processor, the computer program implements the following steps: obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector; performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature; obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set; selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set; and pushing the to-be-pushed video image set by using a push queue task.
  • FIG. 1 is a schematic flowchart of an image generation method provided by an embodiment of the present application
  • FIG. 2 is a schematic block diagram of an image generating apparatus according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the internal structure of an electronic device for implementing an image generation method provided by an embodiment of the present application
  • the execution subject of the image generation method provided by the embodiment of the present application includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiment of the present application.
  • the image generation method may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform.
  • the server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
  • the present application provides an image generation method.
  • Referring to FIG. 1, a schematic flowchart of an image generation method according to an embodiment of the present application is shown.
  • the image generation method includes:
  • In this embodiment of the present application, the image demand can be uploaded directly by the user, or it can be obtained from a pre-built storage area for storing image demands by using a Java statement with a data-calling function, where the storage area includes, but is not limited to, a MySQL database, an Oracle database, a client-side cache, and blockchain nodes.
  • Further, in a preferred embodiment of the present application, the image demand is acquired from a demand analysis system, for example, by accessing the demand analysis system at preset time intervals to acquire the image demand generated by the demand analysis system.
  • In this embodiment of the present application, the image demand is converted into word vectors by using a word vector transformation model.
  • The word vector transformation model is an NER (Named Entity Recognition) model with the CRF layer removed.
  • The NER model with the CRF layer removed includes: a character/word vector layer, used for converting the words and characters in the image demand into character/word vectors; and a Bi-LSTM layer, used for segmenting the character/word vectors and encoding the segmented content to obtain the encoded representation of the character/word vectors.
  • Since the image demand may contain a large amount of text and the sentences in the text may be long, the character/word vectors produced by the character/word vector layer may also be long, which is not conducive to reading the image demand. Therefore, in this embodiment the Bi-LSTM layer is further used to segment the character/word vectors to obtain the encoded representation (the encoded representation is the demand vector), which helps quickly obtain accurate information from the demand vector (i.e., the encoded representation) derived from the image demand.
  • Preferably, the Bi-LSTM layer can use the Java language to segment the character/word vectors obtained from the character/word vector layer and to encode the segmented content.
  • Using the NER model with the CRF layer removed as the word vector transformation model simplifies the structural hierarchy of the model, reduces its computation, and improves the efficiency of obtaining the demand vector.
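  • A minimal illustrative sketch of such a CRF-free encoder follows. It is not taken from the published application: the framework (PyTorch), the layer sizes, the mean-pooling step, and the tokenization are all assumptions made only to show how a character/word vector layer followed by a Bi-LSTM layer can turn an image demand into a demand vector.

```python
# Illustrative sketch only: a CRF-free NER-style encoder (character/word
# embedding layer + Bi-LSTM) that turns a tokenized image demand into a
# "demand vector". Layer sizes and tokenization are assumptions, not values
# taken from the application.
import torch
import torch.nn as nn

class DemandVectorEncoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        # Character/word vector layer: maps token ids to dense vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bi-LSTM layer: encodes the segmented character/word vectors.
        # No CRF layer follows, which keeps the model small and fast.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        encoded, _ = self.bilstm(embedded)     # (batch, seq_len, 2 * hidden_dim)
        # Mean-pool the encoded representation into one demand vector per input.
        return encoded.mean(dim=1)             # (batch, 2 * hidden_dim)

# Usage: the token ids would come from a tokenizer applied to the image demand text.
encoder = DemandVectorEncoder(vocab_size=5000)
demand_vector = encoder(torch.randint(1, 5000, (1, 20)))
print(demand_vector.shape)  # torch.Size([1, 256])
```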
  • In this embodiment of the present application, the feature extraction on the demand vector includes: obtaining a training demand vector and the standard demand feature corresponding to the training demand vector; performing feature extraction on the training demand vector with a convolutional neural network to obtain a training demand feature; calculating the difference value between the training demand feature and the standard demand feature; if the difference value is greater than a preset error, adjusting the parameters of the convolutional neural network and performing feature extraction again; if the difference value is smaller than the preset error, confirming that training is complete and obtaining the trained convolutional neural network; and performing feature extraction on the demand vector with the trained convolutional neural network.
  • The embodiment of the present application uses a loss function to calculate the difference value between the training demand feature and the standard demand feature, where Y represents the standard demand feature.
  • In practice, the demand vector may contain many useless components, such as the user name and the upload time of the image demand; therefore, the embodiment of the present application performs feature extraction on the demand vector to obtain accurate demand features, which helps accurately generate personalized images.
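  • The application gives its loss function only as a formula image, so the exact expression is not reproduced. The sketch below illustrates the described training procedure under stated assumptions: a small one-dimensional convolutional network, a mean-squared-error loss standing in for the published loss, and an assumed preset error threshold.

```python
# Illustrative training loop for the demand-feature extractor. The application
# publishes its loss function only as a formula image, so MSE is used here
# purely as a stand-in; the CNN architecture and the preset error are also
# assumptions, not values from the application.
import torch
import torch.nn as nn

class DemandFeatureCNN(nn.Module):
    def __init__(self, feature_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, feature_dim),  # e.g. definition / time / image-count features
        )

    def forward(self, demand_vector: torch.Tensor) -> torch.Tensor:
        return self.net(demand_vector.unsqueeze(1))  # (batch, feature_dim)

def train(model, training_vectors, standard_features, preset_error=1e-3, max_epochs=100):
    loss_fn = nn.MSELoss()  # stand-in for the loss function in the application
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(max_epochs):
        optimizer.zero_grad()
        training_features = model(training_vectors)
        difference = loss_fn(training_features, standard_features)
        if difference.item() < preset_error:
            break                 # training complete: difference below the preset error
        difference.backward()     # otherwise adjust the CNN parameters and extract again
        optimizer.step()
    return model

model = train(DemandFeatureCNN(), torch.randn(8, 256), torch.randn(8, 3))
```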
  • In this embodiment of the present application, the obtaining of the target video includes: receiving a target video selection instruction sent by a user terminal; obtaining the code stream address of the target video according to the target video selection instruction; and downloading the target video according to the code stream address.
  • In detail, the target video selection instruction contains one or more pieces of information among the video name, video size, and video storage address of the target video.
  • In an optional embodiment of the present application, the code stream address of the target video is acquired according to the video name of the target video contained in the target video selection instruction.
  • In another optional embodiment of the present application, the code stream address of the target video is parsed directly from the target video selection instruction.
  • In this embodiment of the present application, extracting the images contained in the target video according to the definition feature to obtain the video image set includes: determining a target definition for the frames of the target video according to the definition feature; judging whether the definition of each frame in the target video is the target definition; if the definition of any frame in the target video is not the target definition, converting the definition of the frames in the target video to the target definition; and determining that the converted frames constitute the video image set.
  • Preferably, if the definition of any frame in the target video is not the target definition, the embodiment of the present application uses a definition conversion tool to convert the definition of the frames in the target video to the target definition. The definition conversion tools include, but are not limited to, the WonderFox converter, the Video converter, and the like.
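  • The sketch below illustrates this frame-extraction and definition-conversion step under assumptions: OpenCV resizing is used as a stand-in for the conversion tools named above (such as the WonderFox converter), and the 1280x720 target is an assumed target definition.

```python
# Illustrative sketch of the frame-extraction step: read every frame of the
# target video and, where a frame does not match the target definition,
# convert it. OpenCV resizing stands in for the conversion tools named in the
# application, and the 1280x720 target is an assumption.
import cv2

def extract_video_image_set(video_path: str, target_size=(1280, 720)):
    """Return the frames of the video, each converted to the target definition."""
    capture = cv2.VideoCapture(video_path)
    video_image_set = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        height, width = frame.shape[:2]
        if (width, height) != target_size:
            # Frame definition differs from the target definition: convert it.
            frame = cv2.resize(frame, target_size, interpolation=cv2.INTER_CUBIC)
        video_image_set.append(frame)
    capture.release()
    return video_image_set
```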
  • In this embodiment of the present application, the time feature includes a time condition related to a time period or a specific time point from which the user needs to extract images; the extracted-image-number feature includes the number of images to be extracted from the video image set, or a condition on the number to be extracted.
  • Preferably, before images are selected from the video image set according to the time feature and the extracted-image-number feature, the method further includes: extracting a time-series feature of the video image set; and sorting the video images in the video image set according to the time-series feature.
  • In detail, the embodiment of the present application extracts the time-series feature b_u(t) of the video image set with a time-series feature extraction algorithm, where d_u is the u-th video image in the video image set, i is the number of video images in the video image set, t_u is the acquisition time of the u-th video image, and t_{u+1} is the acquisition time of the (u+1)-th video image.
  • After the time-series feature is obtained, all video images in the video image set are sorted according to it, so that multiple images matching the time feature can be selected quickly, which improves the efficiency of image selection.
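  • The following sketch illustrates the sorting and selection step. Because the time-series feature b_u(t) is published only as a formula image, plain sorting by acquisition time stands in for it; the VideoImage structure, the time window, and the image count are assumptions.

```python
# Illustrative sketch: sort the video image set by acquisition time, then pick
# the frames that satisfy the time feature and the extracted-image-number
# feature. Plain timestamp sorting stands in for the published time-series
# formula b_u(t); the VideoImage structure is an assumption.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VideoImage:
    frame_index: int
    acquisition_time: float  # seconds from the start of the video

def select_images_to_push(video_image_set: List[VideoImage],
                          time_window: Tuple[float, float],
                          image_count: int) -> List[VideoImage]:
    # Sort all video images by acquisition time (the time-series ordering).
    ordered = sorted(video_image_set, key=lambda img: img.acquisition_time)
    # Keep only the frames whose acquisition time falls inside the time feature.
    start, end = time_window
    in_window = [img for img in ordered if start <= img.acquisition_time <= end]
    # Respect the extracted-image-number feature.
    return in_window[:image_count]

frames = [VideoImage(i, i / 25.0) for i in range(250)]   # 10 s of 25 fps video
to_push = select_images_to_push(frames, time_window=(2.0, 6.0), image_count=5)
print([img.frame_index for img in to_push])              # [50, 51, 52, 53, 54]
```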
  • The pushing of the to-be-pushed video image set by using the push queue task includes: obtaining a push queue task; determining a push order according to the push queue task; and pushing the to-be-pushed video image set to the user according to the push order.
  • The to-be-pushed video image set may contain multiple video images to be pushed. When they are pushed in batches, pushing through the push queue task prevents the data congestion caused by pushing multiple video images at the same time, which improves the efficiency and success rate of image pushing.
  • Preferably, the push queue task is implemented with a subscriber-notification message queue (MQ). Specifically, the video images to be pushed are processed in batches by setting a time interval threshold, so that the next batch of data is processed only after the previous batch of to-be-pushed video images has been sent.
  • Queuing the notification messages on the subscriber side reduces the occupation of computing resources: a large amount of data is split and pushed in batches, which avoids the occupation and waste of computing resources caused by data congestion.
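  • A minimal sketch of such batched pushing is shown below. The application names a subscriber-notification message queue (MQ) but no particular broker, so the Python standard-library queue stands in; the batch size, the interval threshold, and the send_to_user delivery call are assumptions.

```python
# Illustrative sketch: push the to-be-pushed video image set in batches
# through a queue, waiting a configurable interval between batches so that
# batches are never pushed at the same time. The standard-library queue stands
# in for the MQ named in the application; send_to_user() is a placeholder.
import queue
import time
from typing import Iterable

def send_to_user(image) -> None:
    # Placeholder for the real delivery call (e.g. an MQ publish or HTTP push).
    print(f"pushed image {image}")

def push_in_batches(images_to_push: Iterable, batch_size: int = 10,
                    interval_seconds: float = 1.0) -> None:
    push_queue: "queue.Queue" = queue.Queue()
    for image in images_to_push:           # enqueue in push order
        push_queue.put(image)

    while not push_queue.empty():
        # Send one batch, then wait for the interval threshold before the next,
        # so the next batch is processed only after the previous one is sent.
        for _ in range(min(batch_size, push_queue.qsize())):
            send_to_user(push_queue.get())
        if not push_queue.empty():
            time.sleep(interval_seconds)

push_in_batches(range(25), batch_size=10, interval_seconds=0.1)
```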
  • By converting the image demand into a demand vector and then performing feature extraction, the demand features can be obtained quickly, which helps quickly acquire personalized demand information. At the same time, after the target video is obtained, the images contained in the target video are extracted according to the definition feature to obtain the video image set, and images are selected from the video image set according to the time feature and the extracted-image-number feature, so that images matching the personalized demand information can be generated quickly from the video. Therefore, the image generation method proposed in this application can generate images from video in a way that is both efficient and personalized.
  • Referring to FIG. 2, a schematic block diagram of the image generating apparatus of the present application is shown.
  • the image generating apparatus 100 described in this application may be installed in an electronic device.
  • the image generation apparatus 100 may include a demand vector generation module 101, a feature extraction module 102, a video image acquisition module 103, a video image screening module 104, and a video image push module 105.
  • The modules described in the present application may also be referred to as units, which are a series of computer program segments that can be executed by the processor of the electronic device and can complete fixed functions, and that are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the demand vector generation module 101 is configured to obtain the image demand, and transform the image demand into a word vector to obtain a demand vector.
  • In this embodiment of the present application, the image demand can be uploaded directly by the user, or it can be obtained from a pre-built storage area for storing image demands by using a Java statement with a data-calling function, where the storage area includes, but is not limited to, a MySQL database, an Oracle database, a client-side cache, and blockchain nodes.
  • Further, in a preferred embodiment of the present application, the image demand is acquired from a demand analysis system, for example, by accessing the demand analysis system at preset time intervals to acquire the image demand generated by the demand analysis system.
  • In this embodiment of the present application, the image demand is converted into word vectors by using a word vector transformation model.
  • The word vector transformation model is an NER (Named Entity Recognition) model with the CRF layer removed.
  • The NER model with the CRF layer removed includes: a character/word vector layer, used for converting the words and characters in the image demand into character/word vectors; and a Bi-LSTM layer, used for segmenting the character/word vectors and encoding the segmented content to obtain the encoded representation of the character/word vectors.
  • Since the image demand may contain a large amount of text and the sentences in the text may be long, the character/word vectors produced by the character/word vector layer may also be long, which is not conducive to reading the image demand. Therefore, in this embodiment the Bi-LSTM layer is further used to segment the character/word vectors to obtain the encoded representation (the encoded representation is the demand vector), which helps quickly obtain accurate information from the demand vector (i.e., the encoded representation) derived from the image demand.
  • Preferably, the Bi-LSTM layer can use the Java language to segment the character/word vectors obtained from the character/word vector layer and to encode the segmented content.
  • Using the NER model with the CRF layer removed as the word vector transformation model simplifies the structural hierarchy of the model, reduces its computation, and improves the efficiency of obtaining the demand vector.
  • the feature extraction module 102 is configured to perform feature extraction on the demand vector to obtain a demand feature, wherein the demand feature includes a definition feature, a time feature, and a feature of the number of extracted images.
  • In this embodiment of the present application, the feature extraction module 102 is specifically configured to: obtain a training demand vector and the standard demand feature corresponding to the training demand vector; perform feature extraction on the training demand vector with a convolutional neural network to obtain a training demand feature; calculate the difference value between the training demand feature and the standard demand feature; if the difference value is greater than a preset error, adjust the parameters of the convolutional neural network and perform feature extraction again; if the difference value is smaller than the preset error, confirm that training is complete and obtain the trained convolutional neural network; and perform feature extraction on the demand vector with the trained convolutional neural network to obtain the demand features.
  • The embodiment of the present application uses a loss function to calculate the difference value between the training demand feature and the standard demand feature, where Y represents the standard demand feature.
  • In practice, the demand vector may contain many useless components, such as the user name and the upload time of the image demand; therefore, the embodiment of the present application performs feature extraction on the demand vector to obtain accurate demand features, which helps accurately generate personalized images.
  • the video image acquisition module 103 is configured to acquire a target video, and extract images included in the target video according to the definition feature to obtain a video image set.
  • the video image acquisition module 103 includes an acquisition unit and an extraction unit.
  • the obtaining unit is configured to receive a target video selection instruction sent by a user terminal; obtain a code stream address of the target video according to the target video selection instruction; download the target video according to the code stream address.
  • the extraction unit is configured to extract images included in the target video according to the definition feature to obtain a video image set.
  • In detail, the target video selection instruction contains one or more pieces of information among the video name, video size, and video storage address of the target video.
  • In an optional embodiment of the present application, the code stream address of the target video is acquired according to the video name of the target video contained in the target video selection instruction.
  • In another optional embodiment of the present application, the code stream address of the target video is parsed directly from the target video selection instruction.
  • In this embodiment of the present application, the extraction unit is specifically configured to: determine a target definition for the frames of the target video according to the definition feature; judge whether the definition of each frame in the target video is the target definition; if the definition of any frame in the target video is not the target definition, convert the definition of the frames in the target video to the target definition; and determine that the converted frames constitute the video image set.
  • Preferably, if the definition of any frame in the target video is not the target definition, the embodiment of the present application uses a definition conversion tool to convert the definition of the frames in the target video to the target definition. The definition conversion tools include, but are not limited to, the WonderFox converter, the Video converter, and the like.
  • the video image screening module 104 is configured to select images from the video image set according to the time feature and the extracted image number feature to obtain a video image set to be pushed.
  • In this embodiment of the present application, the time feature includes a time condition related to a time period or a specific time point from which the user needs to extract images; the extracted-image-number feature includes the number of images to be extracted from the video image set, or a condition on the number to be extracted.
  • Preferably, the device further includes a time-series extraction module, and the time-series extraction module is configured to: before video images are screened from the video image set according to the time feature and the extracted-image-number feature, extract a time-series feature of the video image set, and sort the video images in the video image set according to the time-series feature.
  • In detail, the embodiment of the present application extracts the time-series feature b_u(t) of the video image set with a time-series feature extraction algorithm, where d_u is the u-th video image in the video image set, i is the number of video images in the video image set, t_u is the acquisition time of the u-th video image, and t_{u+1} is the acquisition time of the (u+1)-th video image.
  • After the time-series feature is obtained, all video images in the video image set are sorted according to it, so that multiple images matching the time feature can be selected quickly, which improves the efficiency of image selection.
  • the video image push module 105 is configured to push the to-be-pushed video image set by using a push queue task.
  • In this embodiment of the present application, the video image push module 105 is specifically configured to: obtain a push queue task; determine a push order according to the push queue task; and push the to-be-pushed video image set to the user according to the push order.
  • The to-be-pushed video image set may contain multiple video images to be pushed. When they are pushed in batches, pushing through the push queue task prevents the data congestion caused by pushing multiple video images at the same time, which improves the efficiency and success rate of image pushing.
  • Preferably, the push queue task is implemented with a subscriber-notification message queue (MQ). Specifically, the video images to be pushed are processed in batches by setting a time interval threshold, so that the next batch of data is processed only after the previous batch of to-be-pushed video images has been sent.
  • Queuing the notification messages on the subscriber side reduces the occupation of computing resources: a large amount of data is split and pushed in batches, which avoids the occupation and waste of computing resources caused by data congestion.
  • By converting the image demand into a demand vector and then performing feature extraction, the demand features can be obtained quickly, which helps quickly acquire personalized demand information. At the same time, after the target video is obtained, the images contained in the target video are extracted according to the definition feature to obtain the video image set, and images are selected from the video image set according to the time feature and the extracted-image-number feature, so that images matching the personalized demand information can be generated quickly from the video. Therefore, the image generation device proposed in this application can generate images from video in a way that is both efficient and personalized.
  • Referring to FIG. 3, a schematic structural diagram of an electronic device implementing the image generation method of the present application is shown.
  • the electronic device 1 may include a processor 10, a memory 11 and a bus, and may also include a computer program stored in the memory 11 and executable on the processor 10, such as an image generation program 12.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, mobile hard disk, multimedia card, card-type memory (for example: SD or DX memory, etc.), magnetic memory, magnetic disk, CD etc.
  • the memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a mobile hard disk of the electronic device 1 .
  • In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the electronic device 1.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • the memory 11 can not only be used to store application software installed in the electronic device 1 and various types of data, such as the code of the image generation program 12, etc., but also can be used to temporarily store data that has been output or will be output.
  • In some embodiments, the processor 10 may be composed of integrated circuits, for example, a single packaged integrated circuit, or multiple integrated circuits packaged with the same function or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like.
  • The processor 10 is the control core (Control Unit) of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (for example, the image generation program) and by calling the data stored in the memory 11.
  • The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to implement connection communication between the memory 11 and at least one processor 10 and the like.
  • FIG. 3 only shows an electronic device with certain components. Those skilled in the art will understand that the structure shown in FIG. 3 does not constitute a limitation on the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • For example, although not shown, the electronic device 1 may also include a power supply (such as a battery) for powering the components. Preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that the power management device implements functions such as charge management, discharge management, and power consumption management.
  • The power supply may also include one or more DC or AC power sources, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and any other components.
  • the electronic device 1 may further include various sensors, Bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • Further, the electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may further include a user interface, and the user interface may be a display (Display), an input unit (eg, a keyboard (Keyboard)), optionally, the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display may also be appropriately called a display screen or a display unit, which is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • the image generation program 12 stored in the memory 11 in the electronic device 1 is a combination of multiple instructions, and when running in the processor 10, can realize:
  • obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector; performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature; obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set; selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set; and pushing the to-be-pushed video image set by using a push queue task.
  • If the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium, and the computer-readable storage medium may be volatile or non-volatile.
  • The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
  • The computer-readable storage medium includes a storage data area and a storage program area, where the storage data area stores created data and the storage program area stores a computer program; when executed by a processor, the computer program implements the following steps: obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector; performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature; obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set; selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set; and pushing the to-be-pushed video image set by using a push queue task.
  • modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of hardware plus software function modules.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • A blockchain is essentially a decentralized database; it is a chain of data blocks generated in association with one another using cryptographic methods, and each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An image generation method, apparatus, and computer-readable storage medium, relating to image processing technology. The method comprises: obtaining an image demand, and converting the image demand into word vectors to obtain a demand vector (S1); performing feature extraction on the demand vector to obtain demand features, where the demand features include a definition feature, a time feature, and an extracted-image-number feature (S2); obtaining a target video, and extracting the images contained in the target video according to the definition feature to obtain a video image set (S3); selecting images from the video image set according to the time feature and the extracted-image-number feature to obtain a to-be-pushed video image set (S4); and pushing the to-be-pushed video image set by using a push queue task (S5). Blockchain technology is also involved, and the image demand can be stored in a blockchain node. The method can generate images from video in a way that is both efficient and personalized.

Description

图像生成方法、装置、电子设备及计算机可读存储介质
本申请要求于2020年9月3日提交中国专利局、申请号为CN202010914469.1、名称为“图像生成方法、装置、电子设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理技术领域,尤其涉及一种图像生成方法、装置、电子设备及计算机可读存储介质。
背景技术
随着网络的迅速发展,网络中随时都可能产生大量的视频和图像资讯。在进行视频转化图像时,即生成图像时,需先进行图像筛选,若不进行筛选,则视频转化成图像后将得到大量单帧图像,如果将大量的单帧图像都推送至用户端,将消耗较大的网络带宽,占用较大的存储资源,也降低服务器端和用户端的运行效率。如何筛选出符合用户的图像并推送给用户,成为了越来越重要的需求。
目前市场上主流的视频转化成图像的方法是人工筛选视频和视频中的图像,从而选择性的将视频中的一帧或多帧转化成图像。发明人意识到此种方法过于依赖于人工进行,效率低下且筛选出的图像不符合用户的个性化需求,无法达到既高效又个性化的基于视频生成图像的目的。
发明内容
本申请提供的一种图像生成方法,包括:
获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
利用推送队列任务推送所述待推送视频图像集。
本申请还提供一种图像生成装置,所述装置包括:
需求向量生成模块,用于获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
特征提取模块,用于对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
视频图像获取模块,用于获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
视频图像筛选模块,用于根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
视频图像推送模块,用于利用推送队列任务推送所述待推送视频图像集。
本申请还提供一种电子设备,所述电子设备包括:
至少一个处理器;以及,
与所述至少一个处理器通信连接的存储器;其中,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行如下步骤:
获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
利用推送队列任务推送所述待推送视频图像集。
本申请还提供一种计算机可读存储介质,包括存储数据区和存储程序区,其中,所述存储数据区存储创建的数据,所述存储程序区存储有计算机程序;其中,所述计算机程序被处理器执行时实现如下步骤:
获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
利用推送队列任务推送所述待推送视频图像集。
附图说明
图1为本申请一实施例提供的图像生成方法的流程示意图;
图2为本申请一实施例提供的图像生成装置的模块示意图;
图3为本申请一实施例提供的实现图像生成方法的电子设备的内部结构示意图;
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请实施例提供的图像生成方法的执行主体包括但不限于服务端、终端等能够被配置为执行本申请实施例提供的该方法的电子设备中的至少一种。换言之,所述图像生成方法可以由安装在终端设备或服务端设备的软件或硬件来执行,所述软件可以是区块链平台。所述服务端包括但不限于:单台服务器、服务器集群、云端服务器或云端服务器集群等。
本申请提供一种图像生成方法。参照图1所示,为本申请一实施例提供的图像生成方法的流程示意图。在本实施例中,所述图像生成方法包括:
S1、获取图像需求,将所述图像需求进行词向量转化,得到需求向量。
本申请实施例中,所述图像需求可直接由用户进行上传,也可利用具有数据调用功能的java语句从预先构建的用于存储图像需求的存储区内获取,其中,所述存储区包括但不限于mysql数据库、Oracle数据库、用户端缓存区、区块链节点。
进一步的,在本申请一优选实施例中,所述图像需求从需求分析系统获取,例如,间隔预设时间访问需求分析系统,获取需求分析系统产生的图像需求。
本申请实施例中,利用词向量转化模型将所述图像需求进行词向量转化,所述词向量转化模型为去除CRF层的NER(Named Entity Recognition,命名实体识别)模型。
本实施例中,所述去除CRF层的NER模型包括:
字/词向量层,用于将所述图像需求中的单词和字符转化为字/词向量;
Bi-LSTM层,用于将所述字/词向量进行分割,以及对所述字/词向量分割后的内容进行编码,得到所述字/词向量的编码表征。
由于所述图像需求中包含的文本可能较多,文本中的语句可能较长,因此利用字/词向量层将图像需求中的单词和字符转化所得到的字/词向量可能较长,不利于读取所述图像需求,因此本申请实施例中,进一步利用Bi-LSTM层将所述字/词向量进行分割,得到所 述编码表征(所述编码表征即所述需求向量),有利于进一步从图像需求得到的需求向量(即编码表征)中快速获取精准信息。
优选地,所述Bi-LSTM层可采用java语言将字/词向量层得到的字/词向量进行分割,并对字/词向量分割后的内容进行编码。
本申请实施例将去除CRF层的NER模型作为词向量转化模型,可简化词向量转化模型的结构层次,减少模型的计算量,提高获取需求向量的效率。
S2、对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征。
本申请实施例中,所述对所述需求向量进行特征提取,包括:
获取训练需求向量,以及所述训练需求向量对应的标准需求特征;
利用卷积神经网络对所述训练需求向量进行特征提取,得到训练需求特征;
计算所述训练需求特征和所述标准需求特征的差异值;
若所述训练需求特征与所述标准需求特征的差异值大于预设误差,则调整所述卷积神经网络的参数后,再次进行特征提取;
若所述训练需求特征与所述标准需求特征的差异值小于所述预设误差,则确认训练完成,获取训练完成的卷积神经网络;
利用所述训练完成的卷积神经网络对所述需求向量进行特征提取。
详细地,本申请实施例利用如下损失函数计算所述训练需求特征和所述标准需求特征的差异值
Figure PCTCN2021096536-appb-000001
Figure PCTCN2021096536-appb-000002
其中,
Figure PCTCN2021096536-appb-000003
表示所述训练需求特征,Y表示所述标准需求特征。
实际应用中,所述需求向量中可能存在着大量无用的向量,例如,用户名,图像需求上传时间等,因此,本申请实施例对所述需求向量进行特征提取,得到精准的需求特征,有利于精准的进行个性化图像的生成。
S3、获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集。
本申请实施例中,所述获取目标视频,包括:
接收用户端发送的目标视频选取指令;
根据所述目标视频选取指令获取目标视频的码流地址;
根据所述码流地址下载所述目标视频。
详细地,所述目标视频选取指令中含有所述目标视频的视频名称、视频大小、视频存储地址之中的一项或多项信息。
本申请一可选实施例中,所述根据所述目标视频选取指令获取目标视频的码流地址包括:
根据所述目标选取指令包含的所述目标视频的视频名称获取所述目标视频的流码地址。
本申请另一可选实施例中,所述根据所述目标视频选取指令获取目标视频的码流地址包括:
从所述目标视频选取指令中解析所述目标视频的码流地址。
本申请实施例中,所述根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集,包括:
根据所述清晰度特征确定所述目标视频中多帧图像的目标清晰度;
判断所述目标视频中任意帧图像的清晰度是否为所述目标清晰度;
若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,将所述目标视频中多帧 图像的清晰度转换为所述目标清晰度;
确定所述目标视频中多帧图像进行清晰度转换后得到的多张图像构成所述视频图像集。
较佳地,若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,本申请实施例利用清晰度转化工具将所述目标视频中多帧图像的清晰度转换为所述目标清晰度。所述清晰度转化工具包括但不限于WonderFox转换器、Video转换器等。
S4、根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集。
本申请实施例中,所述时间特征包括与用户需要提取的时间段或特定时间点有关的时间条件;所述提取图像数目特征包括需要从视频图像集中提取的图像的数量,或者待提取的数量条件。
优选地,所述根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取视频图像之前,所述方法还包括:
提取所述视频图像集的时序特征;
按照所述时序特征对所述视频图像集中的视频图像进行排序。
详细地,本申请实施例所述提取所述视频图像集的时序特征,包括:
利用如下时序特征提取算法提取所述视频图像集的时序特征b u(t):
Figure PCTCN2021096536-appb-000004
其中,d u为所述视频图像集中第u张视频图像,i为所述视频图像集中视频图像的数量,t u为所述视频图像集中第u张视频图像的获取时间,t u+1为所述视频图像集中第u+1张视频图像的获取时间。
本申请实施例中,获取所述视频图像集的时序特征后,将所述视频图像集中所有的视频图像按照所述时序特征进行排序,则在根据时间特征进行选取图像时,可以快速地选取多个符合时间特征的图像,提高图像选取的效率效率。
S5、利用推送队列任务推送所述待推送视频图像集。
详细地,所述利用推送队列任务推送所述待推送视频图像集,包括:
获取推送队列任务;
根据所述推送队列任务确定推送顺序;
根据所述推送顺序向用户推送所述待推送视频图像集。
本申请实施例中,所述待推送视频图像集可包含多张待推送视频图像,在批量推送待推送视频图像时,通过推送队列任务进行推送能够防止因同时对多份待推送视频图像进行推送操作而造成的数据拥塞,提高了对图像推送的效率和成功率。
优选地,所述推送队列任务采用订阅方通知消息列队(MQ)实现,具体的,通过设定时间的间隔阈值分批处理多份需要推送的待推送视频图像,从而确保前一批待推送视频图像送结束再继续处理后一批数据。
本申请实施例中,通过订阅方通知消息列队可以降低计算资源占用,将大量的数据进行切割并分批进行推送,避免因为数据拥塞而导致计算资源的占用与浪费。
本申请实施例通过将图像需求转化为需求向量,进而进行特征提取,能够快速的获取需求特征,有利于快速地获取个性化需求信息;同时,获取目标视频后,根据所述清晰度特征提取目标视频包含的图像,得到视频图像集,以及根据时间特征和提取图像数目特征从视频图像集中选取图像,从而能够快速的基于视频生成符合个性化需求信息的图像。因此本申请提出的图像生成方法,可以实现既高效又个性化的基于视频生成图像的目的。
如图2所示,是本申请图像生成装置的模块示意图。
本申请所述图像生成装置100可以安装于电子设备中。根据实现的功能,所述图像生 成装置100可以包括需求向量生成模块101、特征提取模块102、视频图像获取模块103、视频图像筛选模块104和视频图像推送模块105。本发所述模块也可以称之为单元,是指一种能够被电子设备处理器所执行,并且能够完成固定功能的一系列计算机程序段,其存储在电子设备的存储器中。
在本实施例中,关于各模块/单元的功能如下:
所述需求向量生成模块101,用于获取图像需求,将所述图像需求进行词向量转化,得到需求向量。
本申请实施例中,所述图像需求可直接由用户进行上传,也可利用具有数据调用功能的java语句从预先构建的用于存储图像需求的存储区内获取,其中,所述存储区包括但不限于mysql数据库、Oracle数据库、用户端缓存区、区块链节点。
进一步的,在本申请一优选实施例中,所述图像需求从需求分析系统获取,例如,间隔预设时间访问需求分析系统,获取需求分析系统产生的图像需求。
本申请实施例中,利用词向量转化模型将所述图像需求进行词向量转化,所述词向量转化模型为去除CRF层的NER(Named Entity Recognition,命名实体识别)模型。
本实施例中,所述去除CRF层的NER模型包括:
字/词向量层,用于将所述图像需求中的单词和字符转化为字/词向量;
Bi-LSTM层,用于将所述字/词向量进行分割,以及对所述字/词向量分割后的内容进行编码,得到所述字/词向量的编码表征。
由于所述图像需求中包含的文本可能较多,文本中的语句可能较长,因此利用字/词向量层将图像需求中的单词和字符转化所得到的字/词向量可能较长,不利于读取所述图像需求,因此本申请实施例中,进一步利用Bi-LSTM层将所述字/词向量进行分割,得到所述编码表征(所述编码表征即所述需求向量),有利于进一步从图像需求得到的需求向量(即编码表征)中快速获取精准信息。
优选地,所述Bi-LSTM层可采用java语言将字/词向量层得到的字/词向量进行分割,并对字/词向量分割后的内容进行编码。
本申请实施例将去除CRF层的NER模型作为词向量转化模型,可简化词向量转化模型的结构层次,减少模型的计算量,提高获取需求向量的效率。
所述特征提取模块102,用于对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征。
本申请实施例中,所述特征提取模块102具体用于:
获取训练需求向量,以及所述训练需求向量对应的标准需求特征;
利用卷积神经网络对所述训练需求向量进行特征提取,得到训练需求特征;
计算所述训练需求特征和所述标准需求特征的差异值;
若所述训练需求特征与所述标准需求特征的差异值大于预设误差,则调整所述卷积神经网络的参数后,再次进行特征提取;
若所述训练需求特征与所述标准需求特征的差异值小于所述预设误差,则确认训练完成,获取训练完成的卷积神经网络;
利用所述训练完成的卷积神经网络对所述需求向量进行特征提取,得到需求特征。
详细地,本申请实施例利用如下损失函数计算所述训练需求特征和所述标准需求特征的差异值
Figure PCTCN2021096536-appb-000005
Figure PCTCN2021096536-appb-000006
其中,
Figure PCTCN2021096536-appb-000007
表示所述训练需求特征,Y表示所述标准需求特征。
实际应用中,所述需求向量中可能存在着大量无用的向量,例如,用户名,图像需求上传时间等,因此,本申请实施例对所述需求向量进行特征提取,得到精准的需求特征, 有利于精准的进行个性化图像的生成。
所述视频图像获取模块103,用于获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集。
本申请实施例中,所述视频图像获取模块103包括获取单元和提取单元。
所述获取单元,用于接收用户端发送的目标视频选取指令;根据所述目标视频选取指令获取目标视频的码流地址;根据所述码流地址下载所述目标视频。
所述提取单元,用于根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集。
详细地,所述目标视频选取指令中含有所述目标视频的视频名称、视频大小、视频存储地址之中的一项或多项信息。
本申请一可选实施例中,所述根据所述目标视频选取指令获取目标视频的码流地址包括:
根据所述目标选取指令包含的所述目标视频的视频名称获取所述目标视频的流码地址。
本申请另一可选实施例中,所述根据所述目标视频选取指令获取目标视频的码流地址包括:
从所述目标视频选取指令中解析所述目标视频的码流地址。
本申请实施例中,所述提取单元具体用于:
根据所述清晰度特征确定所述目标视频中多帧图像的目标清晰度;
判断所述目标视频中任意帧图像的清晰度是否为所述目标清晰度;
若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,将所述目标视频中多帧图像的清晰度转换为所述目标清晰度;
确定所述目标视频中多帧图像进行清晰度转换后得到的多张图像构成所述视频图像集。
较佳地,若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,本申请实施例利用清晰度转化工具将所述目标视频中多帧图像的清晰度转换为所述目标清晰度。所述清晰度转化工具包括但不限于WonderFox转换器、Video转换器等。
所述视频图像筛选模块104,用于根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集。
本申请实施例中,所述时间特征包括与用户需要提取的时间段或特定时间点有关的时间条件;所述提取图像数目特征包括需要从视频图像集中提取的图像的数量,或者待提取的数量条件。
优选地,所述装置还包括时序提取模块,所述时序提取模块用于:
根据所述时间特征和所述提取图像数目特征从所述视频图像集筛选视频图像之前,提取所述视频图像集的时序特征;按照所述时序特征对所述视频图像集中的视频图像进行排序。
详细地,本申请实施例所述提取所述视频图像集的时序特征,包括:
利用如下时序特征提取算法提取视频图像集的时序特征b u(t):
Figure PCTCN2021096536-appb-000008
其中,d u为所述视频图像集中第u张视频图像,i为所述视频图像集中视频图像的数量,t u为所述视频图像集中第u张视频图像的获取时间,t u+1为所述视频图像集中第u+1张视频图像的获取时间。
本申请实施例中,获取所述视频图像集的时序特征后,将所述视频图像集中所有的视频图像按照所述时序特征进行排序,则在根据时间特征进行选取图像时,可以快速地选取 多个符合时间特征的图像,提高图像选取的效率效率。
所述视频图像推送模块105,用于利用推送队列任务推送所述待推送视频图像集。
详细地,所述视频图像推送模块105具体用于:
获取推送队列任务;
根据所述推送队列任务确定推送顺序;
根据所述推送顺序向用户推送所述待推送视频图像集中。
本申请实施例中,所述待推送视频图像集可包含多张待推送视频图像,在批量推送待推送视频图像时,通过推送队列任务进行推送能够防止因同时对多份待推送视频图像进行推送操作而造成的数据拥塞,提高了对图像推送的效率和成功率。
优选地,所述推送队列任务采用订阅方通知消息列队(MQ)实现,具体的,通过设定时间的间隔阈值分批处理多份需要推送的待推送视频图像,从而确保前一批待推送视频图像送结束再继续处理后一批数据。
本申请实施例中,通过订阅方通知消息列队可以降低计算资源占用,将大量的数据进行切割并分批进行推送,避免因为数据拥塞而导致计算资源的占用与浪费。
本申请实施例通过将图像需求转化为需求向量,进而进行特征提取,能够快速的获取需求特征,有利于快速地获取个性化需求信息;同时,获取目标视频后,根据所述清晰度特征提取目标视频包含的图像,得到视频图像集,以及根据时间特征和提取图像数目特征从视频图像集中选取图像,从而能够快速的基于视频生成符合个性化需求信息的图像。因此本申请提出的图像生成装置,可以实现既高效又个性化的基于视频生成图像的目的。
如图3所示,是本申请实现图像生成方法的电子设备的结构示意图。
所述电子设备1可以包括处理器10、存储器11和总线,还可以包括存储在所述存储器11中并可在所述处理器10上运行的计算机程序,如图像生成程序12。
其中,所述存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、移动硬盘、多媒体卡、卡型存储器(例如:SD或DX存储器等)、磁性存储器、磁盘、光盘等。所述存储器11在一些实施例中可以是电子设备1的内部存储单元,例如该电子设备1的移动硬盘。所述存储器11在另一些实施例中也可以是电子设备1的外部存储设备,例如电子设备1上配备的插接式移动硬盘、智能存储卡(Smart Media Card,SMC)、安全数字(Secure Digital,SD)卡、闪存卡(Flash Card)等。进一步地,所述存储器11还可以既包括电子设备1的内部存储单元也包括外部存储设备。所述存储器11不仅可以用于存储安装于电子设备1的应用软件及各类数据,例如图像生成程序12的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
所述处理器10在一些实施例中可以由集成电路组成,例如可以由单个封装的集成电路所组成,也可以是由多个相同功能或不同功能封装的集成电路所组成,包括一个或者多个中央处理器(Central Processing unit,CPU)、微处理器、数字处理芯片、图形处理器及各种控制芯片的组合等。所述处理器10是所述电子设备的控制核心(Control Unit),利用各种接口和线路连接整个电子设备的各个部件,通过运行或执行存储在所述存储器11内的程序或者模块(例如执行图像生成程序等),以及调用存储在所述存储器11内的数据,以执行电子设备1的各种功能和处理数据。
所述总线可以是外设部件互连标准(peripheral component interconnect,简称PCI)总线或扩展工业标准结构(extended industry standard architecture,简称EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。所述总线被设置为实现所述存储器11以及至少一个处理器10等之间的连接通信。
图3仅示出了具有部件的电子设备,本领域技术人员可以理解的是,图3示出的结构并不构成对所述电子设备1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
例如,尽管未示出,所述电子设备1还可以包括给各个部件供电的电源(比如电池),优选地,电源可以通过电源管理装置与所述至少一个处理器10逻辑相连,从而通过电源管理装置实现充电管理、放电管理、以及功耗管理等功能。电源还可以包括一个或一个以上的直流或交流电源、再充电装置、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。所述电子设备1还可以包括多种传感器、蓝牙模块、Wi-Fi模块等,在此不再赘述。
进一步地,所述电子设备1还可以包括网络接口,可选地,所述网络接口可以包括有线接口和/或无线接口(如WI-FI接口、蓝牙接口等),通常用于在该电子设备1与其他电子设备之间建立通信连接。
可选地,该电子设备1还可以包括用户接口,用户接口可以是显示器(Display)、输入单元(比如键盘(Keyboard)),可选地,用户接口还可以是标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在电子设备1中处理的信息以及用于显示可视化的用户界面。
应该了解,所述实施例仅为说明之用,在专利申请范围上并不受此结构的限制。
所述电子设备1中的所述存储器11存储的图像生成程序12是多个指令的组合,在所述处理器10中运行时,可以实现:
获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
利用推送队列任务推送所述待推送视频图像集。
进一步地,所述电子设备1集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中,所述计算机可读存储介质可以是易失性的,也可以是非易失性的。所述计算机可读存储介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)。
所述计算机可读存储介质包括存储数据区和存储程序区,其中,所述存储数据区存储创建的数据,所述存储程序区存储有计算机程序;其中,所述计算机程序被处理器执行时实现如下步骤:
获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
利用推送队列任务推送所述待推送视频图像集。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络 单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能模块的形式实现。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。
因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附关联图表记视为限制所涉及的权利要求。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。系统权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种图像生成方法,其中,所述方法包括:
    获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
    对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
    获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
    根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
    利用推送队列任务推送所述待推送视频图像集。
  2. 如权利要求1所述的图像生成方法,其中,所述获取目标视频,包括:
    接收用户端发送的目标视频选取指令;
    根据所述目标视频选取指令获取目标视频的码流地址;
    根据所述码流地址下载所述目标视频。
  3. 如权利要求1所述的图像生成方法,其中,所述根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集,包括:
    根据所述清晰度特征确定所述目标视频中多帧图像的目标清晰度;
    判断所述目标视频中任意帧图像的清晰度是否为所述目标清晰度;
    若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,将所述目标视频中多帧图像的清晰度转换为所述目标清晰度;
    确定所述目标视频中多帧图像进行清晰度转换后得到的多张图像构成所述视频图像集。
  4. 如权利要求1至3中任一项所述的图像生成方法,其中,所述根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像之前,所述方法还包括:
    提取所述视频图像集的时序特征;
    按照所述时序特征对所述视频图像集中的视频图像进行排序。
  5. 如权利要求4所述的图像生成方法,其中,所述提取所述视频图像集的时序特征,包括:
    利用如下时序特征提取算法提取所述视频图像集的时序特征b u(t):
    Figure PCTCN2021096536-appb-100001
    其中,d u为所述视频图像集中第u张视频图像,i为所述视频图像集中视频图像的数量,t u为所述视频图像集中第u张视频图像的获取时间,t u+1为所述视频图像集中第u+1张视频图像的获取时间。
  6. 如权利要求1至3中任一项所述的图像生成方法,其中,所述利用推送队列任务推送所述待推送视频图像集,包括:
    获取推送队列任务;
    根据所述推送队列任务确定推送顺序;
    根据所述推送顺序向用户推送所述待推送视频图像集。
  7. 如权利要求1至3中任一项所述的图像生成方法,其中,所述对所述需求向量进行特征提取,包括:
    获取训练需求向量,以及所述训练需求向量对应的标准需求特征;
    利用卷积神经网络对所述训练需求向量进行特征提取,得到训练需求特征;
    计算所述训练需求特征和所述标准需求特征的差异值;
    若所述训练需求特征与所述标准需求特征的差异值大于预设误差,则调整所述卷积神 经网络的参数后,再次进行特征提取;
    若所述训练需求特征与所述标准需求特征的差异值小于所述预设误差,则确认训练完成,获取训练完成的卷积神经网络;
    利用所述训练完成的卷积神经网络对所述需求向量进行特征提取。
  8. 一种图像生成装置,其中,所述装置包括:
    需求向量生成模块,用于获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
    特征提取模块,用于对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
    视频图像获取模块,用于获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
    视频图像筛选模块,用于根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
    视频图像推送模块,用于利用推送队列任务推送所述待推送视频图像集。
  9. 一种电子设备,其中,所述电子设备包括:
    至少一个处理器;以及,
    与所述至少一个处理器通信连接的存储器;其中,
    所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够执行如下步骤:
    获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
    对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
    获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
    根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
    利用推送队列任务推送所述待推送视频图像集。
  10. 如权利要求9所述的电子设备,其中,所述获取目标视频,包括:
    接收用户端发送的目标视频选取指令;
    根据所述目标视频选取指令获取目标视频的码流地址;
    根据所述码流地址下载所述目标视频。
  11. 如权利要求9所述的电子设备,其中,所述根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集,包括:
    根据所述清晰度特征确定所述目标视频中多帧图像的目标清晰度;
    判断所述目标视频中任意帧图像的清晰度是否为所述目标清晰度;
    若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,将所述目标视频中多帧图像的清晰度转换为所述目标清晰度;
    确定所述目标视频中多帧图像进行清晰度转换后得到的多张图像构成所述视频图像集。
  12. 如权利要求9至11中任一项所述的电子设备,其中,所述根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像之前,所述指令被所述至少一个处理器执行时还实现如下步骤:
    提取所述视频图像集的时序特征;
    按照所述时序特征对所述视频图像集中的视频图像进行排序。
  13. 如权利要求12所述的电子设备,其中,所述提取所述视频图像集的时序特征,包括:
    利用如下时序特征提取算法提取所述视频图像集的时序特征b u(t):
    Figure PCTCN2021096536-appb-100002
    其中,d u为所述视频图像集中第u张视频图像,i为所述视频图像集中视频图像的数量,t u为所述视频图像集中第u张视频图像的获取时间,t u+1为所述视频图像集中第u+1张视频图像的获取时间。
  14. 如权利要求9至11中任一项所述的电子设备,其中,所述利用推送队列任务推送所述待推送视频图像集,包括:
    获取推送队列任务;
    根据所述推送队列任务确定推送顺序;
    根据所述推送顺序向用户推送所述待推送视频图像集。
  15. 如权利要求9至11中任一项所述的电子设备,其中,所述对所述需求向量进行特征提取,包括:
    获取训练需求向量,以及所述训练需求向量对应的标准需求特征;
    利用卷积神经网络对所述训练需求向量进行特征提取,得到训练需求特征;
    计算所述训练需求特征和所述标准需求特征的差异值;
    若所述训练需求特征与所述标准需求特征的差异值大于预设误差,则调整所述卷积神经网络的参数后,再次进行特征提取;
    若所述训练需求特征与所述标准需求特征的差异值小于所述预设误差,则确认训练完成,获取训练完成的卷积神经网络;
    利用所述训练完成的卷积神经网络对所述需求向量进行特征提取。
  16. 一种计算机可读存储介质,包括存储数据区和存储程序区,其中,所述存储数据区存储创建的数据,所述存储程序区存储有计算机程序;其中,所述计算机程序被处理器执行时实现如下步骤:
    获取图像需求,将所述图像需求进行词向量转化,得到需求向量;
    对所述需求向量进行特征提取,得到需求特征,其中,所述需求特征包括清晰度特征、时间特征、提取图像数目特征;
    获取目标视频,根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集;
    根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像,得到待推送视频图像集;
    利用推送队列任务推送所述待推送视频图像集。
  17. 如权利要求16所述的计算机可读存储介质,其中,所述获取目标视频,包括:
    接收用户端发送的目标视频选取指令;
    根据所述目标视频选取指令获取目标视频的码流地址;
    根据所述码流地址下载所述目标视频。
  18. 如权利要求16所述的计算机可读存储介质,其中,所述根据所述清晰度特征提取所述目标视频包含的图像,得到视频图像集,包括:
    根据所述清晰度特征确定所述目标视频中多帧图像的目标清晰度;
    判断所述目标视频中任意帧图像的清晰度是否为所述目标清晰度;
    若所述目标视频中任意帧图像的清晰度不为所述目标清晰度,将所述目标视频中多帧图像的清晰度转换为所述目标清晰度;
    确定所述目标视频中多帧图像进行清晰度转换后得到的多张图像构成所述视频图像集。
  19. 如权利要求16至18中任一项所述的计算机可读存储介质,其中,所述根据所述时间特征和所述提取图像数目特征从所述视频图像集中选取图像之前,所述计算机程序被处理器执行时还实现如下步骤:
    提取所述视频图像集的时序特征;
    按照所述时序特征对所述视频图像集中的视频图像进行排序。
  20. 如权利要求19所述的计算机可读存储介质,其中,所述提取所述视频图像集的时序特征,包括:
    利用如下时序特征提取算法提取所述视频图像集的时序特征b u(t):
    Figure PCTCN2021096536-appb-100003
    其中,d u为所述视频图像集中第u张视频图像,i为所述视频图像集中视频图像的数量,t u为所述视频图像集中第u张视频图像的获取时间,t u+1为所述视频图像集中第u+1张视频图像的获取时间。
PCT/CN2021/096536 2020-09-03 2021-05-27 图像生成方法、装置、电子设备及计算机可读存储介质 WO2022048204A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010914469.1 2020-09-03
CN202010914469.1A CN111984822A (zh) 2020-09-03 2020-09-03 图像生成方法、装置、电子设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2022048204A1 true WO2022048204A1 (zh) 2022-03-10

Family

ID=73448677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096536 WO2022048204A1 (zh) 2020-09-03 2021-05-27 图像生成方法、装置、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111984822A (zh)
WO (1) WO2022048204A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111984822A (zh) * 2020-09-03 2020-11-24 平安科技(深圳)有限公司 图像生成方法、装置、电子设备及计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090142029A1 (en) * 2007-12-03 2009-06-04 Institute For Information Industry Motion transition method and system for dynamic images
CN110868598A (zh) * 2019-10-17 2020-03-06 上海交通大学 基于对抗生成网络的视频内容替换方法及系统
CN111126056A (zh) * 2019-12-06 2020-05-08 北京明略软件系统有限公司 一种识别触发词的方法及装置
CN110929070A (zh) * 2019-12-09 2020-03-27 北京字节跳动网络技术有限公司 图像处理方法、装置、电子设备及存储介质
CN111984822A (zh) * 2020-09-03 2020-11-24 平安科技(深圳)有限公司 图像生成方法、装置、电子设备及计算机可读存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625340A (zh) * 2022-05-11 2022-06-14 深圳市商用管理软件有限公司 基于需求分析的商用软件研发方法、装置、设备及介质
CN116540792A (zh) * 2023-06-25 2023-08-04 福建天甫电子材料有限公司 一种草酸系ito蚀刻液制备的流量自动化控制方法及系统
CN116540792B (zh) * 2023-06-25 2023-09-12 福建天甫电子材料有限公司 一种草酸系ito蚀刻液制备的流量自动化控制方法及系统

Also Published As

Publication number Publication date
CN111984822A (zh) 2020-11-24

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application  (Ref document number: 21863274; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase  (Ref country code: DE)
122  Ep: pct application non-entry in european phase  (Ref document number: 21863274; Country of ref document: EP; Kind code of ref document: A1)