CN111385489A - Method, device and equipment for manufacturing short video cover and storage medium - Google Patents
- Publication number
- CN111385489A CN111385489A CN202010203531.6A CN202010203531A CN111385489A CN 111385489 A CN111385489 A CN 111385489A CN 202010203531 A CN202010203531 A CN 202010203531A CN 111385489 A CN111385489 A CN 111385489A
- Authority
- CN
- China
- Prior art keywords
- position information
- image
- target object
- preview area
- reference line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
Abstract
The application discloses a method, an apparatus, a device, and a storage medium for making a short video cover, in the field of image processing technology. The specific implementation scheme is as follows: acquire an image for making a short video cover; identify the position information of a target object in the image and the position information of the target object's layout reference line in the image; and display the target object and the image content around it in the cover preview area according to the position information of the target object and of the layout reference line in the image. In the embodiments of the application, the target object and its surrounding image content are automatically displayed in the preview area using the layout reference line as a position reference, achieving a translation-like effect without the image having to be dragged manually; moreover, the layout reference line allows the target object to be displayed at an attractive, well-balanced position in the cover preview area, making cover generation intelligent.
Description
Technical Field
The present application relates to computer technology, and in particular, to the field of image processing technology.
Background
When making a cover for multimedia such as an electronic book or a video, a user generally first selects an image suitable for the cover and drags it into the cover preview area. When the user feels that the part of the image shown in the preview area is appropriate, the user clicks to confirm, and that part of the image is used as the cover.
In practice, since the cover size is limited while the image is generally larger, the user has to drag the image manually within the preview area until the desired part of the image is displayed. Obviously, manual dragging is not intelligent and the operation is cumbersome.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for making a short video cover, so as to automate and add intelligence to the process of making short video covers.
In a first aspect, an embodiment of the present application provides a method for manufacturing a short video cover, including:
acquiring an image for making a short video cover;
identifying position information of a target object in the image and position information of a layout reference line of the target object in the image;
calculating the relative position information of the target object relative to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image;
determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area;
and displaying the target object and the image content around the target object in the front cover preview area according to the target position information.
By identifying the position information of the target object in the image and the position information of the target object's layout reference line in the image, and calculating the relative position of the two, the position at which the target object is distributed about the layout reference line is obtained. The cover preview area in this embodiment has a fixed layout reference line; on that premise, the target position of the target object in the cover preview area can be determined from the relative position information and the position information of the layout reference line in the cover preview area, and the target object and its surrounding image content are then automatically displayed in the cover preview area according to the target position information. In this embodiment, the target object and the surrounding image content are automatically displayed in the preview area using the layout reference line as a position reference, achieving a translation-like effect without the image having to be dragged manually; moreover, the layout reference line allows the target object to be displayed at an attractive, well-balanced position in the cover preview area, making cover generation intelligent.
Optionally, the identifying the position information of the target object in the image includes:
identifying position information of key points of a target object in the image;
the calculating the relative position information of the target object relative to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image includes:
calculating the relative position information of the key points relative to the layout reference lines in the image according to the position information of the key points and the position information of the layout reference lines in the image;
determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area, including:
and determining the target position information of the key points in the front cover preview area according to the relative position information of the key points relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
In an optional implementation of the above application, the key points are visual points of interest on the target object; a small number of key points, rather than all pixels of the target object, then represent the target object in subsequent calculations, which reduces the amount of computation while still representing the position of the target object accurately.
Optionally, identifying the position information of the key point of the target object in the image and the position information of the layout reference line of the target object in the image includes:
inputting the image into a deep neural network model to obtain position information of key points of a target object in the image output by the deep neural network model and position information of a layout reference line of the target object in the image;
the deep neural network model is trained by adopting image samples marked with key points and layout reference lines.
In an optional implementation of the above application, the deep neural network model can accurately identify the position information of the key points and of the layout reference line. The layout reference line in the image is therefore not fixed but reflects the actual distribution of the target object's key points, so that the key points and their surrounding pixels can be displayed in the preview area according to the layout reference line.
Optionally, displaying the target object and image content around the target object in the front page preview area according to the target position information, including:
calculating a position offset vector of the key point according to the target position information of the key point in the front page preview area and the position information of the key point in the image;
providing the pixel values of the key points into the cover preview area for display according to the position offset vector;
and providing the pixel values of the pixel points around the key point into the cover preview area for displaying.
In an optional implementation of the above application, the pixel values of the key points are "translated" into the cover preview area according to the position offset vector; this translation-like display method improves display efficiency. Displaying the key points first, using position offset vectors obtained from the layout reference line, and then displaying the pixels around them ensures that the key points are displayed at well-laid-out positions in the cover preview area. Moreover, when there are at least two key points, the pixel values of the pixels around each key point can be provided to the cover preview area in parallel, which further improves display efficiency.
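The offset-vector translation described above can be sketched in a few lines. This is an illustrative Python sketch, not the patent's implementation; all function names are hypothetical, and points are simple (x, y) tuples:

```python
def offset_vector(target_pos_preview, pos_in_image):
    """Position offset vector of a key point: the shift that maps its
    coordinates in the image to its target coordinates in the preview area."""
    return (target_pos_preview[0] - pos_in_image[0],
            target_pos_preview[1] - pos_in_image[1])

def translate(point, offset):
    """Apply the offset vector. Surrounding pixels reuse their key point's
    vector, so their positions relative to the key point are preserved."""
    return (point[0] + offset[0], point[1] + offset[1])
```

For example, a key point at (120, 80) in the image with target position (50, 60) in the preview area gets the offset (-70, -20), and a neighboring pixel at (121, 81) lands at (51, 61), keeping the same relative layout.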
Optionally, the number of the key points is at least two;
before the providing, according to the position offset vector, the pixel values of the key points into the cover preview area for display, further comprising:
storing the pixel value of the non-overlapped part of each key point and other key points into a first storage table;
storing pixel values of an overlapping portion between the key points in a second storage table;
providing the pixel values of the key points into the cover preview area for display according to the position offset vector then includes:
reading the pixel value of the non-overlapped part of each key point and other key points from the first storage table, and providing the pixel value of the non-overlapped part of each key point and other key points into the front cover preview area for displaying according to the offset vector corresponding to each key point;
and reading the pixel values of the overlapping parts of the key points from the second storage table, and providing the pixel values of the overlapping parts of the key points into the front cover preview area for displaying according to the offset vectors corresponding to the overlapped key points.
In an optional implementation of the above application, the key points are divided into non-overlapping and overlapping portions, and the pixel values of the two portions are stored in separate storage tables. When the pixel values of the key points are subsequently displayed, they are read directly from the storage tables, which improves display efficiency. Applying the same offset-vector-based display strategy to both the non-overlapping and the overlapping portions keeps the key points in the image consistent with the key points in the cover preview area.
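The two-table split can be sketched as follows. This is a minimal illustration of the idea, not the patent's implementation; the data layout (a dict from key-point id to its pixels) is a hypothetical choice made only for the sketch:

```python
from collections import defaultdict

def build_tables(keypoint_pixels):
    """Split key-point pixels into two storage tables.

    keypoint_pixels maps key-point id -> {(x, y): pixel value}.
    Pixels claimed by exactly one key point go into table1 (the
    non-overlapping part); pixels shared by several key points go
    into table2 (the overlapping part).
    """
    owners = defaultdict(list)
    for kp, pixels in keypoint_pixels.items():
        for pos in pixels:
            owners[pos].append(kp)
    table1, table2 = {}, {}
    for kp, pixels in keypoint_pixels.items():
        for pos, val in pixels.items():
            (table1 if len(owners[pos]) == 1 else table2)[pos] = val
    return table1, table2
```

At display time both tables are read back and each pixel is shifted by the offset vector of the key point(s) it belongs to, so a shared pixel is written once rather than once per key point.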
Optionally, providing pixel values of pixel points around the key point to the front cover preview area for displaying includes:
calculating the relative position information of pixel points around the key point relative to the key point in the image;
and providing pixel values of the pixel points around the key point into the cover preview area for displaying according to the relative position information of the pixel points around the key point relative to the key point and the position offset vector.
In an optional implementation of the above application, the pixels around a key point are "translated" into the cover preview area according to their position relative to the key point and the key point's position offset vector; this translation-like display method improves display efficiency, while the relative positions of the key points and their surrounding pixels remain the same in the image and in the cover preview area.
In a second aspect, an embodiment of the present application further provides a device for making a short video cover, including:
the acquisition module is used for acquiring an image for manufacturing a short video cover;
the identification module is used for identifying the position information of a target object in the image and the position information of a layout reference line of the target object in the image;
a calculation module, configured to calculate, according to the position information of the target object and the position information of the layout reference line in the image, relative position information of the target object with respect to the layout reference line in the image;
the determining module is used for determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area;
and the display module is used for displaying the target object and the image content around the target object in the front cover preview area according to the target position information.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute a method for making a short video cover as provided in the embodiments of the first aspect.
In a fourth aspect, embodiments of the present application further provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a method for making a short video cover as provided in the first aspect.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1a is a flowchart of a method for making a short video cover according to a first embodiment of the present application;
FIG. 1b is a schematic diagram of a layout reference line in an image according to one embodiment of the present application;
FIG. 1c is a schematic diagram of a layout of reference lines in another image according to one embodiment of the present application;
FIG. 2 is a flowchart of a method for making a short video cover according to the second embodiment of the present application;
- FIG. 3a is a flowchart of a method for making a short video cover according to a third embodiment of the present application;
- FIG. 3b is a schematic diagram of a short video cover production process according to a third embodiment of the present application;
fig. 4 is a structural diagram of a short video cover manufacturing apparatus according to a fourth embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing a method for making a short video cover according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example one
Fig. 1a is a flowchart of a method for making a short video cover according to a first embodiment of the present application. The method is performed by a device for making a short video cover; the device is implemented in software and/or hardware and is specifically configured in an electronic device with a certain data computation capability.
A method for making a short video cover as shown in fig. 1a includes:
and S110, acquiring an image for making a short video cover.
In this embodiment, the short video may be a vlog; "vlog" is short for "video blog" or "video weblog", a variant derived from "blog". Before a short video cover is made, an image is selected: for example, when making a video cover, the first image frame of the video is extracted; for another example, an image for making the cover is acquired in response to the user's photographing operation or selection of a photo from an album.
And S120, identifying the position information of the target object in the image and the position information of the layout reference line of the target object in the image.
The target object in the image may be a person, an animal, an object, and so on. Considering that a short video cover usually displays mainly one target object, the position information of one target object in the image is optionally identified. Specifically, a person in the image is identified preferentially, and an animal or object is identified only when no person is recognized; if multiple people are identified, one of them is optionally taken as the target object. Optionally, the position information of the target object may be the position information of the target object's bounding rectangle, such as the center coordinates of the bounding rectangle in the image coordinate system. The image coordinate system takes the lower-left corner of the image as the origin, the horizontal edge of the image as the X axis, and the vertical edge as the Y axis.
The layout reference line (also called a composition reference line) may be a rule-of-thirds (nine-square) grid, golden-section lines, cross lines, and so on, representing layout information in the image. In a photographing scene, layout reference lines guide the user in laying out objects so that the composition of the image looks attractive and balanced. In this embodiment, given an existing image containing the target object, the position information of the target object's layout reference line in the image is identified in reverse; the layout reference line thus characterizes the distribution position of the target object in the image. Fig. 1b is a schematic diagram of a layout reference line in an image in the first embodiment of the present application, and Fig. 1c is a schematic diagram of a layout reference line in another image in the first embodiment of the present application. As shown in Fig. 1b, if the target object is on the left side of the image, the layout reference line (indicated by a dotted line) is also on the left side of the image and matches the target object's distribution position; as shown in Fig. 1c, if the target object is on the right side of the image, the layout reference line (indicated by a dotted line) is also on the right side of the image and matches the target object's distribution position. Optionally, the position information of a layout reference line may be represented by the line's expression in the image coordinate system; when the layout reference lines have intersections, it may be represented by the coordinates of the intersection points in the image coordinate system.
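As a concrete illustration of the rule-of-thirds grid and its intersection-point representation (Python is used purely for illustration; the patent specifies no code, and the lower-left origin follows the coordinate convention described above):

```python
def thirds_lines(width, height):
    """Rule-of-thirds (nine-square-grid) reference lines for an image of the
    given size: x-positions of the two vertical lines and y-positions of the
    two horizontal lines, with the origin at the lower-left corner."""
    verticals = [width / 3, 2 * width / 3]
    horizontals = [height / 3, 2 * height / 3]
    return verticals, horizontals

def thirds_intersections(width, height):
    """The four intersection points of the grid, one common way to encode
    the position information of the reference lines."""
    vs, hs = thirds_lines(width, height)
    return [(x, y) for x in vs for y in hs]
```

For a 300x150 image this yields vertical lines at x = 100 and x = 200, horizontal lines at y = 50 and y = 100, and four intersection points.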
And S130, calculating the relative position information of the target object relative to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image.
Specifically, the position information of the layout reference line is subtracted from the position information of the target object to obtain the relative position information.
Illustratively, if the position information of the target object is the center coordinates of its bounding rectangle in the image coordinate system and the position information of the layout reference line is represented by intersection coordinates in the image coordinate system, the difference between the center coordinates and the intersection coordinates gives the relative position information.
Illustratively, if the position information of the target object is the center coordinates of its bounding rectangle in the image coordinate system and the position information of the layout reference line is represented by the line's expression in the image coordinate system, compute the foot of the perpendicular from the center onto the layout reference line, and the difference between the center coordinates and the foot coordinates gives the relative position information.
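The foot-of-perpendicular variant can be sketched as follows. This is an illustrative Python sketch under the conventions above (points as (x, y) tuples, the reference line given by two points on it), not the patent's implementation:

```python
def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b,
    all in the image coordinate system."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    # Project (p - a) onto the line direction (b - a).
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

def relative_position(center, foot):
    """Relative position of the object's center with respect to the
    reference line: the coordinate difference to the foot point."""
    return (center[0] - foot[0], center[1] - foot[1])
```

For a vertical reference line at x = 100 and an object centered at (130, 50), the foot is (100, 50) and the relative position is (30, 0): the object sits 30 pixels to the right of the line.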
S140, determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
In this embodiment, the cover preview area may be circular or rectangular, and smaller than the image. This embodiment automatically displays part of the image content in the cover preview area, and the image in the cover preview area is used as the short video cover, so that the short video cover is made automatically.
Since the target object usually carries a certain semantic meaning, this embodiment displays the target object in the cover preview area, specifically at a target position in the cover preview area. The cover preview area contains a fixed layout reference line, which may be displayed or hidden; it is of the same type as the layout reference line in the image, for example a nine-square grid with cells of equal size. The position information of the layout reference line in the cover preview area can be represented by the line's expression in the preview-area coordinate system, or by intersection coordinates. The preview-area coordinate system takes the lower-left corner or the center of the preview area as the origin, the horizontal direction as the X axis, and the vertical direction as the Y axis.
In this embodiment, the relative position of the target object with respect to the layout reference line in the image is kept consistent with its relative position with respect to the layout reference line in the cover preview area, which ensures attractiveness and balance in the cover preview area. Adding the target object's relative position with respect to the layout reference line in the image to the position information of the layout reference line in the cover preview area gives the position information of the target object in the cover preview area, i.e., the target position information.
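The addition described above is a one-line computation. An illustrative Python sketch (hypothetical names; the reference line's position in the preview area is taken as a point, e.g. a grid intersection):

```python
def target_position(relative, line_pos_in_preview):
    """Target position of the object in the cover preview area: the position
    of the fixed layout reference line in the preview-area coordinate system
    plus the relative offset measured in the image."""
    return (line_pos_in_preview[0] + relative[0],
            line_pos_in_preview[1] + relative[1])
```

For example, with the fixed reference point at (80, 120) in the preview area and a relative position of (30, 0) from the image, the object is placed at (110, 120), preserving its offset from the reference line.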
And S150, displaying the target object and the image content around the target object in the front cover preview area according to the target position information.
The target position information is the position of the target object in the cover preview area, and the pixel values of the target object are provided at the target position for display. In order to "translate" the whole partial image that will become the cover into the cover preview area, the image content around the target object must also be displayed there; specifically, the pixel values of that content are provided around the target position in the cover preview area according to the relative position relationship between the content and the target object.
It should be noted that, in order to fill the cover preview area and ensure the completeness of the cover, the combined size of the target object and the image content around it should match the size of the cover preview area. Accordingly, after the target object is recognized, the size of the image content around it is determined from the size of the cover preview area and the size of the target object, and the image content within the determined size is displayed in the cover preview area.
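One plausible way to realize this sizing rule, for a rectangular preview area, is to derive a crop window in the image that exactly fills the preview and contains the object at its target position. This is a hedged sketch under those assumptions (axis-aligned rectangles, lower-left origin), not the patent's stated algorithm:

```python
def crop_window(obj_center_img, obj_target_preview,
                preview_w, preview_h, img_w, img_h):
    """Crop window (lower-left corner plus size) in the image such that the
    target object lands at its target position in the preview area and the
    window exactly fills the preview. Clamped to stay inside the image."""
    x0 = obj_center_img[0] - obj_target_preview[0]
    y0 = obj_center_img[1] - obj_target_preview[1]
    x0 = min(max(x0, 0), img_w - preview_w)
    y0 = min(max(y0, 0), img_h - preview_h)
    return (x0, y0, preview_w, preview_h)
```

For an 800x600 image, a 300x200 preview, an object centered at (400, 300), and a target position of (100, 100), the crop window starts at (300, 200), so the window plus its contents fills the preview completely.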
The method identifies the position information of the target object in the image and the position information of the target object's layout reference line in the image, and calculates the relative position of the two, thereby obtaining the position at which the target object is distributed about the layout reference line. The cover preview area in this embodiment has a fixed layout reference line; on that premise, the target position of the target object in the cover preview area can be determined from the relative position information and the position information of the layout reference line in the cover preview area, and the target object and its surrounding image content are then automatically displayed in the cover preview area according to the target position information. In this embodiment, the target object and the surrounding image content are automatically displayed in the preview area using the layout reference line as a position reference, achieving a translation-like effect without manual dragging; moreover, the layout reference line allows the target object to be displayed at an attractive, well-balanced position in the cover preview area, making cover generation intelligent.
Example two
Fig. 2 is a flowchart of a method for making a short video cover in the second embodiment of the present application, further optimized on the basis of the above embodiment. Specifically, the operation "identifying the position information of the target object in the image and the position information of the target object's layout reference line in the image" is refined into "identifying the position information of the key points of the target object in the image and the position information of the key points' layout reference line in the image", so that the position calculation uses the key points instead of the whole target object.
The method for making the short video cover as shown in fig. 2 comprises the following steps:
S210, acquiring an image for making a short video cover.
And S220, identifying the position information of the key points of the target object in the image and the position information of the layout reference line of the target object in the image.
Illustratively, the target object is a person, and the key points include the head, neck, shoulders, chest, joints of the limbs, and the like. Key points are also visual points of interest.
Optionally, a deep neural network model is trained in advance with image samples. If an image sample comes from an image that the user shot with a layout reference line enabled, the sample already carries the layout reference line; the key points are then annotated on the image sample. After training is finished, the image for making the short video cover is input into the deep neural network model, which outputs the position information of the key points of the target object in the image and the position information of the layout reference line of the target object in the image.
And S230, calculating the relative position information of the key points relative to the layout reference lines in the image according to the position information of the key points and the position information of the layout reference lines in the image.
Similarly to the above embodiment, the difference between the position information of a key point and the position information of the layout reference line is taken to obtain the relative position information. If there are at least two key points, the difference between each key point's position information and the layout reference line's position information is taken, yielding the relative position information of each key point with respect to the layout reference line in the image.
Illustratively, the position information of the layout reference line is represented by intersection point coordinates in an image coordinate system, and the difference between the coordinates of each key point and the intersection point coordinates is calculated to obtain the relative position information of each key point relative to the layout reference line in the image.
Illustratively, the position information of the layout reference line is represented by an expression of the layout reference line in the image coordinate system; the coordinates of the foot of the perpendicular from each key point onto the layout reference line are calculated, and the difference between each key point's coordinates and the corresponding foot coordinates gives the relative position information of each key point with respect to the layout reference line in the image.
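For this second variant, the foot of the perpendicular can be computed when the reference line is given in the implicit form a·x + b·y + c = 0. This is a sketch; the coefficient form and the function names are assumptions, since the embodiment does not fix the expression of the line:

```python
def foot_of_perpendicular(point, a, b, c):
    """Foot of the perpendicular from `point` onto the line a*x + b*y + c = 0."""
    x0, y0 = point
    t = (a * x0 + b * y0 + c) / (a * a + b * b)
    return (x0 - a * t, y0 - b * t)

def relative_to_line(point, a, b, c):
    """Difference between a key point and its foot on the layout reference line,
    i.e. the relative position information described in the text."""
    fx, fy = foot_of_perpendicular(point, a, b, c)
    return (point[0] - fx, point[1] - fy)
```

For a vertical reference line x = 100 (a = 1, b = 0, c = -100), a key point at (130, 50) has its foot at (100, 50), so its relative position information is (30, 0): the key point lies 30 px to the right of the line.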
S240, determining target position information of the key points in the front cover preview area according to the relative position information of the key points relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
Specifically, the target position information of each key point in the front cover preview area is determined according to the relative position information of that key point relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
And S250, displaying the target object and the image content around the target object in the front cover preview area according to the target position information.
Specifically, according to the target position information of each key point in the front cover preview area, the pixel value of the key point is provided at the target position for display.
Further, according to the relative position relationship between the pixels around each key point and that key point, the pixel values of those surrounding pixels are provided around the target position of the key point in the front cover preview area for display.
In this embodiment, the key points are visual points of interest in the target object, and a small number of key points of interest are used to represent the target object in subsequent calculation instead of all the pixel points of the target object, which reduces the amount of calculation; meanwhile, the key points accurately represent the position of the target object.
Furthermore, the position information of the key points and the layout reference line can be accurately identified with the deep neural network model. Because the layout reference line in the image is not fixed, it reflects how the key points of the target object are actually laid out, and the key points together with their surrounding pixel points are then displayed according to the fixed layout reference line in the preview area.
EXAMPLE III
Fig. 3a is a flowchart of a method for making a short video cover according to a third embodiment of the present application, and fig. 3b is a schematic diagram of a process for making a short video cover according to the third embodiment. The present embodiment is further optimized based on the above embodiments. Specifically, the operation "displaying the target object and the image content around the target object in the front cover preview area according to the target position information" is refined to: calculating the position offset vector of a key point according to the target position information of the key point in the front cover preview area and the position information of the key point in the image; providing the pixel values of the key points into the front cover preview area for display according to the position offset vector; and providing the pixel values of the pixel points around the key points into the front cover preview area for display. This provides a display method in the front cover preview area.
The method for making the short video cover shown in fig. 3a includes:
S310, acquiring an image for making a short video cover.
The first image from the left of fig. 3b shows an image comprising a person.
And S320, identifying the position information of the key points of the target object in the image and the position information of the layout reference line of the target object in the image.
The second image from the left of fig. 3b shows 5 key points at the head, neck, shoulders and chest of the person in the image, and the position information of the target object on the nine-square (rule-of-thirds) grid of layout reference lines (indicated by dashed lines).
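Assuming the grid shown in fig. 3b is a standard rule-of-thirds (nine-square) grid (an assumption; the embodiment fixes the grid only in the figure), the positions of the layout reference lines and their intersection points follow directly from the image size:

```python
def thirds_grid(width, height):
    """Vertical and horizontal rule-of-thirds reference lines and their
    four intersection points for an image of the given size."""
    vx = (width / 3, 2 * width / 3)       # x-coordinates of the two vertical lines
    hy = (height / 3, 2 * height / 3)     # y-coordinates of the two horizontal lines
    intersections = [(x, y) for x in vx for y in hy]
    return vx, hy, intersections
```

The four intersection points are natural reference points for the "intersection point coordinates" variant of the relative-position calculation described in the second embodiment.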
S330, calculating the relative position information of the key points relative to the layout reference lines in the image according to the position information of the key points and the position information of the layout reference lines in the image.
S340, determining target position information of the key points in the front cover preview area according to the relative position information of the key points relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
Details of S310-S340 are described in the above embodiments, and are not described herein.
And S350, calculating a position offset vector of the key point according to the target position information of the key point in the front cover preview area and the position information of the key point in the image.
And S360, providing the pixel values of the key points into the front cover preview area for displaying according to the position offset vector.
And S370, providing the pixel values of the pixel points around the key point into the cover preview area for displaying.
The third image from the left of fig. 3b shows the pixel values of the key points being provided into the cover preview area for display, and the fourth image shows the pixel values of the pixel points around the key points being provided into the cover preview area for display. Note that the pixel points around a key point include not only pixel points of the target object but also pixel points around the target object.
At S350, the position information of each key point in the image is subtracted from its target position information in the front cover preview area to obtain the position offset vector of that key point. For example, if the target position information of a key point is (70, 80) and its position information in the image is (30, 50), the position offset vector is (40, 30). It should be noted that, in general, the offset vectors of all key points are identical, which ensures that the relative positions of the key points in the front cover preview area are consistent with those in the image.
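The offset-vector arithmetic of S350 can be sketched as follows (illustrative names; the consistency check simply restates the observation above that all key points share one offset vector):

```python
def offset_vector(target_pos, image_pos):
    """Position offset vector of a key point: its target position in the
    cover preview area minus its position in the image."""
    return (target_pos[0] - image_pos[0], target_pos[1] - image_pos[1])

# The worked example from the text: target (70, 80), image position (30, 50).
v = offset_vector((70, 80), (30, 50))     # (40, 30)

def offsets_consistent(targets, sources):
    """True if all key points share a single offset vector, so their relative
    layout in the preview area matches their layout in the image."""
    vectors = {offset_vector(t, s) for t, s in zip(targets, sources)}
    return len(vectors) == 1
```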
Next, at S360, the position information of a key point in the image is added to the position offset vector to obtain its target position, and the pixel value of the key point is written at that target position, so that the key point is "translated" into the front cover preview area for display. After the pixel values of the key points are provided into the front cover preview area for display, the pixel values of the pixel points around the key points are then provided into the front cover preview area for display.
A key point covers many pixel points, and if there are at least two key points distributed densely, pixel points may overlap between key points. Based on this, after S320 and before S350, the method further includes: storing the pixel values of the part of each key point that does not overlap other key points into a first storage table; and storing the pixel values of the overlapping portions between key points into a second storage table. The storage format of the first storage table is the key point name, the pixel position and the pixel value of the non-overlapping part of that key point; the storage format of the second storage table is the names of the overlapping key points, the pixel position and the pixel value of the overlapping part.
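The two storage tables can be sketched as dictionaries keyed the way the storage formats describe. The concrete key-point names and pixel values below are hypothetical:

```python
# First storage table: key-point name -> {pixel position: pixel value}
# for the pixels of that key point that do not overlap other key points.
first_table = {
    "head": {(30, 50): 200, (31, 50): 198},
    "neck": {(30, 70): 120},
}

# Second storage table: tuple of overlapping key-point names ->
# {pixel position: pixel value} for the shared (overlapping) pixels.
second_table = {
    ("head", "neck"): {(30, 60): 160},
}

def lookup_pixels(name):
    """All stored pixels belonging to one key point, read from both tables,
    so each overlapping pixel is stored (and later displayed) only once."""
    pixels = dict(first_table.get(name, {}))
    for names, shared in second_table.items():
        if name in names:
            pixels.update(shared)
    return pixels
```

Splitting the storage this way means an overlapping pixel appears in exactly one table entry, which is what allows S360 to display it once using the offset vector of any one of the overlapping key points.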
Providing pixel values of the key points to a front cover preview area for display according to the position offset vector, wherein the pixel values comprise: reading the pixel value of the non-overlapped part of each key point and other key points from the first storage table, and providing the pixel value of the non-overlapped part of each key point and other key points into the front cover preview area for displaying according to the offset vector corresponding to each key point; and reading the pixel values of the overlapped parts among the key points from the second storage table, and providing the pixel values of the overlapped parts among the key points into the front cover preview area for displaying according to the offset vectors corresponding to the overlapped key points. The offset vector corresponding to the overlapped key point may be an offset vector corresponding to any overlapped key point.
At S370, the relative position information of the pixel points around the key points in the image with respect to the key points is calculated. When there are at least two key points, any one of them may be designated, and the relative position information of the pixel points around all the key points is calculated with respect to the designated key point, without having to distinguish which key point a given pixel point surrounds. Then, according to this relative position information and the position offset vector, the pixel values of the pixel points around the key points are provided into the cover preview area for display. Specifically, the relative position information of each surrounding pixel point with respect to the key point is added to the position offset vector to obtain the position offset vectors of all the surrounding pixel points, and the pixel values of the surrounding pixel points are then provided into the cover preview area for display according to these vectors.
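The translation of surrounding pixels in S370 can be sketched as follows (illustrative; a single shared offset vector is assumed, per the note in S350):

```python
def translate_surrounding(pixels, keypoint, offset):
    """Map each surrounding pixel into the cover preview area by adding its
    position relative to the key point to the key point's target position.
    `pixels` maps image positions to pixel values."""
    kx, ky = keypoint
    ox, oy = offset
    placed = {}
    for (px, py), value in pixels.items():
        rel = (px - kx, py - ky)                       # relative to the key point
        target = (kx + ox + rel[0], ky + oy + rel[1])  # key point target + offset
        placed[target] = value
    return placed
```

Because the key point's target position is its image position plus the offset vector, each surrounding pixel ends up shifted by the same offset vector, so no conversion between the image coordinate system and the preview-area coordinate system is needed, as the text notes below.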
In the embodiment, the pixel values of the key points are translated to the cover preview area according to the position offset vector, and the translation-like display method is beneficial to improving the display efficiency; displaying key points by adopting position offset vectors obtained based on the layout reference lines, and then displaying pixel points around the key points, wherein the key points are preferentially ensured to be displayed at the position with attractive layout in the front cover preview area; moreover, when the number of the key points is at least two, the pixel values of the pixel points around each key point can be provided in parallel to the front cover preview area for display, and the display efficiency is improved.
Further, in order to avoid repeated display, the key points are divided into non-overlapped parts and overlapped parts with other key points, and pixel values of the two parts are respectively stored in a storage table; when the pixel values of the key points are displayed subsequently, the pixel values are directly read from the storage table, so that the display efficiency is improved; and simultaneously adopting a display strategy based on the offset vector for the non-overlapped part and the overlapped part, so that the key points in the image are consistent with the key points in the front cover preview area.
Further, pixel points around the key points are translated into the front cover preview area for display according to the relative position information relative to the key points and the position offset vectors of the key points, conversion from an image coordinate system to a preview area coordinate system is not needed, and the translation-like display method is beneficial to improving the display efficiency; meanwhile, the relative positions of the key points and the surrounding pixel points in the image are kept consistent with those in the cover preview area.
Example four
Fig. 4 is a structural diagram of an apparatus for creating a short video cover according to a fourth embodiment of the present invention, which is implemented by software and/or hardware and is specifically configured in an electronic device with certain data calculation capability, and is suitable for a case where an image is used to automatically create a short video cover in a cover preview area.
The apparatus 400 for making a short video cover as shown in fig. 4 comprises: an acquisition module 401, an identification module 402, a calculation module 403, a determination module 404 and a display module 405; wherein,
an obtaining module 401, configured to obtain an image for making a short video cover;
an identifying module 402, configured to identify position information of a target object in an image and position information of a layout reference line of the target object in the image;
a calculating module 403, configured to calculate relative position information of the target object with respect to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image;
a determining module 404, configured to determine target position information of the target object in the front cover preview area according to the relative position information of the target object with respect to the layout reference line in the image and the position information of the layout reference line in the front cover preview area;
and a display module 405, configured to display the target object and the image content around the target object in the front cover preview area according to the target position information.
By identifying the position information of a target object in an image and the position information of a layout reference line of the target object in the image, and calculating the relative position information between the target object and the layout reference line, the position of the target object distributed on the layout reference line is obtained. The front cover preview area in this embodiment has a fixed layout reference line; on this premise, the target position information of the target object in the front cover preview area can be determined according to the relative position information and the position information of the layout reference line in the front cover preview area, and the target object and the image content around the target object are then automatically displayed in the front cover preview area according to the target position information. In this embodiment, the target object and its surrounding image content are automatically displayed in the preview area with the layout reference line as the position reference, achieving a translation-like effect without manually dragging the image; moreover, the target object can be displayed at an aesthetically pleasing and well-balanced position in the front cover preview area according to the layout reference line, realizing intelligent cover generation.
Further, the identifying module 402 is specifically configured to: identifying position information of key points of a target object in an image and position information of a layout reference line of the target object in the image; the calculation module 403 is specifically configured to: calculating the relative position information of the key points relative to the layout reference lines in the image according to the position information of the key points and the position information of the layout reference lines in the image; the determining module 404 is specifically configured to: and determining the target position information of the key points in the front cover preview area according to the relative position information of the key points relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
Further, the identifying module 402 is specifically configured to: inputting the image into a deep neural network model to obtain position information of key points of a target object in the image output by the deep neural network model and position information of a layout reference line of the target object in the image; the deep neural network model is trained by adopting image samples marked with key points and layout reference lines.
Further, the display module 405 includes a position offset vector calculation unit, a first display unit, and a second display unit. The position offset vector calculation unit is used for calculating the position offset vector of the key point according to the target position information of the key point in the front cover preview area and the position information of the key point in the image, the first display unit is used for providing the pixel value of the key point into the front cover preview area for display according to the position offset vector, and the second display unit is used for providing the pixel value of the pixel point around the key point into the front cover preview area for display.
Further, the number of the key points is at least two; the apparatus also includes a storage module to: storing the pixel value of the non-overlapped part of each key point and other key points into a first storage table; storing pixel values of an overlapping portion between the key points in a second storage table; the first display unit is specifically configured to: reading the pixel value of the non-overlapped part of each key point and other key points from the first storage table, and providing the pixel value of the non-overlapped part of each key point and other key points into the front cover preview area for displaying according to the offset vector corresponding to each key point; and reading the pixel values of the overlapped parts among the key points from the second storage table, and providing the pixel values of the overlapped parts among the key points into the front cover preview area for displaying according to the offset vectors corresponding to the overlapped key points.
Further, the second display unit is specifically configured to: calculating the relative position information of pixel points around the key points in the image relative to the key points; and providing the pixel values of the pixel points around the key point into a cover preview area for display according to the relative position information and the position offset vector of the pixel points around the key point relative to the key point.
The device for manufacturing the short video cover can execute the method for manufacturing the short video cover provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects for executing the method for manufacturing the short video cover.
EXAMPLE five
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device implementing the method for making a short video cover according to the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors 501, memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor 501 is taken as an example.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of an electronic device that implements the making method of the short video cover, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 502 optionally includes memory located remotely from processor 501, which may be connected via a network to an electronic device that performs the method of making the short video cover. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device performing the method of making a short video cover may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 5 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of an electronic apparatus that performs the method of making the short video cover, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A method for making a short video cover is characterized by comprising the following steps:
acquiring an image for making a short video cover;
identifying position information of a target object in the image and position information of a layout reference line of the target object in the image;
calculating the relative position information of the target object relative to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image;
determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area;
and displaying the target object and the image content around the target object in the front cover preview area according to the target position information.
2. The method of claim 1, wherein the identifying location information of a target object in the image comprises:
identifying position information of key points of a target object in the image;
the calculating the relative position information of the target object relative to the layout reference line in the image according to the position information of the target object and the position information of the layout reference line in the image includes:
calculating the relative position information of the key points relative to the layout reference lines in the image according to the position information of the key points and the position information of the layout reference lines in the image;
determining target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area, including:
and determining the target position information of the key points in the front cover preview area according to the relative position information of the key points relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area.
3. The method according to claim 2, wherein the identifying the position information of the key points of the target object in the image and the position information of the layout reference lines of the target object in the image comprises:
inputting the image into a deep neural network model to obtain position information of key points of a target object in the image output by the deep neural network model and position information of a layout reference line of the target object in the image;
the deep neural network model is obtained by training image samples marked with key points and layout reference lines.
4. The method of claim 2, wherein the displaying the target object and image content around the target object within the front cover preview area according to the target location information comprises:
calculating a position offset vector of the key point according to the target position information of the key point in the front cover preview area and the position information of the key point in the image;
providing the pixel values of the key points into the cover preview area for display according to the position offset vector;
and providing the pixel values of the pixel points around the key point into the cover preview area for displaying.
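Claim 4's offset-vector step can be illustrated with a minimal sketch: for each key point, the offset vector is its preview position minus its image position, and the key point's pixel value is written into the preview buffer at the shifted location. All names are hypothetical, and images are modeled as plain nested lists for brevity.

```python
def copy_keypoint_pixels(image, keypoints_img, keypoints_preview, preview):
    """For each key point, compute its position offset vector (preview
    position minus image position) and copy its pixel value into the
    preview buffer at the offset location. Returns the offset vectors."""
    offsets = []
    for (ix, iy), (px, py) in zip(keypoints_img, keypoints_preview):
        # Position offset vector of this key point.
        dx, dy = px - ix, py - iy
        offsets.append((dx, dy))
        # Place the key point's pixel value at its target position.
        preview[iy + dy][ix + dx] = image[iy][ix]
    return offsets
```

For instance, moving the key point at image position (0, 0) to preview position (1, 1) yields the offset vector (1, 1) and writes that pixel's value into `preview[1][1]`.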
5. The method of claim 4, wherein the number of keypoints is at least two;
before the providing the pixel values of the key points into the cover preview area for display according to the position offset vector, the method further comprises:
storing the pixel value of the non-overlapped part of each key point and other key points into a first storage table;
storing pixel values of an overlapping portion between the key points in a second storage table;
the providing the pixel values of the key points into the cover preview area for display according to the position offset vector comprises:
reading the pixel value of the non-overlapped part of each key point and other key points from the first storage table, and providing the pixel value of the non-overlapped part of each key point and other key points into the front cover preview area for displaying according to the offset vector corresponding to each key point;
and reading the pixel values of the overlapping parts of the key points from the second storage table, and providing the pixel values of the overlapping parts of the key points into the front cover preview area for displaying according to the offset vectors corresponding to the overlapped key points.
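The two storage tables of claim 5 can be sketched as follows: pixels covered by exactly one key point's region go into a first table, and pixels where regions overlap go into a second table, each entry remembering which key points it belongs to. This is an illustrative partitioning only; the data layout and names are assumptions, not the patent's implementation.

```python
from collections import Counter

def split_pixel_tables(regions):
    """Partition key-point region pixels into a first table (pixels
    belonging to exactly one key point's region) and a second table
    (pixels shared by two or more regions).

    regions -- dict mapping a key-point id to the set of (x, y) pixel
               coordinates that key point's region covers.
    Returns (first_table, second_table), each mapping a pixel coordinate
    to the list of key-point ids covering it.
    """
    # Count how many regions cover each pixel.
    counts = Counter(p for pixels in regions.values() for p in pixels)
    first, second = {}, {}
    for kp, pixels in regions.items():
        for p in pixels:
            # Non-overlapping pixels go to the first table, overlapping
            # pixels to the second.
            target = first if counts[p] == 1 else second
            target.setdefault(p, []).append(kp)
    return first, second
```

With two regions sharing the pixel (1, 0), that pixel lands in the second table while the rest stay in the first, so the overlapped part can be read back and displayed once rather than per key point.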
6. The method of claim 4, wherein the providing the pixel values of the pixel points around the key point into the cover preview area for display comprises:
calculating the relative position information of pixel points around the key point relative to the key point in the image;
and providing pixel values of the pixel points around the key point into the cover preview area for displaying according to the relative position information of the pixel points around the key point relative to the key point and the position offset vector.
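Claim 6 moves the pixels around a key point using the key point's own offset vector, so each neighbour keeps its relative position to the key point. A minimal sketch, assuming a 3x3 neighbourhood and nested-list images; the neighbourhood size and all names are illustrative assumptions.

```python
def place_neighbors(image, keypoint_img, offset, preview):
    """Copy the pixels around a key point into the preview buffer using
    the key point's position offset vector: each neighbour's relative
    position to the key point is preserved."""
    kx, ky = keypoint_img
    dx, dy = offset
    h, w = len(image), len(image[0])
    # Walk a 3x3 neighbourhood around the key point, clipped to the image.
    for y in range(max(0, ky - 1), min(h, ky + 2)):
        for x in range(max(0, kx - 1), min(w, kx + 2)):
            # Relative position of this neighbour w.r.t. the key point.
            rx, ry = x - kx, y - ky
            # Target = shifted key point position + the same relative offset.
            tx, ty = kx + dx + rx, ky + dy + ry
            if 0 <= ty < len(preview) and 0 <= tx < len(preview[0]):
                preview[ty][tx] = image[y][x]
    return preview
```

With a zero offset vector the neighbourhood is copied in place; with a nonzero offset the whole neighbourhood translates rigidly, which is the effect the claim describes.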
7. A device for making a short video cover, comprising:
the acquisition module is configured to acquire an image for making a short video cover;
the identification module is configured to identify position information of a target object in the image and position information of a layout reference line of the target object in the image;
the calculation module is configured to calculate, according to the position information of the target object and the position information of the layout reference line in the image, relative position information of the target object with respect to the layout reference line in the image;
the determining module is configured to determine target position information of the target object in the front cover preview area according to the relative position information of the target object relative to the layout reference line in the image and the position information of the layout reference line in the front cover preview area;
and the display module is configured to display the target object and the image content around the target object in the front cover preview area according to the target position information.
8. The apparatus of claim 7,
the identification module is specifically configured to: identifying position information of key points of a target object in the image and position information of a layout reference line of the target object in the image.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of making a short video cover of any one of claims 1-6.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of making a short video cover according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010203531.6A CN111385489B (en) | 2020-03-20 | 2020-03-20 | Method, device and equipment for manufacturing short video cover and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111385489A true CN111385489A (en) | 2020-07-07 |
CN111385489B CN111385489B (en) | 2022-09-23 |
Family
ID=71217285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010203531.6A Active CN111385489B (en) | 2020-03-20 | 2020-03-20 | Method, device and equipment for manufacturing short video cover and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111385489B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6215914B1 (en) * | 1997-06-24 | 2001-04-10 | Sharp Kabushiki Kaisha | Picture processing apparatus |
JP2003141525A (en) * | 2001-11-05 | 2003-05-16 | Minolta Co Ltd | Image processing apparatus |
JP2009015540A (en) * | 2007-07-04 | 2009-01-22 | San Metsuse Kk | Page layout system |
CN102739954A (en) * | 2011-03-31 | 2012-10-17 | 卡西欧计算机株式会社 | Imaging device capable of combining images |
CN104240180A (en) * | 2014-08-08 | 2014-12-24 | 沈阳东软医疗系统有限公司 | Method and device for achieving automatic adjusting of images |
US20160012595A1 (en) * | 2014-07-10 | 2016-01-14 | Ditto Labs, Inc. | Systems, Methods, and Devices for Image Matching and Object Recognition in Images Using Image Regions |
JP2016045837A (en) * | 2014-08-26 | 2016-04-04 | 富士通株式会社 | Information processing apparatus, image determination method, and program |
EP3113077A1 (en) * | 2015-06-30 | 2017-01-04 | Lingaro Sp. z o.o. | A method and a system for image feature point description |
CN106709495A (en) * | 2017-01-22 | 2017-05-24 | 广东小天才科技有限公司 | Image area centering method and device |
CN107578439A (en) * | 2017-07-19 | 2018-01-12 | 阿里巴巴集团控股有限公司 | Generate the method, apparatus and equipment of target image |
CN109271085A (en) * | 2018-08-24 | 2019-01-25 | 广州优视网络科技有限公司 | Image display method, device and electronic equipment |
CN110223301A (en) * | 2019-03-01 | 2019-09-10 | 华为技术有限公司 | A kind of image cropping method and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111385489B (en) | 2022-09-23 |
Similar Documents
Publication | Title |
---|---|
CN111652828B (en) | Face image generation method, device, equipment and medium |
CN111860167B (en) | Face fusion model acquisition method, face fusion model acquisition device and storage medium |
CN111259751B (en) | Human behavior recognition method, device, equipment and storage medium based on video |
CN111294665B (en) | Video generation method and device, electronic equipment and readable storage medium |
US10186084B2 (en) | Image processing to enhance variety of displayable augmented reality objects |
CN111832745B (en) | Data augmentation method and device and electronic equipment |
KR20150131358A (en) | Content creation tool |
US20220291809A1 (en) | Systems and methods for augmented or mixed reality writing |
KR102642866B1 (en) | Image recognition method and apparatus, electronic device, and medium |
US10891801B2 (en) | Method and system for generating a user-customized computer-generated animation |
CN112163577A (en) | Character recognition method and device in game picture, electronic equipment and storage medium |
JP2021114313A (en) | Face composite image detecting method, face composite image detector, electronic apparatus, storage medium and computer program |
CN113269781A (en) | Data generation method and device and electronic equipment |
CN112036315A (en) | Character recognition method, character recognition device, electronic equipment and storage medium |
KR20210139203A (en) | Commodity guiding method, apparatus, device and storage medium and computer program |
CN113867875A (en) | Method, device, equipment and storage medium for editing and displaying marked object |
CN105022480A (en) | Input method and terminal |
CN112488126A (en) | Feature map processing method, device, equipment and storage medium |
CN111385489B (en) | Method, device and equipment for manufacturing short video cover and storage medium |
CN112686990B (en) | Three-dimensional model display method and device, storage medium and computer equipment |
CN113761281A (en) | Virtual resource processing method, device, medium and electronic equipment |
CN112560678A (en) | Expression recognition method, device, equipment and computer storage medium |
CN111640179B (en) | Display method, device, equipment and storage medium of pet model |
US20230119741A1 (en) | Picture annotation method, apparatus, electronic device, and storage medium |
CN111325984B (en) | Sample data acquisition method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||