CN108958592B - Video processing method and related product - Google Patents


Info

Publication number
CN108958592B
CN108958592B (application CN201810756384.8A)
Authority
CN
China
Prior art keywords
images
image
target
recall
video
Prior art date
Legal status
Active
Application number
CN201810756384.8A
Other languages
Chinese (zh)
Other versions
CN108958592A (en)
Inventor
陈标
曹威
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810756384.8A priority Critical patent/CN108958592B/en
Publication of CN108958592A publication Critical patent/CN108958592A/en
Application granted granted Critical
Publication of CN108958592B publication Critical patent/CN108958592B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the application disclose a video processing method and related products. The method includes: receiving a video fusion instruction for a first recall video and a second recall video, the first recall video comprising a first recall image set and the second recall video comprising a second recall image set; determining a plurality of repeated images between the first recall image set and the second recall image set; performing a de-duplication operation on the images in the first recall image set and the second recall image set according to the plurality of repeated images to obtain a plurality of target images; and generating a target recall video according to the plurality of target images. With the method and devices of the application, recall videos with different themes can be fused into one video, and the fusion effect is improved.

Description

Video processing method and related product
Technical Field
The application relates to the technical field of electronic equipment, and in particular to a video processing method and related products.
Background
With the development of electronic device technology, more and more users capture images with electronic devices such as mobile phones and tablet computers. Captured images and videos record the time, place, people, and scenery of the current scene. By recording this information in the form of images or videos and generating recall videos from them, a user can conveniently revisit the corresponding scenes through the images displayed in the recall video.
Disclosure of Invention
Embodiments of the application provide a video processing method and related products that can fuse recall videos with different themes and improve the fusion effect.
In a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a video fusion instruction for a first recall video and a second recall video, the first recall video comprising a first recall image set and the second recall video comprising a second recall image set;
determining a plurality of repeated images between the first recall image set and the second recall image set;
performing a de-duplication operation on the images in the first recall image set and the second recall image set according to the plurality of repeated images to obtain a plurality of target images;
and generating a target recall video according to the plurality of target images.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
a receiving unit, configured to receive a video fusion instruction for a first recall video and a second recall video, where the first recall video includes a first recall image set and the second recall video includes a second recall image set;
a determining unit, configured to determine a plurality of repeated images between the first recall image set and the second recall image set;
a de-duplication unit, configured to perform a de-duplication operation on the images in the first recall image set and the second recall image set according to the plurality of repeated images to obtain a plurality of target images;
and a generating unit, configured to generate a target recall video according to the plurality of target images.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program causes a computer to perform some or all of the steps described in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
With the video processing method and related products described above, the electronic device receives a video fusion instruction for a first recall video and a second recall video, where the first recall video includes a first recall image set and the second recall video includes a second recall image set. It then determines a plurality of repeated images between the two sets, performs a de-duplication operation on the images in both sets according to the repeated images to obtain a plurality of target images, and generates a target recall video from those target images. In this way recall videos with different themes can be fused, and because the fusion is based on associated images, the fusion effect is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Wherein:
fig. 1A is a schematic flowchart of a video processing method according to an embodiment of the present application;
fig. 1B is a schematic diagram of an editing page corresponding to a recall video according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another video processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used to distinguish between different objects, not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not listed or inherent to such a process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic devices involved in the embodiments of the present application may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so on. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices. The following describes the embodiments of the present application in detail.
Referring to fig. 1A, an embodiment of the present application provides a flow chart illustrating a video processing method. Specifically, as shown in fig. 1A, a video processing method includes:
s101: video fusion instructions for the first recall video and the second recall video are received.
In the embodiments of the application, the electronic device classifies and selects the images or videos stored in the album in advance according to themes such as time, place, people, and scenery to obtain recall image sets, and then generates recall videos from the recall image sets and the play parameters corresponding to the different themes. If an image or video matches several themes, it can be stored in the recall image set of each matching theme; that is, one image or video can belong to multiple themes.
It should be noted that a recall video does not occupy actual storage space: it is a file in a non-video format, similar to a slideshow. A recall image set may also include videos in addition to images.
If a recall image set contains a video, one or more frames are extracted from the video when the recall video is generated, so as to shorten the play duration. The extraction method is not limited: frames may be extracted at preset time intervals, or selected according to their image content, and so on, which can improve the accuracy of image extraction and the user's browsing experience.
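As a rough sketch of the interval-based extraction mentioned above, frame indices can be sampled once per interval from the video's frame rate. The function name and its proportional sampling rule are illustrative, not taken from the patent; content-based selection is not shown.

```python
def frames_to_extract(duration_seconds, fps, interval_seconds):
    """Return the frame indices to sample, one frame per interval."""
    step = int(round(interval_seconds * fps))      # frames between samples
    total_frames = int(duration_seconds * fps)     # frames in the whole clip
    return list(range(0, total_frames, max(step, 1)))
```

For example, a 10-second clip at 30 fps sampled every 2 seconds yields the indices 0, 60, 120, 180, 240.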
The method for selecting images or videos to build a recall image set is not limited either. For example, an evaluation value may be obtained for each image or video and the selection made according to it. The evaluation dimensions may include color, exposure, sharpness, and beauty effect; they may also include the number of user clicks, the number of shares, or whether the item belongs to a favorites album; or the number of likes and the comments received after the user uploads the image or video to a social network, and so on, which are not limited here.
Besides the electronic device synthesizing recall videos by theme, the user may synthesize a recall video from images or videos selected in the album, or delete images or videos from, or add them to, a recall video. The editing operations on a recall video are not limited. As shown in the left display page of fig. 1B, recall videos can be selected through a selection component C1. As shown on the right of the display page, after a first recall video P1 and a second recall video P2 are selected, their selection components C2 display a check mark, while the selection component of the unselected third recall video P3 does not. The selected first recall video P1 and second recall video P2 can be deleted through a deletion component C3, or sent to a friend or shared to social media through a sending component C4. A recall video may also be configured, that is, its play parameters may be set, such as play duration, play object, play music, theme style, cover image, and title content; and at least two recall videos may be fused, and so on, which is not limited here.
In the embodiments of the application, images or videos corresponding to a first theme are selected from the album to obtain a first recall image set, and images or videos corresponding to a second theme are selected to obtain a second recall image set. The first recall video is generated from the first recall image set and the second recall video from the second recall image set, so the first recall video also corresponds to the first theme and the second recall video to the second theme.
The video fusion instruction is used to fuse the first recall video and the second recall video into a new recall video. How the instruction is received is not limited in the embodiments of the application. The electronic device may receive the user's selection and fusion operations on the first and second recall videos: for example, in the right display page of fig. 1B, after selecting the first recall video P1 and the second recall video P2, the user selects a fusion component C5. Alternatively, the electronic device may fuse according to a search instruction entered by the user. For example, assume the first recall video covers the first half of 2017 and the second recall video covers the second half of 2017; when a search instruction to view the recall video for 2017 is received from the user, a video fusion instruction for the first and second recall videos is generated from the search instruction.
S102: a plurality of repeated images between the first set of recall images and the second set of recall images is determined.
In the embodiments of the application, the repeated images are pairs of images, one from the first recall image set and one from the second, whose similarity is greater than a threshold. The method for obtaining the repeated images is not limited. Optionally: determine a first theme corresponding to the first recall image set and a second theme corresponding to the second recall image set; search the second recall image set for a plurality of first associated images corresponding to the first theme, and search the first recall image set for a plurality of second associated images corresponding to the second theme; obtain a similarity value between each of the first associated images and each of the second associated images to obtain a plurality of similarity values; and determine the repeated images according to the similarity values.
The first theme and the second theme are already described in step S101, and are not described herein again.
The method for obtaining the similarity between two images is not limited; similarity detection may be performed with histograms, Euclidean distance, a perceptual hash algorithm, and the like. The method for determining repeated images from the similarity is not limited either; a similarity threshold may be preset, and two images are determined to be repeated when their similarity value exceeds it.
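A minimal sketch of the threshold comparison above, using an average hash (a simple member of the perceptual-hash family the text mentions). Real implementations would first downscale each image to an 8x8 grayscale grid; here the grids are supplied directly, and the 0.9 threshold is an illustrative value, not one fixed by the patent.

```python
def average_hash(gray8x8):
    """Build a 64-bit hash: 1 where a pixel is above the mean, else 0."""
    pixels = [p for row in gray8x8 for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def similarity(img_a, img_b):
    """Similarity in [0, 1]: fraction of matching hash bits."""
    ha, hb = average_hash(img_a), average_hash(img_b)
    matches = sum(1 for a, b in zip(ha, hb) if a == b)
    return matches / len(ha)

def is_duplicate(img_a, img_b, threshold=0.9):
    """Two images are treated as repeated when similarity exceeds the threshold."""
    return similarity(img_a, img_b) > threshold
```

An identical pair of grids scores 1.0 and is flagged as a duplicate, while an inverted copy scores 0.0 and is not.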
It can be understood that, since one image may correspond to at least one theme, searching the second recall image set for images that satisfy the first theme yields the first associated images, and searching the first recall image set for images that satisfy the second theme yields the second associated images; that is, within each recall image set, the images associated with the other set's theme are identified. A similarity value is then obtained between each first associated image and each second associated image, and the repeated images between them are determined from these similarity values. Determining repeated images only among the associated images, on the basis of first determining the associated images in the two recall image sets, avoids redundant comparisons between the full first and second recall image sets and helps improve the user's browsing experience.
For example, suppose the first recall image set includes 50 images and the second includes 38. Suppose 10 images in the first recall image set are associated with the second theme (the second associated images), and 10 images in the second recall image set are associated with the first theme (the first associated images). If 5 repeated images are found among them, the first and second recall image sets together yield 83 target images (50 + 38 - 5).
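The worked example above reduces to simple set arithmetic: each repeated image is removed once from the combined total. The variable names below are illustrative.

```python
first_set_size = 50    # images in the first recall image set
second_set_size = 38   # images in the second recall image set
duplicate_count = 5    # repeated images found among the associated images

# Each duplicate pair contributes only one image to the target set.
target_image_count = first_set_size + second_set_size - duplicate_count
```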
In one example, the method further includes: determining a target association value between the first theme and the second theme; and if the target association value is greater than an association threshold, performing the step of searching the second recall image set for the plurality of first associated images corresponding to the first theme.
The target association value describes the degree of association between the words corresponding to the first theme and the second theme. For example, if the first theme is January 2018 and the second theme is Guangzhou, one is a time and the other is a place; the association between them is small, and the target association value is taken as 0. If the first theme is Guangzhou and the second theme is Chongqing, both are places; the association between them is large, and the target association value is taken as 0.8.
The size of the association threshold is not limited and may be set to 0. The association threshold is used to judge whether the first recall video and the second recall video are strongly associated. When the target association value is less than or equal to the association threshold, the two recall videos are judged to be weakly associated, and the target recall video can be generated directly from the first recall video and the second recall video. Otherwise they are judged to be strongly associated, and the step of searching the second recall image set for the first associated images, that is, the step of determining the repeated images, is performed, so that the de-duplication operation can be applied to the images in the two recall image sets according to the repeated images to obtain the target images, from which the target recall video is then generated. In this way, recall videos with different themes can be fused: when the two recall videos are strongly associated, the fusion is based on the associated images; when they are weakly associated, the videos can be fused directly. This improves both the video fusion effect and the fusion efficiency.
S103: and performing de-duplication operation on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images.
That is, for each duplicate, the de-duplication operation deletes one of the two repeated copies from the first and second recall image sets, so the target images do not contain both copies of any repeated image.
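A minimal sketch of this step: given the two recall image sets and a list of duplicate pairs (index in the first set, index in the second set), keep the first set intact and drop the duplicated entries from the second set. Which copy to keep is a design choice the patent leaves open; keeping the first set's copy here is an assumption.

```python
def deduplicate(first_set, second_set, duplicate_pairs):
    """Merge two image lists, removing the second copy of each duplicate pair."""
    drop = {j for (_, j) in duplicate_pairs}  # second-set indices to discard
    kept_second = [img for j, img in enumerate(second_set) if j not in drop]
    return first_set + kept_second            # the plurality of target images
```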
S104: and generating a target recall video according to the plurality of target images.
In the embodiments of the application, the method for generating the target recall video from the plurality of target images is not limited. The target recall video may be synthesized according to the chronological order of the target images. Alternatively, several images to be synthesized may be selected from the non-repeated images of the first recall image set and several from the non-repeated images of the second recall image set, the play parameters of the images to be synthesized determined, and the target recall video synthesized from the images to be synthesized and their play parameters.
The method for selecting the images to be synthesized is not limited: one image may be chosen from a group of similar associated images, or several similar images may be composited into a single frame to be synthesized.
Optionally, the generating a target recall video according to the plurality of target images includes: acquiring an image evaluation value corresponding to each target image in the multiple target images to obtain multiple image evaluation values; selecting a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values; and generating the target recall video according to the plurality of images to be synthesized.
In an optional embodiment, the method for determining the image evaluation value is not limited. Multiple evaluation dimensions may be preset, each with a preset weight, where the weights over all dimensions sum to 1; the weights may be set according to how strongly each dimension influences the evaluation value, or set by recommendation. The evaluation value for each dimension is obtained, and the image evaluation value is computed as the weighted sum of the per-dimension evaluation values and their preset weights. Evaluating the image from multiple aspects in this way improves the accuracy of the image evaluation value.
In a possible example, if the plurality of target images include a reference image, obtaining the image evaluation value corresponding to each target image includes: acquiring a first association value between the reference image and the first theme, and a second association value between the reference image and the second theme; acquiring an average association value of the first association value and the second association value; acquiring the evaluation dimensions corresponding to the image type of the reference image; acquiring a reference evaluation value of the reference image according to those evaluation dimensions; and acquiring the image evaluation value of the reference image according to the average association value and the reference evaluation value.
The first association value is the association value between the reference image and the first theme, and the second association value is the association value between the reference image and the second theme.
In an alternative embodiment, the evaluation dimensions for different image types may be preset. For example: for a landscape-type image, the evaluation dimensions are color, exposure, and sharpness; for a person-type image, color, focus, exposure, sharpness, and/or beauty effect; and for a food-type image, color, exposure, sharpness, and/or beauty effect.
For example, in an application scenario for acquiring the reference evaluation value of a reference image: if the image type of the reference image is landscape, and the evaluation weight of color is 0.3, of exposure 0.2, and of sharpness 0.5, then when the color score is 80, the exposure score is 60, and the sharpness score is 80, the reference evaluation value is 0.3 x 80 + 0.2 x 60 + 0.5 x 80 = 76.
In addition, the reference evaluation value may be obtained from the score given to the reference image by a preset image evaluation algorithm, or from the scores given by multiple parties, such as the user or friends, after the image is uploaded to a social network. Because different parties perceive image quality differently, the scores will differ; the average of the multiple scores may be taken as the reference evaluation value, or the scores may be weighted according to weights assigned to the different evaluation sources, which improves the accuracy of the reference evaluation value.
It can be understood that the association values between the reference image and the first and second themes are obtained first, that is, the relationship between the reference image and the two themes is determined. The reference evaluation value is then obtained according to the evaluation dimensions of the reference image's type, and the image evaluation value is determined from the average association value and the reference evaluation value. In other words, the image evaluation value reflects both the image quality of the reference image and its association with the first and second recall videos, which improves the accuracy of the evaluation value and, in turn, of target image selection.
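The patent does not fix how the average association value and the reference evaluation value are combined. One plausible reading, labeled here as an assumption, is to scale the quality score by the association, so an image strongly tied to both themes keeps more of its quality score:

```python
def reference_image_score(assoc_first, assoc_second, reference_eval):
    """Average the two theme-association values, then weight the quality score.

    The multiplicative combination is an assumption for illustration only;
    the patent only states that both quantities are used.
    """
    average_assoc = (assoc_first + assoc_second) / 2
    return average_assoc * reference_eval
```

For example, with association values 0.8 and 0.6 and a reference evaluation value of 76, the combined score would be 0.7 x 76 = 53.2 under this rule.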
In one possible example, generating the target recall video from the plurality of images to be synthesized includes: acquiring the image type of each image to be synthesized to obtain a plurality of image types; counting the number of images of each image type to obtain a plurality of image counts; determining the play parameters of each image type according to a pre-stored total play duration and the image counts; and generating the target recall video according to each image to be synthesized and the play parameters of its image type.
The play parameters include play duration, play order, frame rate, play music, caption type, special effects, and the like. The method for selecting target images according to the image evaluation values and determining their play parameters is not limited. The number of play frames may be determined from the pre-stored play duration and the number of images in the first and second recall image sets, and the target images then determined from the evaluation values; if several images have equal evaluation values, they share the same play order, that is, they are played in the same frame. The number of play frames may also be determined by compositing similar images into one frame, such as images of the same place or of the same day.
It can be understood that the image types corresponding to the plurality of images to be synthesized are obtained, the number of images of each image type is counted, the playing parameter of each image type is then determined according to the pre-stored total playing time and the image counts, and the target recall video is finally generated from each image to be synthesized and the playing parameter of its image type. That is, the playing parameters are determined per image type from the preset total playing time and the image counts, so that different image types have different playing parameters, which improves the browsing effect of the target recall video and the user experience. Generating the target recall video from the playing parameters of the images to be synthesized thus realizes video fusion of recall videos with different themes, and performing the fusion based on associated images improves the fusion effect.
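One way to read "determining the playing parameter of each image type according to the pre-stored total playing time and the image counts" is a proportional time split. The sketch below is an assumption for illustration only: the patent does not fix the allocation rule, and the per-type weights (letting, say, portraits linger longer) are invented here.

```python
# Illustrative sketch: derive a per-image playing duration for each image
# type from a pre-stored total playing time and the per-type image counts.
from collections import Counter

def play_parameters(image_types, total_seconds, type_weights=None):
    """Return {image type: per-image playing duration in seconds}.
    Total time is shared in proportion to count x weight; the weighting
    scheme itself is an assumption, not taken from the patent."""
    counts = Counter(image_types)
    weights = type_weights or {t: 1.0 for t in counts}
    total_weight = sum(counts[t] * weights[t] for t in counts)
    return {t: total_seconds * weights[t] / total_weight for t in counts}

types = ["portrait", "portrait", "landscape", "food"]
params = play_parameters(types, total_seconds=20,
                         type_weights={"portrait": 2, "landscape": 1, "food": 1})
# Portrait images play twice as long per image as landscape or food shots,
# and the per-image durations multiplied by the counts sum to 20 seconds.
```

Other playing parameters (order, music, captions, effects) could be looked up per type from a similar table keyed on image type.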
In an optional embodiment, the image evaluation value corresponding to each of the plurality of target images is obtained to obtain a plurality of image evaluation values, and the plurality of images to be synthesized are then selected from the target images according to those evaluation values, so that images with higher evaluation values are selected within the limited playing time. The target recall video is then generated from the selected images, improving both the accuracy of selecting the target images and the user's browsing experience.
In the video processing method shown in fig. 1A, an electronic device receives a video fusion instruction for a first recall video and a second recall video, wherein the first recall video includes a first recall atlas and the second recall video includes a second recall atlas. A plurality of repeated images between the two atlases are then determined, a deduplication operation is performed on the images in the two atlases according to the repeated images to obtain a plurality of target images, and a target recall video is generated from the target images. Video fusion of recall videos with different themes can thus be achieved, and performing the fusion based on associated images improves the video fusion effect.
It should be noted that the embodiments of the present application take a first recall video and a second recall video as an example. When more than two recall videos need to be merged, the method provided by the present application may be applied pairwise: whenever a plurality of repeated images exist between any two of the recall videos, a deduplication operation is performed on those repeated images to obtain a plurality of target images, and the target recall video is then generated from the target images. Video fusion of recall videos of multiple different topics is thereby achieved, and performing the fusion based on associated images improves the fusion effect.
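The pairwise extension to N recall videos can be sketched as follows. The helper names and the duplicate predicate are assumptions; the patent only requires that duplicates between any two atlases be removed before fusion.

```python
# Sketch of extending pairwise deduplication to N recall videos: for every
# pair of atlases, drop from the later atlas any image that duplicates one
# in the earlier atlas, then fuse what remains.
from itertools import combinations

def fuse_many(atlases, find_duplicates):
    """`find_duplicates(a, b)` is assumed to return the images of atlas `b`
    that duplicate images of atlas `a` (e.g. via a similarity threshold)."""
    kept = [list(a) for a in atlases]
    for i, j in combinations(range(len(kept)), 2):
        dups = find_duplicates(kept[i], kept[j])
        kept[j] = [img for img in kept[j] if img not in dups]
    return [img for atlas in kept for img in atlas]

# Toy run with string stand-ins for images and exact-match "similarity":
target = fuse_many([["a", "b"], ["b", "c"], ["c", "d"]],
                   lambda a, b: {x for x in b if x in a})
# → ["a", "b", "c", "d"]: each shared image survives exactly once.
```

Keeping the first occurrence of each duplicate is one policy; the evaluation-value comparison described later could equally decide which copy to keep.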
Referring to fig. 2, fig. 2 is a schematic flow chart of another video processing method according to an embodiment of the present application, and as shown in fig. 2, the video processing method includes:
S201: video fusion instructions for the first recall video and the second recall video are received.
Wherein the first recall video comprises a first recall atlas and the second recall video comprises a second recall atlas.
S202: and determining a first theme corresponding to the first recollection atlas and determining a second theme corresponding to the second recollection atlas.
S203: and searching a plurality of first associated images corresponding to the first theme in the second memory map set, and searching a plurality of second associated images corresponding to the second theme in the first memory map set.
S204: and obtaining a plurality of similarity values by obtaining the similarity value between each first associated image in the plurality of first associated images and each second associated image in the plurality of second associated images.
S205: determining a plurality of repeated images between the first recollection atlas and the second recollection atlas according to the plurality of similarity values.
S206: and performing de-duplication operation on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images.
S207: and generating a target recall video according to the plurality of target images.
In the video processing method shown in fig. 2, an electronic device receives a video fusion instruction for a first recall video and a second recall video, determines a first topic corresponding to the first recall atlas and a second topic corresponding to the second recall atlas, searches the second recall atlas for a plurality of first associated images corresponding to the first topic, and searches the first recall atlas for a plurality of second associated images corresponding to the second topic; that is, each atlas is checked for images associated with the other atlas's topic. A similarity value between each first associated image and each second associated image is then obtained, and the repeated images between them are determined from the similarity values, so that repeated images are identified only among the associated images of the two recall atlases. A deduplication operation is then performed on the images in the first recall atlas and the second recall atlas according to the repeated images to obtain a plurality of target images, which avoids redundant similarity computation over all images of the two atlases. Finally, a target recall video is generated from the target images, so that video fusion of recall videos with different themes is realized, and performing the fusion based on associated images improves the fusion effect.
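Steps S204–S205 can be sketched with a concrete similarity measure. The patent does not fix one, so the normalized-histogram intersection below, the bin count, and the 0.9 threshold are all assumptions for illustration; images are modeled as flat lists of grayscale pixel values.

```python
# Minimal sketch of S204-S205: score every (first associated image,
# second associated image) pair and flag pairs above a threshold as repeats.
def histogram(pixels, bins=4, max_val=256):
    """Normalized intensity histogram (a deliberately simple feature)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // max_val, bins - 1)] += 1
    return [c / len(pixels) for c in h]

def similarity(img_a, img_b):
    """Histogram intersection in [0, 1]; 1.0 means identical histograms."""
    return sum(min(a, b) for a, b in zip(histogram(img_a), histogram(img_b)))

def find_duplicates(first_assoc, second_assoc, threshold=0.9):
    """Index pairs whose similarity value meets the threshold (S205)."""
    return [(i, j)
            for i, a in enumerate(first_assoc)
            for j, b in enumerate(second_assoc)
            if similarity(a, b) >= threshold]

dark, bright = [10, 20, 30, 40], [200, 210, 220, 230]
dups = find_duplicates([dark, bright], [list(dark), [100, 110, 120, 130]])
# Only the dark image matches its copy: dups == [(0, 0)]
```

In practice a perceptual hash or deep feature embedding would replace the histogram, but the thresholded pairwise comparison is the same shape as the described steps.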
Referring to fig. 3, fig. 3 is a schematic flow chart of another video processing method according to an embodiment of the present application, and as shown in fig. 3, the video processing method includes:
S301: video fusion instructions for the first recall video and the second recall video are received.
Wherein the first recall video comprises a first recall atlas and the second recall video comprises a second recall atlas.
S302: a plurality of repeated images between the first set of recall images and the second set of recall images is determined.
S303: and performing de-duplication operation on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images.
S304: and acquiring an image evaluation value corresponding to each target image in the plurality of target images to obtain a plurality of image evaluation values.
Optionally, obtaining the image evaluation value corresponding to each of the target images to obtain the plurality of image evaluation values includes: acquiring a target association value between the reference image and the first subject and the second subject; acquiring an evaluation dimension corresponding to the image type of the reference image; acquiring a reference evaluation value corresponding to the reference image according to the evaluation dimension; and acquiring the image evaluation value corresponding to the reference image according to the target association value and the reference evaluation value.
It is understood that the target association value between the reference image and the first and second subjects is obtained first; that is, the association relationship between the reference image and the two subjects is determined. A reference evaluation value for the reference image is then obtained according to the evaluation dimension corresponding to the image type of the reference image, and the image evaluation value of the reference image is determined from the target association value and the reference evaluation value. In other words, the image evaluation value reflects both the image quality of the reference image and its association with the first recall video and the second recall video, which improves the accuracy of determining the image evaluation value and, in turn, the accuracy of selecting the target images.
S305: and selecting a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values.
Optionally, the generating the target recall video according to the plurality of images to be synthesized includes: acquiring an image type corresponding to each image to be synthesized in the plurality of images to be synthesized to obtain a plurality of image types; counting the number of images of each image type in the plurality of image types to obtain a plurality of image numbers; determining the playing parameter of each image type in the plurality of image types according to the pre-stored total playing time and the number of the plurality of images; and generating the target recall video according to each image to be synthesized in the plurality of images to be synthesized and the playing parameters of the corresponding image type.
It can be understood that the image types corresponding to the plurality of images to be synthesized are obtained, the number of images of each image type is counted, the playing parameter of each image type is then determined according to the pre-stored total playing time and the image counts, and the target recall video is finally generated from each image to be synthesized and the playing parameter of its image type. That is, the playing parameters are determined per image type from the preset total playing time and the image counts, so that different image types have different playing parameters, which improves the browsing effect of the target recall video and the user experience. Generating the target recall video from the playing parameters of the images to be synthesized thus realizes video fusion of recall videos with different themes, and performing the fusion based on associated images improves the fusion effect.
S306: and generating a target recall video according to the plurality of images to be synthesized.
In the video processing method shown in fig. 3, an electronic device receives a video fusion instruction for a first recall video and a second recall video, wherein the first recall video includes a first recall atlas and the second recall video includes a second recall atlas. A plurality of repeated images between the two atlases are then determined, and a deduplication operation is performed on the images in the two atlases according to the repeated images to obtain a plurality of target images. An image evaluation value corresponding to each of the target images is then obtained to yield a plurality of image evaluation values, and a plurality of images to be synthesized are selected from the target images according to those evaluation values, so that images with higher evaluation values are selected within the limited playing time. The target recall video is finally generated from the images to be synthesized, so that video fusion of recall videos with different themes is realized, and performing the fusion based on associated images improves the fusion effect.
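The selection in S304–S305 amounts to ranking by evaluation value under a time budget. A minimal sketch, assuming a fixed per-image playback duration (the patent derives durations from the playing parameters instead):

```python
# Sketch of S304-S305: score every target image and keep the highest-scoring
# ones that fit within the total playing time. One image per second is an
# assumed playback rate, purely for illustration.
def select_images(target_images, evaluate, total_seconds, seconds_per_image=1.0):
    budget = int(total_seconds // seconds_per_image)   # how many images fit
    ranked = sorted(target_images, key=evaluate, reverse=True)
    return ranked[:budget]

scores = {"sunset": 0.9, "blurry": 0.2, "group": 0.7, "food": 0.5}
chosen = select_images(list(scores), scores.get, total_seconds=3)
# → ["sunset", "group", "food"]: the low-scoring blurry shot is dropped.
```

The `evaluate` callable would be the combined association-plus-quality score described for S304, so higher-value images are preferred when the limited playing time forces a cut.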
Referring to fig. 4, fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application, and as shown in fig. 4, the video processing apparatus 400 includes a receiving unit 401, a determining unit 402, a deduplication unit 403, and a generating unit 404, where:
a receiving unit 401, configured to receive a video fusion instruction for a first recall video and a second recall video, where the first recall video includes a first recall atlas and the second recall video includes a second recall atlas;
a determining unit 402, configured to determine a plurality of repeated images between the first recollection atlas and the second recollection atlas;
a deduplication unit 403, configured to perform deduplication operations on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images;
a generating unit 404, configured to generate a target recall video according to the plurality of target images.
It is to be understood that the receiving unit 401 receives a video fusion instruction for a first recall video and a second recall video, wherein the first recall video includes a first recall atlas and the second recall video includes a second recall atlas. A plurality of repeated images between the two atlases are then determined, a deduplication operation is performed on the images in the two atlases according to the repeated images to obtain a plurality of target images, and a target recall video is generated from the target images, so that video fusion of recall videos with different themes is realized, and performing the fusion based on associated images improves the video fusion effect.
In one possible example, in terms of determining the multiple repeated images between the first recall atlas and the second recall atlas, the determining unit 402 is specifically configured to determine a first topic corresponding to the first recall atlas and a second topic corresponding to the second recall atlas; search the second recall atlas for a plurality of first associated images corresponding to the first topic, and search the first recall atlas for a plurality of second associated images corresponding to the second topic; obtain a similarity value between each first associated image and each second associated image to obtain a plurality of similarity values; and determine the plurality of repeated images according to the similarity values.
In one possible example, in the aspect of generating the target recall video according to the multiple target images, the generating unit 404 is specifically configured to acquire an image evaluation value corresponding to each target image in the multiple target images, so as to obtain multiple image evaluation values; selecting a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values; and generating the target recall video according to the plurality of images to be synthesized.
In a possible example, in the aspect of generating the target recall video according to the images to be synthesized, the generating unit 404 is specifically configured to obtain an image type corresponding to each image to be synthesized in the images to be synthesized, so as to obtain a plurality of image types; counting the number of images of each image type in the plurality of image types to obtain a plurality of image numbers; determining the playing parameter of each image type in the plurality of image types according to the pre-stored total playing time and the number of the plurality of images; and generating the target recall video according to each image to be synthesized in the plurality of images to be synthesized and the playing parameters of the corresponding image type.
In a possible example, the multiple target images include a reference image, and in terms of obtaining the image evaluation value corresponding to each of the multiple target images to obtain multiple image evaluation values, the generating unit 404 is specifically configured to acquire a target association value between the reference image and the first subject and the second subject; acquire an evaluation dimension corresponding to the image type of the reference image; acquire a reference evaluation value corresponding to the reference image according to the evaluation dimension; and acquire the image evaluation value corresponding to the reference image according to the target association value and the reference evaluation value.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, the electronic device 500 includes a processor 510, a memory 520, a communication interface 530, and one or more programs 540, wherein the one or more programs 540 are stored in the memory 520 and configured to be executed by the processor 510, and the programs 540 include instructions for:
receiving a video fusion instruction for a first recall video and a second recall video, the first recall video comprising a first set of recall graphs and the second recall video comprising a second set of recall graphs;
determining a plurality of repeated images between the first set of recall images and the second set of recall images;
performing de-duplication operation on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images;
and generating a target recall video according to the plurality of target images.
It is to be appreciated that the electronic device 500 receives video fusion instructions for a first recall video and a second recall video, wherein the first recall video includes a first recall atlas and the second recall video includes a second recall atlas. A plurality of repeated images between the two atlases are then determined, a deduplication operation is performed on the images in the two atlases according to the repeated images to obtain a plurality of target images, and a target recall video is generated from the target images, so that video fusion of recall videos with different themes is realized, and performing the fusion based on associated images improves the video fusion effect.
In one possible example, in connection with the determining the plurality of repeated images between the first recollection atlas and the second recollection atlas, the instructions in the program 540 are specifically to:
determining a first theme corresponding to the first recollection atlas and determining a second theme corresponding to the second recollection atlas;
searching the second recall atlas for a plurality of first associated images corresponding to the first theme, and searching the first recall atlas for a plurality of second associated images corresponding to the second theme;
obtaining a similarity value between each first associated image in the plurality of first associated images and each second associated image in the plurality of second associated images to obtain a plurality of similarity values;
and determining the plurality of repeated images according to the plurality of similarity values.
In one possible example, in the aspect of generating the target recall video according to the plurality of target images, the instructions in the program 540 are specifically configured to:
acquiring an image evaluation value corresponding to each target image in the multiple target images to obtain multiple image evaluation values;
selecting a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values;
and generating the target recall video according to the plurality of images to be synthesized.
In one possible example, in the aspect of generating the target recall video according to the plurality of images to be synthesized, the instructions in the program 540 are specifically configured to perform the following operations:
acquiring an image type corresponding to each image to be synthesized in the plurality of images to be synthesized to obtain a plurality of image types;
counting the number of images of each image type in the plurality of image types to obtain a plurality of image numbers;
determining the playing parameter of each image type in the plurality of image types according to the pre-stored total playing time and the number of the plurality of images;
and generating the target recall video according to each image to be synthesized in the plurality of images to be synthesized and the playing parameters of the corresponding image type.
In one possible example, the multiple target images include a reference image, and in terms of obtaining the image evaluation value corresponding to each of the multiple target images to obtain multiple image evaluation values, the instructions in the program 540 are specifically configured to:
acquiring a target association value between the reference image and the first subject and the second subject;
acquiring an evaluation dimension corresponding to the image type of the reference image;
acquiring a reference evaluation value corresponding to the reference image according to the evaluation dimension;
and acquiring the image evaluation value corresponding to the reference image according to the target association value and the reference evaluation value.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for causing a computer to execute a part or all of the steps of any one of the methods as described in the method embodiments, and the computer includes an electronic device.
Embodiments of the application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as recited in the method embodiments. The computer program product may be a software installation package and the computer comprises the electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art will also appreciate that the embodiments described in this specification are presently preferred and that no particular act or mode of operation is required in the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a hardware mode or a software program mode.
The integrated unit, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a memory, which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A video processing method, comprising:
receiving a video fusion instruction for a first recall video and a second recall video, the first recall video comprising a first set of recall graphs and the second recall video comprising a second set of recall graphs;
determining a first theme corresponding to the first recollection atlas and a second theme corresponding to the second recollection atlas;
determining a target association value between the first topic and the second topic;
if the target correlation value is smaller than or equal to a preset correlation threshold value, directly fusing the first recall video and the second recall video to generate a target recall video;
if the target association value is larger than the preset association threshold value, determining a plurality of repeated images between the first recollection atlas and the second recollection atlas;
performing de-duplication operation on the images in the first recollection image set and the second recollection image set according to the multiple repeated images to obtain multiple target images;
and generating the target recall video according to the plurality of target images.
2. The method of claim 1, wherein the determining a plurality of repeated images between the first set of recollections and the second set of recollections comprises:
searching the second recall atlas for a plurality of first associated images corresponding to the first theme, and searching the first recall atlas for a plurality of second associated images corresponding to the second theme;
obtaining a similarity value between each first associated image in the plurality of first associated images and each second associated image in the plurality of second associated images to obtain a plurality of similarity values;
and determining the plurality of repeated images according to the plurality of similarity values.
3. The method according to claim 1 or 2, wherein the generating of the target recall video from the plurality of target images comprises:
acquiring an image evaluation value corresponding to each target image in the multiple target images to obtain multiple image evaluation values;
selecting a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values;
and generating the target recall video according to the plurality of images to be synthesized.
4. The method according to claim 3, wherein the generating the target recall video from the plurality of images to be synthesized comprises:
acquiring an image type corresponding to each image to be synthesized in the plurality of images to be synthesized to obtain a plurality of image types;
counting the number of images of each image type in the plurality of image types to obtain a plurality of image numbers;
determining the playing parameter of each image type in the plurality of image types according to the pre-stored total playing time and the number of the plurality of images;
and generating the target recall video according to each image to be synthesized in the plurality of images to be synthesized and the playing parameters of the corresponding image type.
5. The method according to claim 3 or 4, wherein the plurality of target images include a reference image, and the obtaining the image evaluation value corresponding to each target image of the plurality of target images to obtain a plurality of image evaluation values comprises:
acquiring a target association value between the reference image and the first subject and the second subject;
acquiring an evaluation dimension corresponding to the image type of the reference image;
acquiring a reference evaluation value corresponding to the reference image according to the evaluation dimension;
and acquiring the image evaluation value corresponding to the reference image according to the target association value and the reference evaluation value.
6. A video processing apparatus, comprising:
a receiving unit, configured to receive a video fusion instruction for a first recall video and a second recall video, wherein the first recall video comprises a first recall atlas and the second recall video comprises a second recall atlas;
a determining unit, configured to determine a first theme corresponding to the first recall atlas and a second theme corresponding to the second recall atlas; to determine a target association value between the first theme and the second theme; and to determine a plurality of repeated images between the first recall atlas and the second recall atlas;
a de-duplication unit, configured to perform, when the target association value is greater than a preset association threshold, a de-duplication operation on the images in the first recall atlas and the second recall atlas according to the plurality of repeated images to obtain a plurality of target images;
a generating unit, configured to fuse the first recall video and the second recall video directly to generate a target recall video when the target association value is less than or equal to the preset association threshold, and to generate the target recall video according to the plurality of target images when the target association value is greater than the preset association threshold.
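The branch in claim 6 (direct fusion at or below the threshold, de-duplicate-then-generate above it) can be sketched as follows. The helper names, the pair representation of repeated images, and the `build` callback standing in for the unspecified synthesis step are all assumptions for illustration:

```python
def deduplicate(first_atlas, second_atlas, repeated_pairs):
    """De-duplication step: drop the second-atlas member of every
    (first, second) repeated pair, then merge, so each duplicated
    image survives exactly once among the target images."""
    drop = {second for _first, second in repeated_pairs}
    return first_atlas + [img for img in second_atlas if img not in drop]

def generate_target_video(assoc_value, threshold, first_atlas, second_atlas,
                          repeated_pairs, build):
    """Dispatch per claim 6: above the association threshold the atlases
    are de-duplicated before the target video is built; otherwise the
    two recall videos are fused directly."""
    if assoc_value > threshold:
        return build(deduplicate(first_atlas, second_atlas, repeated_pairs))
    return build(first_atlas + second_atlas)  # direct fusion keeps everything
```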
7. The apparatus according to claim 6, wherein, in determining the plurality of repeated images between the first recall atlas and the second recall atlas, the determining unit is specifically configured to: search the second recall atlas for a plurality of first associated images corresponding to the first theme, and search the first recall atlas for a plurality of second associated images corresponding to the second theme; obtain a similarity value between each first associated image in the plurality of first associated images and each second associated image in the plurality of second associated images to obtain a plurality of similarity values; and determine the plurality of repeated images according to the plurality of similarity values.
8. The apparatus according to claim 6 or 7, wherein, in generating the target recall video from the plurality of target images, the generating unit is specifically configured to: acquire an image evaluation value corresponding to each target image of the plurality of target images to obtain a plurality of image evaluation values; select a plurality of images to be synthesized from the plurality of target images according to the plurality of image evaluation values; and generate the target recall video according to the plurality of images to be synthesized.
9. The apparatus according to claim 8, wherein, in generating the target recall video from the plurality of images to be synthesized, the generating unit is specifically configured to: acquire an image type corresponding to each image to be synthesized in the plurality of images to be synthesized to obtain a plurality of image types; count the number of images of each image type in the plurality of image types to obtain a plurality of image numbers; determine a playing parameter of each image type in the plurality of image types according to a pre-stored total playing duration and the plurality of image numbers; and generate the target recall video according to each image to be synthesized in the plurality of images to be synthesized and the playing parameter of the corresponding image type.
10. The apparatus according to claim 8 or 9, wherein the plurality of target images comprise a reference image, and, in acquiring the image evaluation value corresponding to each target image of the plurality of target images to obtain a plurality of image evaluation values, the generating unit is specifically configured to: acquire a target association value between the reference image and the first theme and the second theme; acquire an evaluation dimension corresponding to the image type of the reference image; acquire a reference evaluation value corresponding to the reference image according to the evaluation dimension; and acquire the image evaluation value corresponding to the reference image according to the target association value and the reference evaluation value.
11. An electronic device comprising a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps of the method according to any one of claims 1-5.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN201810756384.8A 2018-07-11 2018-07-11 Video processing method and related product Active CN108958592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810756384.8A CN108958592B (en) 2018-07-11 2018-07-11 Video processing method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810756384.8A CN108958592B (en) 2018-07-11 2018-07-11 Video processing method and related product

Publications (2)

Publication Number Publication Date
CN108958592A CN108958592A (en) 2018-12-07
CN108958592B true CN108958592B (en) 2021-06-25

Family

ID=64483621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810756384.8A Active CN108958592B (en) 2018-07-11 2018-07-11 Video processing method and related product

Country Status (1)

Country Link
CN (1) CN108958592B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382628B (en) * 2018-12-28 2023-05-16 成都云天励飞技术有限公司 Method and device for judging peer
CN114009058B (en) * 2019-04-26 2024-05-31 摹恩帝株式会社 Multi-reaction image generation method and storage medium thereof
CN110730381A (en) * 2019-07-12 2020-01-24 北京达佳互联信息技术有限公司 Method, device, terminal and storage medium for synthesizing video based on video template
CN113344812A (en) * 2021-05-31 2021-09-03 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103631799A (en) * 2012-08-23 2014-03-12 深圳市世纪光速信息技术有限公司 Network group image aggregating method and system and image searching method and system
CN105426485A (en) * 2015-11-20 2016-03-23 小米科技有限责任公司 Image combination method and device, intelligent terminal and server
CN105868417A (en) * 2016-05-27 2016-08-17 维沃移动通信有限公司 Picture processing method and mobile terminal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Social album: Linking and merging online albums based on social relationship; Kai-Yin Cheng et al.; Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference; 2012-03-11; pp. 1-8 *

Also Published As

Publication number Publication date
CN108958592A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108958592B (en) Video processing method and related product
JP4988011B2 (en) Electronic apparatus and image processing method
US8346014B2 (en) Image processing apparatus and method and program
CN105744292A (en) Video data processing method and device
JP5667069B2 (en) Content management apparatus, content management method, content management program, and integrated circuit
CN106250421A (en) A kind of method shooting process and terminal
EP2824633A1 (en) Image processing method and terminal device
CN106407358B (en) Image searching method and device and mobile terminal
CN109660714A (en) Image processing method, device, equipment and storage medium based on AR
CN111182359A (en) Video preview method, video frame extraction method, video processing device and storage medium
JP2022541358A (en) Video processing method and apparatus, electronic device, storage medium, and computer program
CN109408652B (en) Picture searching method, device and equipment
CN108540817B (en) Video data processing method, device, server and computer readable storage medium
JP2006140559A (en) Image reproducing apparatus and image reproducing method
CN110580508A (en) video classification method and device, storage medium and mobile terminal
CN111401238A (en) Method and device for detecting character close-up segments in video
CN115379290A (en) Video processing method, device, equipment and storage medium
CN109167939B (en) Automatic text collocation method and device and computer storage medium
US20170163904A1 (en) Picture processing method and electronic device
US10924637B2 (en) Playback method, playback device and computer-readable storage medium
JP4940333B2 (en) Electronic apparatus and moving image reproduction method
CN117459662A (en) Video playing method, video identifying method, video playing device, video playing equipment and storage medium
CN109151568B (en) Video processing method and related product
US20170139933A1 (en) Electronic Device, And Computer-Readable Storage Medium For Quickly Searching Video Segments
CN109327713B (en) Method and device for generating media information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant