CN113792182B - Image progress prompting method and device, storage medium and electronic equipment


Info

Publication number
CN113792182B
Authority
CN
China
Prior art keywords
image
video
progress
target image
text information
Prior art date
Legal status
Active
Application number
CN202111093486.4A
Other languages
Chinese (zh)
Other versions
CN113792182A
Inventor
陈泽宇 (Chen Zeyu)
Current Assignee
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd
Priority to CN202111093486.4A
Publication of CN113792182A
Application granted
Publication of CN113792182B
Legal status: Active

Classifications

    • G Physics
    • G06 Computing; Calculating or Counting
    • G06F Electric Digital Data Processing
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/74 Browsing; Visualisation therefor
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content

Abstract

The disclosure relates to the technical field of video analysis, and in particular to an image progress prompting method and device, a storage medium, and an electronic device. The method includes the following steps: processing a to-be-processed video acquired from a current video into a first image; determining a second image corresponding to the current video according to a video identifier of the current video, and matching the first image with the second image; and when a first target image of the first image matches a second target image of the second image, prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video. The technical solution of the embodiments of the disclosure alleviates the inconvenience of searching for the image content corresponding to a video.

Description

Image progress prompting method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of video analysis, and in particular to an image progress prompting method and device, a storage medium, and an electronic device.
Background
With the rapid development of the film and television entertainment industry, a large number of film and television works of different types have emerged to meet the public's entertainment demands. In recent years, animation has also developed rapidly and has been embraced by audiences of all ages. An animation is typically adapted from a comic, so the progress of the animation generally lags behind that of the comic.
Because the animation is an adaptation, it inevitably deviates from the comic to some extent. While watching the animation, a user may want to read the corresponding part of the comic to see how the comic depicts the current plot; alternatively, after finishing the latest episode of the animation, the user may want to continue reading the comic from where the animation left off. In either case, the user can only search chapter by chapter through the comic corresponding to the animation, or ask other users who have already read the comic, to find the comic chapter they want to read.
However, in the related art, asking other users who have read the comic about the chapter one wants to read, or searching for it chapter by chapter, takes a great deal of time, and other users may not reply in time, so the user's viewing experience is poor.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide an image progress prompting method and device, a storage medium, and an electronic device, which can alleviate the inconvenience of searching for the image content corresponding to a video.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image progress prompting method, including: processing a to-be-processed video acquired from a current video into a first image; determining a second image corresponding to the current video according to a video identifier of the current video, and matching the first image with the second image; and when a first target image of the first image matches a second target image of the second image, prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video.
Optionally, processing the to-be-processed video acquired from the current video into the first image includes: acquiring the to-be-processed video in response to an image progress prompt operation for the current video; and converting the to-be-processed video into the first image.
Optionally, the second image includes a plurality of second sub-images, and matching the first image with the second image includes: obtaining the second sub-images corresponding to the second image by image segmentation; and matching the first image with the second sub-images.
Optionally, matching the first image with the second image includes: dividing the second image into a plurality of second image subsets, where the second image subsets correspond to the video progress of the current video; acquiring the video progress corresponding to the current video, and acquiring the corresponding second image subset according to the current video progress; and matching the first image with the images in the second image subset.
Optionally, the matching of the first target image of the first image with the second target image of the second image includes: acquiring the matching degree between the first target image and the second target image; and when the matching degree is greater than or equal to a first preset threshold, determining that the first target image of the first image matches the second target image of the second image.
Optionally, the matching of the first target image of the first image with the second target image of the second image includes: acquiring first text information in the first target image and second text information in the second target image; performing semantic analysis on the first text information and the second text information, and determining the similarity between the first text information and the second text information according to the result of the semantic analysis; and when the similarity is greater than or equal to a second preset threshold, determining that the first target image of the first image matches the second target image of the second image.
Optionally, before the semantic analysis is performed on the first text information and the second text information, the method further includes: acquiring the language of the first text information and the language of the second text information; and when the two languages differ, unifying the language of the first text information and the language of the second text information.
Optionally, a video progress identifier corresponds to the current video, and prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video includes: acquiring the video position of the video progress identifier corresponding to the first target image in the current video; and displaying the image progress of the second target image in the second image in an associated area of that video position.
According to a second aspect of the present disclosure, there is provided an image progress prompting device, including: a first image acquisition module configured to process a to-be-processed video acquired from a current video into a first image; an image matching module configured to determine a second image corresponding to the current video according to a video identifier of the current video and to match the first image with the second image; and an image progress prompting module configured to prompt, when a first target image of the first image matches a second target image of the second image, the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to implement the image progress prompting method of the first aspect of the embodiments described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
one or more processors; and
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image progress prompting method of the first aspect of the embodiments described above.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
according to the image progress prompting method provided by the embodiments of the present disclosure, a user can learn the image progress corresponding to a video in time, without having to search an image collection piece by piece for the content they want to read, and without the risk of untimely feedback when asking other users; this saves the user's time, lets the user reach the desired image content promptly, and thereby improves the viewing experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort. In the drawings:
FIG. 1 schematically illustrates a schematic diagram of an exemplary system architecture of a method for prompting image progress in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of prompting image progress in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart for capturing a video to be processed and processing the video to be processed into a first image in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flowchart of acquiring a second sub-image corresponding to a second image using image segmentation and matching a first image with the second sub-image in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of separating a second image into a plurality of second sub-images in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flowchart of acquiring a corresponding second image subset according to the current video progress and matching a first image with the images in the second image subset in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a flowchart of determining that a first target image of a first image matches a second target image of a second image when the matching degree between the first target image and the second target image is greater than or equal to a first preset threshold in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a flowchart of matching a first target image of a first image with a second target image of a second image when the similarity of the first text information and the second text information is greater than or equal to a second preset threshold in an exemplary embodiment of the present disclosure;
FIG. 9 schematically illustrates a flowchart of unified processing of a language of a first text message and a language of a second text message in an exemplary embodiment of the present disclosure;
FIG. 10 schematically illustrates a flowchart of displaying an image progress of a second target image in a second image in an associated region of a video location where a video progress marker corresponding to the first target image is located in an exemplary embodiment of the present disclosure;
FIG. 11 schematically illustrates a schematic diagram of displaying an image progress of a second target image in the second image in an associated area of a video position where a video progress marker corresponding to the first target image is located in an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a schematic diagram of displaying a skip identifier of an image progress in an associated region of a video position where a video progress identifier corresponding to a first target image in a current video is located in an exemplary embodiment of the present disclosure;
FIG. 13 schematically illustrates a composition diagram of a prompting device for image progress in an exemplary embodiment of the present disclosure;
FIG. 14 schematically shows a schematic diagram of a computer system suitable for implementing the electronic device of the exemplary embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the image progress prompting method of embodiments of the present disclosure can be applied.
As shown in fig. 1, system architecture 1000 may include one or more of terminal devices 1001, 1002, 1003, a network 1004, and a server 1005. The network 1004 serves as a medium for providing a communication link between the terminal apparatuses 1001, 1002, 1003 and the server 1005. The network 1004 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 1005 may be a server cluster formed by a plurality of servers.
A user can interact with a server 1005 via a network 1004 using terminal apparatuses 1001, 1002, 1003 to receive or transmit messages or the like. The terminal devices 1001, 1002, 1003 may be various electronic devices having a display screen including, but not limited to, smartphones, tablet computers, portable computers, desktop computers, and the like. In addition, the server 1005 may be a server providing various services.
In one embodiment, the execution subject of the image progress prompting method of the present disclosure may be the server 1005. Taking the image progress prompting method as an example, the server 1005 may receive a video sent by the terminal devices 1001, 1002, 1003, acquire a to-be-processed video from the current video, process the to-be-processed video into a first image, determine a second image corresponding to the current video according to the video identifier of the current video, and match the first image with the second image. When a first target image of the first image matches a second target image of the second image, the server obtains the image progress of the second target image and returns it to the terminal devices 1001, 1002, 1003, which then prompt the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video. Alternatively, the image progress prompting method of the present disclosure may be performed by the terminal devices 1001, 1002, 1003 themselves, so that the process of prompting the image progress of the second target image at the video position corresponding to the first target image is implemented on the terminal side.
In addition, the image progress prompting method of the present disclosure may also be implemented jointly by the terminal devices 1001, 1002, 1003 and the server 1005. For example, the terminal devices 1001, 1002, 1003 may acquire the current video, acquire a to-be-processed video from it, process the to-be-processed video into a first image, determine a second image corresponding to the current video according to the video identifier of the current video, and match the first image with the second image; the matched first target image and second target image are then sent to the server 1005, so that the server 1005 can prompt the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video.
With the rapid development of the film and television entertainment industry, a large number of film and television works of different types have emerged to meet the public's entertainment demands. In recent years, animation has also developed rapidly and has been embraced by audiences of all ages. An animation is typically adapted from a comic, so the progress of the animation generally lags behind that of the comic.
Because the animation is an adaptation, it inevitably deviates from the comic to some extent. While watching the animation, a user may want to read the corresponding part of the comic to learn how the comic depicts the current plot, or, after finishing the latest episode of the animation, may want to read the comic corresponding to the animation's progress. In either case, the user can only search chapter by chapter through the comic corresponding to the animation, or ask other users who have already read the comic, to find the comic chapter they want to read.
However, in the related art, asking other users who have read the comic about the chapter one wants to read, or searching for it chapter by chapter, takes a great deal of time, and other users may not reply in time, so the user's viewing experience is poor.
In an example embodiment of the present disclosure, a to-be-processed video acquired from a current video is processed into a first image; a second image corresponding to the current video is determined according to the video identifier of the current video and matched with the first image; and when a first target image of the first image matches a second target image of the second image, the image progress of the second target image in the second image is prompted at the video position corresponding to the first target image in the current video. Referring to fig. 2, the image progress prompting method in this exemplary embodiment may include the following steps:
Step S210: processing a video to be processed acquired from a current video into a first image;
Step S220: determining a second image corresponding to the current video according to the video identification of the current video, and matching the first image with the second image;
Step S230: when the first target image of the first image matches the second target image of the second image, prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video.
According to the image progress prompting method provided by the embodiments of the present disclosure, when the first target image matches the second target image, the image progress corresponding to the video can be prompted. The user can thus learn the image progress corresponding to the video in time, without having to search an image collection piece by piece for the content they want to read, and without the risk of untimely feedback when asking other users. This saves the user's time, lets the user reach the desired image content promptly, and thereby improves the viewing experience.
Next, steps S210 to S230 of the image progress prompting method in the present exemplary embodiment will be described in more detail with reference to fig. 1 and the embodiments.
Step S210, processing a video to be processed acquired from a current video into a first image;
In one example embodiment of the present disclosure, the to-be-processed video acquired from the current video may be processed into a first image. Specifically, the to-be-processed video may be obtained from the current video; for example, a segment of a specific duration may be taken from the current video as the to-be-processed video, or the opening and ending credits of the current video may be removed and the remaining video used as the to-be-processed video. The present disclosure does not particularly limit the manner of acquiring the to-be-processed video from the current video. The number of first images may be one or more. Specifically, the to-be-processed video may be split into a plurality of frame images by video frame and these frame images used as the first images, or the frame images may be further processed and the resulting images used as the first images. Video processing may also be applied to the to-be-processed video first; for example, a black-and-white effect may be added so that the to-be-processed video becomes a black-and-white video, which is then split into frame images that are used as the first images. The present disclosure does not particularly limit the specific manner of processing the to-be-processed video into the plurality of first images.
Further, the to-be-processed video may be processed into a plurality of black-and-white line-draft images. Specifically, after the to-be-processed video is split into a plurality of frame images, one or more of desaturation, inversion, color dodge, minimum filtering, Gaussian blur, and high-contrast retention are applied to the frame images to obtain the black-and-white line-draft images.
For example, after a segment of the animation is obtained through the above steps, the animation may be split into frame images by video frame; each frame image may be converted to a black-and-white frame image, the black-and-white frame image inverted, the inverted image blended with a color-dodge operation, and the result blurred with a Gaussian blur whose radius is adjusted to obtain a clear black-and-white line-draft image.
For another example, after a segment of the animation is obtained through the above steps, the animation may be split into frame images by video frame; each frame image may be desaturated, then passed through a minimum filter, then through a high-contrast-retention operation to deepen the line definition of the line draft, and finally the color levels may be adjusted to remove stray tones that affect the black-and-white line-draft image.
Further, when a plurality of frame images corresponding to the video are processed into a plurality of first images, the plurality of frame images may be adjusted. For example, the tone or the like of a plurality of frame images may be adjusted. The specific manner of adjusting the plurality of frame images is not particularly limited in the present disclosure.
Further, for a plurality of frame images, each frame image may be processed separately, or a plurality of frame images may be processed in batch.
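As a concrete illustration of the frame-splitting and line-draft processing described above, the following minimal sketch uses OpenCV to apply the desaturate, invert, color-dodge, and Gaussian-blur sequence to sampled frames; the file name, sampling interval, and blur kernel size are illustrative assumptions rather than values from this disclosure.

```python
import cv2

def frame_to_lineart(frame_bgr, ksize=21):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)       # desaturate
    inverted = 255 - gray                                     # invert
    blurred = cv2.GaussianBlur(inverted, (ksize, ksize), 0)   # Gaussian blur
    # Color-dodge blend: flat regions are pushed to white while strong
    # edges stay dark, yielding a black-and-white line-draft image.
    return cv2.divide(gray, 255 - blurred, scale=256)

cap = cv2.VideoCapture("to_be_processed.mp4")  # hypothetical input clip
first_images, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 10 == 0:                          # sample every tenth frame
        first_images.append(frame_to_lineart(frame))
    idx += 1
cap.release()
```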
In one example embodiment of the present disclosure, a video to be processed may be acquired in response to an image progress prompt operation for a current video, and converted into a first image. Referring to fig. 3, acquiring a video to be processed and processing the video to be processed into a first image may include the following steps S310 to S320:
Step S310: in response to an image progress prompt operation for the current video, acquiring the video to be processed;
Step S320: converting the video to be processed into a first image.
In one example embodiment of the present disclosure, the to-be-processed video is acquired in response to an image progress prompt operation for the current video. The to-be-processed video may be a complete video; for example, for a certain animation, the whole animation may be used as the to-be-processed video. Alternatively, the to-be-processed video may be a portion taken from the complete current video; for example, a video of a preset duration may be taken as the to-be-processed video, such as a 5-second clip taken from the complete video in response to the image progress prompt operation, or the opening and ending themes of the animation may be removed and the remaining video used as the to-be-processed video. The present disclosure does not particularly limit the manner of acquiring the to-be-processed video.
For example, when a user wants to know the comic content corresponding to a certain point in the animation, the to-be-processed video can be intercepted from a time range corresponding to that time point: a range before the time point, a range after it, or a range containing it. The present disclosure does not particularly limit how the to-be-processed video is intercepted around a given time point.
In one example embodiment of the present disclosure, the image progress prompt operation is used to learn the image progress corresponding to the current video. For example, the operation may act on an image prompt control in the current video playing page, in which case the image progress of the second target image is displayed in that page; or, when the mouse moves to the progress bar of the current video, an image progress prompt control may be displayed at the corresponding position of the progress bar, and acting on that control displays the image progress of the second target image. The image progress prompt operation may be a key operation, a touch operation, a voice control operation, and the like, where touch operations include sliding, pressing, gesture, long-press, clicking, and dragging touch operations. The present disclosure does not particularly limit the specific form of the image progress prompt operation.
Through the steps S310 to S320 described above, the video to be processed may be acquired in response to the image progress prompt operation for the current video, and converted into the first image.
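One plausible realization of steps S310 to S320 is sketched below: when the prompt operation fires at playback time t, the frames in a short window around t are grabbed as the to-be-processed video. The window length, file name, and function name are assumptions for illustration.

```python
import cv2

def frames_around(path, t_seconds, window=5.0):
    """Grab the frames within a `window`-second range around `t_seconds`."""
    cap = cv2.VideoCapture(path)
    start = max(0.0, t_seconds - window / 2)
    cap.set(cv2.CAP_PROP_POS_MSEC, start * 1000)
    frames = []
    while cap.get(cv2.CAP_PROP_POS_MSEC) <= (start + window) * 1000:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

clip = frames_around("current_video.mp4", t_seconds=312.0)  # prompt at 5:12
```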
Step S220, determining a second image corresponding to the current video according to the video identification of the current video, and matching the first image with the second image;
In one example embodiment of the present disclosure, the second image corresponding to the current video may be determined according to the video identifier of the current video. Specifically, the video identifier may be the name, number, tag, and so on of the video. For example, if the video is an episode of the animation "Cuttlefish King", the video identifier of the current video may be the name "Cuttlefish King", and the corresponding "Cuttlefish King" comic can be determined from that name. The present disclosure does not particularly limit the manner of determining the second image corresponding to the current video according to the video identifier.
Specifically, the second image corresponding to the current video is an image associated with the video: the current video may be adapted from the second image, or the second image may be adapted from the current video. For example, the current video is an episode of the "Cuttlefish King" animation and the corresponding second images are pages of the "Cuttlefish King" comic; or the current video is a recorded clip of the "Fire Shadow Holder" animation and the corresponding second images are pages of the "Fire Shadow Holder" comic.
In one example embodiment of the present disclosure, after the second image corresponding to the current video is determined according to the video identifier, the first image may be matched with the second image. When matching, the content, features, structure, relations, texture, and gray scale of the first and second images may be compared, and their similarity and consistency analyzed, to determine whether a given first image and second image match. For example, a similarity measure, a normalized gray-scale matching method, or an image feature matching method may be employed; one possibility is to compare the line similarity of the first image and the second image. The present disclosure does not particularly limit the manner of matching the first image with the second image.
Further, the plurality of second images corresponding to the current video may include comic images, where the comic images may include color comic images and black-and-white comic images.
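As one hedged illustration of the image feature matching method mentioned above, the sketch below scores a first image against a candidate second image using ORB descriptors and converts the share of close matches into a 0-100 matching degree; the feature count and distance cutoff are illustrative assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def matching_degree(first_img, second_img):
    """Return a 0-100 score for how well two grayscale images match."""
    _, d1 = orb.detectAndCompute(first_img, None)
    _, d2 = orb.detectAndCompute(second_img, None)
    if d1 is None or d2 is None:
        return 0.0
    matches = bf.match(d1, d2)
    good = [m for m in matches if m.distance < 40]  # Hamming-distance cutoff
    return 100.0 * len(good) / max(len(d1), len(d2))
```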
In one example embodiment of the present disclosure, image segmentation may be utilized to obtain a second sub-image corresponding to a second image and match the first image to the second sub-image. Referring to fig. 4, the steps of obtaining a second sub-image corresponding to the second image by image segmentation and matching the first image with the second sub-image may include the following steps S410 to S420:
Step S410, obtaining a second sub-image corresponding to the second image by image segmentation;
step S420, the first image is matched with the second sub-image.
In one example embodiment of the present disclosure, after acquiring the plurality of second images corresponding to the video, the second sub-images corresponding to the second images may be acquired using image segmentation. In particular, the second image may be segmented into one or more second sub-images by image segmentation. For example, as shown in fig. 5, the second image 500 may be partitioned into a plurality of second sub-images 510. Note that, the specific manner of acquiring the second sub-image corresponding to the second image by using image segmentation is not particularly limited in this disclosure.
Step S420, the first image is matched with the second sub-image.
In an example embodiment of the present disclosure, after the second sub-images corresponding to the second image are obtained through the above steps, the first image may be matched with the second sub-images. For example, in some comics, one page may contain several panels, so during matching, the first image obtained by processing the to-be-processed video needs to be matched against each panel of the comic page.
Through the steps S410 to S420, the second sub-image corresponding to the second image may be obtained by image segmentation, and the first image and the second sub-image may be matched.
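A minimal sketch of one way to implement the panel segmentation of steps S410 to S420 is shown below, assuming comic pages with a light background: the page is binarized, large external contours are found, and each bounding box is cropped as one second sub-image. The threshold and minimum-area ratio are illustrative assumptions.

```python
import cv2

def split_into_panels(page_bgr, min_area_ratio=0.02):
    """Crop each sufficiently large panel of a comic page as a sub-image."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    page_area = page_bgr.shape[0] * page_bgr.shape[1]
    panels = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area_ratio * page_area:  # skip specks and gutters
            panels.append(page_bgr[y:y + h, x:x + w])
    return panels
```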
In an example embodiment of the present disclosure, the second image may be divided into a plurality of second image subsets, a video progress corresponding to the current video is obtained, the corresponding second image subset is obtained according to the current video progress, and the first image is matched with the images in the second image subset. Referring to fig. 6, the step of obtaining a corresponding second subset of images according to the current video progress, and matching the first image with the images in the second subset of images may include the following steps S610 to S630:
step S610, dividing the second image into a plurality of second image subsets;
step S620, acquiring a video progress corresponding to the current video, and acquiring a corresponding second image subset according to the current video progress;
step S630, matching the first image with the images in the second image subset.
In one example embodiment of the present disclosure, the second image may be divided into a plurality of second image subsets, and the corresponding subset is acquired according to the video progress. Each second image subset corresponds to a video progress of the current video. Specifically, the video progress of the current video may include the episode number of the video, and the second image may then be divided into second image subsets by episode. For example, if the comic content corresponding to the twelfth episode of an animation is chapters 114 to 118, the images of chapters 114 to 118 form one second image subset, and the video progress corresponding to that subset is the twelfth episode. The present disclosure does not particularly limit the manner of dividing the second image into the plurality of second image subsets.
Through the steps S610 to S630, the second image may be divided into a plurality of second image subsets, the video progress corresponding to the current video is obtained, the corresponding second image subset is obtained according to the current video progress, and the first image and the images in the second image subset are matched.
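The episode-to-chapter correspondence of steps S610 to S630 can be represented as a plain lookup table, as in the hedged sketch below; the mapping mirrors the example above (the twelfth episode covers chapters 114 to 118) and all names and numbers are illustrative assumptions.

```python
# Chapter ranges per episode; episode 12 -> chapters 114-118 as in the
# example above. Real mappings would be maintained per title.
EPISODE_TO_CHAPTERS = {12: range(114, 119)}

def second_image_subset(pages_by_chapter, episode):
    """Collect the comic-page images whose chapters correspond to an episode."""
    chapters = EPISODE_TO_CHAPTERS.get(episode, ())
    return [page for ch in chapters for page in pages_by_chapter.get(ch, [])]
```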
In step S230, when the first target image of the first image matches with the second target image of the second image, the image progress of the second target image in the second image is prompted at the video position corresponding to the first target image in the current video.
In an example embodiment of the present disclosure, a first image and a second image are matched, a first target image of the first image and a second target image of the second image that are successfully matched may be obtained, and an image progress of the second target image in the second image is prompted at a video position corresponding to the first target image in a current video. Specifically, the first target image is a video frame of the current video, that is, the video position corresponding to the first target image refers to the position of the video frame in the current video. The image progress of the second image may be used to indicate a position of the second target image in the second image, that is, the position of the second target image in the second image may be known through the image progress of the second target image. The first target image may be any one of the plurality of first images, and the second target image may be any one of the plurality of second images.
For example, the image progress of the second target image may include an image chapter of the second target image in the plurality of second images, or the image progress of the second target image may include an image page number of the second target image in the plurality of second images, or the image progress of the second target image may include an image number of the second target image in the plurality of second images. Note that, the specific type of the image progress of the second target image is not particularly limited in the present disclosure.
In one example embodiment of the present disclosure, the image progress of the second target image in the second image may be prompted at the video position corresponding to the first target image in the current video. The image progress of the second target image may be a single progress or several. For example, if only one pair of first and second target images matched in the above step, only one image progress is obtained; if several pairs matched, the image progress of several second target images can be obtained.
In one example embodiment of the present disclosure, an image progress of a second target image in a second image may be prompted at a video position corresponding to a first target image in a video page corresponding to a current video. The video page corresponding to the video may include a web page, a client interface, and the like. It should be noted that, the specific form of the video page corresponding to the current video is not limited in this disclosure.
In an example embodiment of the present disclosure, a degree of matching of a first target image and a second target image may be acquired, and when the degree of matching of the first target image and the second target image is greater than or equal to a first preset threshold, the first target image of the first image and the second target image of the second image are determined to match. Referring to fig. 7, when the matching degree of the first target image and the second target image is greater than or equal to the first preset threshold, determining that the first target image of the first image matches the second target image of the second image may include the following steps S710 to S720:
step S710, obtaining the matching degree of the first target image and the second target image;
in one example embodiment of the present disclosure, a degree of matching of a first target image with a second target image may be obtained. Specifically, when a plurality of first images and second images are matched through the above steps, the matching degree of the first images and the second images can be obtained. Note that, the specific manner of acquiring the matching degree of the first target image and the second target image is not particularly limited in the present disclosure.
For example, when the plurality of first images obtained by video processing are black-and-white line-draft images and the second images are black-and-white comic images, the image similarity between a line-draft image and a comic image may be computed and used as the matching degree.
In step S720, when the matching degree of the first target image and the second target image is greater than or equal to the first preset threshold, it is determined that the first target image of the first image matches the second target image of the second image.
In an example embodiment of the present disclosure, after the matching degree of the first target image and the second target image is obtained through the above steps, a first preset threshold may be obtained, and when the matching degree of the first target image and the second target image is greater than or equal to the first preset threshold, it is determined that the first target image of the first image and the second target image of the second image are matched. Specifically, the matching degree may be used to represent the similarity degree between the images, and when the matching degree between the first target image and the second target image is greater than or equal to the first preset threshold, the similarity degree between the two images is considered to be higher, and at this time, it may be determined that the first target image and the second target image are matched. The first preset threshold value can be adjusted according to different videos and different matching methods. In addition, the first preset threshold may be stored in the terminal device or the server, and may be invoked in the terminal device or the server when the first preset threshold needs to be used. It should be noted that, the specific value of the first preset threshold is not particularly limited in this disclosure.
For example, if the matching degree of the first target image and the second target image is 85 and the first preset threshold is 80, the matching degree exceeds the threshold, and it may be determined that the first target image and the second target image match.
Through the steps S710 to S720, the matching degree of the first target image and the second target image may be obtained, and when the matching degree of the first target image and the second target image is greater than or equal to the first preset threshold, it is determined that the first target image of the first image and the second target image of the second image match.
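A self-contained sketch of the threshold decision in steps S710 to S720 follows, using normalized cross-correlation as one possible matching degree on the 0-100 scale of the example above; the threshold value of 80 and the correlation-based scoring are illustrative assumptions.

```python
import cv2

FIRST_PRESET_THRESHOLD = 80.0  # the example value used above; an assumption

def match_degree(first_img, second_img):
    # Resize so the two grayscale images align, score them with normalized
    # cross-correlation, and map the [-1, 1] result onto a 0-100 degree.
    second = cv2.resize(second_img, (first_img.shape[1], first_img.shape[0]))
    score = cv2.matchTemplate(first_img, second, cv2.TM_CCOEFF_NORMED)[0][0]
    return (score + 1.0) / 2.0 * 100.0

def is_match(first_img, second_img):
    return match_degree(first_img, second_img) >= FIRST_PRESET_THRESHOLD
```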
In an example embodiment of the present disclosure, first text information in the first target image and second text information in the second target image may be acquired; semantic analysis is performed on the two pieces of text information, and their similarity is determined according to the result of the semantic analysis. When the similarity between the first text information and the second text information is greater than or equal to a second preset threshold, the first target image of the first image matches the second target image of the second image. Referring to fig. 8, this may include the following steps S810 to S830:
Step S810, acquiring first text information in a first target image and acquiring second text information in a second target image;
In one example embodiment of the present disclosure, the first text information in the first target image and the second text information in the second target image may be acquired by a text recognition technique. For example, after the target image is acquired, the text area of the image may be located, the area separated into individual characters, each character recognized, and the recognized characters joined in reading order to obtain the text information. The present disclosure does not particularly limit the manner of acquiring the text information in the target image.
Furthermore, before character recognition, the target image can be put through preprocessing steps such as tilt correction and sharpening to improve recognition accuracy.
For example, the first text information in the first target image is the subtitle text in the animation video, and the second text information in the second target image is the dialogue text in the comic.
Step S820, carrying out semantic analysis on the first text information and the second text information, and determining the similarity of the first text information and the second text information according to the result of the semantic analysis;
in an example embodiment of the present disclosure, after the first text information in the first target image and the second text information in the second target image are obtained through the above steps, semantic analysis may be performed on the first text information and the second text information, and the similarity between the first text information and the second text information may be determined according to the result of the semantic analysis. Specifically, the similarity between the first text information and the second text information may be used to indicate the similarity between the meaning of the sentence expressed by the first text information and the meaning of the sentence expressed by the second text information.
Specifically, the semantic analysis of the first text information and the second text information may be a linear comparison, for example comparing the number of identical words in the two pieces of text; or an LDA training algorithm (Linear Discriminant Analysis) may be used to perform the semantic analysis and determine the similarity between the first text information and the second text information; or the semantic analysis may be performed with word vectors. The present disclosure does not particularly limit the manner of performing semantic analysis on the first text information and the second text information.
In step S830, when the similarity between the first text information and the second text information is greater than or equal to the second preset threshold, it is determined that the first target image of the first image matches the second target image of the second image.
In an example embodiment of the present disclosure, after the similarity between the first text information and the second text information is obtained through the above steps, a second preset threshold may be obtained, and when the similarity is greater than or equal to the second preset threshold, it is determined that the first target image of the first image matches the second target image of the second image. The second preset threshold represents how similar the text information corresponding to the two images must be: when the similarity of the first text information and the second text information is greater than or equal to the second preset threshold, the images corresponding to the two pieces of text information are considered sufficiently similar, and the first target image can be determined to match the second target image. The second preset threshold can be adjusted for different semantic analysis methods. In addition, the second preset threshold may be stored in the terminal device or the server and called from there when needed. The specific value of the second preset threshold is not particularly limited in this disclosure.
Through the steps S810 to S830, the first text information in the first target image may be acquired, the second text information in the second target image may be acquired, the first text information and the second text information may be subjected to semantic analysis, the similarity between the first text information and the second text information may be determined according to the result of the semantic analysis, and when the similarity between the first text information and the second text information is greater than or equal to the second preset threshold, the first target image of the first image and the second target image of the second image may be determined to be matched.
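The sketch below walks through steps S810 to S830 under simplifying assumptions: pytesseract stands in for the text recognition technique, and a word-overlap ratio stands in for the semantic analysis (a real system might use word vectors instead); the language setting and threshold value are illustrative.

```python
import pytesseract

SECOND_PRESET_THRESHOLD = 0.6  # illustrative value

def words_of(img):
    """Recognize the text in an image and return its set of words."""
    text = pytesseract.image_to_string(img, lang="chi_sim+eng")
    return set(text.split())

def text_similarity(first_img, second_img):
    a, b = words_of(first_img), words_of(second_img)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)  # Jaccard overlap of shared words

def texts_match(first_img, second_img):
    return text_similarity(first_img, second_img) >= SECOND_PRESET_THRESHOLD
```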
In an example embodiment of the present disclosure, a matching degree of a first target image and a second target image may be obtained, and a similarity of first text information and second text information may be obtained, where when the matching degree of the first target image and the second target image is greater than or equal to a first preset threshold and the similarity of the first text information and the second text information is greater than or equal to a second preset threshold, it is determined that the first target image of the first image and the second target image of the second image are matched.
In an example embodiment of the present disclosure, the language of the first text information and the language of the second text information may be obtained, and when the language of the first text information is different from the language of the second text information, unified processing is performed on the language of the first text information and the language of the second text information. Referring to fig. 9, the unified processing of the language of the first text information and the language of the second text information may include the following steps S910 to S920:
Step S910, obtaining the language of the first text information and the language of the second text information;
in step S920, when the language of the first text information is different from the language of the second text information, unified processing is performed on the language of the first text information and the language of the second text information.
In an example embodiment of the present disclosure, the language of the first text information and the language of the second text information may be acquired and unified. Specifically, the language of the text in the first image obtained from the video may differ from the language of the text in the corresponding second image, in which case the two languages need to be unified. The first text information may be converted into the language of the second text information, or the second text information into the language of the first text information, or both may be converted into some third language, as long as the adjusted languages of the two pieces of text information are consistent. The unification may be performed by translating the text information. The present disclosure does not particularly limit the manner of unifying the language of the first text information and the language of the second text information.
Through the steps S910 to S920, the languages of the first text information and the second text information can be obtained, and when the languages of the first text information and the second text information are different, unified processing is performed on the languages of the first text information and the second text information.
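One hedged way to realize steps S910 to S920 is sketched below: detect each text's language with langdetect and, when the languages differ, translate one into the other's language before comparison; `translate` here is a hypothetical helper standing in for any translation service.

```python
from langdetect import detect

def unify_language(first_text, second_text):
    lang1, lang2 = detect(first_text), detect(second_text)
    if lang1 != lang2:
        # `translate` is a hypothetical stand-in for a translation service.
        second_text = translate(second_text, target=lang1)
    return first_text, second_text
```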
In an example embodiment of the present disclosure, a video position where a video progress identifier corresponding to a first target image in a current video is located may be obtained, and an image progress of a second target image in the second image is displayed in an associated area of the video position where the video progress identifier corresponding to the first target image is located. Referring to fig. 10, displaying the image progress of the second target image in the second image in the associated area of the video position where the video progress identifier corresponding to the first target image is located may include the following steps S1010 to S1020:
step S1010, obtaining a video position of a video progress identifier corresponding to a first target image in a current video;
in step S1020, displaying the image progress of the second target image in the second image in the associated area of the video position where the video progress identifier corresponding to the first target image is located.
In an example embodiment of the present disclosure, the video position where the video progress identifier corresponding to the first target image in the current video is located may be obtained. Specifically, when the current video is played, the current playing progress may be indicated by the video progress identifier. Because the first target image is obtained by processing a certain video frame of the current video, the video position corresponding to that video frame can be obtained, and the image progress of the second target image in the second image is then displayed in the associated area of the video position where the video progress identifier corresponding to the first target image is located. It should be noted that the present disclosure does not particularly limit the specific manner of acquiring this video position.
For example, as shown in fig. 11, the image progress 1110 of the second target image in the second image, "chapter 243 of the cartoon", is displayed in the associated area 1140 of the video position 1120 where the video progress identifier 1130 corresponding to the first target image is located.
Through the steps S1010 to S1020, the video position where the video progress identifier corresponding to the first target image in the current video is located may be obtained, and the image progress of the second target image in the second image is displayed in the associated area of the video position where the video progress identifier corresponding to the first target image is located.
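By way of illustration only, the sketch below maps the matched frame's index to a horizontal offset on the progress bar and renders the image progress label in an area associated with that offset. The linear frame-to-pixel mapping and the `render_label` stand-in are assumptions of the sketch, not part of the disclosure.

```python
# Illustrative sketch of steps S1010-S1020; UI details are assumed.
def progress_marker_position(frame_index: int, total_frames: int,
                             bar_width_px: int) -> int:
    # Step S1010: map the matched frame to a position on the progress bar.
    fraction = frame_index / max(total_frames, 1)
    return round(fraction * bar_width_px)

def render_label(text: str, x: int, anchor: str) -> None:
    # Stand-in for a real UI call that draws a label near the marker.
    print(f"[{anchor}, x={x}px] {text}")

def show_image_progress(frame_index: int, total_frames: int,
                        bar_width_px: int, chapter_label: str) -> None:
    # Step S1020: display the image progress (e.g. "chapter 243 of the
    # cartoon") in an associated area -- here, directly above the marker.
    x = progress_marker_position(frame_index, total_frames, bar_width_px)
    render_label(chapter_label, x, anchor="above")
```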
In an example embodiment of the present disclosure, a skip identifier of the image progress may be displayed at the video position where the video progress identifier corresponding to the first target image in the current video is located. Specifically, since the first target image is obtained by processing a certain frame of the current video, the video position of that frame on the video progress identifier can be obtained, and the skip identifier of the image progress is displayed in the associated area of that video position. The skip identifier may take the shape of a circle, an ellipse, a triangle, or the like; the present disclosure does not particularly limit the associated area or the shape of the skip identifier. For example, the associated area may be the area directly above the video progress identifier, or the area to its upper left or upper right, which is not particularly limited in this embodiment.
For example, as shown in fig. 12, the skip identifier 1240 of the image progress, "watch the cartoon", may be displayed in the associated area 1210 of the video position 1220 where the video progress identifier 1230 corresponding to the first target image in the current video is located.
Further, after a progress skip operation on the skip identifier is received, the image position of the image progress in the image order may be acquired, and the second images may be opened from that image position. Specifically, the second images have an image order, that is, the plurality of second images are arranged in a certain sequence. The image progress may be used to indicate the image position of the second target image among the second images, and the plurality of second images can then be opened from the acquired image position. The progress skip operation may include a key operation, a touch operation, voice control, and other manners, where the touch operation may include a sliding touch operation, a pressing touch operation, a gesture touch operation, a long-press touch operation, a clicking touch operation, a dragging touch operation, and the like. It should be noted that the specific form of the progress skip operation is not particularly limited in the present disclosure.
For example, the image progress obtained through the above steps is chapter 243; the image position of chapter 243 in the image order, page 256, can then be acquired, and the plurality of second images can be opened from page 256.
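By way of illustration only, a minimal sketch of such a jump handler follows. The chapter-to-page index and the reader-opening call are hypothetical stand-ins; in practice the image position would come from whatever ordering the image set actually provides.

```python
# Illustrative sketch of handling the progress skip operation.
# The chapter-to-page index below is hypothetical example data.
CHAPTER_START_PAGE = {243: 256}

def open_reader_at(page: int) -> None:
    # Stand-in for the real call that opens the plurality of second images.
    print(f"opening the second images from page {page}")

def on_progress_jump(chapter: int) -> int:
    # Acquire the image position of the image progress in the image order,
    # then open the second images from that position.
    start_page = CHAPTER_START_PAGE[chapter]
    open_reader_at(start_page)
    return start_page

# Matching the example above: on_progress_jump(243) opens from page 256.
```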
In an example embodiment of the present disclosure, the video to be processed acquired from the current video may be processed into a first image, and a second image corresponding to the current video may be determined according to the video identification of the current video and matched against the first image. When a first target image of the first image matches a second target image of the second image, the image progress of the second target image in the second image is prompted at the video position corresponding to the first target image in the current video.
According to the image progress prompting method provided by the embodiments of the present disclosure, a user can learn the image progress corresponding to a video in time, without searching an image set one by one for the content to be viewed and without the delayed feedback that may come from asking other users. This saves the user's time and lets the desired image content be viewed promptly, thereby improving the user's viewing experience.
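By way of illustration only, the sketch below assembles the preceding pieces into one flow, reusing the hypothetical `images_match` and `show_image_progress` helpers from the earlier sketches; the dictionary layout of the inputs is likewise an assumption of the sketch.

```python
# Illustrative end-to-end sketch; input structure and helpers are assumed.
def prompt_image_progress(current_video: dict, second_images: list,
                          bar_width_px: int = 640):
    # current_video carries the first images sampled from the video to be
    # processed; second_images were resolved via the video identification.
    for first in current_video["sampled_frames"]:
        for second in second_images:
            if images_match(first["feat"], second["feat"],
                            first["text_feat"], second["text_feat"]):
                # Prompt the image progress at the matched frame's position.
                show_image_progress(first["frame_index"],
                                    current_video["total_frames"],
                                    bar_width_px, second["chapter_label"])
                return second["chapter_label"]
    return None  # no match; nothing to prompt
```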
It is noted that the above-described figures are merely schematic illustrations of processes involved in a method according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
In addition, in the exemplary embodiments of the disclosure, an image progress prompting device is also provided. Referring to fig. 13, the image progress prompting device 1300 includes: a first image acquisition module 1310, an image matching module 1320, and an image progress prompt module 1330.
The first image acquisition module is used for processing the video to be processed acquired from the current video into a first image; the image matching module is used for determining a second image corresponding to the current video according to the video identification of the current video and matching the first image with the second image; and the image progress prompting module is used for prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video when the first target image of the first image is matched with the second target image of the second image.
Optionally, the apparatus further comprises: the progress prompt operation response unit is used for responding to the image progress prompt operation aiming at the current video and acquiring the video to be processed; the video to be processed is converted into a first image.
Optionally, the second image includes a plurality of second sub-images, and the first image is matched with the second image, and the apparatus further includes: the second sub-image acquisition unit is used for acquiring a second sub-image corresponding to the second image by utilizing image segmentation; and the image matching unit is used for matching the first image with the second sub-image.
Optionally, the first image is matched with the second image, and the apparatus further includes: an image subset acquisition module for dividing the second image into a plurality of second image subsets; the second image subset has a corresponding relation with the video progress of the current video; the video progress acquisition unit is used for acquiring a video progress corresponding to the current video and acquiring a corresponding second image subset according to the current video progress; and the image matching unit is used for matching the first image with the images in the second image subset.
Optionally, the first target image of the first image matches the second target image of the second image, and the apparatus further comprises: the matching degree acquisition unit is used for acquiring the matching degree of the first target image and the second target image; the first matching determining unit is used for determining that the first target image of the first image is matched with the second target image of the second image when the matching degree of the first target image and the second target image is larger than or equal to a first preset threshold value.
Optionally, the first target image of the first image matches the second target image of the second image, and the apparatus further comprises: the text information acquisition unit is used for acquiring first text information in the first target image and acquiring second text information in the second target image; the semantic analysis unit is used for carrying out semantic analysis on the first text information and the second text information and determining the similarity of the first text information and the second text information according to the result of the semantic analysis; and the second matching determining unit is used for determining that the first target image of the first image is matched with the second target image of the second image when the similarity of the first text information and the second text information is larger than or equal to a second preset threshold value.
Optionally, before the semantic analysis is performed on the first text information and the second text information, the apparatus further includes: the language acquisition unit is used for acquiring the languages of the first text information and the second text information; and the unified processing unit is used for carrying out unified processing on the languages of the first text information and the second text information when the languages of the first text information and the second text information are different.
Optionally, the current video corresponds to a video progress identifier, and at a video position corresponding to the first target image in the current video, the image progress of the second target image in the second image is prompted, and the device further includes: the video position acquisition unit is used for acquiring the video position of the video progress identifier corresponding to the first target image in the current video; and the associated area display module is used for displaying the image progress of the second target image in the second image in the associated area of the video position where the video progress identifier corresponding to the first target image is located.
Since each functional module of the image progress prompting device of the exemplary embodiment of the present disclosure corresponds to a step of the exemplary embodiment of the image progress prompting method described above, for details not disclosed in the embodiment of the apparatus of the present disclosure, please refer to the embodiment of the image progress prompting method described above in the present disclosure.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
In addition, in the exemplary embodiments of the present disclosure, an electronic device capable of implementing the above image progress prompting method is also provided.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
An electronic device 1400 according to such an embodiment of the present disclosure is described below with reference to fig. 14. The electronic device 1400 shown in fig. 14 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 14, the electronic device 1400 is embodied in the form of a general purpose computing device. Components of electronic device 1400 may include, but are not limited to: the at least one processing unit 1410, the at least one memory unit 1420, a bus 1430 connecting the different system components (including the memory unit 1420 and the processing unit 1410), and a display unit 1440.
The storage unit stores program code that is executable by the processing unit 1410, such that the processing unit 1410 performs the steps according to various exemplary embodiments of the present disclosure described in the "exemplary method" section of this specification. For example, the processing unit 1410 may perform step S210 as shown in fig. 2: processing the video into a first image; step S220: determining a second image corresponding to the video according to the video identification of the video, and matching the first image with the second image; and step S230: when the first target image of the first image matches the second target image of the second image, prompting the image progress of the second target image in the video page corresponding to the video.
As another example, the electronic device may implement the various steps shown in fig. 2.
The memory unit 1420 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 1421 and/or cache memory 1422, and may further include Read Only Memory (ROM) 1423.
The memory unit 1420 may also include a program/utility 1424 having a set (at least one) of program modules 1425, such program modules 1425 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 1430 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1400 may also communicate with one or more external devices 1470 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1400, and/or any device (e.g., router, modem, etc.) that enables the electronic device 1400 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1450. Also, electronic device 1400 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1460. As shown, the network adapter 1460 communicates with other modules of the electronic device 1400 via the bus 1430. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1400, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. An image progress prompting method is characterized by comprising the following steps:
processing a video to be processed acquired from a current video into a first image;
determining a second image corresponding to the current video according to the video identification of the current video, and matching the first image with the second image;
and when the first target image of the first image is matched with the second target image of the second image, prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video.
2. The method for prompting image progress according to claim 1, wherein the processing the video to be processed acquired from the current video into the first image includes:
responding to the image progress prompting operation aiming at the current video, and acquiring the video to be processed;
and converting the video to be processed into a first image.
3. The method of claim 1, wherein the second image includes a plurality of second sub-images, and the matching the first image with the second image includes:
obtaining a second sub-image corresponding to the second image by image segmentation;
and matching the first image with the second sub-image.
4. The method for prompting image progress according to claim 1, wherein said matching said first image with said second image comprises:
dividing the second image into a plurality of second image subsets; the second image subset has a corresponding relation with the video progress of the current video;
acquiring a video progress corresponding to the current video, and acquiring a corresponding second image subset according to the current video progress;
matching the first image with the images in the second image subset.
5. The method of claim 1, wherein the matching of the first target image of the first image with the second target image of the second image comprises:
acquiring the matching degree of the first target image and the second target image;
and when the matching degree of the first target image and the second target image is larger than or equal to a first preset threshold value, determining that the first target image of the first image is matched with the second target image of the second image.
6. The method of claim 1, wherein the matching of the first target image of the first image with the second target image of the second image comprises:
acquiring first text information in the first target image and acquiring second text information in the second target image;
carrying out semantic analysis on the first text information and the second text information, and determining the similarity of the first text information and the second text information according to the result of the semantic analysis;
and when the similarity of the first text information and the second text information is larger than or equal to a second preset threshold value, determining that the first target image of the first image is matched with the second target image of the second image.
7. The method of claim 6, further comprising, prior to said semantically analyzing said first and second textual information:
acquiring the language of the first text information and the language of the second text information;
and when the languages of the first text information and the second text information are different, uniformly processing the languages of the first text information and the second text information.
8. The method for prompting image progress according to claim 1, wherein the current video corresponds to a video progress identifier, and the prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video includes:
acquiring a video position of a video progress identifier corresponding to the first target image in the current video;
and displaying the image progress of the second target image in the second image in an associated area of the video position where the video progress identifier corresponding to the first target image is located.
9. A device for prompting progress of an image, the device comprising:
the first image acquisition module is used for processing the video to be processed acquired from the current video into a first image;
the image matching module is used for determining a second image corresponding to the current video according to the video identification of the current video and matching the first image with the second image;
and the image progress prompting module is used for prompting the image progress of the second target image in the second image at the video position corresponding to the first target image in the current video when the first target image of the first image is matched with the second target image of the second image.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, causes the processor to implement the prompting method of image progress according to any one of claims 1 to 8.
11. An electronic device, comprising:
one or more processors; and
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of prompting image progress of any of claims 1-8.
CN202111093486.4A 2021-09-17 2021-09-17 Image progress prompting method and device, storage medium and electronic equipment Active CN113792182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111093486.4A CN113792182B (en) 2021-09-17 2021-09-17 Image progress prompting method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111093486.4A CN113792182B (en) 2021-09-17 2021-09-17 Image progress prompting method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113792182A CN113792182A (en) 2021-12-14
CN113792182B true CN113792182B (en) 2023-08-08

Family

ID=78878800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111093486.4A Active CN113792182B (en) 2021-09-17 2021-09-17 Image progress prompting method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113792182B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013156946A (en) * 2012-01-31 2013-08-15 Toppan Printing Co Ltd Comic image data detecting device, comic image data detecting program, and comic image data detecting method
CN106469067A (en) * 2015-08-14 2017-03-01 广州市动景计算机科技有限公司 Context progress update method and device
CN109299326A (en) * 2018-10-31 2019-02-01 网易(杭州)网络有限公司 Video recommendation method and device, system, electronic equipment and storage medium
CN110413800A (en) * 2019-07-17 2019-11-05 上海掌门科技有限公司 It is a kind of that the method and apparatus of novel information is provided
CN110430253A (en) * 2019-07-30 2019-11-08 上海连尚网络科技有限公司 It is a kind of that the method and apparatus of novel update notification information is provided
EP3617906A1 (en) * 2018-08-29 2020-03-04 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and apparatus for updating information
CN112799561A (en) * 2021-02-05 2021-05-14 北京字节跳动网络技术有限公司 Information display method and device and computer storage medium
CN113238823A (en) * 2021-04-20 2021-08-10 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN113792182A (en) 2021-12-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant