CN113296660A - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN113296660A
Authority
CN
China
Prior art keywords
target
interface
image
screenshot
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011135433.XA
Other languages
Chinese (zh)
Inventor
张立焘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202011135433.XA priority Critical patent/CN113296660A/en
Publication of CN113296660A publication Critical patent/CN113296660A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Abstract

An embodiment of the present application provides an image processing method. When this method is used to capture a screenshot of the currently displayed target interface and obtain a target image, the target interface is captured first, and the target position of a target identifier in the target interface is then identified, where the target identifier is an external indication identifier provided by an external indication device. Next, a target area is determined from the target position and a screenshot policy set for the interface. Finally, screenshot processing is performed on the target interface according to the target area to obtain the target image. Because the target area is determined by the position on the target interface of the external indication identifier provided by the external indication device, the screenshot can be applied to a wider range of scenarios, solving the problem of poor applicability of prior-art screenshot approaches.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of computer technology, and in particular to two image processing methods and apparatuses, an electronic device, and a computer storage medium.
Background
Conferencing has become an important mode of communication in modern work. During work, people conduct training or present work matters such as workflows through meetings. In particular, a conference may be conducted by means of a PPT (i.e., presentation software), or by presenting images or videos, combined with verbal explanation. The content taught is typically the content presented in the PPT, images, or video. When conducting a meeting, meeting minutes are usually needed to better organize the work.
Existing conference recording is mainly realized by recording audio files during the conference explanation, or by taking screenshots of the content displayed in the PPT, images, or videos. With the audio-recording approach, character recognition must later be performed on the audio files, and if the character recognition is wrong, the subsequently archived conference record is wrong. With the screenshot approach, the prior art mainly selects the region to be captured through an internal control, which is only suitable when the region to be captured is known in advance. If a region is instead selected through an external device during a conference explanation, the prior art cannot take a screenshot for that situation. In other words, the screenshot approach of the prior art has poor applicability.
Disclosure of Invention
The embodiment of the application provides an image processing method, which aims to solve the problem that the screenshot mode in the prior art is poor in applicability.
An embodiment of the present application provides an image processing method, including:
capturing a currently displayed target interface;
identifying a target position of a target identifier in the target interface; wherein the target identifier is an external indication identifier provided by an external indication device;
determining a target area in the target interface according to the target position and a set screenshot strategy aiming at the interface;
and performing screenshot processing on the target interface according to the target area to obtain a target image.
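The four claimed steps can be sketched as a small pipeline. This is an illustrative sketch only, not the patent's implementation: the function names, the injected step callables, and the row-major list-of-lists image representation are all assumptions made for demonstration.

```python
from typing import Callable, List, Tuple

Point = Tuple[int, int]
Box = Tuple[int, int, int, int]  # (left, top, right, bottom)

def capture_target_image(
    capture_interface: Callable[[], List[List[int]]],
    identify_positions: Callable[[List[List[int]]], List[Point]],
    determine_area: Callable[[List[Point]], Box],
) -> List[List[int]]:
    """Run the four claimed steps: capture, identify, determine the area, crop."""
    frame = capture_interface()                            # step 1: capture the target interface
    positions = identify_positions(frame)                  # step 2: locate the target identifier
    left, top, right, bottom = determine_area(positions)   # step 3: apply the screenshot policy
    return [row[left:right] for row in frame[top:bottom]]  # step 4: crop to the target image
```

Each step is injected as a callable, since the claims leave the capture device, the position recognition, and the screenshot policy open.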
Optionally, identifying the target position of the target identifier in the target interface includes:
sending request information for obtaining the target position of the target identification in the target interface to a position obtaining device;
and obtaining the target position of the target identifier sent by the position acquisition device aiming at the request information in the target interface.
Optionally, identifying the target position of the target identifier in the target interface includes:
obtaining a starting time when the target identification begins to appear in the target interface and a terminating time when the target identification disappears in the target interface;
obtaining each position of the target identifier in the target interface within a time period corresponding to the starting time to the ending time, and confirming each position as the target position; wherein each position is located within the monitoring range of the position acquisition device.
Optionally, the obtaining of each position of the target identifier in the target interface within a time period corresponding to the starting time to the ending time includes:
establishing a virtual rectangular coordinate system within the monitoring range of the position acquisition device;
obtaining information of each coordinate point in the target interface within a time period corresponding to the target identifier from the starting time to the ending time according to the virtual rectangular coordinate system;
and determining the positions based on the coordinate point information.
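The virtual rectangular coordinate system described above amounts to mapping a point observed inside the position acquisition device's monitoring range onto interface coordinates. The sketch below assumes a simple linear mapping and a known bounding box of the interface within the monitored frame; the names and the mapping itself are illustrative, not taken from the patent.

```python
from typing import Tuple

Point = Tuple[float, float]

def camera_to_interface(pt: Point,
                        monitor_box: Tuple[float, float, float, float],
                        interface_size: Tuple[float, float]) -> Point:
    """Map a point in the monitoring device's virtual coordinate system
    to interface coordinates.

    monitor_box: (left, top, right, bottom) of the displayed interface
    inside the monitored frame; interface_size: (width, height) of the
    interface itself.
    """
    x, y = pt
    left, top, right, bottom = monitor_box
    width, height = interface_size
    ix = (x - left) / (right - left) * width
    iy = (y - top) / (bottom - top) * height
    return ix, iy
```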
Optionally, the determining a target area in the target interface according to the target position and a set screenshot policy for the interface includes:
based on the target position, obtaining a target track of the target identification in the target interface;
and obtaining a target area according to the target track and the screenshot strategy aiming at the interface.
Optionally, the screenshot policy for the interface includes rule information for screenshot set for the target track;
the obtaining a target area according to the target track and the screenshot strategy for the interface comprises: and obtaining a target area according with the rule determined by the rule information based on the target track and the rule information.
Optionally, the rule information for screenshot includes: taking an area of a specified shape that contains the target track as the target area.
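One concrete instance of such rule information is the smallest axis-aligned rectangle containing every point of the target track, optionally padded by a margin. The helper below is an illustrative sketch of that rule; the margin parameter is an assumption, not part of the claims.

```python
from typing import Iterable, Tuple

def min_enclosing_rect(track: Iterable[Tuple[int, int]],
                       margin: int = 0) -> Tuple[int, int, int, int]:
    """Smallest axis-aligned rectangle (left, top, right, bottom) that
    contains every point of the track, expanded by an optional margin."""
    xs, ys = zip(*track)
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```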
Optionally, the capturing the currently displayed target interface includes: an interface of at least one of a document, an image, and a video currently playing in the conference is captured as a target interface.
Optionally, the capturing the currently displayed target interface includes: and capturing the currently displayed target interface by using an external capturing device.
Optionally, the target image is an image required for recording conference content.
Optionally, the method further includes: and providing the target image to a client.
Optionally, after obtaining the target image, the method further includes: judging whether the target image meets a preset screenshot condition or not;
the providing the target image to a client includes: and if the target image meets the preset screenshot condition, providing the target image for a client.
Optionally, the method further includes: obtaining a request message sent by a client for requesting to obtain the target image;
the providing the target image to a client includes:
and providing the target image to a client based on the request message for obtaining the target image.
Optionally, the method further includes: and displaying the target image.
Optionally, identifying the target position of the target identifier in the target interface includes:
identifying laser marks appearing in the target interface; and taking the position where the laser mark appears in the target interface as the target position of the target mark in the target interface.
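A laser mark in a captured frame is typically much brighter than its surroundings, so one plausible (assumed, not claimed) way to locate it is a brightness threshold followed by a centroid. The sketch below works on a grayscale frame represented as a list of pixel rows; the threshold value is an illustrative assumption.

```python
from typing import List, Optional, Tuple

def locate_laser_mark(gray: List[List[int]],
                      threshold: int = 240) -> Optional[Tuple[int, int]]:
    """Return the (x, y) centroid of pixels at or above the brightness
    threshold, or None if no laser spot is visible in the frame."""
    bright = [(x, y)
              for y, row in enumerate(gray)
              for x, value in enumerate(row)
              if value >= threshold]
    if not bright:
        return None
    cx = round(sum(x for x, _ in bright) / len(bright))
    cy = round(sum(y for _, y in bright) / len(bright))
    return cx, cy
```

A production system would more likely threshold in a color space that isolates the laser's hue, but the centroid-of-bright-pixels idea is the same.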
Optionally, the size of the target area is smaller than or equal to the size of the target interface.
An embodiment of the present application provides another image processing method, including:
capturing a currently displayed target interface;
and outputting a screenshot image aiming at a target area in the target interface, wherein the target area is an area determined based on a set screenshot strategy aiming at the interface and a target position of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
Optionally, before outputting the screenshot image for the target area in the target interface, the method further includes: and identifying a target position of the target identification in the target interface.
Optionally, after identifying the target position of the target identifier in the target interface, the method further includes: and determining a target area in the target interface according to the screenshot strategy aiming at the interface.
Optionally, after determining the target area, before outputting the screenshot image for the target area in the target interface, the method further includes: and performing screenshot processing on the target interface according to the target area to obtain a screenshot image aiming at the target area in the target interface.
Optionally, the size of the target area is smaller than or equal to the size of the target interface.
Correspondingly, an embodiment of the present application provides an image processing apparatus, including:
the capturing unit is used for capturing the currently displayed target interface;
the identification unit is used for identifying a target position of a target identifier in the target interface; the target mark is an external indicating mark provided by external indicating equipment;
the determining unit is used for determining a target area in the target interface according to the target position and a set screenshot strategy aiming at the interface;
and the target image obtaining unit is used for carrying out screenshot processing on the target interface according to the target area to obtain a target image.
Correspondingly, another image processing apparatus is provided in an embodiment of the present application, including:
the capturing unit is used for capturing the currently displayed target interface;
and the output unit is used for outputting a screenshot image of a target area in the target interface, wherein the target area is an area determined based on a set screenshot strategy for the interface and a target position of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
An embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing a computer program to be executed by the processor for performing the above two image processing methods.
An embodiment of the present application provides a computer storage medium, which stores a computer program that is executed by a processor to perform the above two image processing methods.
An embodiment of the present application further provides an image processing method, including:
capturing a currently displayed target interface; the currently displayed target interface is an interface of the currently displayed conference content;
identifying a target location of a target identifier in the target interface; the target mark is a laser mark used for indicating position information of a target object in the target interface;
determining a target area in the target interface according to the target position and a set screenshot strategy aiming at the interface;
performing screenshot processing on the target interface according to the target area to obtain a target image; wherein the target image is an image for recording conference content.
Compared with the prior art, the embodiment of the application has the following advantages:
An embodiment of the present application provides an image processing method, including: capturing a currently displayed target interface; identifying a target position of a target identifier in the target interface, wherein the target identifier is an external indication identifier provided by an external indication device; determining a target area in the target interface according to the target position and a set screenshot policy for the interface; and performing screenshot processing on the target interface according to the target area to obtain a target image. When this method is used to capture a screenshot of the currently displayed target interface and obtain a target image, the target interface is captured first, and the target position of the target identifier in it is then identified. Next, a target area is determined from the target position and the screenshot policy set for the interface. Finally, screenshot processing is performed on the target interface according to the target area to obtain the target image. Because the target area is determined by the position on the target interface of the external indication identifier provided by the external indication device, the screenshot can be applied to a wider range of scenarios, which solves the problem of poor applicability of prior-art screenshot approaches.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present application, and those skilled in the art can obtain other drawings from them.
FIG. 1 is a schematic view of an application scenario of the image processing method of the present application;
fig. 2 is a flowchart of an image processing method according to a first embodiment of the present application;
fig. 3A is a schematic diagram of a process of determining a target track of a target identifier according to a first embodiment of the present application;
fig. 3B is a schematic diagram of a process for determining a target area based on a target track according to a first embodiment of the present application;
FIG. 3C is a schematic diagram of an obtained target image provided by the first embodiment of the present application;
fig. 4 is a flowchart of an image processing method according to a second embodiment of the present application;
fig. 5 is a schematic diagram of an image processing apparatus according to a third embodiment of the present application;
fig. 6 is a schematic diagram of an image processing apparatus according to a fourth embodiment of the present application;
fig. 7 is a schematic diagram of an electronic device according to a fifth embodiment of the present application;
fig. 8 is a flowchart of an image processing method according to a seventh embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the present application; the application is therefore not limited to the specific implementations disclosed below.
In order to more clearly show the image processing method provided by the embodiment of the present application, an application scenario of the image processing method provided by the embodiment of the present application is introduced first. The image processing method provided by the embodiment of the application can be applied to scenes for capturing the conference display content in the conference. The conference presentation content may be one or more of a document, an image, or a video. Of course, some other conference presentation content not listed may be possible. Specifically, a screenshot image for a target area in a target interface may be obtained by capturing a currently presented target interface. The above-described acquisition of the screenshot image for the target area in the target interface is mainly obtained in the following manner.
First, after capturing the currently presented target interface, the target position of a target identifier in the target interface is identified. When conference content is displayed, the current key content of the conference can be indicated directly or through a laser pointer mark; this laser pointer mark is called the target identifier. It should be noted that in this embodiment the laser pointer mark serves as the external indication identifier provided by an external indication device. That is, this application differs from existing methods that take a screenshot of the currently displayed target interface through an internal control. In this embodiment, the laser pointer mark on the current target interface can be captured by a camera, so the screenshot does not require internally interconnected control components, which solves the problem of poor applicability of prior-art screenshot approaches.
For example, in a meeting, a merchant's product sales over several years may need to be presented, and can be explained with a sales trend chart in the PPT. The conference speaker can point directly at the trend chart in the PPT with a laser pointer so that others clearly understand the content currently being discussed. The laser pointer necessarily leaves a mark in the PPT, which is called the laser pointer mark. Of course, the speaker may also point directly at the explained content in the PPT with a finger; in that case, the gesture mark appearing in the PPT is likewise the target identifier.
In the existing process of recording conference content, an audio file of the conference explanation is generally recorded, or the content displayed in the PPT, images, or video is captured, that is, a screen capture is taken on the device carrying the conference presentation. With the audio-recording approach, character recognition must later be performed on the audio file when the conference content is organized and recorded, and if the recognition is wrong, the subsequently archived conference record is wrong. With the screenshot approach, the prior art mainly takes screenshots through voice control or with a remote controller, for example by sending a screenshot instruction directly to the device currently playing the conference content, or by using that device's remote controller. However, with this approach the captured image is an image of the entire display screen, so the obtained images are imprecise and the stored conference images contain a large amount of redundant information. The content of the images must then be further screened when the conference content is finally archived or organized, which undoubtedly makes the archiving or organizing work cumbersome. The prior art also selects the region to be captured through an internal control, which is only suitable when the region is known in advance; if a region is selected through an external device during a conference explanation, the prior art cannot take a screenshot for that situation. Therefore, the screenshot approach of the prior art has poor applicability.
According to the image processing method, when the screenshot image is obtained, the currently displayed target interface is captured firstly, the target position of the target mark in the currently displayed target interface is identified, and then the target area is determined in the target interface according to the target position and the screenshot strategy aiming at the interface. And after the target area is obtained, performing screenshot processing on the target interface according to the target area to obtain a target image.
Specifically, identifying the target position of the target identifier in the target interface may be: sending request information for obtaining the target position of the target identifier in the target interface to a position acquisition device, and obtaining from the position acquisition device the target position returned for that request. The position acquisition device may be a unit running in the background for acquiring the position of the target identifier; it can monitor the position of the target identifier on the currently displayed target interface at any time, so as to identify the target position of the target identifier in the target interface. For example, during a meeting, the currently displayed target interface is interface A of the PPT shown on the large screen. If the conference speaker wants to explain some content in interface A in detail, he or she can use a laser pointer to indicate the content currently being explained. Assuming the other content in interface A is not given focused explanation, it can be understood that the speaker wants to emphasize the content in interface A indicated by the laser pointer. Therefore, when recording the conference, only the content indicated by the laser pointer needs to be captured, not all the content displayed by interface A; this keeps the captured screenshot free of a large amount of redundant information. In addition, identifying the target position of the target identifier in the target interface may be: identifying a laser mark appearing in the target interface, and taking the position where the laser mark appears in the target interface as the target position of the target identifier.
Further, identifying the target location of the target identifier in the target interface may refer to: first, a starting time when the target identifier begins to appear in the target interface and a terminating time when the target identifier disappears in the target interface are obtained. Then, obtaining each position in the target interface in a time period corresponding to the target identifier from the starting time to the ending time, and determining each position as a target position; wherein each position is located within the monitoring range of the position acquisition device.
For example, when interface A is displayed, the laser pointer mark first appears on interface A at start time t1 and disappears from interface A at end time t2. It is only necessary to obtain each position where the laser pointer mark appears on interface A from t1 to t2 and determine each such position as a target position. Since each position is obtained by the position acquisition device, each position lies within the monitoring range of the position acquisition device.
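Selecting the positions between the mark's appearance and disappearance reduces to filtering timestamped samples by a time window. The sketch below is illustrative; the (timestamp, x, y) sample format is an assumption, not something the patent specifies.

```python
from typing import List, Tuple

Sample = Tuple[float, int, int]  # (timestamp, x, y)

def positions_between(samples: List[Sample],
                      t1: float, t2: float) -> List[Tuple[int, int]]:
    """Keep the pointer positions observed from the mark's appearance at
    start time t1 up to its disappearance at end time t2 (inclusive)."""
    return [(x, y) for t, x, y in samples if t1 <= t <= t2]
```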
After the target position is obtained, a target area can be determined in the target interface according to the screenshot policy for the interface. Specifically, as one way to determine the target area, a target track of the target identifier in the target interface may be obtained based on the target position, and a target area containing the target track may then be obtained according to the target track and the screenshot policy for the interface. For example, during a conference, a screenshot of the interface A currently shown needs to be acquired; once the target positions of the laser pointer mark on interface A are obtained, the target track of the laser pointer mark on interface A can be derived. If the screenshot policy for the interface is to use the smallest rectangle containing the target track as the target area, then the smallest rectangle containing the track of the laser pointer mark appearing in interface A is taken as the target area. Of course, the screenshot policy can also cover other cases, for example taking the smallest circle or another polygon containing the target track as the target area; such policies are all within the protection scope of this embodiment.
After the target area is obtained, screenshot processing can be performed on the target interface according to the target area to obtain the target image. In the meeting record file, the target image can be used to identify the presentation object in the target interface. Continuing with the above scenario, when a screenshot of the current conference display interface is taken and the target area for the screenshot has already been obtained, the target area in the currently displayed interface A is directly cut out to obtain the target image.
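The final cropping step can be sketched as below, with the area clamped to the image bounds so that the target area never exceeds the target interface, as the optional clauses above require. The representation of the image as a list of pixel rows is an assumption for illustration.

```python
from typing import List, Tuple

def crop_target_area(image: List[List[int]],
                     area: Tuple[int, int, int, int]) -> List[List[int]]:
    """Cut the target area (left, top, right, bottom) out of a row-major
    image, clamping the area so it never exceeds the interface bounds."""
    left, top, right, bottom = area
    height, width = len(image), len(image[0])
    left, top = max(left, 0), max(top, 0)
    right, bottom = min(right, width), min(bottom, height)
    return [row[left:right] for row in image[top:bottom]]
```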
Fig. 1 is a schematic view of an application scenario of the image processing method of the present application. When image processing is performed, the client may send request information for a currently displayed target interface screenshot to the server, that is: the client sends screenshot request information, the server receives the screenshot request information of the client, and the server executes the following operation after receiving the screenshot request information. First, the server captures a currently displayed target interface. Thereafter, a target location of the target identification in the target interface is identified. And then, determining a target area in the target interface according to the target position and a set screenshot strategy aiming at the interface. And finally, performing screenshot processing on the target interface according to the target area to obtain a target image. After obtaining the target image, the server may send the target image to the client, and display the target image at the client.
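The client-server exchange in Fig. 1 can be sketched as a single request handler: the client sends a screenshot request, the server runs the capture pipeline and returns the target image. The dictionary message format, field names, and handler are purely illustrative assumptions.

```python
from typing import Callable, List

def serve_screenshot(request: dict,
                     run_pipeline: Callable[[], List[List[int]]]) -> dict:
    """Handle one client screenshot request on the server side: run the
    capture pipeline and return the target image in the response."""
    if request.get("action") != "screenshot":
        return {"status": "ignored"}
    return {"status": "ok", "target_image": run_pipeline()}
```

In a real deployment the transport would be a network protocol and the image an encoded payload, but the request/response shape is the same.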
The image processing method can also complete the process of acquiring the target image only at the server side without acquiring screenshot request information sent by the client side in advance. The target image is displayed at the client side only after the target image is obtained. In addition, the whole image processing process can be completed only at the client so as to obtain the target image.
The application scenario of the image processing method is only one embodiment of the application scenario of the image processing method provided by the present application, and the application scenario embodiment is provided to facilitate understanding of the image processing method provided by the present application, and is not used to limit the image processing method provided by the present application. Other application scenarios of the provided image processing method are not described in detail in the embodiment of the present application.
First embodiment
A first embodiment of the present application provides an image processing method, which is described below with reference to fig. 2 to 3C.
Fig. 2 is a flowchart of an image processing method according to a first embodiment of the present application.
In step S201, the currently presented target interface is captured.
In this embodiment, the image processing method can be applied to taking a screenshot of the current presentation interface of a conference to obtain a screenshot image for the conference record. Because a screenshot of the currently displayed interface is needed, the currently displayed target interface must be captured in advance. Specifically, the currently presented target interface may be an interface of at least one of a document, an image, and a video. For example, in a meeting, if the conference speaker explains in conjunction with a document, that document must be playing. If multiple documents need to be played in a meeting, capturing the currently presented target interface may mean capturing the presentation interface of the document currently being played. If a video is being played during the meeting, capturing the currently presented target interface may refer to capturing the frame that the currently playing video is displaying. If multiple images are being played, it may refer to capturing the image currently being displayed.
Specifically, the currently displayed target interface may be captured by an external capturing device. For example, a camera may capture the currently presented target interface as an interface image, which is subsequently used to obtain the target position of the target identifier in the target interface.
In step S202, a target location of a target identification in a target interface is identified.
In this embodiment, the target identifier is an external indication identifier provided by an external indication device and may be used to locate a presentation object in the target interface. When conference content is presented, the current focus content may be indicated directly or through a laser pointer mark; the focus content may be a target object in the target interface. The laser pointer mark is then the target identifier. Of course, the conference speaker may also point directly at the explained content in the target interface with a finger, in which case the gesture mark appearing in the target interface is the target identifier. In the conference, the content currently being explained is the presentation object, which may be any object in the presentation interface.
In this embodiment, the target position refers to every position at which the target identifier appears in the currently presented target interface. For example, when a conference speaker uses a laser pointer to indicate content on the currently presented interface, all positions at which the laser spot appears on that interface are target positions.
After capturing the currently presented target interface in step S201, a target position of the target identifier in the target interface is identified.
Specifically, one way of identifying the target position of the target identifier in the target interface is: first, send request information for obtaining the target position of the target identifier in the target interface to a position acquisition device; then, obtain from the position acquisition device the target position of the target identifier in the target interface returned for that request. The position acquisition device may be a unit running in the background that continuously monitors the position of the target identifier on the currently displayed target interface, thereby identifying the target position of the target identifier in the target interface.
More specifically, the target position of the target identifier in the target interface may be identified as follows: first, obtain a starting time at which the target identifier begins to appear in the target interface and a terminating time at which the target identifier disappears from the target interface; then, obtain each position of the target identifier in the target interface within the time period from the starting time to the terminating time, and determine each such position as a target position. It should be noted that each position lies within the monitoring range of the position acquisition device.
Obtaining each position of the target identifier in the target interface within the time period from the starting time to the terminating time may proceed as follows: first, establish a virtual rectangular coordinate system within the monitoring range of the position acquisition device; then, according to this coordinate system, obtain the coordinate point information of the target identifier in the target interface within the time period from the starting time to the terminating time; finally, determine the positions based on the coordinate point information.
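The patent does not give an implementation of this position-collection step. The following is a minimal, hypothetical sketch in Python of the idea described above: a monitor records timestamped coordinate points in the virtual coordinate system, and the positions sampled between the starting time and the terminating time become the target positions. The `PositionMonitor` name and its methods are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PositionMonitor:
    """Records (timestamp, x, y) samples of the target identifier within the
    monitoring range, using a virtual rectangular coordinate system."""
    samples: List[Tuple[float, float, float]] = field(default_factory=list)

    def record(self, t: float, x: float, y: float) -> None:
        # One observation of the identifier at time t and coordinates (x, y).
        self.samples.append((t, x, y))

    def positions_between(self, start: float, end: float) -> List[Tuple[float, float]]:
        """Return all coordinate points observed from the starting time
        (identifier appears) to the terminating time (identifier disappears)."""
        return [(x, y) for (t, x, y) in self.samples if start <= t <= end]

monitor = PositionMonitor()
monitor.record(0.0, 10, 20)   # sample before the identifier appears
monitor.record(1.0, 12, 22)   # identifier appears at t = 1.0
monitor.record(2.0, 15, 30)   # identifier disappears after t = 2.0
monitor.record(5.0, 40, 50)   # sample after the identifier disappears
target_positions = monitor.positions_between(1.0, 2.0)
print(target_positions)       # [(12, 22), (15, 30)]
```

Only the samples inside the start-to-end window are kept, matching the step of determining "each position" within that time period as a target position.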
Since in this embodiment the external indication device may be a laser pointer, one way of identifying the target position of the target identifier in the target interface is: identify the laser mark appearing in the target interface, and take the position where the laser mark appears as the target position of the target identifier in the target interface. As explained in step S201, the currently displayed target interface may be captured as an interface image; the laser mark appearing in the target interface can therefore be recognized by extracting image features from the interface image and taking the positions containing laser features as the target positions of the target identifier.
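The application does not specify which image features identify the laser mark. As one hedged illustration, a red laser dot can be found by simple color thresholding on the interface image: a pixel whose red channel is very high while the other channels stay low. The function name and thresholds below are assumptions for demonstration only; a production system would more likely use a vision library's thresholding primitives.

```python
def detect_laser_positions(frame, red_threshold=200, other_threshold=120):
    """Scan a frame (nested list of (r, g, b) pixel tuples) and return the
    (x, y) coordinates of pixels matching a bright red laser dot."""
    positions = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # A laser feature: strongly saturated red, well above the content.
            if r >= red_threshold and g <= other_threshold and b <= other_threshold:
                positions.append((x, y))
    return positions

# A 3x3 interface image with one bright-red "laser" pixel at (1, 1).
frame = [
    [(30, 30, 30), (30, 30, 30), (30, 30, 30)],
    [(30, 30, 30), (255, 40, 40), (30, 30, 30)],
    [(30, 30, 30), (30, 30, 30), (30, 30, 30)],
]
print(detect_laser_positions(frame))  # [(1, 1)]
```

Each detected coordinate corresponds to a target position of the target identifier in the captured interface image.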
In step S203, a target area is determined in the target interface according to the target position and the set screenshot policy for the interface.
After the target position of the target identifier in the target interface is identified in step S202, the target area is determined in the target interface. It should be noted that the size of the target area may be smaller than or equal to the size of the currently displayed target interface. Thus, when taking a screenshot, the image processing method in this embodiment differs from prior-art methods that directly capture the entire currently displayed interface: it can selectively capture an object presented on that interface. In this embodiment, the object of the currently displayed target interface is selected based on the target identifier; specifically, the target position of the target identifier on the currently displayed target interface is obtained, and the target area is determined on the target interface based on that position and the screenshot policy for the interface.
Determining a target area in the target interface according to the target position and the set screenshot policy for the interface may be: first, obtain the target track of the target identifier in the target interface based on the target position; then, obtain the target area according to the target track and the screenshot policy for the interface.
Figs. 3A to 3C are schematic diagrams of a method for determining a target area based on the target track of the target identifier. A virtual rectangular coordinate system is established within the monitoring range of the position acquisition device; this coordinate system corresponds to the X-axis and Y-axis illustrated in figs. 3A and 3B. In this embodiment, the position acquisition device may be a camera. If the conference plays a PPT through a projector, a camera may be installed in the projector as the position acquisition device. The camera can monitor in real time the laser pointer or finger mark on the currently displayed interface, thereby obtaining the target positions of the marks. After the target positions are obtained, the target area can be determined in the target interface according to the target positions and the screenshot policy for the interface. Fig. 3A shows obtaining a target track based on the target positions of the target identifier. After the target track is obtained, the target area can be derived: as shown in fig. 3B, the target track is enclosed by a dotted rectangular frame, which is the smallest rectangle containing the target track, i.e., the target area. The target image shown in fig. 3C is then obtained based on the target area.
In this embodiment, the screenshot policy for the interface may refer to rule information for screenshot set for the target track.
Obtaining the target area containing the target track according to the target track and the screenshot policy for the interface may mean obtaining the target area based on the target track and the rule information. The rule information for the screenshot includes: taking an area that contains the target track and conforms to a specified shape as the target area. For example, the rule may be to take the smallest rectangle containing the target track. Of course, other rule information is also possible.
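For the example rule of "the smallest rectangle containing the target track", the target area follows directly from the track's coordinate extremes. The sketch below, in Python, is one possible rendering of this rule; the function name and the `(x_min, y_min, x_max, y_max)` rectangle convention are assumptions.

```python
def min_bounding_rect(track):
    """Apply the 'smallest rectangle containing the target track' rule:
    return (x_min, y_min, x_max, y_max) for a list of (x, y) points."""
    xs = [x for x, _ in track]
    ys = [y for _, y in track]
    return (min(xs), min(ys), max(xs), max(ys))

# Target track in the virtual rectangular coordinate system.
track = [(12, 22), (15, 30), (10, 25)]
print(min_bounding_rect(track))  # (10, 22, 15, 30)
```

A different specified shape (e.g. a circle, or a rectangle with added margin) would simply substitute a different rule function here.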
In step S204, screenshot processing is performed on the target interface according to the target area, so as to obtain a target image.
After the target area is obtained in step S203, screenshot processing is performed on the target interface according to the target area, so as to obtain a target image.
Specifically, one way of performing screenshot processing on the target interface according to the target area to obtain the target image is to perform screenshot processing on the portion of the target interface corresponding to the target area.
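The screenshot step reduces to cropping the target area out of the captured interface image. The following hypothetical sketch shows this with an image stored as a nested list of pixel rows; a real implementation would more likely crop with an imaging library, and the inclusive rectangle convention matches the sketch above it is assumed from, not the application text.

```python
def crop_target_area(interface_image, rect):
    """Cut the target area (x_min, y_min, x_max, y_max), inclusive,
    out of an interface image stored as a nested list of pixel rows."""
    x_min, y_min, x_max, y_max = rect
    return [row[x_min:x_max + 1] for row in interface_image[y_min:y_max + 1]]

# A 5x4 dummy interface image whose pixel at column x, row y is (x, y).
image = [[(x, y) for x in range(5)] for y in range(4)]
target_image = crop_target_area(image, (1, 1, 3, 2))
print(target_image)  # rows y = 1..2, columns x = 1..3
```

The result `target_image` corresponds to the portion of the target interface covered by the target area.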
In this embodiment, the obtained target image may be used to identify the presentation object in the target interface. For example, if the object presented during the conference is a sales-trend chart, the target image obtained by the screenshot is an image containing that chart, so the recorded conference content is the sales-trend chart. When the image processing method is applied to a conference screenshot scenario, the target image is the image required for recording the conference content.
If the image processing is performed at the server side, the target image can be provided to the client.
To further improve the accuracy of the image obtained by the client, after the target image is obtained it can be judged whether the target image meets a preset screenshot condition. For example, the condition may be whether the target image contains an irrelevant background; if so, the irrelevant background may be removed from the target image to obtain a new target image.
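The application does not say how the irrelevant background would be detected or removed. One simple, assumed interpretation: trim border rows and columns of the target image that consist entirely of a known background color, yielding a tighter new target image. The function below is purely illustrative.

```python
def trim_background(image, background):
    """Drop leading/trailing rows and columns that consist entirely of the
    background color, returning a tighter target image."""
    rows = [i for i, row in enumerate(image) if any(p != background for p in row)]
    if not rows:
        return []  # nothing but background: no target image survives
    cols = [j for j in range(len(image[0]))
            if any(image[i][j] != background for i in rows)]
    return [[image[i][j] for j in range(cols[0], cols[-1] + 1)]
            for i in range(rows[0], rows[-1] + 1)]

BG = 0
image = [
    [BG, BG, BG, BG],
    [BG, 1,  2,  BG],
    [BG, 3,  4,  BG],
    [BG, BG, BG, BG],
]
print(trim_background(image, BG))  # [[1, 2], [3, 4]]
```

If the trimmed result differs from the original, the original failed the preset screenshot condition and the trimmed image would be provided to the client instead.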
After the determination, providing the target image to the client may refer to: and if the target image meets the preset screenshot condition, providing the target image for the client.
Of course, before providing the target image to the client, the server may first obtain a request message sent by the client requesting the target image. After obtaining that request message, the server provides the target image to the client.
After receiving the target image provided by the server, the client can display the target image on the client.
Of course, if the image processing method of this embodiment is applied directly at the client, the client can obtain and display the target image directly, without obtaining it through the server.
The embodiment of the present application provides an image processing method. When this method is used to capture a screenshot of the currently displayed target interface and obtain a target image, the currently displayed target interface is captured first; after the target interface is captured, the target position of the target identifier in the target interface is identified, where the target identifier is an external indication identifier provided by an external indication device. A target area is then determined from the target position and the set screenshot policy for the interface. Finally, screenshot processing is performed on the target interface according to the target area to obtain the target image. Because the screenshot uses a target area determined by the target position, on the target interface, of an indication mark provided by external indication equipment, the method can be applied to a wider range of screenshot scenarios, solving the problem of poor applicability of prior-art screenshot approaches.
Second embodiment
A second embodiment of the present application provides another image processing method. Since the second embodiment is partially similar to the image processing method of the first embodiment, its description is relatively brief; for the relevant points, reference may be made to the corresponding description of the first embodiment. The image processing method embodiments described below are merely illustrative.
Fig. 4 is a flowchart of an image processing method according to a second embodiment of the present application.
In step S401, the currently presented target interface is captured.
In step S402, a screenshot image for a target area in the target interface is output.
In this embodiment, the size of the target area may be smaller than or equal to the size of the target interface, so that a local screenshot or a global screenshot is conveniently performed on the target interface, where the local screenshot is a screenshot performed on a partial area of the target interface, and the global screenshot is a screenshot performed on an area of the entire target interface. The target area is an area determined based on a set screenshot strategy aiming at the interface and a target position of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
In this embodiment, before outputting the screenshot image for the target area in the target interface, the method further includes: a target location of a target identification in a target interface is identified.
In this embodiment, after identifying the target location of the target identifier in the target interface, the method further includes: and determining a target area in the target interface according to the screenshot strategy aiming at the interface.
In this embodiment, after determining the target area, before outputting the screenshot image for the target area in the target interface, the method further includes: and performing screenshot processing on the target interface according to the target area to obtain a screenshot image aiming at the target area in the target interface.
Third embodiment
Corresponding to the application scenario embodiment of the image processing method and the image processing method provided by the first embodiment, a third embodiment of the present application further provides an image processing apparatus. Since the device embodiment is basically similar to the application scenario embodiment and the first embodiment, the description is relatively simple, and reference may be made to the application scenario embodiment and a part of the description of the first embodiment for relevant points. The device embodiments described below are merely illustrative.
Fig. 5 is a schematic diagram of an image processing apparatus according to a third embodiment of the present application.
The image processing apparatus includes:
a capturing unit 501, configured to capture a currently displayed target interface;
a recognition unit 502, configured to recognize a target location of a target identifier in the target interface; the target mark is an external indicating mark provided by external indicating equipment;
a determining unit 503, configured to determine a target area in the target interface according to the target position and a set screenshot policy for the interface;
and a target image obtaining unit 504, configured to perform screenshot processing on the target interface according to the target area to obtain a target image.
Optionally, the identification unit is specifically configured to:
sending request information for obtaining the target position of the target identification in the target interface to a position obtaining device;
and obtaining the target position of the target identifier sent by the position acquisition device aiming at the request information in the target interface.
Optionally, the identification unit is specifically configured to:
obtaining a starting time when the target identification begins to appear in the target interface and a terminating time when the target identification disappears in the target interface;
obtaining each position of the target identifier in the target interface within a time period corresponding to the starting time to the ending time, and confirming each position as the target position; wherein each position is located within the monitoring range of the position acquisition device.
Optionally, the identification unit is specifically configured to:
establishing a virtual rectangular coordinate system within the monitoring range of the position acquisition device;
obtaining information of each coordinate point in the target interface within a time period corresponding to the target identifier from the starting time to the ending time according to the virtual rectangular coordinate system;
and determining the positions based on the coordinate point information.
Optionally, the determining unit is specifically configured to:
based on the target position, obtaining a target track of the target identification in the target interface;
and obtaining a target area according to the target track and the screenshot strategy aiming at the interface.
Optionally, the screenshot policy for the interface includes rule information for screenshot set for the target track;
the determining unit is specifically configured to: and obtaining a target area according with the rule determined by the rule information based on the target track and the rule information.
Optionally, the rule information for screenshot includes: and taking the area which comprises the target track and conforms to the specified shape as the target area.
Optionally, the capturing unit is specifically configured to: an interface of at least one of a document, an image, and a video currently playing in the conference is captured as a target interface.
Optionally, the capturing unit is specifically configured to: and capturing the currently displayed target interface by using an external capturing device.
Optionally, the target image is an image required for recording conference content.
Optionally, the method further includes: a providing unit; the providing unit is specifically configured to: and providing the target image to a client.
Optionally, the method further includes: a judgment unit; the judging unit is specifically configured to: after the target image is obtained, judging whether the target image meets a preset screenshot condition;
the providing unit is specifically configured to: and if the target image meets the preset screenshot condition, providing the target image for a client.
Optionally, the method further includes: a request information obtaining unit; the request information obtaining unit is specifically configured to: obtaining a request message sent by a client for requesting to obtain the target image;
the providing unit is specifically configured to: and providing the target image to a client based on the request message for obtaining the target image.
Optionally, the method further includes: a display unit; the display unit is specifically configured to: and displaying the target image.
Optionally, the identification unit is specifically configured to:
identifying laser marks appearing in the target interface; and taking the position where the laser mark appears in the target interface as the target position of the target mark in the target interface.
Optionally, the size of the target area is smaller than or equal to the size of the target interface.
Fourth embodiment
A fourth embodiment of the present application also provides an image processing apparatus corresponding to the image processing method provided in the second embodiment of the image processing method of the present application. Since the apparatus embodiment is substantially similar to the second embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the second embodiment for relevant points. The device embodiments described below are merely illustrative.
Fig. 6 is a schematic diagram of an image processing apparatus according to a fourth embodiment of the present application.
The image processing apparatus includes:
the capturing unit 601 is configured to capture a currently displayed target interface;
an output unit 602, configured to output a screenshot image for a target area in the target interface, where the target area is an area determined based on a set screenshot policy for the interface and a target location of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
Optionally, the method further includes: an identification unit; the identification unit is specifically configured to: identifying a target location of the target identification in the target interface prior to outputting the screenshot image for a target region in the target interface.
Optionally, the method further includes: a determination unit; the determining unit is specifically configured to: after the target position of the target identification in the target interface is identified, a target area is determined in the target interface according to the screenshot strategy aiming at the interface.
Optionally, the method further includes: a screenshot image obtaining unit; the screenshot image obtaining unit is specifically configured to: after the target area is determined, before a screenshot image for the target area in the target interface is output, screenshot processing is conducted on the target interface according to the target area, and the screenshot image for the target area in the target interface is obtained.
Optionally, the size of the target area is smaller than or equal to the size of the target interface.
Fifth embodiment
Corresponding to the image processing methods provided in the first and second embodiments of the present application, a fifth embodiment of the present application further provides an electronic device.
As shown in fig. 7, fig. 7 is a schematic view of an electronic device provided in a fifth embodiment of the present application.
The electronic device includes:
a processor 701;
the memory 702 is used for storing a computer program, which is executed by the processor to execute the image processing methods of the first and second embodiments.
Sixth embodiment
In correspondence with the image processing methods provided in the first and second embodiments of the present application, a sixth embodiment of the present application also provides a computer storage medium storing a computer program executed by a processor to execute the image processing methods of the first and second embodiments.
Seventh embodiment
A seventh embodiment of the present application provides another image processing method. The seventh embodiment is an application scenario embodiment of the first embodiment; since it is partially similar to the image processing method of the first embodiment, its description is relatively brief, and for the relevant points reference may be made to the corresponding description of the first embodiment. The image processing method embodiment described below is merely illustrative.
Fig. 8 is a flowchart of an image processing method according to a seventh embodiment of the present application.
In step S801, capturing a currently displayed target interface; and the currently displayed target interface is the interface of the currently displayed conference content.
In this embodiment, since the image processing method of the first embodiment is applied to the scenario of recording a conference summary, the currently presented interface may be an interface of the currently presented conference content. The interface of the conference content may be an interface of at least one of a conference document, a conference image, and a conference video.
In step S802, a target position of the target identifier in the target interface is identified; the target mark is a laser mark for indicating position information of a target object in the target interface.
When the conference content interface is displayed, a laser pointer may be used as an aid to emphasize or indicate a target object in the conference content interface. The target object may be any object in the conference content interface that is currently being explained in the conference.
In step S803, a target area is determined in the target interface according to the target position and the set screenshot policy for the interface.
In step S804, performing screenshot processing on the target interface according to the target area to obtain a target image; wherein the target image is an image for recording conference contents.
By means of this embodiment, a conference screenshot image of the presented conference content can be obtained by capturing the conference content interface according to the target area, and the screenshot image can be stored as the conference summary.
Although the present application has been described with reference to preferred embodiments, these are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the appended claims.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.

Claims (26)

1. An image processing method, comprising:
capturing a currently displayed target interface;
identifying a target location of a target identifier in the target interface; the target mark is an external indicating mark provided by external indicating equipment;
determining a target area in the target interface according to the target position and a set screenshot strategy aiming at the interface;
and performing screenshot processing on the target interface according to the target area to obtain a target image.
2. The image processing method of claim 1, wherein identifying the target position of the target identifier in the target interface comprises:
sending request information for obtaining the target position of the target identification in the target interface to a position obtaining device;
and obtaining the target position of the target identifier sent by the position acquisition device aiming at the request information in the target interface.
3. The image processing method of claim 2, wherein identifying the target position of the target identifier in the target interface comprises:
obtaining a starting time when the target identification begins to appear in the target interface and a terminating time when the target identification disappears in the target interface;
obtaining each position of the target identifier in the target interface within a time period corresponding to the starting time to the ending time, and confirming each position as the target position; wherein each position is located within the monitoring range of the position acquisition device.
4. The image processing method according to claim 3, wherein the obtaining the respective positions of the target identifier in the target interface within a time period corresponding to the starting time and the ending time comprises:
establishing a virtual rectangular coordinate system within the monitoring range of the position acquisition device;
obtaining information of each coordinate point in the target interface within a time period corresponding to the target identifier from the starting time to the ending time according to the virtual rectangular coordinate system;
and determining the positions based on the coordinate point information.
5. The image processing method according to claim 1, wherein the determining a target area in the target interface according to the target position and a set screenshot strategy for the interface comprises:
based on the target position, obtaining a target track of the target identification in the target interface;
and obtaining a target area according to the target track and the screenshot strategy aiming at the interface.
6. The image processing method according to claim 5, wherein the screenshot policy for the interface includes rule information for screenshot set for the target trajectory;
the obtaining a target area according to the target track and the screenshot strategy for the interface comprises: and obtaining a target area according with the rule determined by the rule information based on the target track and the rule information.
7. The image processing method according to claim 6, wherein the rule information for screenshot includes: and taking the area which comprises the target track and conforms to the specified shape as the target area.
8. The image processing method according to claim 1, wherein said capturing a currently presented target interface comprises: capturing an interface of at least one of a document, an image, and a video currently playing in the conference as the target interface.
9. The image processing method according to claim 1, wherein said capturing a currently presented target interface comprises: capturing the currently displayed target interface by using an external capturing device.
10. The image processing method according to claim 1, wherein the target image is an image required for recording conference content.
11. The image processing method according to claim 1, further comprising: and providing the target image to a client.
12. The image processing method according to claim 11, further comprising, after obtaining the target image: judging whether the target image meets a preset screenshot condition; and
the providing the target image to a client comprises: providing the target image to the client if the target image meets the preset screenshot condition.
13. The image processing method according to claim 11, further comprising: obtaining a request message sent by a client requesting the target image; and
the providing the target image to a client comprises:
providing the target image to the client based on the request message for obtaining the target image.
14. The image processing method according to claim 1, further comprising: displaying the target image.
15. The image processing method of claim 1, wherein the identifying the target position in the target interface comprises:
identifying a laser mark appearing in the target interface; and taking the position where the laser mark appears in the target interface as the target position of the target identifier in the target interface.
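The laser-mark recognition of claim 15 can be approximated by scanning a frame for a saturated, red-dominant pixel. A minimal sketch, assuming the frame is given as a list of rows of (r, g, b) tuples; the thresholds and the function name are illustrative assumptions, not taken from the patent:

```python
def find_laser_mark(frame, red_min=200, other_max=120):
    """Return the (x, y) position of the most red-saturated pixel that
    looks like a laser spot (strong red channel, weak green and blue),
    or None if no pixel passes the thresholds."""
    best, best_red = None, red_min
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if r >= best_red and g <= other_max and b <= other_max:
                best, best_red = (x, y), r
    return best
```

A production system would more likely threshold in HSV space and track the spot across frames to build the target track, but the per-pixel test above captures the basic idea.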
16. The image processing method according to claim 1, wherein the size of the target area is equal to or smaller than the size of the target interface.
17. An image processing method, comprising:
capturing a currently displayed target interface;
and outputting a screenshot image of a target area in the target interface, wherein the target area is an area determined based on a set screenshot strategy for the interface and a target position of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
18. The image processing method of claim 17, further comprising, before outputting the screenshot image of the target area in the target interface: identifying the target position of the target identifier in the target interface.
19. The image processing method of claim 18, further comprising, after identifying the target position of the target identifier in the target interface: determining the target area in the target interface according to the screenshot strategy for the interface.
20. The image processing method of claim 19, further comprising, after determining the target area and before outputting the screenshot image of the target area in the target interface: performing screenshot processing on the target interface according to the target area to obtain the screenshot image of the target area in the target interface.
21. The image processing method of claim 17, wherein the size of the target area is equal to or smaller than the size of the target interface.
22. An image processing apparatus, comprising:
a capturing unit, configured to capture a currently displayed target interface;
an identification unit, configured to identify a target position of a target identifier in the target interface, wherein the target identifier is an external indication identifier provided by an external indication device;
a determining unit, configured to determine a target area in the target interface according to the target position and a set screenshot strategy for the interface; and
a target image obtaining unit, configured to perform screenshot processing on the target interface according to the target area to obtain a target image.
23. An image processing apparatus, comprising:
a capturing unit, configured to capture a currently displayed target interface; and
an output unit, configured to output a screenshot image of a target area in the target interface, wherein the target area is an area determined based on a set screenshot strategy for the interface and a target position of a target identifier in the target interface, and the target identifier is an external indication identifier provided by an external indication device.
24. An electronic device, comprising:
a processor;
a memory for storing a computer program, the computer program being executed by the processor to perform the image processing method of any one of claims 1 to 21.
25. A computer storage medium, wherein the computer storage medium stores a computer program that, when executed by a processor, performs the image processing method of any one of claims 1 to 21.
26. An image processing method, comprising:
capturing a currently displayed target interface, wherein the currently displayed target interface is an interface of currently displayed conference content;
identifying a target position of a target identifier in the target interface, wherein the target identifier is a laser mark used for indicating position information of a target object in the target interface;
determining a target area in the target interface according to the target position and a set screenshot strategy for the interface; and
performing screenshot processing on the target interface according to the target area to obtain a target image, wherein the target image is an image for recording conference content.
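The final screenshot step of claim 26, together with the size constraint of claims 16 and 21 (target area no larger than the target interface), can be sketched as clamping the target area to the interface bounds and slicing it out. The list-of-rows frame representation and the function names are assumptions for illustration, not the patent's implementation:

```python
def clamp_area(area, width, height):
    """Clip an (x, y, w, h) area to the interface bounds, so the
    target area never exceeds the size of the target interface."""
    x, y, w, h = area
    x, y = max(0, x), max(0, y)
    return (x, y, max(0, min(w, width - x)), max(0, min(h, height - y)))

def screenshot(frame, area):
    """Cut the target area out of a frame given as a list of pixel
    rows, yielding the target image recording the conference content."""
    x, y, w, h = clamp_area(area, len(frame[0]), len(frame))
    return [row[x:x + w] for row in frame[y:y + h]]
```

Passing an oversized or partially off-screen area simply degrades to a full- or partial-interface capture, which matches the claimed constraint that the target area is at most the size of the target interface.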
CN202011135433.XA 2020-10-21 2020-10-21 Image processing method and device and electronic equipment Pending CN113296660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011135433.XA CN113296660A (en) 2020-10-21 2020-10-21 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011135433.XA CN113296660A (en) 2020-10-21 2020-10-21 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113296660A true CN113296660A (en) 2021-08-24

Family

ID=77318396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011135433.XA Pending CN113296660A (en) 2020-10-21 2020-10-21 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113296660A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947954A (en) * 2021-10-18 2022-01-18 贵州振华信息技术有限公司 Manuscript demonstration system with cutting function and demonstration method

Similar Documents

Publication Publication Date Title
US11397502B2 (en) Systems and methods for bulk redaction of recorded data
US10425679B2 (en) Method and device for displaying information on video image
KR102087882B1 (en) Device and method for media stream recognition based on visual image matching
CN110675399A (en) Screen appearance flaw detection method and equipment
US10452130B2 (en) Dynamic augmented reality media creation
CN109729429B (en) Video playing method, device, equipment and medium
WO2019052053A1 (en) Whiteboard information reading method and device, readable storage medium and electronic whiteboard
CN109656800B (en) Method and device for testing image recognition application, terminal and storage medium
CN112559341A (en) Picture testing method, device, equipment and storage medium
CN104104900A (en) Data playing method
CN111262987A (en) Mobile phone detection method and equipment
CN113296660A (en) Image processing method and device and electronic equipment
CN112055237B (en) Method, system, apparatus, device and storage medium for determining screen-to-screen delay
CN111756672A (en) Conference content saving and acquiring method, conference system, conference equipment and storage medium
CN114546939A (en) Conference summary generation method and device, electronic equipment and readable storage medium
JP2022043130A5 (en)
CN114238119A (en) Automatic testing method and system for android application and storage medium
CN114760460A (en) Video quality detection method, device, storage medium and apparatus
CN111814714A (en) Image identification method, device and equipment based on audio and video recording and storage medium
CN106775701B (en) Client automatic evidence obtaining method and system
JP2015207258A (en) Information output device, information output method, program, information provision device, information provision method, and program
US11468657B2 (en) Storage medium, information processing apparatus, and line-of-sight information processing method
US20200387540A1 (en) Systems and methods for automatic generation of bookmarks utilizing focused content analysis
JP6852191B2 (en) Methods, systems and media for converting fingerprints to detect rogue media content items
TWI669629B (en) Electronic signature device and electronic signature method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination