CN111324261A - Intercepting method and device of target object, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111324261A
Authority
CN
China
Prior art keywords
target object
intercepting
motion
graph
motion track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010063436.0A
Other languages
Chinese (zh)
Other versions
CN111324261B (en)
Inventor
谢飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Infinite Light Field Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Infinite Light Field Technology Co Ltd filed Critical Beijing Infinite Light Field Technology Co Ltd
Priority to CN202010063436.0A priority Critical patent/CN111324261B/en
Publication of CN111324261A publication Critical patent/CN111324261A/en
Application granted granted Critical
Publication of CN111324261B publication Critical patent/CN111324261B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842: Selection of displayed objects or displayed text elements

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a method, an apparatus, an electronic device and a storage medium for intercepting a target object, and relates to the field of computer technology. The method comprises the following steps: acquiring a target instruction to be executed, and parsing a motion track formed by the target instruction arranged along a time axis; forming a track graph based on the motion track, and determining a target object located in the area covered by the track graph; adjusting the track graph by using the boundary line of the target object to form an interception frame; and intercepting the target object according to the interception frame. In this scheme, the track graph is adjusted with the boundary line of the target object to obtain the interception frame, and the target object is intercepted with that frame; accurate interception of the target object is thereby achieved, the complexity of drawing the interception frame is reduced for the user, and the interception efficiency of the target object is improved.

Description

Intercepting method and device of target object, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for intercepting a target object, an electronic device, and a storage medium.
Background
With the development of science and technology, the functions of intelligent devices have become increasingly diversified: through an intelligent device, a user can browse web pages, view pictures, and capture the content and pictures displayed on an interface anytime and anywhere. At present, the screenshot function is mainly realized through a screenshot frame. The user drags the screenshot frame to the position of the picture to be captured; if the user wants to change the size of the screenshot frame, the user drags an edge or a corner of the frame, and the screenshot is then taken with the adjusted frame.
The inventor of the present disclosure found in research that the existing screenshot method can only clip with a regular, closed screenshot box of fixed shape, so its flexibility is low. When a user clips scattered content, the screenshot box has to be shrunk, clipping has to be performed several times, and the desired content is then obtained by splicing, so the operation steps are complex. Moreover, if the boundary of the object to be clipped is irregular, a screenshot taken with a fixed-shape screenshot box often contains redundant data, and the clipping precision for the content to be clipped is low.
Disclosure of Invention
This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
A first aspect of the present disclosure provides a method for intercepting a target object, including:
acquiring a target instruction to be executed, and analyzing a motion track formed by the target instruction arranged along a time axis;
forming a track graph based on the motion track, and determining a target object in the area covered by the track graph;
adjusting the track graph by using the boundary line of the target object to form an interception frame;
and intercepting the target object according to the intercepting box.
A second aspect of the present disclosure provides an intercepting apparatus of a target object, including:
the motion track acquisition module is used for acquiring target instructions to be executed and analyzing motion tracks formed by the target instructions distributed along a time axis;
a target object determining module, configured to form a trajectory graph based on the motion trajectory, and determine a target object in an area covered by the trajectory graph;
a capture frame forming module, configured to adjust the trajectory graph by using the boundary line of the target object to form a capture frame;
and the intercepting module is used for intercepting the target object according to the intercepting box.
A third aspect of the present disclosure provides an electronic device, comprising:
a memory and a processor;
the memory has a computer program stored therein;
a processor for performing the method of the first aspect when executing the computer program.
A fourth aspect of the disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processor, performs the method of the first aspect.
The technical scheme provided by the disclosure has the following beneficial effects:
the utility model provides a target object's intercepting scheme, form the orbit figure based on the motion trail, combine target object's boundary line adjustment orbit figure, the orbit figure after will adjusting is used as target object's intercepting frame, utilize intercepting frame intercepting target object, realize the accurate intercepting to target object, especially to the target object that has complicated boundary line, the user need not the boundary line of accurate portrayal target object, this disclosure can obtain target object's intercepting frame based on rough motion trail, utilize this intercepting frame can obtain accurate and complete target object, reduce the user and draw the complexity of intercepting frame, improve target object's intercepting efficiency.
The interception scheme of the target object provided by the present disclosure forms the interception frame based on a user-defined track graph, so that the shape of the interception frame is both user-defined and automatically adjusted, and the target object is intercepted according to the user-defined interception frame. For target objects scattered over separate positions, a complete target object can be obtained in a single interception, which reduces the interception operation steps and improves the user experience. Moreover, the area covered by the interception frame contains only the target object, so accurate positioning and interception of the target object are realized, the user does not need to draw the boundary of the target object precisely, and the operation complexity for the user is reduced.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of an intercepting method of a target object according to an embodiment of the present disclosure;
fig. 2 is a flowchart for adjusting the trajectory graph by using a boundary line of a target object according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating that a motion trajectory provided by an embodiment of the present disclosure intersects a boundary line of a target object;
fig. 4 is a flowchart illustrating a trajectory graph formed based on a motion trajectory when there is a discontinuity in the motion trajectory according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an intercepting apparatus for a target object according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and its variants, as used herein, are inclusive, i.e., "including but not limited to"; the term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second" and the like in the present disclosure are only used to distinguish different devices, modules or units; they neither require these devices, modules or units to be different ones, nor limit the order of, or the interdependence between, the functions performed by them.
It should be noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in detail with specific embodiments. The following embodiments may be combined, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Referring to fig. 1, the present disclosure provides a method for intercepting a target object; a flowchart of the method in one embodiment is shown in fig. 1. The method may be executed by an electronic device, which may be a terminal device such as a desktop device or a mobile terminal. The solution provided by the present disclosure may be applied in multiple application programs, such as drawing software and instant messaging software. Specifically, the method includes the following steps:
step S101, acquiring a target instruction to be executed, and analyzing a motion track formed by the target instruction arranged along a time axis;
step S102, forming a track graph based on the motion track, and determining a target object positioned in the area covered by the track graph;
step S103, adjusting the track graph by using the boundary line of the target object to form an intercepting frame;
and step S104, intercepting the target object according to the intercepting box.
The target instruction to be executed includes a control instruction acting on a touch interface. The process of forming a motion track according to the target instruction is as follows: a screenshot start instruction is detected; in response to the screenshot start instruction, the control instructions on the touch interface are recorded and parsed; and the control instructions are arranged along a time axis to form the motion track.
The target instruction is used to indicate that a motion track is to be formed in a preset area according to drawing parameters and input data. The target instruction is parsed to obtain the drawing parameters it carries, and the input data are arranged along a time axis according to the drawing parameters to form the motion track. The input data include the contact position and contact area acting on the touch interface, and the like; the motion track represents the trend of the input data. The drawing parameters include the color of the motion track, the display area and other parameters. The motion track may be intermittent track points or a continuous track line.
After the motion track is formed, a track graph is formed on its basis. The track graph is an overall graph built from the motion track and may be regular or irregular, such as a circle, a rectangle, a pentagon or an arbitrary polygon. The track graph provided by the present disclosure is a closed graph; if the graph formed from the motion track is not closed, it needs to be completed to obtain a closed track graph.
A target object located in the area covered by the track graph is then determined: at least part of the boundary line of the target object lies in the area covered by the track graph, and an object whose proportion of boundary line inside the track graph exceeds a preset threshold is taken as the target object. The boundary line of the target object is obtained, the track graph is adjusted according to this boundary line, the adjusted track graph is used as the interception frame, and the target object is intercepted with the interception frame.
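To make the flow of steps S101 to S104 more concrete, the following is a minimal illustrative sketch in Python. It is not the claimed implementation: it assumes the touch events are already available as timestamped points, assumes the candidate objects are available as polygons, and uses the Shapely geometry library for the polygon operations; all function names and the 0.5 coverage threshold are illustrative assumptions.

```python
# Illustrative sketch of steps S101-S104 (assumptions noted above; not the patented implementation).
from shapely.geometry import Point, Polygon

def motion_track(events):
    """S101: arrange touch points along the time axis to form the motion track."""
    return [(x, y) for _, x, y in sorted(events, key=lambda e: e[0])]

def track_graph(track):
    """S102 (part 1): close the motion track into a trajectory polygon."""
    return Polygon(track)  # Polygon implicitly closes the ring

def find_target_objects(graph, objects, threshold=0.5):
    """S102 (part 2): keep objects whose boundary line lies mostly inside the track graph."""
    targets = []
    for obj in objects:
        boundary = obj.exterior
        covered = boundary.intersection(graph).length / boundary.length
        if covered > threshold:           # threshold is an illustrative value
            targets.append(obj)
    return targets

def interception_frame(graph, targets):
    """S103: adjust the track graph with the object boundary lines to form the frame."""
    frame = graph
    for obj in targets:
        frame = frame.union(obj)          # expand so the frame fully covers each target
    return frame

def intercept(frame, screen_pixels):
    """S104: keep only the pixels that fall inside the interception frame (placeholder)."""
    return [(x, y, c) for x, y, c in screen_pixels if frame.contains(Point(x, y))]
```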
The present disclosure thus forms a track graph based on the motion track, adjusts the track graph in combination with the boundary line of the target object, uses the adjusted track graph as the interception frame of the target object, and intercepts the target object with the interception frame, realizing accurate interception of the target object, reducing the complexity of drawing the interception frame for the user, and improving the interception efficiency of the target object. In particular, for a target object with a complex boundary line, the user does not need to trace the boundary line accurately: the interception frame can be obtained from a rough motion track, and an accurate and complete target object is obtained with it, which improves both the accuracy and the efficiency of interception.
The interception scheme of the target object provided by the present disclosure forms the interception frame based on a user-defined motion track, so that the shape of the interception frame is user-defined and automatically adjusted, and the target object is intercepted according to the user-defined interception frame; for target objects scattered over separate positions, this reduces the screenshot operation steps and improves the user experience. Moreover, the area covered by the interception frame contains only the target object, so accurate positioning and interception of the target object are realized, the user does not need to draw the boundary of the target object precisely, and the operation complexity for the user is reduced.
For example, when the target objects are characters located at the ends of two lines of text, the user-defined interception frame can intercept only the characters at the two ends, so the target characters are intercepted in one operation without repeated interception, which reduces the operation complexity of intercepting the target characters.
In order to make the interception scheme of the target object provided by the present disclosure and its technical effects clearer, specific embodiments are described in detail below through a plurality of examples.
In one embodiment, the step of analyzing the motion trajectory formed by arranging the target instructions along the time axis in step S101 includes the following sub-steps:
a1, receiving the contact area of the target instruction on the touch interface, and determining the drawing parameters of the motion trail according to the contact area;
and A2, drawing a corresponding motion track according to the drawing parameters.
The input data received in the preset area of the touch interface include the contact position, the contact area between the touch means and the touch interface, and the like. The touch means include a finger, a stylus and so on, and different touch means correspond to different contact areas; the contact area is related to the fineness of the motion track. The contact area between a finger and the touch screen is relatively large, so the drawing line of the motion track may be set thicker; the contact area when drawing with a stylus is smaller, so the corresponding drawing line may be set thinner and the motion track is more precise. The drawing parameters of the motion track are therefore determined according to the contact area of the input data. When the motion track is a line, the corresponding drawing parameters include line thickness, line color and so on: the larger the contact area, the thicker the line, the brighter its color and the more striking the motion track; a smaller contact area corresponds to a thinner line, so that the drawn motion track is finer.
The present disclosure also provides other ways of determining the drawing parameters of the motion track, for example according to the area of the target object, such as setting the motion track to be finer the larger the area of the target object is. The drawing parameters of the motion track may also be determined according to attribute parameters of the boundary line of the target object, such as setting the line thickness of the motion track to be the same as that of the boundary line.
In this embodiment, the drawing parameters of the motion track are determined according to the contact area on the touch interface, so that the motion track drawn with these parameters includes the boundary line of the target object as far as possible, which increases the fault tolerance; in addition, the motion track can be adjusted into a line adapted to the contact area, which improves the visual effect.
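As one possible reading of this embodiment, the mapping from contact area to drawing parameters could look like the sketch below; the area thresholds, line widths and color values are assumptions introduced for illustration and are not taken from the disclosure.

```python
def drawing_params(contact_area_mm2):
    """Map the contact area on the touch interface to line thickness and color.
    All thresholds and values are illustrative assumptions."""
    if contact_area_mm2 >= 40.0:                      # roughly a finger pad
        return {"width_px": 8, "color": "#FF3B30"}    # thick, vivid, striking line
    if contact_area_mm2 >= 10.0:                      # a broad stylus tip
        return {"width_px": 4, "color": "#FF9500"}
    return {"width_px": 2, "color": "#8E8E93"}        # fine stylus: thin, subtle line
```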
In one embodiment, the step of adjusting the trajectory graph by using the boundary line of the target object in step S103 may be performed as follows, and the flowchart is shown in fig. 2, and includes the following sub-steps:
s201, calling a preset boundary algorithm to extract boundary data of the target object, and determining a boundary line of the target object;
s202, when the motion trail intersects with the boundary line of the target object, the motion trail between the intersection points is adjusted to be outside the boundary line, so that the area covered by the trail graph covers the target object.
The preset boundary algorithm may be a boundary tracking algorithm. For a target object with a closed boundary line, the process of obtaining the boundary line with the boundary algorithm is as follows: first, a starting point of the boundary line of the target object is determined; then a boundary judgment criterion and a search criterion are determined, where the judgment criterion is used to decide whether a point is a boundary point and the search criterion guides how to search for the next boundary point; finally, the end condition of the search is determined, the end point of the boundary line is determined with the end condition, and the boundary points found from the starting point to the end point are connected in sequence to form the boundary line of the target object.
When the motion trajectory intersects the target object, the number and positions of the intersections are not limited. Fig. 3 is a schematic diagram, provided by an embodiment, showing the motion trajectory intersecting the boundary line of the target object; the two intersections of the motion trajectory and the boundary line of the target object are a and b, respectively.
In this embodiment, when an intersection exists between the motion trajectory and the boundary line of the target object, the trajectory graph is adjusted into a shape that completely contains the boundary line of the target object, so that the trajectory graph is corrected and the target object can be conveniently and completely intercepted based on the corrected trajectory graph.
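A minimal sketch of S201 and S202 follows. It assumes the target object is available as a binary mask, uses OpenCV contour tracing as one possible stand-in for the 'preset boundary algorithm', and realizes 'adjusting the motion track between the intersection points to lie outside the boundary line' as a polygon union with the object; these are illustrative choices, not the only way to obtain the described effect.

```python
import cv2
import numpy as np
from shapely.geometry import Polygon

def object_boundary(mask):
    """S201: extract the boundary line of the target object from a binary mask
    (contour tracing stands in for the 'preset boundary algorithm')."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    return Polygon(largest.reshape(-1, 2))

def expand_track_graph(track_graph, target):
    """S202: if the track graph crosses the object's boundary line, push the track
    between the intersection points outward so the graph covers the whole object."""
    if track_graph.boundary.intersects(target.boundary):
        return track_graph.union(target)   # covered area now contains the target
    return track_graph
```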
The present disclosure also provides another way to adjust the trajectory graph by using the boundary line of the target object; a schematic flowchart thereof is shown in fig. 2, and it includes:
s203, acquiring a target object with a complete boundary line in an area covered by the track graph;
s204, adjusting the motion track of the track image to be overlapped with the boundary line of the target object.
The scheme provided by this embodiment is that, when the track graph does not intersect the boundary line of the target object and the target object is completely covered by the area of the track graph, the track graph is shrunk until it coincides with the boundary line of the target object, so as to achieve accurate interception.
The scheme provided by this embodiment may be performed after step S202, that is, if the motion trajectory intersects the boundary line of the target object, the motion trajectory between the intersection points is adjusted to be outside the boundary line of the target object through the schemes provided in S201 to S202, and then the motion trajectory is adjusted to be overlapped with the boundary line of the target object through the schemes provided in S203 to S204, so as to achieve accurate positioning of the target object, so as to perform accurate interception on the target object subsequently.
When the area covered by the track graph includes at least two target objects with complete boundary lines, the track graph is adjusted so that it covers the cluster of target objects, or so that it covers only the target objects, with the different target objects connected by straight lines, thereby increasing the proportion of the interception frame occupied by the target objects.
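One possible reading of S203 to S204 and of the multi-object case is sketched below: when a single object is fully covered, the frame is shrunk onto the object's own boundary line; when several complete objects are covered, the frame is shrunk onto their convex hull, which connects the separate objects with straight segments. The use of a convex hull is an assumption made only for this sketch.

```python
from shapely.ops import unary_union

def tighten_frame(track_graph, objects):
    """Shrink the track graph onto the target object(s) it fully covers."""
    covered = [obj for obj in objects if track_graph.contains(obj)]
    if not covered:
        return track_graph              # nothing fully covered: keep the graph as is
    if len(covered) == 1:
        return covered[0]               # S203-S204: coincide with the boundary line
    # Several complete objects: connect them with straight segments (convex hull),
    # so the frame covers the cluster while raising the objects' area ratio.
    return unary_union(covered).convex_hull
```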
When the motion track forms a non-closed graph, that is, when there are discontinuities in the motion track and the motion track includes known motion track segments and discontinuities, the step of forming the track graph based on the motion track may be performed in the following manner, the flowchart of which is shown in fig. 4:
s401, completing the discontinuous points based on a preset completing mode to form a post-repairing motion track section;
s402, the known motion track section and the rear complement motion track section are spliced to form a closed track graph.
When there are discontinuities in the motion track, a preset completion mode is invoked to complete the motion track segments between the discontinuities. Various completion modes can be preset, and the appropriate one is selected according to the characteristics of the discontinuities so as to improve the completion effect. After the motion track between the discontinuities has been completed, the completed motion track segments are spliced with the current motion track segments; if the completion yields multiple segments, the splicing operations are executed in sequence until a closed track graph is finally formed.
Specifically, if there are a plurality of discontinuities, they are numbered in sequence. If the difference between the distance from the first discontinuity to the last discontinuity and the distance between any other two adjacent discontinuities lies within a preset error range, the first and last discontinuities are treated as an ordinary pair of adjacent discontinuities, and connecting the adjacent discontinuities yields a closed graph, which is taken as the track graph. If that difference exceeds the preset error range, a closed graph cannot be formed simply by connecting adjacent discontinuities, and the discontinuities need to be completed again to form a closed track graph.
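The closure test described above can be sketched as follows; treating each known segment's endpoints as the discontinuities, bridging gaps with straight segments, and the error ratio `err` are all assumptions made for illustration.

```python
import math

def close_track(segments, err=0.2):
    """S401-S402 sketch: splice known motion track segments into a closed track graph.
    Each segment is a list of (x, y) points; the gaps between consecutive segment
    ends are the discontinuities."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Gap i lies between the end of segment i and the start of segment i+1 (wrapping around).
    gaps = [dist(segments[i][-1], segments[(i + 1) % len(segments)][0])
            for i in range(len(segments))]
    closing_gap, inner_gaps = gaps[-1], gaps[:-1]

    if not inner_gaps or abs(closing_gap - max(inner_gaps)) > err * max(inner_gaps):
        # The first-to-last gap is out of range: a further completion pass is needed.
        raise ValueError("gap pattern not closable by simple bridging; complete again")

    # The first and last discontinuities behave like an ordinary adjacent pair:
    # bridging every gap with a straight segment yields a closed track graph.
    closed = [p for seg in segments for p in seg]
    closed.append(segments[0][0])       # close the ring
    return closed
```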
The following embodiments of the present disclosure provide several completion modes for completing the motion track between discontinuities, as follows:
in one embodiment, the step of completing the discontinuities based on a predetermined completion method to form post-completion motion track segments includes:
b1, acquiring the known curvature of the known motion track segment, and determining the unknown curvature of the post-compensation motion track segment according to the known curvature;
b2, forming a post-compensation motion trajectory segment between the discontinuities according to the unknown curvature.
If the continuous motion track segment has multiple curvatures, the curvature of the post-completion motion track segment is determined based on those curvatures; for example, the mean of all known curvatures is taken as the curvature of the motion track segment between the discontinuities, or the curvature of the post-completion segment is determined according to the trend of the known curvatures. Furthermore, the post-completion motion track segment can be divided into a plurality of sub-segments whose curvatures are determined in turn according to the trend of the known curvature, so that the unknown curvature of the post-completion segment is obtained more accurately and a more accurate track graph is obtained on its basis.
The scheme provided by the present disclosure is suitable for the case where the continuous motion track segment has obvious curvature. The unknown curvature of the post-completion motion track segment between the discontinuities is determined from the known curvature of the continuous segment; compared with connecting the discontinuities with a straight line, a motion track completed according to this curvature better matches the user's real intention, and the track graph obtained from the post-completion segment is more accurate.
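A sketch of B1 and B2 under simplifying assumptions is given below: the known curvature is taken as the reciprocal radius of a least-squares circle fitted to the tail of the known segment, and the gap is bridged with an arc of that circle. The circle fit, the choice of the shorter angular direction and the function names are assumptions for illustration.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit; returns (center, radius).
    The reciprocal of the radius is the known curvature of the segment."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cx, cy]), np.sqrt(c + cx ** 2 + cy ** 2)

def arc_completion(known_tail, gap_start, gap_end, n=20):
    """B1-B2 sketch: bridge the gap with an arc whose curvature matches the known tail."""
    center, _ = fit_circle(known_tail)
    a0 = np.arctan2(*(np.asarray(gap_start, float) - center)[::-1])
    a1 = np.arctan2(*(np.asarray(gap_end, float) - center)[::-1])
    # Walk the shorter angular direction between the two gap endpoints (an assumption).
    if a1 - a0 > np.pi:
        a1 -= 2 * np.pi
    elif a0 - a1 > np.pi:
        a1 += 2 * np.pi
    radius = np.linalg.norm(np.asarray(gap_start, float) - center)
    return [tuple(center + radius * np.array([np.cos(t), np.sin(t)]))
            for t in np.linspace(a0, a1, n)]
```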
The present disclosure provides another embodiment for obtaining the track graph. Specifically, the step of completing the discontinuities based on a preset completion mode to form a post-completion motion track segment includes:
invoking an interpolation algorithm to perform interpolation processing on the discontinuities to form the post-completion motion track segment between the discontinuities.
The post-completion motion track segment between the discontinuities is constructed with an interpolation algorithm based on the known motion track segments; the constructed segment reflects the regularity of the known segments, and determining the post-completion segment based on this regularity improves the accuracy of the track graph.
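An interpolation-based completion can be sketched with SciPy as below: a few known points on each side of the gap are parameterized by cumulative chord length, a cubic spline is fitted through them, and the spline is sampled inside the gap. The choice of cubic splines, the chord-length parameterization and the window of four points per side are assumptions; the disclosure only speaks of 'an interpolation algorithm'.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_gap(before, after, n=20):
    """Bridge the gap between the end of `before` and the start of `after`
    with a cubic spline through the neighbouring known points."""
    head = list(before[-4:])                       # last known points before the gap
    tail = list(after[:4])                         # first known points after the gap
    pts = np.asarray(head + tail, dtype=float)
    # Parameterize by cumulative chord length; the chord across the gap is included.
    t = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    spline = CubicSpline(t, pts, axis=0)
    gap_lo, gap_hi = t[len(head) - 1], t[len(head)]
    return [tuple(p) for p in spline(np.linspace(gap_lo, gap_hi, n))]
```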
In one embodiment, the step of completing the discontinuities based on a preset completion mode to form a post-completion motion track segment includes:
C1, comparing the known motion track segment with pre-stored motion track data;
C2, determining the track segment corresponding to the motion track data with the highest matching degree as the post-completion motion track segment between the discontinuities.
The pre-stored motion track data include the user's pre-stored motion track data, historical motion track data and the like; the pre-stored motion track data can be combined with the adoption rate of each motion track record as the criterion for judging the matching degree.
The scheme provided by this embodiment completes the motion track with pre-stored motion tracks, which ensures the accuracy of the completed motion track segment while reducing the complexity of the completion step and completion mode.
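A sketch of C1 and C2 follows: the known segment is resampled and compared with each pre-stored track by mean point-to-point distance, optionally weighted by how often each stored track has been adopted. Both the distance measure and the weighting, as well as the layout of the stored data, are illustrative assumptions.

```python
import numpy as np

def resample(track, n=32):
    """Resample a polyline to n points evenly spaced along its length."""
    pts = np.asarray(track, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    target = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(target, s, pts[:, 0]),
                            np.interp(target, s, pts[:, 1])])

def best_completion(known_segment, stored):
    """C1-C2 sketch: pick the stored track whose shape best matches the known segment
    and return its gap-bridging part. `stored` maps a name to a tuple
    (full_track, gap_segment, adoption_rate)."""
    probe = resample(known_segment)
    best_name, best_score = None, float("inf")
    for name, (full_track, gap_segment, adoption_rate) in stored.items():
        dist = float(np.mean(np.linalg.norm(resample(full_track) - probe, axis=1)))
        score = dist / (1.0 + adoption_rate)       # frequently adopted tracks score better
        if score < best_score:
            best_name, best_score = name, score
    return stored[best_name][1] if best_name is not None else None
```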
Fig. 5 shows an intercepting apparatus of a target object according to another embodiment of the present disclosure. As shown in fig. 5, the apparatus includes: a motion trajectory obtaining module 501, a target object determination module 502, a capture frame forming module 503 and an intercepting module 504, which are specifically as follows:
a motion trajectory obtaining module 501, configured to obtain target instructions to be executed, and analyze a motion trajectory formed by the target instructions arranged along a time axis;
a target object determination module 502, configured to form a track graph based on the motion track and determine a target object in the area covered by the track graph;
a capture frame forming module 503, configured to adjust the trajectory graph by using the boundary line of the target object, and form a capture frame;
and an intercepting module 504, configured to intercept the target object according to the intercepting box.
With regard to the intercepting apparatus of the target object in the above embodiment, the specific manner of executing the operation of each module has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring now to fig. 6, a schematic structural diagram of an electronic device (e.g., the terminal device 600 in fig. 6) suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player) and a vehicle-mounted terminal (e.g., a car navigation terminal), as well as fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
The electronic device includes a memory and a processor, where the processor may be referred to below as the processing device 601 and the memory may include at least one of a read-only memory (ROM) 602, a random access memory (RAM) 603 and a storage device 608 described below, specifically as follows:
as shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 illustrates an electronic device 600 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform operations comprising: acquiring a target instruction to be executed, and analyzing a motion track formed by the target instruction arranged along a time axis; forming a track graph based on the motion track, and determining a target object in the area covered by the track graph; adjusting the track graph by using the boundary line of the target object to form an interception frame; and intercepting the target object according to the intercepting box.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the functionality and operation of possible implementations of methods, apparatus and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the designation of a module or unit does not in some cases constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a computer-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium may be a machine readable signal medium or a machine readable storage medium. A computer readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a method for intercepting a target object, including:
acquiring a target instruction to be executed, and analyzing a motion track formed by the target instruction arranged along a time axis;
forming a track graph based on the motion track, and determining a target object in the area covered by the track graph;
adjusting the track graph by using the boundary line of the target object to form an interception frame;
and intercepting the target object according to the intercepting box.
Optionally, the step of adjusting the trajectory graph by using the boundary line of the target object includes:
calling a preset boundary algorithm to extract boundary data of a target object, and determining a boundary line of the target object;
and when the motion trail intersects with the boundary line of the target object, adjusting the motion trail between the intersection points to be out of the boundary line so as to enable the area covered by the trail graph to cover the target object.
Optionally, the step of adjusting the trajectory graph by using the boundary line of the target object includes:
acquiring a target object with a complete boundary line in an area covered by a track graph;
and adjusting the motion trail of the trail graph to be coincident with the boundary line of the target object.
Optionally, the step of analyzing the motion trajectory formed by arranging the target instructions along a time axis includes:
receiving the contact area of the target instruction on a touch interface, and determining drawing parameters of a motion track according to the contact area;
and drawing a corresponding motion track according to the drawing parameters.
Optionally, when the motion trajectory includes a known motion trajectory segment and a break point, the step of forming the trajectory graph based on the motion trajectory includes:
completing the break points based on a preset completion mode to form a post-completion motion track segment;
and splicing the known motion track segment with the post-completion motion track segment to form a closed track graph.
Optionally, the step of completing the break points based on a preset completion mode to form a post-completion motion track segment includes:
acquiring the known curvature of the known motion track segment, and determining the unknown curvature of the post-completion motion track segment according to the known curvature;
and forming the post-completion motion track segment between the break points according to the unknown curvature.
Optionally, the step of completing the break points based on a preset completion mode to form a post-completion motion track segment includes:
comparing the known motion track segment with pre-stored motion track data;
and determining the track segment corresponding to the motion track data with the highest matching degree as the post-completion motion track segment between the break points.
According to one or more embodiments of the present disclosure, there is also provided an intercepting apparatus of a target object, the apparatus including:
the motion track acquisition module is used for acquiring target instructions to be executed and analyzing motion tracks formed by the target instructions distributed along a time axis;
a target object determining module, configured to form a trajectory graph based on the motion trajectory, and determine a target object in an area covered by the trajectory graph;
a capture frame forming module, configured to adjust the trajectory graph by using the boundary line of the target object to form a capture frame;
and the intercepting module is used for intercepting the target object according to the intercepting box.
The foregoing description is only an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for intercepting a target object is characterized by comprising the following steps:
acquiring a target instruction to be executed, and analyzing a motion track formed by the target instruction arranged along a time axis;
forming a track graph based on the motion track, and determining a target object in the area covered by the track graph;
adjusting the track graph by using the boundary line of the target object to form an interception frame;
and intercepting the target object according to the intercepting box.
2. The method for intercepting a target object according to claim 1, wherein the step of adjusting the trajectory graph using the boundary line of the target object includes:
calling a preset boundary algorithm to extract boundary data of a target object, and determining a boundary line of the target object;
and when the motion trail intersects with the boundary line of the target object, adjusting the motion trail between the intersection points to be out of the boundary line so as to enable the area covered by the trail graph to cover the target object.
3. The method for intercepting a target object according to claim 1, wherein the step of adjusting the trajectory graph using the boundary line of the target object includes:
acquiring a target object with a complete boundary line in an area covered by a track graph;
and adjusting the motion trail of the trail graph to be coincident with the boundary line of the target object.
4. The method for intercepting a target object according to claim 1, wherein the step of parsing the motion trajectory formed by arranging the target instruction along a time axis includes:
receiving the contact area of the target instruction on a touch interface, and determining drawing parameters of a motion track according to the contact area;
and drawing a corresponding motion track according to the drawing parameters.
5. The intercepting method of a target object according to claim 1, wherein when a motion trajectory includes a known motion trajectory segment and a discontinuity, the step of forming a trajectory graph based on the motion trajectory includes:
completing the break points based on a preset completion mode to form a post-completion motion track segment;
and splicing the known motion track segment with the post-completion motion track segment to form a closed track graph.
6. The method for intercepting a target object according to claim 5, wherein the step of completing the break points based on a preset completion mode to form a post-completion motion track segment includes:
acquiring the known curvature of the known motion track segment, and determining the unknown curvature of the post-completion motion track segment according to the known curvature;
and forming the post-completion motion track segment between the break points according to the unknown curvature.
7. The method for intercepting a target object according to claim 5, wherein the step of completing the break points based on a preset completion mode to form a post-completion motion track segment includes:
comparing the known motion track segment with pre-stored motion track data;
and determining the track segment corresponding to the motion track data with the highest matching degree as the post-completion motion track segment between the break points.
8. An intercepting apparatus of a target object, comprising:
the motion track acquisition module is used for acquiring target instructions to be executed and analyzing motion tracks formed by the target instructions distributed along a time axis;
a target object determining module, configured to form a trajectory graph based on the motion trajectory, and determine a target object in an area covered by the trajectory graph;
a capture frame forming module, configured to adjust the trajectory graph by using the boundary line of the target object to form a capture frame;
and the intercepting module is used for intercepting the target object according to the intercepting box.
9. An electronic device, comprising:
a memory and a processor;
the memory has stored therein a computer program;
the processor, when running the computer program, is configured to perform the method for intercepting a target object according to any one of claims 1 to 7.
10. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of intercepting a target object of any one of claims 1-7.
CN202010063436.0A 2020-01-20 2020-01-20 Intercepting method and device of target object, electronic equipment and storage medium Active CN111324261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010063436.0A CN111324261B (en) 2020-01-20 2020-01-20 Intercepting method and device of target object, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010063436.0A CN111324261B (en) 2020-01-20 2020-01-20 Intercepting method and device of target object, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111324261A true CN111324261A (en) 2020-06-23
CN111324261B CN111324261B (en) 2021-01-19

Family

ID=71163275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010063436.0A Active CN111324261B (en) 2020-01-20 2020-01-20 Intercepting method and device of target object, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111324261B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850350A (en) * 2015-05-25 2015-08-19 上海卓易科技股份有限公司 Screenshot method and system for touchscreen device
CN105808138A (en) * 2016-02-26 2016-07-27 上海卓易科技股份有限公司 Method and device for intercepting picture
US20170249069A1 (en) * 2016-02-29 2017-08-31 Vmware, Inc. Preserving desktop state across login sessions
CN106951158A (en) * 2016-05-26 2017-07-14 周兰兰 The screenshotss method of mobile terminal
CN106775300A (en) * 2016-11-29 2017-05-31 努比亚技术有限公司 A kind of screenshot method and device
WO2019019635A1 (en) * 2017-07-25 2019-01-31 平安科技(深圳)有限公司 Device and method for generating dynamic image, and computer readable storage medium
CN107515715A (en) * 2017-07-31 2017-12-26 北京小米移动软件有限公司 Screenshot method, device and storage medium
CN107728899A (en) * 2017-09-29 2018-02-23 深圳市聚宝汇科技有限公司 A kind of picture intercept method and system
CN108471550A (en) * 2018-03-16 2018-08-31 维沃移动通信有限公司 A kind of video intercepting method and terminal
CN108733281A (en) * 2018-04-08 2018-11-02 深圳先搜科技有限公司 A kind of image interception method, system and terminal device
CN108956625A (en) * 2018-07-20 2018-12-07 Oppo广东移动通信有限公司 Glue road detection method and glue road detection device
CN109375857A (en) * 2018-10-30 2019-02-22 努比亚技术有限公司 Screenshot method, device, mobile terminal and readable storage medium storing program for executing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113190147A (en) * 2021-05-26 2021-07-30 广州文石信息科技有限公司 Electronic book display method and device
WO2023131022A1 (en) * 2022-01-04 2023-07-13 荣耀终端有限公司 Display control method, electronic device, and readable storage medium
WO2024094063A1 (en) * 2022-11-03 2024-05-10 华为技术有限公司 Screen capture processing method and electronic device

Also Published As

Publication number Publication date
CN111324261B (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN111324261B (en) Intercepting method and device of target object, electronic equipment and storage medium
CN112184738B (en) Image segmentation method, device, equipment and storage medium
US11783111B2 (en) Display method and apparatus, and electronic device
CN111399956A (en) Content display method and device applied to display equipment and electronic equipment
CN111399729A (en) Image drawing method and device, readable medium and electronic equipment
CN110413812A (en) Training method, device, electronic equipment and the storage medium of neural network model
CN110825286B (en) Image processing method and device and electronic equipment
CN113313064A (en) Character recognition method and device, readable medium and electronic equipment
CN112051961A (en) Virtual interaction method and device, electronic equipment and computer readable storage medium
WO2022105622A1 (en) Image segmentation method and apparatus, readable medium, and electronic device
CN110795196A (en) Window display method, device, terminal and storage medium
CN113741756A (en) Information processing method, device, terminal and storage medium
CN111327762A (en) Operation track display method and device, electronic equipment and storage medium
CN115205305A (en) Instance segmentation model training method, instance segmentation method and device
CN111273884A (en) Image display method and device and electronic equipment
CN111324405A (en) Character display method and device and electronic equipment
US20230199262A1 (en) Information display method and device, and terminal and storage medium
CN115328429A (en) Display method, display device, electronic apparatus, and storage medium
CN114758342A (en) Text recognition method, device, medium and electronic equipment
CN114742934A (en) Image rendering method and device, readable medium and electronic equipment
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN110348374B (en) Vehicle detection method and device, electronic equipment and storage medium
CN113672122A (en) Image processing method and device and electronic equipment
CN113127101A (en) Application program control method, device, equipment and medium
CN111338827A (en) Method and device for pasting table data and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230406

Address after: Room 802, Information Building, 13 Linyin North Street, Pinggu District, Beijing, 101299

Patentee after: Beijing youzhuju Network Technology Co.,Ltd.

Address before: No. 715, 7th floor, building 3, 52 Zhongguancun South Street, Haidian District, Beijing 100081

Patentee before: Beijing infinite light field technology Co.,Ltd.