CN110362252A - Processing method and processing device - Google Patents
- Publication number
- CN110362252A (application number CN201910635455.3A)
- Authority
- CN
- China
- Prior art keywords
- subgraph
- image
- processed
- target
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04803—Split screen, i.e. subdividing the display area or the window area into separate subareas
Abstract
Embodiments of the present application disclose a processing method and a processing device. One specific embodiment of the processing method includes: acquiring at least one group of images to be processed; processing each image to be processed to obtain at least two sub-images, each sub-image containing at least one object; and associating the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images. The images to be processed within the same group contain identical objects, images to be processed in different groups contain at least one identical object, and the size of each sub-image is smaller than that of the image to be processed. This embodiment enables the association of local image regions, facilitates subsequent user operations on the images, and helps to improve processing efficiency.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and more particularly to a processing method and a processing device.
Background art
At present, electronic marking based on automatic scanning has been widely adopted. Compared with traditional manual marking, electronic marking greatly increases marking speed while effectively avoiding the marking errors caused by human mistakes, thereby improving the accuracy and fairness of grading. However, existing electronic marking systems can only mark a whole paper online, and their flexibility is therefore poor.
Summary of the invention
Embodiments of the present application provide a processing method and a processing device.
In a first aspect, an embodiment of the present application provides a processing method, comprising: acquiring at least one group of images to be processed; processing each image to be processed to obtain at least two sub-images, each sub-image containing at least one object; and associating the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images; wherein the images to be processed within the same group contain identical objects, images to be processed in different groups contain at least one identical object, and the size of each sub-image is smaller than that of the image to be processed.
In some embodiments, associating the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images comprises: determining, from the sub-images obtained from different images to be processed, the sub-images that have an association relationship; and clustering the sub-images that have the association relationship; wherein the association relationship is used to characterize that two sub-images contain at least one identical object.
In some embodiments, the method further comprises: in response to a first operation, displaying the associated sub-images in sequence according to a first priority order while keeping the other sub-images on the current display interface unchanged; or, in response to the first operation, displaying the associated sub-images together in the current display region, at least based on the size of the current display region and the number of associated sub-images; wherein the first operation comprises at least one of: sliding, clicking, voice, and gesture; and the first priority order is determined based on at least one of the following kinds of information: identification information corresponding to the associated sub-images, currently input information, and historical ranking information.
In some embodiments, processing each image to be processed to obtain at least two sub-images comprises: acquiring first object information in the image to be processed, and segmenting the image to be processed into a first number of sub-images at least based on the first object information; wherein the first object information comprises at least one of: object number, object position, object color, object font, and object content; the first number is at least related to the number of objects in the image to be processed, and the first number is greater than or equal to 2.
In some embodiments, segmenting the image to be processed into the first number of sub-images based on the first object information comprises: acquiring the positions of adjacent objects in the image to be processed, determining the dividing lines between adjacent objects at least based on the object positions, and segmenting the image to be processed into the first number of sub-images along the dividing lines; or acquiring the font, content, and/or color of each object in the image to be processed, determining the display region of each object at least based on its font, content, and/or color, and segmenting the image to be processed into the first number of sub-images according to the display regions of the objects.
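The dividing-line variant of this segmentation can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes vertically stacked objects whose top/bottom extents are already known (e.g. from detection), and places each cut midway between adjacent objects. All names and the box format are assumptions.

```python
# Hypothetical sketch: given bounding extents of detected objects (e.g. exam
# questions) stacked vertically, place a dividing line midway between each
# pair of vertically adjacent objects and derive one crop extent per object.

def dividing_lines(boxes):
    """Horizontal cut positions between vertically adjacent object boxes.

    boxes: list of (top, bottom) pixel extents, assumed non-overlapping.
    """
    ordered = sorted(boxes)
    return [(a_bottom + b_top) // 2
            for (_, a_bottom), (b_top, _) in zip(ordered, ordered[1:])]

def split_rows(image_height, boxes):
    """Crop extents (top, bottom) for each sub-image, one per object."""
    cuts = dividing_lines(boxes)
    edges = [0] + cuts + [image_height]
    return list(zip(edges, edges[1:]))

# Three stacked objects in a 300-px-tall page image:
boxes = [(10, 80), (120, 180), (220, 290)]
print(split_rows(300, boxes))  # [(0, 100), (100, 200), (200, 300)]
```

Because every cut spans the full page width, the resulting sub-images all share the page width, which matches the remark below that this variant yields sub-images of identical size (here, identical width and contiguous coverage).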
In some embodiments, the method further comprises: in response to a second operation, obtaining a target sub-image at least based on second object information in the image to be processed; and displaying the target sub-images in sequence according to a second priority order; wherein the second operation comprises at least one of: searching and selecting; the second object information comprises at least one of: object content, object identification, object font, and object color; and the second priority order is determined based on at least one of the following: object matching degree, identification information of the target sub-image, and currently input information.
In a second aspect, an embodiment of the present application provides a processing method, comprising: in response to detecting a selection operation on a currently displayed electronic document, determining a target object indicated by the selection operation; determining the display region of the target object as a target region; and, in response to a first operation on the target region, displaying the target objects of different electronic documents in the target region; wherein the first operation comprises at least one of: sliding, clicking, voice, and gesture.
In some embodiments, if the selection operation indicates at least two objects, determining the target object indicated by the selection operation comprises at least one of: determining the first of the at least two objects in their arrangement order as the target object; determining the proportion of the touch area of the selection operation that falls within each of the at least two objects, and determining the object with the highest proportion as the target object; determining the number of bytes of each object within the touch area of the selection operation, and determining the object with the larger byte count as the target object; or determining the object displaying a specific character at the touch area of the selection operation as the target object.
In a third aspect, an embodiment of the present application provides a processing device, comprising: an acquiring unit configured to acquire at least one group of images to be processed; a processing unit configured to process each image to be processed to obtain at least two sub-images, each sub-image containing at least one object; and an associating unit configured to associate the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images; wherein the images to be processed within the same group contain identical objects, images to be processed in different groups contain at least one identical object, and the size of each sub-image is smaller than that of the image to be processed.
In a fourth aspect, an embodiment of the present application provides a processing device, comprising: a detecting unit configured to determine, in response to detecting a selection operation on a currently displayed electronic document, a target object indicated by the selection operation; a region determining unit configured to determine the display region of the target object as a target region; and a display unit configured to display, in response to a first operation on the target region, the target objects of different electronic documents in the target region; wherein the first operation comprises at least one of: sliding, clicking, voice, and gesture.
With the processing method and processing device provided by the embodiments of the present application, each acquired image to be processed can first be processed to obtain at least two sub-images; the corresponding sub-images of the images to be processed can then be associated based on the objects contained in the sub-images. By segmenting an image to be processed into multiple sub-images and associating them, the associated sub-images can be processed individually, which improves the flexibility and convenience of image processing and helps to improve processing efficiency.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of the first embodiment of the processing method provided by the present application;
Fig. 2 is a flowchart of the second embodiment of the processing method provided by the present application;
Fig. 3 is a flowchart of the third embodiment of the processing method provided by the present application;
Fig. 4 is a structural schematic diagram of the first embodiment of the processing device provided by the present application;
Fig. 5 is a structural schematic diagram of the second embodiment of the processing device provided by the present application.
Detailed description of the embodiments
To enable those skilled in the art to better understand the technical solution of the present application, the application is described in detail below with reference to the accompanying drawings and specific embodiments.
These and other characteristics of the present application will become apparent from the following description of preferred embodiments, given by way of non-limiting example, with reference to the accompanying drawings.
It should also be understood that, although the application is described with reference to some specific examples, those skilled in the art can certainly realize many other equivalents of the application that have the features set forth in the claims and therefore all fall within the scope of protection defined thereby.
The above and other aspects, features, and advantages of the present application will become more readily apparent in view of the following detailed description when read in conjunction with the accompanying drawings.
Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; it should be understood, however, that the disclosed embodiments are merely examples of the application, which may be implemented in various ways. Well-known and/or repetitive functions and structures are not described in detail, so as to avoid obscuring the application with unnecessary or redundant detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching those skilled in the art to variously employ the application in virtually any appropriately detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment", or "in other embodiments", each of which may refer to one or more of the same or different embodiments in accordance with the application.
Referring to Fig. 1, there is shown a flow 100 of the first embodiment of the processing method provided by the present application. The processing method may comprise the following steps:
Step 101: acquire at least one group of images to be processed.
In the present embodiment, at least one group of images to be processed may first be acquired. The image to be processed here may be any image, for example a scan of a paper document obtained by scanning, or an image obtained directly from local storage, a cloud server, or another terminal. As an example, the image to be processed may be an image containing the content of the electronic document in the embodiments of Fig. 2 and Fig. 3, for instance a photograph or scan of a paper document (such as an examination paper or a bill), or a screenshot of an electronic document page. The images to be processed within the same group generally contain identical objects, while images to be processed in different groups may contain at least one identical object. The manner of acquiring the images to be processed is not limited in the present application: they may be actually captured, or obtained from network resources.
Step 102: process each image to be processed to obtain at least two sub-images.
In the present embodiment, each image to be processed acquired in step 101 may be processed to obtain at least two sub-images, each of which may contain at least one object. The size of a sub-image will generally be smaller than that of the image to be processed. The specific processing method is not limited here and may be configured according to the actual situation. For example, the image may be cut along the boundary of each recognized object or group of objects, or along acquired boundary lines of each object or group of objects. As another example, the image to be processed may be input into a pre-trained image processing model, which outputs the at least two sub-images.
In some embodiments, first object information in the image to be processed may first be obtained, for example by image recognition. The first object information may include (but is not limited to) at least one of: object number, object position, object color, object font, object content, and so on. Then, at least based on the acquired first object information, the image to be processed may be segmented into a first number of sub-images. Here, the first number may be at least related to the number of objects in the image to be processed; that is, the first number does not exceed the number of objects in the image to be processed. The first number here may be greater than or equal to 2.
As an example, the positions of adjacent objects in the image to be processed may be acquired, so that the dividing lines between adjacent objects are determined at least based on the object positions; the image to be processed can then be segmented into the first number of sub-images along the dividing lines. Sub-images of identical size can be obtained in this way. As another example, at least one of the font, content, and color of each object in the image to be processed may be recognized automatically, so that the display region of each object is determined at least based on its font, content, and/or color (i.e., at least one of the three); the image to be processed can then be segmented into the first number of sub-images according to the display regions of the objects. It will be appreciated that the sub-images obtained in this way are not necessarily identical in size.
Furthermore, each object may be further divided according to the font and/or color within it. Taking a subjective-question examination paper as an example, the whole paper image may be divided, based on identification information in the image such as question numbers, into a number of sub-images equal to the number of questions it contains (each object here corresponding to one question), so that each sub-image corresponds to one question. Each such sub-image may then be further divided according to font color (black versus blue) and font type (printed versus handwritten) to separate the question from the corresponding answer; that is, each sub-image may be further divided into a first sub-image representing the question and a second sub-image representing the answer. In this way, only the answers need to be displayed when marking question by question, which improves marking efficiency.
Step 103: associate the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images.
In the present embodiment, the corresponding sub-images of the images to be processed may be associated based on the objects contained in the sub-images obtained in step 102, so that when an interactive input is received, a response operation is performed only in the region of the relevant sub-image, improving flexibility and convenience while reducing the processing load on the terminal. For example, suppose a first image to be processed and a second image to be processed in the same group are processed: the first yields a first sub-image and a second sub-image, and the second yields a third sub-image and a fourth sub-image. If the first sub-image is identical to the third sub-image, the third sub-image is associated with the first sub-image; if the second sub-image is identical to the fourth sub-image, the second sub-image is associated with the fourth sub-image. Suppose instead a first image to be processed and a second image to be processed in different groups are processed, yielding a first, second, third, and fourth sub-image, where only the first sub-image and the third sub-image contain at least one identical object; then only the first sub-image and the third sub-image are associated. The manner of association is not limited here: associated sub-images may, for example, be stored in the same folder, or be given the same association identifier.
Optionally, the sub-images having an association relationship may first be determined from the sub-images obtained from different images to be processed, where the association relationship may be used to characterize that two sub-images contain at least one identical object. The sub-images having the association relationship may then be clustered, so that, in response to the corresponding operation, the sub-images belonging to the same group can be displayed in sequence according to the clustering result.
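The association/clustering step can be sketched as a simple grouping by shared object identifier. This is a minimal sketch under assumed record fields (each sub-image record carries the identifiers of the objects it contains, e.g. question numbers); the field names are illustrative.

```python
# Hypothetical sketch: sub-images from different source images are grouped
# (clustered) by the identifier of the object they contain, so each cluster
# holds the same question across all papers.

from collections import defaultdict

def cluster_by_object(subimages):
    """Map object id -> list of sub-image names containing that object."""
    clusters = defaultdict(list)
    for sub in subimages:
        for obj in sub["objects"]:
            clusters[obj].append(sub["name"])
    return dict(clusters)

subimages = [
    {"name": "paper1_q3", "objects": ["q3"]},
    {"name": "paper2_q3", "objects": ["q3"]},
    {"name": "paper1_q4", "objects": ["q4"]},
]
print(cluster_by_object(subimages))
# {'q3': ['paper1_q3', 'paper2_q3'], 'q4': ['paper1_q4']}
```

Each cluster then corresponds to one set of associated sub-images: storing a cluster in one folder, or stamping its members with the same association identifier, realizes the two interrelation manners mentioned above.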
The processing method provided by this embodiment can first process each acquired image to be processed to obtain at least two sub-images, and can then associate the corresponding sub-images of the images to be processed based on the objects they contain. By associating the sub-images, further processing operations on local sub-images are made convenient for the user, as described in the embodiments of Fig. 2 and Fig. 3. This not only helps to improve the flexibility and convenience of image processing, but also helps to improve processing efficiency.
In some optional implementations, to enrich the processing method of the present embodiment, when the first operation is detected, the associated sub-images may further be displayed in sequence according to a first priority order while the other sub-images on the current display interface are kept unchanged, thereby achieving a local display effect. The first operation may likewise include (but is not limited to) at least one of: sliding, clicking, voice, gesture, and so on. The first priority order may be determined based on at least one of the following kinds of information: identification information corresponding to the associated sub-images (such as a sequence number, an examinee's name or student number, or the acquisition time of a sub-image), currently input information (such as marking information input by the marker), or historical ranking information (such as an examinee's ranking).
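The first priority order can be sketched as a sort over the associated sub-images. This is an illustrative policy, not the claimed one: it assumes the order is derived from historical ranking first and student number second, with unranked examinees last; all names are invented.

```python
# Hypothetical sketch: associated sub-images (one answer per examinee for
# the same question) are shown in a sequence derived from historical
# ranking information, falling back to the student number.

def display_order(subimages, history_rank):
    """Sort sub-images by the examinee's historical rank, then student no."""
    return sorted(
        subimages,
        key=lambda s: (history_rank.get(s["student"], float("inf")),
                       s["student"]))

subs = [{"student": "s102"}, {"student": "s007"}, {"student": "s055"}]
rank = {"s055": 1, "s102": 2}  # s007 has no ranking history, shown last
print([s["student"] for s in display_order(subs, rank)])
# ['s055', 's102', 's007']
```

Swapping the key function for one based on acquisition time or currently input marking information would realize the other priority sources listed above without changing the display logic.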
Alternatively, when the first operation is detected, the associated sub-images may all be displayed in the current display region, at least based on the size of the current display region and the number of associated sub-images. The current display region here may be the entire display area, or a region for displaying associated sub-images (i.e., a partial display area, such as the target region in the following embodiments). This helps to meet the usage needs of different users and broadens the scope of application.
In some application scenarios, to further enrich the processing method of the present embodiment, when the second operation is detected, a target sub-image may also be obtained at least based on second object information in the image to be processed, and the target sub-images may then be displayed in sequence according to a second priority order. The second operation may include at least one of: searching and selecting. The second object information here may include (but is not limited to) at least one of: object content, object identification, object font, object color, and so on. The second priority order may be determined based on at least one of the following: object matching degree (i.e., the degree of match between the second object information of the target sub-image and the content of the second operation), identification information of the target sub-image (illustratively, a serial number, title, or type), or currently input information. For the specific operation and display process, reference may be made to the related description in the embodiments of Fig. 2 and Fig. 3, which is not repeated here.
With continued reference to Fig. 2, there is shown a flow 200 of the second embodiment of the processing method provided by the present application. This processing method can be applied to various electronic devices, which may include (but are not limited to) smartphones, tablet computers, projectors, desktop computers, servers, and so on. The processing method may comprise the following steps:
Step 201: in response to detecting a selection operation on a currently displayed electronic document, determine the target object indicated by the selection operation.
In the present embodiment, the electronic document may be an electronic document page of any file that needs to be displayed, such as an examination paper file, an invoice file, a billing file, or an application form, and may be obtained by processing the file, for example by photographing or scanning it. The manner of displaying the electronic document is not limited in the present application: it may be displayed on the display screen of an electronic device (such as a mobile phone or desktop computer), or by projection, for example. The content in the electronic document may generally serve as the objects, such as the questions in an examination paper file or the invoicing information in an invoice file. Before the electronic document is displayed, the processing described in the above embodiment may first be applied to it, so as to associate the electronic documents or the objects within them. While consulting the displayed electronic document, the user may also select an object in it. At this point, when a selection operation on the currently displayed electronic document is detected, the target object indicated by the selection operation may be determined; that is, the object indicated by the selection operation is the target object. The specific manner of the selection operation and the method of determining the target object are not limited in the present application.
In some optional implementations, to meet the usage needs of different users and improve the flexibility of operation, the above selection operation may at least include a sliding operation or a clicking operation, whose specific manner is likewise not limited here. For example, the sliding operation may be a single-finger straight or curved slide; the clicking operation may be a single-finger double click, a two-finger click, and so on.
Here, the indicated target object may be determined at least according to the touch area of the slide or click operation. Specifically, the touch area of the slide or click operation may be matched against the display area of each object included in the currently displayed electronic document. If the touch area at least partly overlaps the display area of an object, that object can be determined as the target object.
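As an illustration only, the touch-area matching described above can be sketched as follows, assuming the touch area and each object's display area are axis-aligned rectangles given as (x, y, width, height); the function names and the rectangle convention are our own assumptions, not part of the disclosed method.

```python
def rects_overlap(a, b):
    """Return True if two (x, y, w, h) rectangles at least partly overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def find_target_object(touch_area, objects):
    """Match the touch area against each object's display area and return
    the first object whose display area at least partly overlaps it."""
    for obj_id, display_area in objects.items():
        if rects_overlap(touch_area, display_area):
            return obj_id
    return None
```

For example, a small touch rectangle landing on the second question's display area would return that question's identifier.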
Alternatively, the indicated target object may be determined according to the content displayed at the touch area of the slide or click operation. Specifically, the content displayed at the touch area may first be recognized, and then matched against the content information of each object included in the currently displayed electronic document. The object whose content information matches the recognized content is determined as the target object.
It should be noted that the manner of determining the touch area is likewise not limited here. For example, the motion track of a slide operation may first be identified: for a straight slide, the positions the slide passes through may be determined as the touch area; for a curved slide, the positions the slide passes through, or the region it encloses (the region surrounded by the motion track), may be determined as the touch area. For a click operation, the touched region may be determined directly as the touch area. In some application scenarios, the center of the region touched by the click operation may first be determined, and then a region of preset size and shape centered on that point (for example a circular region 2 centimetres in diameter) is determined as the touch area.
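The touch-area rules above might be sketched as follows; treating a slide track's touch area as its bounding box, and a click's as a preset-diameter circle around the touched center, are illustrative simplifications of our own, not limitations of the disclosure.

```python
def touch_area_from_click(center, diameter=2.0):
    """Return a circular touch area (center, radius) of preset size
    around the centre of the clicked region, e.g. 2 cm in diameter."""
    return (center, diameter / 2.0)

def touch_area_from_slide(track):
    """Approximate a slide's touch area by the bounding box (x, y, w, h)
    of its motion track, covering the positions the slide passes through
    and, for a curved slide, the region the track encloses."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```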
It can be understood that the currently displayed electronic document may be an entire electronic document (such as the to-be-processed image in the embodiment of Fig. 1), or an electronic document obtained by preprocessing an entire electronic document (for example by segmentation, cropping, or pixel adjustment), such as a sub-image in the embodiment of Fig. 1. That is, the currently displayed electronic document may contain only one object (for example only one test question), although under normal conditions it will contain multiple (at least two) objects. The target object may be all objects in the currently displayed electronic document, which amounts to selecting the entire document; it may also be at least one object in the currently displayed electronic document.
In some embodiments, if the target object is a single object but the selection operation, as determined by the above method, indicates at least two objects, the target object indicated by the selection operation may be determined in at least one of the following ways:
For example, the object ranked first among the at least two objects may be determined as the target object. As an example, the ranking here may follow an arrangement order such as top-to-bottom or left-to-right, or the order of arrangement serial numbers (for example ascending).
As another example, the proportion of the touch area of the selection operation corresponding to each of the at least two objects may be determined, and the object with the higher proportion determined as the target object. As an example, if 70% of the touch area of a slide operation corresponds to object A (i.e. overlaps the display area of object A) and 30% corresponds to object B (i.e. overlaps the display area of object B), object A may be determined as the target object.
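The proportion-based tie-break above reduces to picking the object with the largest share of the touch area; a minimal sketch, with our own illustrative data shapes:

```python
def pick_by_region_share(touch_shares):
    """touch_shares maps object id -> fraction of the touch area that
    overlaps that object's display area; pick the largest share."""
    return max(touch_shares, key=touch_shares.get)
```

With the example in the text, shares of 0.7 for A and 0.3 for B select A.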
As yet another example, the byte count of each object at the touch area of the selection operation may be determined, i.e. the number of bytes of each object included in the content displayed at the touch area, and the object with more bytes determined as the target object. Alternatively, the object displaying a specific character at the touch area of the selection operation may be determined as the target object. As an example, the object displaying a question number at the touch area may be determined as the target object: if the content displayed at the touch area contains the question number "three", the question object identified by that number may be determined as the target object. It should be noted that the specific character may be configured according to the actual electronic document.
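The byte-count and specific-character tie-breaks above might look like the following; the per-object content mapping is an assumed data shape, not taken from the disclosure.

```python
def pick_by_byte_count(shown_content):
    """shown_content maps object id -> the bytes of that object included
    in the content displayed at the touch area; pick the object with
    the most bytes."""
    return max(shown_content, key=lambda k: len(shown_content[k]))

def pick_by_marker(shown_content, marker):
    """Pick the first object whose displayed content contains a specific
    character, e.g. a configured question number."""
    for obj_id, content in shown_content.items():
        if marker in content:
            return obj_id
    return None
```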
It can be seen from the above that the target object the user expects can be selected even when the user does not execute the selection operation precisely. This improves the flexibility and convenience of user operation and helps improve the user experience.
Step 202: determine the display area of the target object as the target area.
In the present embodiment, after the target object indicated by the selection operation is determined in step 201, the display area of the target object may be determined as the target area, i.e. the region in which the target object is displayed, such as the display area of the target object on a display screen or projection screen.
It should be noted that a test question in a test paper usually includes a question area and an answer area. In similar test papers, the content of the question area is generally identical, while the content of the answer area often differs. In some application scenarios, if the electronic document is an electronic test paper, i.e. the objects in the electronic document are the test questions in it, at least the display area of the answer area of the target question may be determined as the target area, thereby reducing the content that the target area needs to display. This helps improve the efficiency of display, marking and other processing when the electronic test paper is subsequently marked, i.e. when the question in the target area is marked.
Step 203: in response to a first operation on the target area, display the target objects in different electronic documents in the target area.
In the present embodiment, if a first operation on the target area is detected, the target objects in different electronic documents may be displayed in the target area. It can be understood that in order to mark a large number of electronic documents quickly, the documents are usually classified before marking. That is, the electronic documents may be processed using the processing method described in the embodiment of Fig. 1, so that the objects (or the images where the objects are located) in the electronic documents are associated. The different electronic documents here generally belong to the same type of file, such as the same application form or the same set of electronic test papers. That is, some original intrinsic information of the same object (such as title content or form attributes) is identical in each electronic document, and the position of the same object is constant across the documents.
Here, the specific manner of the first operation is likewise not limited in this application. For example, the first operation may include (but is not limited to) at least one of: sliding, clicking, voice, and gesture. In some embodiments, the first operation may include a touch operation. In that case, if a touch operation on the target area is detected, the target object in the electronic document indicated by the touch operation may be displayed in the target area according to the motion track or the touch position of the operation. For example, sliding leftward in the target area, or clicking the right side or lower position of the target area, causes the target object in the next electronic document (relative to the currently displayed one) to be displayed in the target area.
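A minimal sketch of the document switching described above, assuming the associated documents are held in an ordered list and a leftward slide advances to the next document while a rightward slide goes back; the wrap-around behaviour is our own assumption.

```python
def switch_target(documents, current_index, direction):
    """Given a list of associated documents and a swipe direction,
    return the index of the document whose target object should now be
    shown in the target area: 'left' -> next, anything else -> previous.
    Wraps around at either end of the list."""
    step = 1 if direction == "left" else -1
    return (current_index + step) % len(documents)
```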
Optionally, the first operation may also include a voice or gesture operation. That is, based on a voice operating instruction or an operating gesture directed at the target area, the target object in the electronic document indicated by that instruction or gesture may be displayed in the target area.
It should be noted that if the number of target objects indicated by the above selection operation is less than the number of objects included in the currently displayed electronic document, i.e. the target object is only part of the objects in the currently displayed document, then while the target object in the electronic document indicated by the first operation is displayed in the target area, the currently displayed content of the objects in the non-target area (the display area other than the target area) may be kept unchanged. That is, the user can browse the content of a local region of the electronic document while the content displayed in the other regions remains unchanged. This helps the user quickly browse and mark the object (position region) of interest and improves the flexibility of operation.
With the processing method provided by the embodiments of this application, when a selection operation on the currently displayed electronic document is detected, the target object indicated by the selection operation is first determined; the display area of the target object is then determined as the target area; and if a first operation on the target area is detected, the target objects in different electronic documents are displayed in the target area. This embodiment thus enables operations such as browsing a local region of an electronic document according to the selection operation, i.e. the user can browse the selected region of interest. This not only enriches the methods of marking electronic documents, but also facilitates user operation and helps meet the needs of different users.
With further reference to Fig. 3, it illustrates the process 300 of a third embodiment of the processing method provided by this application. The processing method may include the following steps:
Step 301: in response to detecting a selection operation on the currently displayed electronic document, determine the target object indicated by the selection operation. For details, refer to the related description of step 201 in the embodiment of Fig. 2, which is not repeated here.
Step 302: determine the display area of the target object as the target area. For details, refer to the related description of step 202 in the embodiment of Fig. 2, which is not repeated here.
Step 303: based on an operation on a first browsing identifier, display the target object in the electronic document indicated by the operation in the target area.
In the present embodiment, once the target area is determined, a first browsing identifier may also be presented in the target area. The first browsing identifier can be used to browse the target objects displayed in the target area. It may include (but is not limited to) at least one of: a page-turning bar (such as a scroll bar), a page-turning key (such as a left, right, down, or go-to-first key), or a file identifier indicating each electronic document (such as the number of the document). In this way, through an operation on the first browsing identifier, the target object in the electronic document indicated by the operation can be displayed in the target area.
Step 304: if a selection operation on a classification identifier is detected, display the target object indicated by the selected classification identifier in the target area.
In the present embodiment, a classification identifier may also be presented in the target area. The classification identifier may be obtained by statistical analysis of the marking results of the target objects in the electronic documents, i.e. by classifying the marking results statistically. Taking an electronic test paper as an example, the marking results here may include marked and unmarked, where a marked result includes the score or the deducted points of the question. The categories indicated by the classification identifiers may then be different score ranges or deduction ranges. In this way, if a selection operation on a classification identifier is detected, the target objects indicated by the selected classification identifier can be displayed in the target area.
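The statistical classification of marking results described above might be sketched as follows, assuming each document's target question carries a score (or `None` if unmarked) and the categories are score ranges; the data shapes and bucket labels are illustrative assumptions.

```python
def classify_by_score(marked_scores, ranges):
    """marked_scores maps document id -> score of the target question
    (None if not yet marked); ranges is a list of (label, low, high)
    score ranges. Returns label -> list of document ids, plus an
    'unmarked' bucket for documents not yet marked."""
    buckets = {label: [] for label, _, _ in ranges}
    buckets["unmarked"] = []
    for doc_id, score in marked_scores.items():
        if score is None:
            buckets["unmarked"].append(doc_id)
            continue
        for label, low, high in ranges:
            if low <= score <= high:
                buckets[label].append(doc_id)
                break
    return buckets
```

Selecting a classification identifier then amounts to displaying the documents in the corresponding bucket.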
That is, the target objects in the electronic documents can be screened through the classification identifiers, filtering out the target objects that need to be displayed in the target area. This helps the user quickly find the electronic documents to be marked. Moreover, through the above first browsing identifier, the target objects indicated by the selected classification identifier can be browsed within the target area.
Step 305: in response to starting a comparison view mode, display multiple target objects indicated by the selected classification identifier on the same screen.
In the present embodiment, in order to make it more convenient for the user to quickly mark the same target object in different electronic documents, a comparison view mode may also be provided. When the comparison view mode is started, the multiple target objects indicated by the classification identifier selected in step 304 can be displayed on the same screen, which also facilitates comparison, statistics and other analysis by the user.
Here, the number of target objects displayed on the same screen may be preset, may be calculated from the overall display size and the display size of the target objects, or may be determined in other ways.
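One simple way to calculate that number from the overall display size and a target object's display size is a grid fit, sketched below; the grid layout is our own assumption, since the disclosure leaves the calculation open.

```python
def same_screen_count(display_w, display_h, obj_w, obj_h):
    """Rough count of target objects that fit on one screen, laying
    them out in a grid of whole rows and columns (at least one each)."""
    cols = max(1, display_w // obj_w)
    rows = max(1, display_h // obj_h)
    return cols * rows
```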
Optionally, in the comparison view mode, a browsing identifier for browsing the target objects, i.e. a second browsing identifier, may likewise be presented. The second browsing identifier may include (but is not limited to) a page-turning bar, a page-turning key, or the like. Through the second browsing identifier, the multiple target objects currently displayed on the same screen can be browsed as a whole, i.e. the currently displayed content can be switched.
Further, to enrich the functions of the processing method of this application, a global view mode corresponding to the comparison view mode may also be provided. When multiple target objects are displayed on the same screen, starting the global view mode returns to the content displayed before the comparison view mode was started. This is equivalent to exiting the comparison view mode, thereby facilitating user operation.
The processing method provided by the embodiments of this application further enriches and perfects the functions of marking electronic documents. This helps improve processing efficiency, better suits the actual usage of different users, improves the user experience, and expands the scope of application of the method.
It should be noted that if an electronic document contains only a few objects, for example one or two, and the display area of those objects occupies only part of the overall display area, then in order to improve marking efficiency, the comparison view mode may be started by default when the electronic document is first displayed, so that multiple electronic documents can be displayed on the same screen. To facilitate browsing of the electronic documents, the above second browsing identifier may also be presented. In some embodiments, in order to display multiple electronic documents on the same screen, the display size of each electronic document may also be adjusted adaptively (for example by cropping or scaling) according to the overall display size.
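The adaptive scaling mentioned above could, under the assumption of a side-by-side layout with preserved aspect ratio, be computed as follows; the layout and the function are illustrative only.

```python
def fit_scale(display_w, display_h, doc_w, doc_h, count):
    """Scale factor letting `count` documents of size (doc_w, doc_h)
    share the screen side by side, preserving each document's aspect
    ratio and never exceeding the display height."""
    cell_w = display_w / count
    return min(cell_w / doc_w, display_h / doc_h)
```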
In addition, for the processing methods in the above embodiments, in order to further facilitate user operation, an option identifier for processing electronic documents may also be provided. The option identifier may float at any position of the overall display area (such as a display screen or projection screen) and may be moved by the user. The option identifier may include (but is not limited to) at least one of the following identifiers: annotation, scoring, search, answer, and summary.
Here, the annotation identifier may include preset annotation content (such as "key point" or "well written"). The scoring identifier may include preset scoring options (such as -1 or -2). The search identifier can be used to browse and search electronic documents. The answer identifier usually includes the reference answers of the electronic document. The summary identifier can be used to count and store the total score of the current electronic document.
That is, quick annotation and scoring of electronic documents can be achieved through the annotation and scoring identifiers, improving marking efficiency. It can be understood that the user may also score or input annotation content manually, in which case the content input by the user can be recognized automatically. Meanwhile, the user can adjust the position of an annotation or score. In addition, when each electronic document is displayed individually (one by one), the user may also use the above first operation to browse the displayed content, or trigger the above search identifier so that the first browsing identifier is presented. In this way, the first browsing identifier enables quick search of entire electronic documents, browsing one whole electronic document at a time. In some application scenarios, the user may perform the above selection operation on the currently displayed electronic document while the search identifier is triggered.
In some embodiments, to further improve the user experience, the processing method of this application may also detect the operation speed of the first operation performed by the user, so that different browsing effects can be presented based on the detected speed. For example, if the operation speed is less than a preset value, a slow browsing effect may be presented, such as a larger revealed region area and/or a slower switching speed; if the operation speed is greater than the preset value, a fast browsing effect may be presented, such as a smaller revealed region area and/or a faster switching speed.
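The speed-to-effect mapping just described reduces to a threshold comparison; a minimal sketch, with the effect attributes named after the examples in the text:

```python
def browse_effect(speed, threshold):
    """Map a detected operation speed to a browsing effect: below the
    preset threshold, a slow effect (larger revealed area, slower
    switching); at or above it, a fast effect."""
    if speed < threshold:
        return {"revealed_area": "large", "switch_speed": "slow"}
    return {"revealed_area": "small", "switch_speed": "fast"}
```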
Referring to Fig. 4, as an implementation of the method shown in Fig. 1 above, this application also provides an embodiment of a processing device. The device embodiment corresponds to the method embodiment shown in Fig. 1.
As shown in Fig. 4, the processing device 400 of the present embodiment may include: an acquiring unit 401, configured to acquire at least one group of to-be-processed images; a processing unit 402, configured to process each to-be-processed image to obtain at least two sub-images, each sub-image including at least one object; and an associating unit 403, configured to associate the corresponding sub-images of the to-be-processed images at least based on the objects included in the sub-images. The to-be-processed images in the same group include identical objects, the to-be-processed images in different groups include at least one identical object, and the size of a sub-image is smaller than that of a to-be-processed image.
In some embodiments, the associating unit 403 may be further configured to determine, among the sub-images obtained from different to-be-processed images, the sub-images having an association relationship, and to cluster the sub-images having an association relationship; the association relationship is used to characterize that two sub-images include at least one identical object.
In some embodiments, the device may further include a first display unit (not shown in Fig. 4), configured to, in response to a first operation, display the associated sub-images in turn according to a first priority order while keeping the other sub-images on the current presentation interface unchanged; or, in response to the first operation, display the associated sub-images in the current display area at least based on the size of the current display area and the number of associated sub-images. The first operation includes at least one of: sliding, clicking, voice, and gesture; the first priority order is determined based on at least one of the following: the identification information corresponding to the associated sub-images, current input information, and historical ranking information.
Optionally, the processing unit 402 may be further configured to acquire first object information in the to-be-processed image and to segment the to-be-processed image into a first number of sub-images at least based on the first object information. The first object information includes at least one of: object number, object position, object color, object font, and object content; the first number is at least related to the number of objects in the to-be-processed image, and is greater than or equal to 2.
Further, the processing unit 402 may be configured to acquire the object positions of adjacent objects in the to-be-processed image, determine a dividing line between the adjacent objects at least based on the object positions, and segment the to-be-processed image into the first number of sub-images according to the dividing line; or to acquire the font, content, and/or color of each object in the to-be-processed image, determine the display area of each object at least based on its font, content, and/or color, and segment the to-be-processed image into the first number of sub-images according to the display areas.
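The dividing-line segmentation described above might be sketched as follows, assuming the objects stack vertically and each object's vertical extent (top, bottom) is known; placing the dividing line midway in the gap between adjacent objects is our own assumption, since the disclosure only requires the line to be determined from the object positions.

```python
def split_by_positions(objects, image_height):
    """objects: sorted list of (top, bottom) y-extents of each object in
    a to-be-processed image. Place a dividing line midway in the gap
    between adjacent objects and return each sub-image as a (top,
    bottom) strip; the number of strips equals the number of objects."""
    cuts = [0]
    for (_, bottom), (top, _) in zip(objects, objects[1:]):
        cuts.append((bottom + top) // 2)
    cuts.append(image_height)
    return list(zip(cuts, cuts[1:]))
```

Each returned strip can then be cropped out of the image as one sub-image containing one object.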
In some optional implementations, the device may further include a second display unit (not shown in Fig. 4), configured to, in response to a second operation, obtain target sub-images at least based on second object information in the to-be-processed image, and to display the target sub-images in turn according to a second priority order. The second operation includes at least one of: searching and selecting; the second object information includes at least one of: object content, object identifier, object font, and object color; the second priority order is determined based on at least one of the following: object matching degree, the identification information of the target sub-images, and current input information.
It can be understood that the units recorded in the processing device 400 correspond to the steps of the method described with reference to Fig. 1. Therefore, the operations, features, and beneficial effects described above for the method are equally applicable to the device 400 and the units included therein, and are not repeated here.
With further reference to Fig. 5, as an implementation of the methods shown in Figs. 2 and 3 above, this application also provides an embodiment of a processing device. The device embodiment corresponds to the method embodiments shown in Figs. 2 and 3.
As shown in Fig. 5, the processing device 500 of the present embodiment may include: a detection unit 501, configured to determine, in response to detecting a selection operation on the currently displayed electronic document, the target object indicated by the selection operation; an area determination unit 502, configured to determine the display area of the target object as the target area; and a display unit 503, configured to display, in response to a first operation on the target area, the target objects in different electronic documents in the target area. The first operation includes at least one of: sliding, clicking, voice, and gesture.
In some optional implementations, the selection operation includes at least a slide operation or a click operation; and the detection unit 501 may include a first determining subunit (not shown in Fig. 5), configured to determine the indicated target object at least according to the touch area of the slide or click operation; or a second determining subunit (not shown in Fig. 5), configured to determine the indicated target object at least according to the content displayed at the touch area of the slide or click operation.
Optionally, the first determining subunit may be further configured to match the touch area of the slide or click operation against the display area of each object included in the currently displayed electronic document, and to determine an object as the target object if the touch area of the slide or click operation at least partly overlaps its display area.
In some embodiments, if the selection operation indicates at least two objects, the detection unit 501 may be further configured to perform at least one of the following: determine the object ranked first among the at least two objects as the target object; or determine the proportion of the touch area of the selection operation corresponding to each of the at least two objects, and determine the object with the higher proportion as the target object; or determine the byte count of each object at the touch area of the selection operation, and determine the object with more bytes as the target object; or determine the object displaying a specific character at the touch area of the selection operation as the target object.
Optionally, the display unit 503 may be further configured to, if a touch operation on the target area is detected, display the target object in the electronic document indicated by the touch operation in the target area according to the motion track or the touch position of the touch operation.
Further, the display unit 503 may also be configured to display, at least based on a voice operating instruction or an operating gesture directed at the target area, the target object in the electronic document indicated by the voice operating instruction or operating gesture in the target area.
In some embodiments, the device may further include a holding unit (not shown in Fig. 5), configured to keep the currently displayed content of the objects in the non-target area unchanged.
Optionally, the device may further include a first browsing identifier display unit (not shown in Fig. 5), configured to present a first browsing identifier in the target area, where the first browsing identifier includes at least one of: a page-turning bar, a page-turning key, or a file identifier indicating each electronic document; and the display unit 503 may be further configured to display, at least based on an operation on the first browsing identifier, the target object in the electronic document indicated by the operation in the target area.
In some embodiments, the device may further include a classification identifier display unit (not shown in Fig. 5), configured to present a classification identifier in the target area, where the classification identifier is obtained by statistical analysis of the marking results of the target objects in the electronic documents; and the display unit 503 may be further configured to display, if a selection operation on a classification identifier is detected, the target objects indicated by the selected classification identifier in the target area.
In some application scenarios, the display unit 503 may be configured to display, in response to starting the comparison view mode, the multiple target objects indicated by the selected classification identifier on the same screen, and to return, in response to starting the global view mode, to the content displayed before the comparison view mode was started.
Optionally, the device may further include a second browsing identifier display unit (not shown in Fig. 5), configured to present, in response to starting the comparison view mode, a second browsing identifier including a page-turning bar or a page-turning key, for browsing the currently displayed target objects.
Further, the device may also include an option identifier display unit (not shown in Fig. 5), configured to float-display the option identifier for processing electronic documents, where the option identifier includes at least one of the following identifiers: annotation, scoring, answer, search, and summary; the annotation identifier includes preset annotation content; the scoring identifier includes preset scoring options; the search identifier is used to browse and search the electronic document; the answer identifier includes the reference answers of the electronic document; and the summary identifier is used to count and store the total score of the current electronic document.
In some embodiments, the objects in the electronic document may include the test questions of an electronic test paper, where a test question may include a question area and an answer area; and the area determination unit 502 may be further configured to determine at least the display area of the answer area of the target question as the target area.
In some embodiments, the device may further include a browsing effect display unit (not shown in Fig. 5), configured to detect the operation speed of the first operation and to present different browsing effects based on the detected speed.
It can be understood that the units recorded in the processing device 500 correspond to the steps of the methods described with reference to Figs. 2 and 3. Therefore, the operations, features, and beneficial effects described above for the methods are equally applicable to the device 500 and the units included therein, and are not repeated here.
Since the processing device introduced in the present embodiment is the device corresponding to the processing method in the embodiments of this application, those skilled in the art can understand, based on that processing method, the specific implementations and variations of the processing device, which are therefore not discussed in detail here. Any device with which those skilled in the art implement the processing method in the embodiments of this application falls within the scope of protection of this application.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processing module of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processing module of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description is only the preferred embodiments of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (10)
1. A processing method, comprising:
obtaining at least one group of images to be processed;
processing each image to be processed to obtain at least two sub-images, each sub-image containing at least one object; and
associating the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images;
wherein the images to be processed in a same group contain identical objects, the images to be processed in different groups contain at least one identical object, and the size of each sub-image is smaller than that of the image to be processed.
2. The method according to claim 1, wherein associating the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images comprises:
determining sub-images having an association relation from the sub-images obtained from different images to be processed; and
clustering the sub-images having the association relation;
wherein the association relation characterizes that two sub-images contain at least one identical object.
3. The method according to claim 1, further comprising:
in response to a first operation, displaying the associated sub-images in sequence according to a first priority order while keeping the other sub-images on the current presentation interface unchanged; or
in response to the first operation, also displaying at least the associated sub-images in the current display area, based on the size of the current display area and the number of associated sub-images;
wherein the first operation comprises at least one of the following: slide, click, voice, gesture; and the first priority order is determined based on at least one of the following: identification information corresponding to the associated sub-images, current input information, history ranking information.
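The first priority order of claim 3 can be sketched as a simple sort. The sketch below assumes history ranking information is available as a per-sub-image rank; this is one of the signals the claim lists (identification information and current input information being alternatives), and the field names are illustrative.

```python
# Hypothetical sketch of ordering associated sub-images by history ranking
# information: lower past rank is shown first, unseen sub-images go last.
def order_by_priority(subimages, history_rank):
    """subimages: list of sub-image ids; history_rank: dict id -> past rank."""
    return sorted(subimages, key=lambda s: history_rank.get(s, float("inf")))

print(order_by_priority(["s3", "s1", "s2"], {"s1": 0, "s2": 1}))
# ['s1', 's2', 's3']
```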
4. The method according to claim 1, wherein processing each image to be processed to obtain at least two sub-images comprises:
obtaining first object information in the image to be processed, and dividing the image to be processed into a first quantity of sub-images at least based on the first object information;
wherein the first object information comprises at least one of the following: object number, object position, object color, object font, object content; the first quantity is at least related to the number of objects in the image to be processed, and the first quantity is greater than or equal to 2.
5. The method according to claim 4, wherein dividing the image to be processed into a first quantity of sub-images based on the first object information comprises:
obtaining the object positions of adjacent objects in the image to be processed, determining a dividing line between the adjacent objects at least based on the object positions, and dividing the image to be processed into the first quantity of sub-images according to the dividing lines; or
obtaining the font, content, and/or color of each object in the image to be processed, determining the display area of each object at least based on its font, content, and/or color, and dividing the image to be processed into the first quantity of sub-images according to the display areas of the objects.
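The first branch of claim 5 (a dividing line determined from the positions of adjacent objects) might look like the following sketch. It assumes objects are already detected as vertical extents (top, bottom rows) and places each dividing line at the midpoint of the gap between neighbors; the midpoint rule is an assumption for illustration, not stated in the claim.

```python
# Hypothetical sketch: dividing lines between vertically adjacent objects,
# then cutting the image rows into sub-image ranges at those lines.
def dividing_lines(object_extents):
    """object_extents: list of (top, bottom) rows, sorted top to bottom.
    Returns one dividing-line row per pair of adjacent objects (gap midpoint)."""
    lines = []
    for (_, bottom_a), (top_b, _) in zip(object_extents, object_extents[1:]):
        lines.append((bottom_a + top_b) // 2)
    return lines

def split_rows(height, lines):
    """Cut the row range [0, height) into sub-image ranges at the lines."""
    bounds = [0] + lines + [height]
    return list(zip(bounds, bounds[1:]))

lines = dividing_lines([(0, 90), (110, 200), (220, 300)])
print(lines)                    # [100, 210]
print(split_rows(320, lines))   # [(0, 100), (100, 210), (210, 320)]
```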
6. The method according to claim 1, further comprising:
in response to a second operation, obtaining a target sub-image at least based on second object information in the image to be processed; and
displaying the target sub-images in sequence according to a second priority order;
wherein the second operation comprises at least one of the following: search, selection; the second object information comprises at least one of the following: object content, object identifier, object font, object color; and the second priority order is determined based on at least one of the following: object matching degree, identification information of the target sub-image, current input information.
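A sketch of claim 6's search path: target sub-images are selected by object content and returned in a second priority order based on object matching degree. The token-overlap score below is a stand-in measure chosen for the sketch; the claim does not prescribe how the matching degree is computed.

```python
# Hypothetical sketch: rank sub-images by a simple token-overlap matching degree.
def match_degree(query, content):
    q, c = set(query.lower().split()), set(content.lower().split())
    return len(q & c) / len(q) if q else 0.0

def search_subimages(query, subimages):
    """subimages: dict id -> object content string. Returns ids whose content
    matches the query, highest matching degree first."""
    scored = [(match_degree(query, text), sid) for sid, text in subimages.items()]
    return [sid for score, sid in sorted(scored, reverse=True) if score > 0]

subs = {"s1": "triangle area question", "s2": "circle radius question", "s3": "essay"}
print(search_subimages("triangle question", subs))  # ['s1', 's2']
```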
7. A processing method, comprising:
in response to detecting a selection operation on a currently displayed electronic document, determining the target object indicated by the selection operation;
determining the display area of the target object as a target area; and
in response to a first operation on the target area, displaying the target objects in different electronic documents in the target area;
wherein the first operation comprises at least one of the following: slide, click, voice, gesture.
8. The method according to claim 7, wherein, if the selection operation indicates at least two objects, determining the target object indicated by the selection operation comprises at least one of the following:
determining the front-most object in the arrangement order of the at least two objects as the target object; or
determining the region proportion of the touch area of the selection operation corresponding to each of the at least two objects, and determining the object with the higher region proportion as the target object; or
determining the byte count of each object within the touch area of the selection operation, and determining the object with the larger byte count as the target object; or
determining the object displaying a specific character within the touch area of the selection operation as the target object.
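The region-proportion branch of claim 8 can be sketched with rectangle intersection: the object whose display region overlaps the touch region most becomes the target object. Rectangles as (left, top, right, bottom) tuples and the object names are illustrative assumptions.

```python
# Hypothetical sketch: pick the target object by the largest overlap between
# the touch rectangle and each object's display rectangle.
def overlap_area(a, b):
    """Intersection area of two (left, top, right, bottom) rectangles."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def target_by_region(touch, objects):
    """objects: dict name -> display rectangle. Returns the object covering
    the largest share of the touch rectangle."""
    return max(objects, key=lambda name: overlap_area(touch, objects[name]))

objs = {"title": (0, 0, 100, 20), "answer": (0, 20, 100, 80)}
print(target_by_region((10, 10, 60, 50), objs))  # 'answer'
```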
9. A processing apparatus, comprising:
an acquiring unit configured to obtain at least one group of images to be processed;
a processing unit configured to process each image to be processed to obtain at least two sub-images, each sub-image containing at least one object; and
an associating unit configured to associate the corresponding sub-images of the images to be processed at least based on the objects contained in the sub-images;
wherein the images to be processed in a same group contain identical objects, the images to be processed in different groups contain at least one identical object, and the size of each sub-image is smaller than that of the image to be processed.
10. A processing apparatus, comprising:
a detection unit configured to, in response to detecting a selection operation on a currently displayed electronic document, determine the target object indicated by the selection operation;
an area determination unit configured to determine the display area of the target object as a target area; and
a display unit configured to, in response to a first operation on the target area, display the target objects in different electronic documents in the target area;
wherein the first operation comprises at least one of the following: slide, click, voice, gesture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910635455.3A CN110362252A (en) | 2019-07-15 | 2019-07-15 | Processing method and processing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910635455.3A CN110362252A (en) | 2019-07-15 | 2019-07-15 | Processing method and processing device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110362252A true CN110362252A (en) | 2019-10-22 |
Family
ID=68219490
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910635455.3A Pending CN110362252A (en) | 2019-07-15 | 2019-07-15 | Processing method and processing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110362252A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111159975A (en) * | 2019-12-31 | 2020-05-15 | 联想(北京)有限公司 | Display method and device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103069375A (en) * | 2010-10-15 | 2013-04-24 | 夏普株式会社 | Information-processing device, control method for information-processing device, program, and recording medium |
CN103985279A (en) * | 2014-05-27 | 2014-08-13 | 北京师范大学 | Socialized homework evaluation and student learning process information recording system and method |
US20160358020A1 (en) * | 2015-06-04 | 2016-12-08 | Canon Kabushiki Kaisha | Information processing apparatus, control method, and storage medium |
CN106651876A (en) * | 2016-12-13 | 2017-05-10 | 深圳市海云天科技股份有限公司 | Image processing method and system for answer sheets |
CN106781784A (en) * | 2017-01-04 | 2017-05-31 | 王骁乾 | A kind of intelligence correction system |
CN107293171A (en) * | 2017-06-22 | 2017-10-24 | 宁波宁大教育设备有限公司 | Intelligent handwriting board, single topic segmentation and answer analysis method |
CN108229361A (en) * | 2017-12-27 | 2018-06-29 | 北京摩数教育科技有限公司 | A kind of electronic paper marking method |
CN109740436A (en) * | 2018-12-03 | 2019-05-10 | 李卫强 | A kind of intelligence marking system |
CN110008858A (en) * | 2019-03-20 | 2019-07-12 | 联想(北京)有限公司 | Paper methods of exhibiting and device, computer system and computer readable storage medium storing program for executing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6431119B2 (en) | System and method for input assist control by sliding operation in portable terminal equipment | |
US9886669B2 (en) | Interactive visualization of machine-learning performance | |
EP2080113B1 (en) | Media material analysis of continuing article portions | |
CN104965630B (en) | Method and system for layout of desktop application icons | |
JP7111632B2 (en) | Image candidate determination device, image candidate determination method, program for controlling image candidate determination device, and recording medium storing the program | |
US8731308B2 (en) | Interactive image selection method | |
CN104731881B (en) | A kind of chat record method and its mobile terminal based on communications applications | |
US9798741B2 (en) | Interactive image selection method | |
CN102611815A (en) | Image processing apparatus, image processing system and image processing method | |
WO2022089170A1 (en) | Caption area identification method and apparatus, and device and storage medium | |
JP6876914B2 (en) | Information processing device | |
US10769196B2 (en) | Method and apparatus for displaying electronic photo, and mobile device | |
US10572769B2 (en) | Automatic image piling | |
US20230214091A1 (en) | Multimedia object arrangement method, electronic device, and storage medium | |
CN111753120A (en) | Method and device for searching questions, electronic equipment and storage medium | |
KR20220039578A (en) | Method for providing clothing recommendation information based on user-selected clothing, and server and program using the same | |
CN108121987B (en) | Information processing method and electronic equipment | |
JP2016066115A (en) | Digital content browsing support device, browsing support method, and program | |
EP3910496A1 (en) | Search method and device | |
CN110362252A (en) | Processing method and processing device | |
Yang et al. | A large-scale dataset for end-to-end table recognition in the wild | |
US20210289081A1 (en) | Image processing apparatus, image processing method, and storage medium | |
JP2016181042A (en) | Search apparatus, search method, and program | |
KR20150097250A (en) | Sketch retrieval system using tag information, user equipment, service equipment, service method and computer readable medium having computer program recorded therefor | |
US20180189602A1 (en) | Method of and system for determining and selecting media representing event diversity |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191022