CN113760153A - Image processing method and device - Google Patents
- Publication number
- CN113760153A (application number CN202011127439.2A)
- Authority
- CN
- China
- Prior art keywords
- original image
- image
- user
- area
- point data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses an image processing method and device, relating to the field of computer technology. One embodiment of the method includes: acquiring an original image and the buried point data (i.e., event-tracking data) corresponding to the original image, where the buried point data represents the size and/or position of the region of the original image that the user attends to, and is determined from zoom and/or move operations the user triggers on the original image; and analyzing, from the original image and the buried point data, the user's degree of attention to each region of the original image. The method and device can accurately obtain each user's attention to each region of an image.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for processing an image.
Background
At present, most existing recommendation schemes adopt an associated recommendation mode. Taking an article image as an example, the same image is recommended to users who like that image, or a detail drawing of the article is inserted. However, existing recommendation methods cannot accurately determine each user's attention to each region of an image, and therefore cannot reveal user requirements.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method and an image processing apparatus, which can solve the problem that the existing recommendation method cannot accurately obtain the attention of a user to each area of an image.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a method of processing an image.
The image processing method of the embodiment of the invention comprises the following steps:
acquiring an original image and buried point data corresponding to the original image, wherein the buried point data is used for representing size and/or position data of a region concerned by a user in the original image, and the buried point data is determined according to zooming operation and/or moving operation triggered by the user on the original image;
and analyzing the attention of the user to each area of the original image according to the original image and the buried point data.
Optionally, before the step of obtaining the original image and the buried point data corresponding to the original image, the method further includes:
in response to a zooming operation and/or a moving operation triggered by a user on the original image, if the user triggers a jumping-out operation on the original image within a preset time after the zooming operation and/or the moving operation are triggered, determining the size and/or the position of an image in a display frame of the zooming operation and/or the moving operation at each time in the original image as buried point data of the original image;
or responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the zooming operation and/or the moving operation is not triggered again on the original image by the user after the preset time is exceeded, determining the size and/or the position of the image in the display frame of the zooming operation and/or the moving operation at each time in the original image at intervals of the preset time to serve as the buried point data of the original image.
Optionally, analyzing the attention of the user to each region of the original image according to the original image and the buried point data, including:
intercepting an area image of the original image according to the buried point data;
overlapping the original image and the area image;
and taking the number of superimposed layers of the area images as the attention of the user to each area of the original image.
Optionally, analyzing the attention of the user to each region of the original image according to the original image and the buried point data, including:
respectively intercepting corresponding area images of the original image and the same kind of image of the original image according to corresponding buried point data;
respectively superposing the original image with the area image of the original image, the similar image and the area image of the similar image;
taking the ratio of the sum of the superimposed layer numbers of the area image of the original image and the area image of the same type of image to the total layer number as the attention of the user to each area of the original image; the total layer number is used for representing the sum of the layer numbers of the original image, the regional image of the original image, the homogeneous image and the regional image of the homogeneous image.
Optionally, before the step of cropping the original image and the similar image of the original image according to the corresponding buried point data, the method further includes:
classifying a preset number of original images to form at least one set;
and respectively carrying out aggregation operation on the original images of each set to determine the similar images with the similarity to the original images in different sets within a preset range.
Optionally, after the step of analyzing the attention of the user to each region of the original image according to the original image and the buried point data, the method further includes:
according to the thermodynamic diagram or a selection operation triggered by a user on the thermodynamic diagram, intercepting a region image to be displayed from the original image and determining the display position of the region image on a page; the thermodynamic diagram is used for representing the attention degree of a user to each area of the original image;
and displaying the area image to be displayed at the display position on the page.
Optionally, before the step of intercepting an area image to be displayed from the original image and determining a display position of the area image on a page according to the thermodynamic diagram or a selection operation triggered by a user on the thermodynamic diagram, the method further includes:
acquiring a region of which the attention degree is greater than a preset threshold in the thermodynamic diagram;
generating at least one group of thermodynamic position coordinates according to the area with the attention degree larger than a preset threshold value in the thermodynamic diagram;
associating the thermal position coordinates with the original image.
To achieve the above object, according to another aspect of the embodiments of the present invention, there is provided an image processing apparatus.
The image processing device of the embodiment of the invention comprises:
the device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring an original image and buried point data corresponding to the original image, the buried point data is used for representing the size and/or position data of a region concerned by a user in the original image, and the buried point data is determined according to the zooming operation and/or the moving operation triggered by the user on the original image;
and the analysis module is used for analyzing the attention of a user to each area of the original image according to the original image and the buried point data.
To achieve the above object, according to another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
The electronic device of the embodiment of the invention comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method as described above.
To achieve the above object, according to another aspect of an embodiment of the present invention, there is provided a computer-readable medium.
A computer-readable medium of an embodiment of the invention, on which a computer program is stored which, when executed by a processor, implements the method as described above.
One embodiment of the above invention has the following advantage or benefit:
in the embodiment of the invention, the user's attention to each region of an original image is obtained by analyzing the buried point data of the original image. The buried point data represents the size and/or position of the region of the original image that the user attends to, and is determined from the zoom and/or move operations the user triggers on the original image. The method and device can therefore accurately obtain each user's attention to each region of the image.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a flow chart illustrating a method for processing an image according to a first embodiment of the present invention;
FIG. 2 is a schematic flow chart of step 12 of an embodiment of the present invention;
FIG. 3 is a flow chart illustrating a process of determining buried point data according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for processing an image according to a second embodiment of the present invention;
FIG. 5 is a block schematic diagram of an apparatus for processing an image according to an embodiment of the present invention;
FIG. 6 is a block schematic diagram of a system for processing images according to an embodiment of the invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 8 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Generally, the associated recommendation method needs to cluster users in order to find those who like the same kind of images. This clustering is fuzzy and lacks concrete judgment criteria, so the clustering result is inaccurate and the recommendation is distorted. In addition, the associated recommendation method cannot reveal which details of an image a user may be interested in, so it cannot support operational optimization. The conventional recommendation method therefore has the following problems:
1) the focus of each user on the image details cannot be accurately known, and therefore the individual user requirements cannot be known.
2) The detailed points concerned by the group users cannot be accurately known, so that the images to be displayed on the page cannot be fed back and optimized.
3) Images to be displayed on a page cannot be automatically optimized to highlight the points concerned by the group.
In order to solve the above problem, embodiments of the present invention provide an image processing method, which can accurately analyze the attention of a user to each area of an image, and recommend the analysis result to an operator or a merchant to optimize an image to be displayed on a page. Fig. 1 is a flowchart illustrating a method for processing an image according to a first embodiment of the present invention, and as shown in fig. 1, the method for processing an image may include steps 11 to 12 as follows.
Step 11: and acquiring an original image and embedded point data corresponding to the original image.
In step 11, the original image may be a picture or a frame image in a video, etc. The buried point data is used for representing the size and/or the position of a region concerned by a user in the original image, and the buried point data is determined according to a zooming operation and/or a moving operation triggered by the user on the original image. It is understood that the area concerned by the user can be determined according to the zooming operation and/or the moving operation of the original image by the user, and then the size and/or the position of the area concerned can be used as the buried point data.
Before step 11, the embedding point data corresponding to the original image may be determined according to a zoom operation and/or a move operation triggered by a user on the original image, and a specific process of determining the embedding point data includes the following situations:
the first situation is as follows:
in response to a zooming operation and/or a moving operation triggered by a user on the original image, if the user triggers a jumping-out operation on the original image within a preset time after the zooming operation and/or the moving operation is triggered, determining the size and/or the position of an image in a display frame of the zooming operation and/or the moving operation at each time in the original image as the buried point data of the original image.
Case two:
responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the zooming operation and/or the moving operation is not triggered again on the original image by the user after the preset time is exceeded, determining the size and/or the position of the image in the display frame of the zooming operation and/or the moving operation at each time in the original image at intervals of the preset time to be used as the buried point data of the original image.
It should be noted that if the user triggers the zoom operation and/or the move operation on the original image again within the preset time after the zoom operation and/or the move operation is triggered, the buried point data of the original image is not recorded.
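The two cases above, together with the no-record rule for rapid re-triggering, amount to a small state machine. The following is a minimal Python sketch; the 2-second preset time, the event-handler names, and the `(x, y, w, h)` viewport tuple format are all assumptions made for illustration, since the patent does not fix a data layout:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

PRESET_TIME = 2.0  # seconds; the patent's configurable "preset time" (example value)

Viewport = Tuple[int, int, int, int]  # (x, y, w, h) of the display frame in the original image

@dataclass
class BuriedPointTracker:
    """Decides when a viewport becomes a buried-point record,
    following the two cases described in the text."""
    last_op_time: float = 0.0
    pending: Optional[Viewport] = None
    records: List[Viewport] = field(default_factory=list)

    def on_zoom_or_move(self, viewport: Viewport, now: float) -> None:
        # A new zoom/move replaces the pending viewport; if it arrives within
        # the preset time, the previous viewport is simply not recorded.
        self.pending = viewport
        self.last_op_time = now

    def on_jump_out(self, now: float) -> None:
        # Case one: jump-out within the preset time -> record the viewport.
        if self.pending is not None and now - self.last_op_time <= PRESET_TIME:
            self.records.append(self.pending)
        self.pending = None

    def on_tick(self, now: float) -> None:
        # Case two: no further operation once the preset time has elapsed ->
        # record the viewport, and keep recording it every interval while
        # the user dwells on the same view.
        if self.pending is not None and now - self.last_op_time >= PRESET_TIME:
            self.records.append(self.pending)
            self.last_op_time = now
```

Passing `now` explicitly (rather than reading a clock inside the tracker) keeps the logic deterministic and testable.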
After the buried point data is determined, the original image, the article information corresponding to the original image, and the buried point data may be sent to a cloud database, so that the original image and the buried point data corresponding to the original image are obtained through step 11.
Step 12: and analyzing the attention of the user to each area of the original image according to the original image and the buried point data.
In step 12, the attention degree can be understood as the attention rate of the user to each region of the original image. When determining the attention degree of the user to each region of the original image, the following modes can be included:
the first method is as follows: and determining the attention of a user to each area of the original image according to the original image and the buried point data of the original image.
In the first mode, an area image may be first cut from the original image according to the buried point data, and the original image and the area image may be superimposed. And finally, taking the number of superposed layers of the area images as the attention of the user to each area of the original image.
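The crop-and-superimpose counting of the first mode can be sketched with a per-pixel layer counter. This is an assumed representation (the patent speaks only of superimposing images and counting layers); rectangles are `(x, y, w, h)` in original-image pixel coordinates:

```python
import numpy as np

def attention_by_overlay(image_shape, regions):
    """Count, per pixel, how many cropped region images cover it.

    image_shape: (height, width) of the original image.
    regions: buried-point rectangles (x, y, w, h) cut from the original image.
    Returns an integer map whose value at each pixel is the number of
    superimposed region layers, i.e. the attention count for that area.
    """
    h, w = image_shape
    layers = np.zeros((h, w), dtype=np.int32)
    for (x, y, rw, rh) in regions:
        # Clip each rectangle to the image bounds before accumulating.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w, x + rw), min(h, y + rh)
        if x1 > x0 and y1 > y0:
            layers[y0:y1, x0:x1] += 1
    return layers
```

Pixels covered by many users' attention rectangles accumulate a high layer count, which is exactly the per-region attention value described above.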
The second method comprises the following steps: and determining the attention of a user to each area of the original image according to the original image, the embedded point data of the original image, the similar image of the original image and the embedded point data of the similar image.
Referring to fig. 2, when determining the attention of the user to each region of the original image according to the original image, the buried point data of the original image, the same kind of image of the original image, and the buried point data of the same kind of image, the process of analyzing the attention of the user to each region of the original image may include the following steps 121 to 125.
Step 121: a preset number of original images are classified to form at least one set.
In step 121, the original image may be classified according to item information of the original image. For example: the article information may be information such as an article type and a brand corresponding to the original image.
It should be noted that the preset number may be determined according to actual needs. When the preset number is very large, directly aggregating (also called clustering) the original images in step 122 is inefficient. To address this, the original images are first classified in step 121 and then aggregated in step 122, which improves aggregation efficiency.
Step 122: and respectively carrying out aggregation operation on the original images of each set to determine the similar images with the similarity to the original images in different sets within a preset range.
In step 122, the classified original images are aggregated according to the similarity, and homogeneous images of the original images can be determined through the aggregation. The original image can be an article picture of different merchants, and the same kind of image of the original image can be understood as an image with the similarity of the original image within a preset range.
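The patent does not specify a similarity measure for the aggregation. As one possible sketch, suppose each image in a classified set already has a feature vector (e.g. an embedding); a greedy pairwise pass can then mark images whose similarity falls within the preset range as homogeneous. The cosine similarity and the 0.9 threshold here are assumptions:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # the "preset range" lower bound; example value

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def aggregate_homogeneous(features):
    """Pairwise aggregation within one classified set.

    features: dict image_id -> feature vector.
    Returns dict image_id -> list of image_ids whose similarity to it
    is within the preset range (its "homogeneous images").
    """
    ids = list(features)
    similar = {i: [] for i in ids}
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if cosine_similarity(features[ids[i]], features[ids[j]]) >= SIMILARITY_THRESHOLD:
                similar[ids[i]].append(ids[j])
                similar[ids[j]].append(ids[i])
    return similar
```

Because step 121 has already split the images into sets, this quadratic pass runs only within each (much smaller) set, which is the efficiency gain the text describes.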
Step 123: and respectively intercepting corresponding area images of the original image and the same kind of image of the original image according to corresponding buried point data.
In step 123, the image cut from the buried point data can be understood as the area of interest of the user in the original image.
Step 124: and overlapping the original image with the area image of the original image, the similar image and the area image of the similar image respectively.
In step 124, the original image may be respectively superimposed with the area image of the original image, the homogeneous image, and the area image of the homogeneous image based on the similarity of the pixel positions.
Step 125: taking the ratio of the sum of the superimposed layer numbers of the area image of the original image and the area image of the same type of image to the total layer number as the attention of the user to each area of the original image; the total layer number is used for representing the sum of the layer numbers of the original image, the regional image of the original image, the homogeneous image and the regional image of the homogeneous image.
Compared with the first mode, the second mode adds the buried point data of the same type of image in the process of determining the attention, so that the calculation accuracy of the attention can be improved, and the requirements of the user can be acquired more accurately.
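The ratio in step 125 can be computed per pixel by extending the layer counter of the first mode. One assumption in this sketch: the rectangles from homogeneous images have already been mapped into the original image's coordinate system, since the patent superimposes them "based on the similarity of the pixel positions":

```python
import numpy as np

def attention_ratio(image_shape, own_regions, similar_regions, n_similar_images):
    """Per-pixel attention ratio for the second mode.

    own_regions: buried-point rectangles (x, y, w, h) from the original image.
    similar_regions: rectangles from homogeneous images, mapped into the
                     original image's coordinates (an assumption, see above).
    n_similar_images: number of homogeneous full images in the stack.
    """
    h, w = image_shape
    layers = np.zeros((h, w), dtype=np.float64)
    for (x, y, rw, rh) in own_regions + similar_regions:
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w, x + rw), min(h, y + rh)
        if x1 > x0 and y1 > y0:
            layers[y0:y1, x0:x1] += 1
    # Total layers = the original image itself + its region images
    # + the homogeneous images + their region images (per step 125).
    total = 1 + len(own_regions) + n_similar_images + len(similar_regions)
    return layers / total
```

Normalizing by the total layer count makes attention values comparable across images with different numbers of homogeneous images and buried-point records.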
It should be noted that the definition and the determination method of the attention degree of the user to each region of the original image are different, and the above two preferable methods are only examples and are not limited.
In the embodiment of the invention, the attention of the user to each area of the original image is obtained based on the analysis of the buried point data of the original image, the buried point data is used for representing the size and/or position data of the area concerned by the user in the original image, and the buried point data is determined according to the zooming operation and/or the moving operation triggered by the user on the original image. The method and the device can accurately acquire the attention of each user to each area of the image.
After the attention of the user to each region of the original image has been determined, it may be displayed in the form of a thermodynamic diagram (heat map), and the image to be displayed on a page may then be optimized according to that diagram. The optimization process specifically includes the following modes:
the first method is as follows: automatically optimizing an image to be displayed on a page according to the thermodynamic diagram, namely intercepting a region image to be displayed from the original image and determining the display position of the region image on the page according to the thermodynamic diagram; and displaying the area image to be displayed at the display position on the page.
In the first mode, at least one area image is first cut out of the original image according to the thermodynamic diagram. The area images corresponding to the original image are then ranked by attention. Next, the area images to be displayed, and their display positions on the page, are selected according to this ranking. Finally, each area image to be displayed is shown at its display position on the page. In this way, the area images can be pushed into the article gallery, and the article pictures used online are automatically replaced based on the attention ranking.
It should be noted that the display position may be understood as a position of the area image on the page, and the display position at least includes: primary or secondary map locations, such as: the region images with the top-ranked attention may be placed at the main map position and the region images with the bottom-ranked attention may be placed at the sub-map position. Besides, the display position of the area image on the page can be determined according to the selection operation of the user.
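The rank-then-place logic of the first mode can be sketched as follows; the `n_main` split point (how many top-ranked regions go to the main-map position) is an assumed parameter, as the text only says top-ranked images go to the main map and lower-ranked ones to sub-map positions:

```python
def assign_display_positions(region_images, n_main=1):
    """Sort region images by attention and assign page positions.

    region_images: list of (region_id, attention_score) pairs.
    Returns dict region_id -> "main" or "sub".
    """
    ranked = sorted(region_images, key=lambda r: r[1], reverse=True)
    placements = {}
    for rank, (region_id, _score) in enumerate(ranked):
        # Top-ranked regions occupy the main-map position; the rest
        # are placed at sub-map positions.
        placements[region_id] = "main" if rank < n_main else "sub"
    return placements
```

A user-triggered selection (the second mode) could simply override entries in the returned placement mapping.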
The second method comprises the following steps: and optimizing the image to be displayed on the page based on the selection operation of the user on the thermodynamic diagram.
In the second mode, according to the selection operation triggered by the user on the thermodynamic diagram, the area image to be displayed is intercepted from the original image, the display position of the area image on the page is determined, and the display position of the area image to be displayed on the page is displayed. And the second mode can automatically optimize the image to be displayed on the page according to the selection operation of the user.
It can be understood that, in order to facilitate the merchant to obtain the attention information of the user in time, a thermodynamic diagram is generated on the original image and sent to the merchant, so that the merchant can determine the attention degree of each area more intuitively. Based on selection operation of a merchant, the area image to be displayed can be intercepted from the original image, the display position of the area image to be displayed is determined, and finally the area image to be displayed is displayed at the corresponding display position.
Before the area image to be displayed is intercepted from the original image and the display position of the area image on a page is determined, an area with the attention degree larger than a preset threshold value in the thermodynamic diagram can be obtained firstly. And then generating at least one group of thermodynamic position coordinates according to the area with the attention degree larger than a preset threshold value in the thermodynamic diagram. Finally, the thermal position coordinates are associated with the original image in order to intercept the image of the area to be displayed in the original image according to a thermodynamic diagram or according to a selection operation triggered by the user on the thermodynamic diagram.
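The threshold-then-coordinates step above can be sketched as follows. This is a deliberately minimal version: it returns the single bounding box of all above-threshold pixels, whereas a full implementation would label connected components to produce one group of thermal position coordinates per hot region, as the text implies:

```python
import numpy as np

def thermal_position_coordinates(heat, threshold):
    """Bounding box of the pixels whose attention exceeds the threshold.

    heat: 2-D array of attention values (the thermodynamic diagram).
    Returns (x0, y0, x1, y1) in pixel coordinates, or None if no pixel
    exceeds the threshold. The box can then be associated with the
    original image and used to crop the area image to be displayed.
    """
    ys, xs = np.nonzero(heat > threshold)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1)
```

Associating the returned coordinates with the original image (rather than with the diagram) means the crop can be taken at full resolution regardless of how the heat map was rendered.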
In addition, in order to facilitate obtaining the buried point data of the original image, and referring to fig. 3, the processing method of the original image further includes the following steps 14 to 20 before step 11.
Step 14: and in response to the click operation of the original image triggered by the user, the background database pulls the original image, and renders and displays the original image at the front end.
In step 14, taking the original image as an article picture as an example, the user clicks the article picture area of the client, opens the article picture for browsing, pulls the article picture by the background database, and renders and displays the article picture at the front end.
Step 15: and in response to a zooming operation and/or a moving operation of the original image triggered by a user, re-rendering the original image according to the zooming operation and/or the moving operation.
In step 15, the user performs a zoom operation and/or a move operation on the original image, the front end reads operation information of the zoom operation and/or the move operation, and the pixels of the original image are zoomed and re-rendered according to the operation information of the zoom operation and/or the move operation.
Step 16: judging whether the zooming operation and/or the moving operation are triggered again by the user in preset time after the zooming operation and/or the moving operation are triggered; if yes, go to step 17; otherwise, step 18 is performed.
And step 17: judging whether a user triggers a jump-out operation or not; if yes, go to step 18; otherwise, step 15 is performed.
Step 18: and determining the size and/or position of the image in the display frame of each zooming operation and/or moving operation in the original image as the buried point data of the original image.
For example, with a preset time of 2 s (seconds), the method periodically checks whether the user performs any new operation on the picture within 2 s of the previous zoom and/or move operation. If the new operation is a jump-out operation, the size and position within the original image of the image currently in the display frame are calculated and used as the buried point data of the current original image. If another zoom and/or move operation is triggered within the 2 s, no buried point data is recorded. If more than 2 s pass without any operation, the size and position of the image currently in the display frame are likewise calculated as the buried point data of the current original image. If the user dwells on the same view for a long time, the data is recorded repeatedly every 2 s, which reinforces the positional features of the original image.
Step 19: and acquiring the article information of the original image.
In step 19, the article information of the original image may be acquired from an article library, and the article information of the original image at least includes: item type or brand type, etc.
Step 20: and sending the original image, the article information corresponding to the original image and the buried point data to a cloud database.
In the embodiment of the invention, the buried point data of the original image can be determined according to the zoom operation and/or the move operation triggered by the user on the original image, and the attention of the user to each area of the original image can then be analyzed from the buried point data. In this way, the attention of each user to each area of the original image can be acquired accurately.
Fig. 4 is a flowchart of an image processing method according to a second embodiment of the present invention, and referring to fig. 4, the image processing method may include the following steps:
Step 401: acquiring an original image and buried point data corresponding to the original image.
In step 401, the original image may be a picture or a frame image in a video, etc. The buried point data is used for representing the size and/or the position of a region concerned by a user in the original image, and the buried point data is determined according to a zooming operation and/or a moving operation triggered by the user on the original image. It is understood that the area concerned by the user can be determined according to the zooming operation and/or the moving operation of the original image by the user, and then the size and/or the position of the area concerned can be used as the buried point data.
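The buried point data described here, i.e. the size and position of the viewed region inside the original image, can be derived from the zoom factor and pan offset of the display frame. A minimal sketch, in which the parameter names and the top-left-anchored pan convention are assumptions:

```python
def viewport_in_original(display_w, display_h, zoom, pan_x, pan_y):
    """Map the display frame back to a rectangle in original-image
    coordinates after a zoom (scale factor) and a move (pan offset).

    (pan_x, pan_y) is assumed to be the original-image point shown at
    the display frame's top-left corner; zoom > 1 means magnified, so
    the visible region of the original shrinks as the user zooms in.
    """
    w = display_w / zoom   # width of the visible region in the original
    h = display_h / zoom   # height of the visible region in the original
    return (pan_x, pan_y, w, h)
```

For example, an 800x600 display frame at 2x zoom shows a 400x300 patch of the original image, which is exactly the "size and position of the area concerned by the user" that gets recorded.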
Before step 401, the method may determine the buried point data corresponding to the original image according to the zoom operation and/or the move operation triggered by the user on the original image, and the specific process of determining the buried point data may include the following situations:
Case one:
in response to a zooming operation and/or a moving operation triggered by a user on the original image, if the user triggers a jumping-out operation on the original image within a preset time after the zooming operation and/or the moving operation is triggered, determining the size and/or the position of an image in a display frame of the zooming operation and/or the moving operation at each time in the original image as the buried point data of the original image.
Case two:
responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the zooming operation and/or the moving operation is not triggered again on the original image by the user after the preset time is exceeded, determining the size and/or the position of the image in the display frame of the zooming operation and/or the moving operation at each time in the original image at intervals of the preset time to be used as the buried point data of the original image.
It should be noted that if the user triggers the zoom operation and/or the move operation on the original image again within the preset time after the zoom operation and/or the move operation is triggered, the buried point data of the original image is not recorded.
Step 402: a preset number of original images are classified to form at least one set.
In step 402, the original image may be classified according to item information of the original image. For example: the article information may be information such as an article type and a brand corresponding to the original image.
Step 403: and respectively carrying out aggregation operation on the original images of each set to determine the similar images with the similarity to the original images in different sets within a preset range.
In step 403, an aggregation operation (also called a clustering operation) is performed on the classified original images according to similarity, and the similar images of each original image are determined through the aggregation operation. The original images may be article pictures from different merchants, and a homogeneous image of an original image can be understood as an image whose similarity to the original image is within a preset range.
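One way to realize such an aggregation operation is to compare compact image signatures. The toy average-hash below stands in for a real perceptual hash (which would typically work over an 8x8 downscaled grayscale image) and is only a sketch of the idea; the function names and the Hamming-distance threshold are assumptions:

```python
def average_hash(pixels):
    """Toy average-hash: pixels is a flat list of grayscale values.
    Each bit records whether the pixel is at or above the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p >= mean else 0 for p in pixels)


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))


def find_similar(original, candidates, max_distance=2):
    """Return the candidates whose hash distance to `original` is
    within the preset range (here: a Hamming-distance threshold),
    i.e. the homogeneous images of the original image."""
    ref = average_hash(original)
    return [c for c in candidates
            if hamming(ref, average_hash(c)) <= max_distance]
```

A production system would more likely use an established perceptual-hash library or embedding-based clustering; the threshold `max_distance` plays the role of the "preset range" in the text.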
Step 404: and respectively intercepting corresponding area images of the original image and the same kind of image of the original image according to corresponding buried point data.
In step 404, the intercepted image may be understood as the area of interest to the user in the original image.
Step 405: and overlapping the original image with the area image of the original image, the similar image and the area image of the similar image respectively.
In step 405, the original image may be respectively superimposed with the area image of the original image, the homogeneous image, and the area image of the homogeneous image based on the similarity of pixel positions.
Step 406: taking the ratio of the sum of the superimposed layer numbers of the area image of the original image and the area image of the same type of image to the total layer number as the attention of the user to each area of the original image; the total layer number is used for representing the sum of the layer numbers of the original image, the regional image of the original image, the homogeneous image and the regional image of the homogeneous image.
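Steps 405 and 406 can be sketched as a per-pixel layer count: each captured area image contributes one superimposed layer, and the attention of a pixel is the number of region layers covering it divided by the total layer count. The `(x, y, w, h)` rectangle format and the explicit `total_layers` parameter (original + its area images + homogeneous images + their area images) are assumptions:

```python
def attention_map(width, height, regions, total_layers):
    """Per-pixel attention for an original image.

    regions: list of (x, y, w, h) area images cut out per buried point
    data (from the original image and its homogeneous images, already
    aligned to the original's pixel grid).
    total_layers: total number of superimposed layers, as in step 406.
    """
    cover = [[0] * width for _ in range(height)]
    for (x, y, w, h) in regions:              # each area image adds one layer
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                cover[row][col] += 1
    # attention = (sum of region layers at this pixel) / total layers
    return [[c / total_layers for c in row] for row in cover]
```

Pixels covered by many users' areas of interest approach a ratio of 1, which is what the thermodynamic diagram of step 407 then visualizes.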
Step 407: and generating a thermodynamic diagram of the original image according to the attention of the user to each region of the original image.
Step 408: and acquiring a region with the attention degree larger than a preset threshold in the thermodynamic diagram.
Step 409: and generating at least one group of thermodynamic position coordinates according to the area with the attention degree larger than a preset threshold value in the thermodynamic diagram.
In step 409, based on the attention analysis data, the regions whose attention is greater than the preset threshold are selected to generate one or more sets of thermal position coordinates. The preset threshold is configurable and can be set by the user; for example, it may typically be set to 90%. The selection principles for the at least one group of thermal position coordinates include the following:
1) A selection frame of a preset shape can be used to frame, in the thermodynamic diagram, the coordinates of a plurality of areas above the preset threshold, where the preset shape may be a regular shape such as a rectangle or a circle, or an irregular shape.
2) Each area above the same preset threshold can be captured individually as one set of area coordinates.
Taking the preset shape as a rectangle as an example, when selecting the at least one group of thermal position coordinates, a single rectangle is used, as far as possible, to frame the plurality of area coordinates above the threshold in the picture; each rectangular area above the same preset threshold is then captured individually as one set of area coordinates.
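One possible implementation of steps 408 and 409 — selecting the above-threshold regions of the thermodynamic diagram and emitting one set of coordinates per region — is a connected-component pass that returns a bounding rectangle for each contiguous above-threshold area. This sketch handles rectangles only (the text also allows circular or irregular frames), and the 4-connectivity choice is an assumption:

```python
def regions_above_threshold(heat, threshold):
    """Group above-threshold pixels of a heat map (list of rows of
    attention values) into 4-connected components and return one
    bounding rectangle (x, y, w, h) per component."""
    h, w = len(heat), len(heat[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if heat[sy][sx] <= threshold or seen[sy][sx]:
                continue
            # Flood-fill one connected component of hot pixels.
            stack, xs, ys = [(sx, sy)], [], []
            seen[sy][sx] = True
            while stack:
                x, y = stack.pop()
                xs.append(x)
                ys.append(y)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < w and 0 <= ny < h and not seen[ny][nx]
                            and heat[ny][nx] > threshold):
                        seen[ny][nx] = True
                        stack.append((nx, ny))
            boxes.append((min(xs), min(ys),
                          max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes
```

Each returned rectangle corresponds to one group of thermal position coordinates that step 410 would then associate with the original image.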
Step 410: associating the thermal position coordinates with the original image. Step 411 or step 414 may be performed after step 410.
Step 411: intercepting at least one region image in the original image according to the thermodynamic diagram.
After step 411, the truncated region image may be pushed into a gallery.
Step 412: and sequencing the area images corresponding to the original image according to the attention degree.
Step 413: according to the sequence of the area images, the area images to be displayed and the display positions on the page are selected, and then step 415 is executed.
In step 413, the display position may be understood as the position of the area image on the page, and the display position at least includes a main map position or a sub map position. For example, the area images ranked highest in attention may be placed at the main map position, and the area images ranked lower may be placed at sub map positions. In addition, the display position of an area image can also be determined according to a selection operation by the user.
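The ranking-and-placement rule of steps 412 and 413 can be sketched as a sort by attention, with the top-ranked area images assigned to the main map position and the rest to sub map positions. The `main_slots` parameter and the `(name, score)` input format are assumptions for illustration:

```python
def assign_display_positions(region_scores, main_slots=1):
    """Sort area images by attention (descending) and assign each a
    display position: the top `main_slots` go to the main map position,
    the remainder to sub map positions.

    region_scores: list of (region_name, attention_score) pairs.
    Returns a list of (region_name, position) pairs in display order.
    """
    ranked = sorted(region_scores, key=lambda r: r[1], reverse=True)
    return [(name, "main" if i < main_slots else "sub")
            for i, (name, _score) in enumerate(ranked)]
```

Step 415 would then render each area image at its assigned position on the page.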
Step 414: and according to a selection operation triggered by the user on the thermodynamic diagram, intercepting a region image to be displayed from the original image, determining the display position of the region image on a page, and then executing step 415.
Before step 414, the thermodynamic diagram of the original image may be sent to a terminal corresponding to a merchant, and the thermodynamic diagram of the original image is displayed by the terminal, and the merchant may trigger a selection operation on the terminal, and intercept, according to the selection operation, an area image to be displayed and a display position of the area image on a page on the original image.
Step 415: and displaying the area image to be displayed at the display position on the page.
In the embodiment of the invention, the attention of the user to each area of the original image is obtained based on the analysis of the buried point data of the original image, the buried point data is used for representing the size and/or position data of the area concerned by the user in the original image, and the buried point data is determined according to the zooming operation and/or the moving operation triggered by the user on the original image. According to the method and the device, the attention of each user to each area of the image can be accurately acquired, and the image to be displayed on the page is optimized according to the attention of the user to each area of the original image.
Fig. 5 is a block schematic diagram of an image processing apparatus according to an embodiment of the present invention, and referring to fig. 5, the image processing apparatus 500 includes:
an obtaining module 501, configured to obtain an original image and buried point data corresponding to the original image, where the buried point data is used to represent size and/or position data of a region of interest in the original image, and the buried point data is determined according to a zoom operation and/or a move operation triggered by a user on the original image;
an analysis module 502, configured to analyze, according to the original image and the buried point data, a degree of attention of a user to each area of the original image.
Optionally, the apparatus 500 for processing an image further includes:
the determining module is used for responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the user triggers a jumping-out operation on the original image within a preset time after the zooming operation and/or the moving operation are triggered, determining the size and/or the position of an image in a display frame of the zooming operation and/or the moving operation at each time in the original image to serve as the buried point data of the original image; or responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the zooming operation and/or the moving operation is not triggered again on the original image by the user after the preset time is exceeded, determining the size and/or the position of the image in the display frame of the zooming operation and/or the moving operation at each time in the original image at intervals of the preset time to serve as the buried point data of the original image.
Optionally, the analysis module 502 is further configured to:
intercepting an area image of the original image according to the buried point data;
overlapping the original image and the area image;
and taking the number of superimposed layers of the area images as the attention of the user to each area of the original image.
Optionally, the analysis module 502 is further configured to:
respectively intercepting corresponding area images of the original image and the same kind of image of the original image according to corresponding buried point data;
respectively superposing the original image with the area image of the original image, the similar image and the area image of the similar image;
taking the ratio of the sum of the superimposed layer numbers of the area image of the original image and the area image of the same type of image to the total layer number as the attention of the user to each area of the original image; the total layer number is used for representing the sum of the layer numbers of the original image, the regional image of the original image, the homogeneous image and the regional image of the homogeneous image.
Optionally, the apparatus 500 for processing an image further includes:
an aggregation module, configured to classify a preset number of original images to form at least one set; and respectively carrying out aggregation operation on the original images of each set to determine the similar images with the similarity to the original images in different sets within a preset range.
Optionally, the apparatus 500 for processing an image further includes:
the optimization module is used for intercepting a region image to be displayed from the original image according to the thermodynamic diagram and determining the display position of the region image on a page; the thermodynamic diagram is used for representing the attention degree of a user to each area of the original image;
and displaying the area image to be displayed at the display position on the page.
Optionally, the apparatus 500 for processing an image further includes:
the recommendation module is used for intercepting a region image to be displayed from the original image and determining the display position of the region image on a page according to the selection operation triggered by the user on the thermodynamic diagram; the thermodynamic diagram is used for representing the attention degree of a user to each area of the original image;
and displaying the area image to be displayed at the display position on the page.
Optionally, the apparatus 500 for processing an image further includes:
the correlation module is used for acquiring an area of which the attention degree is greater than a preset threshold value in the thermodynamic diagram; generating at least one group of thermodynamic position coordinates according to the area with the attention degree larger than a preset threshold value in the thermodynamic diagram; associating the thermal position coordinates with the original image.
In the embodiment of the invention, the attention of the user to each area of the original image is obtained based on the analysis of the buried point data of the original image, the buried point data is used for representing the size and/or position data of the area concerned by the user in the original image, and the buried point data is determined according to the zooming operation and/or the moving operation triggered by the user on the original image. According to the method and the device, the attention of each user to each area of the image can be accurately acquired, and the image to be displayed on the page is optimized according to the attention of the user to each area of the original image.
Fig. 6 is a block schematic diagram of a processing system of an image according to an embodiment of the present invention. Referring to fig. 6, the processing system of an image includes: an attention buried point system 601, an attention analysis system 602, and an optimization recommendation system 603. The attention buried point system 601 is configured to obtain buried point data reflecting the user's attention to an original image, and may report the original image, the article information of the original image, and the buried point data. The attention analysis system 602 is configured to analyze the buried point data of the original image to generate an attention thermodynamic diagram, and may analyze the attention details of a specific article picture. The optimization recommendation system 603 is configured to recommend the area images to be displayed on a page according to the analyzed attention, so that new article pictures can be designed later and existing picture display pages can be updated and optimized automatically.
Further, the attention buried point system 601 includes: an information buried point system and a browsing-size buried point system. The information buried point system is configured to respond to a zoom operation and/or a move operation triggered by a user on the original image and determine the article information of the original image according to the zoom operation and/or the move operation. The browsing-size buried point system is configured to respond to a zoom operation and/or a move operation triggered by a user on the original image and determine the buried point data of the original image according to the zoom operation and/or the move operation.
Further, the attention analysis system 602 includes: a synthesis submodule, an aggregation submodule, and an analysis submodule. The aggregation sub-module is used for classifying a preset number of original images to form at least one set, and performing aggregation operation on the original images of each set respectively to determine the similar images with similarity to the original images in a preset range in different sets. The synthesis submodule is used for respectively intercepting corresponding area images of the original image and the same kind of image of the original image according to corresponding buried point data; respectively superposing the original image with the area image of the original image, the similar image and the area image of the similar image; taking the ratio of the sum of the superimposed layer numbers of the area image of the original image and the area image of the same type of image to the total layer number as the attention of the user to each area of the original image; the total layer number is used for representing the sum of the layer numbers of the original image, the regional image of the original image, the homogeneous image and the regional image of the homogeneous image. The analysis submodule is used for generating a thermodynamic diagram of the original image according to the attention degree of a user to each area of the original image.
Further, the optimization recommendation system 603 includes: a recommendation submodule and an optimization submodule. The recommendation submodule is used for generating a thermodynamic diagram on the original image according to the attention of a user to each region of the original image, intercepting a region image to be displayed from the original image according to selection operation triggered by the user on the thermodynamic diagram, determining the display position of the region image on a page, and finally displaying the region image to be displayed on the display position of the page. The optimization submodule is used for intercepting at least one area image from the original image according to the thermodynamic diagram, then sequencing the area images corresponding to the original image according to the attention degree, further selecting the area image to be displayed and the display position on the page according to the sequencing of the area images, and finally displaying the area image to be displayed at the display position on the page.
When a user browses a client interface with the software and opens an article picture, the user performs zoom operations and/or move operations on the screen; the purpose of these operations is to enlarge the detail information of the area the user is interested in. Through the attention buried point system, the picture interface after the user zooms and moves can be obtained, and its coordinate-dimension information is reported together with the article information. Homogeneous images of the original image are integrated and superimposed by the attention analysis system 602 to obtain the attention of each area of the original image, and the analysis result is finally output to the optimization recommendation system 603, so that merchants can adjust their picture strategy and the images to be displayed on the current page can be optimized automatically.
Fig. 7 shows an exemplary system architecture 700 of an image processing method or an image processing apparatus to which an embodiment of the present invention can be applied.
As shown in fig. 7, the system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 serves to provide a medium for communication links between the terminal devices 701, 702, 703 and the server 705. Network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 701, 702, 703 to interact with a server 705 over a network 704, to receive or send messages or the like. Various communication client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, social platform software, and the like, may be installed on the terminal devices 701, 702, and 703.
The terminal devices 701, 702, 703 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 705 may be a server that provides various services, such as a background management server that supports shopping websites browsed by users using the terminal devices 701, 702, and 703.
It should be noted that the image processing method provided by the embodiment of the present invention is generally executed by the server 705, and accordingly, the image processing apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks, and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, shown is a block diagram of a computer system 800 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the system 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to comprise: acquiring an original image and buried point data corresponding to the original image, wherein the buried point data is used for representing size and/or position data of a region concerned by a user in the original image, and the buried point data is determined according to zooming operation and/or moving operation triggered by the user on the original image; and analyzing the attention of the user to each area of the original image according to the original image and the buried point data.
In the embodiment of the invention, the attention of a user to each area of the original image is obtained based on the analysis of the buried point data of the original image, the buried point data is used for representing the size and/or position data of the area concerned by the user in the original image, and the buried point data is determined according to the zooming operation and/or the moving operation triggered by the user on the original image. According to the method and the device, the attention of each user to each area of the image can be accurately acquired, and the image to be displayed on the page is optimized according to the attention of the user to each area of the original image.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method of processing an image, comprising:
acquiring an original image and buried point data corresponding to the original image, wherein the buried point data is used for representing size and/or position data of a region concerned by a user in the original image, and the buried point data is determined according to zooming operation and/or moving operation triggered by the user on the original image;
and analyzing the attention of the user to each area of the original image according to the original image and the buried point data.
2. The method of claim 1, wherein prior to the step of obtaining the original image and the corresponding buried point data for the original image, the method further comprises:
in response to a zooming operation and/or a moving operation triggered by a user on the original image, if the user triggers a jumping-out operation on the original image within a preset time after the zooming operation and/or the moving operation are triggered, determining the size and/or the position of an image in a display frame of the zooming operation and/or the moving operation at each time in the original image as buried point data of the original image;
or responding to a zooming operation and/or a moving operation triggered by a user on the original image, and if the zooming operation and/or the moving operation is not triggered again on the original image by the user after the preset time is exceeded, determining the size and/or the position of the image in the display frame of the zooming operation and/or the moving operation at each time in the original image at intervals of the preset time to serve as the buried point data of the original image.
3. The method of claim 1, wherein analyzing the attention of the user to the respective regions of the original image according to the original image and the buried point data comprises:
intercepting an area image of the original image according to the buried point data;
overlapping the original image and the area image;
and taking the number of superimposed layers of the area images as the attention of the user to each area of the original image.
4. The method of claim 1, wherein analyzing the attention of the user to the respective regions of the original image according to the original image and the buried point data comprises:
cropping, according to the corresponding buried point data, area images from the original image and from the similar images of the original image;
superimposing the original image with the area images of the original image, the similar images, and the area images of the similar images;
and taking the ratio of the sum of the superimposed layer numbers of the area images of the original image and of the similar images to the total layer number as the user's attention to each area of the original image, the total layer number being the sum of the layer numbers of the original image, the area images of the original image, the similar images, and the area images of the similar images.
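A sketch of claim 4's pooled ratio, under the same toy assumptions as above (rectangles instead of real crops, one base layer per image; names are hypothetical): region images from the original image and from its similar images are superimposed together, and attention is the per-pixel region-layer count divided by the total layer count:

```python
def pooled_attention(image_size, regions_by_image):
    """`regions_by_image` holds one list of buried-point rects per image
    (the original image first, then its similar images).  The returned
    value per pixel is (overlapping region layers) / (total layers)."""
    w, h = image_size
    counts = [[0] * w for _ in range(h)]
    total_regions = 0
    for regions in regions_by_image:
        total_regions += len(regions)
        for (rx, ry, rw, rh) in regions:
            for y in range(ry, min(ry + rh, h)):
                for x in range(rx, min(rx + rw, w)):
                    counts[y][x] += 1
    # total layers = one base layer per image + one layer per region image
    total_layers = len(regions_by_image) + total_regions
    return [[c / total_layers for c in row] for row in counts]
```

Normalizing by the total layer number keeps attention comparable across originals that have different numbers of similar images.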
5. The method of claim 4, wherein, before the step of cropping the area images from the original image and the similar images of the original image according to the corresponding buried point data, the method further comprises:
classifying a preset number of original images to form at least one set;
and performing an aggregation operation on the original images of each set respectively, to determine, in the different sets, the similar images whose similarity to the original image lies within a preset range.
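Claim 5's aggregation could, for example, be a greedy clustering pass over image feature vectors. The feature vectors, the cosine measure, and the threshold below are illustrative assumptions; the claim only requires grouping images whose similarity falls within a preset range:

```python
def group_similar(features, threshold=0.95):
    """`features` maps an image id to a feature vector (how the vectors are
    produced is outside this sketch).  An image joins the first set whose
    representative it matches with cosine similarity >= threshold."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    sets = []  # list of (representative_vector, [image ids])
    for img_id, vec in features.items():
        for rep, members in sets:
            if cosine(rep, vec) >= threshold:
                members.append(img_id)
                break
        else:
            sets.append((vec, [img_id]))
    return [members for _, members in sets]
```

Each resulting set supplies the "similar images" whose buried points are pooled in claim 4.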
6. The method of claim 1, wherein after the step of analyzing the user's attention to the respective regions of the original image based on the original image and the buried point data, the method further comprises:
cropping a region image to be displayed from the original image, and determining a display position of the region image on a page, according to the thermodynamic diagram or to a selection operation triggered by the user on the thermodynamic diagram; the thermodynamic diagram is used for representing the user's attention to each area of the original image;
and displaying the region image to be displayed at the display position on the page.
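One way to picture claim 6's automatic branch (illustrative only; the claim equally allows a user-triggered selection on the thermodynamic diagram, and the placement rule here is a made-up stand-in for the claimed position determination):

```python
def region_to_display(heat, box_size):
    """Scan a toy heat map (list of rows) for the box_size = (bw, bh)
    window with the highest summed attention; return the crop rectangle
    to cut from the original image plus a simple display position."""
    h, w = len(heat), len(heat[0])
    bw, bh = box_size
    best, best_xy = -1, (0, 0)
    for y in range(h - bh + 1):
        for x in range(w - bw + 1):
            s = sum(heat[yy][xx] for yy in range(y, y + bh)
                                 for xx in range(x, x + bw))
            if s > best:
                best, best_xy = s, (x, y)
    x, y = best_xy
    crop = (x, y, bw, bh)  # region to crop from the original image
    display_pos = "top" if y < h // 2 else "bottom"  # toy placement rule
    return crop, display_pos
```

The brute-force window scan is O(w·h·bw·bh); an integral image would make each window sum O(1), but the simple form keeps the sketch readable.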
7. The method of claim 6, wherein, before the step of cropping the region image to be displayed from the original image and determining the display position of the region image on the page according to the thermodynamic diagram or the selection operation triggered by the user on the thermodynamic diagram, the method further comprises:
acquiring the regions of the thermodynamic diagram whose attention is greater than a preset threshold;
generating at least one group of thermal position coordinates according to the regions of the thermodynamic diagram whose attention is greater than the preset threshold;
and associating the thermal position coordinates with the original image.
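Claim 7's thresholding step, sketched on a toy grid (the grouping rule used here, horizontal runs of adjacent cells, is an assumption; the claim only requires generating groups of coordinates from the above-threshold regions):

```python
def thermal_coordinates(heat, threshold):
    """Collect the cells of a toy heat map whose attention exceeds the
    preset threshold, grouping horizontally adjacent cells into one group
    of (x, y) thermal position coordinates."""
    groups = []
    for y, row in enumerate(heat):
        run = []
        for x, v in enumerate(row):
            if v > threshold:
                run.append((x, y))
            elif run:
                groups.append(run)  # a run of hot cells just ended
                run = []
        if run:
            groups.append(run)
    return groups
```

The resulting coordinate groups would then be stored alongside the original image, so that the crop-and-display step of claim 6 can look them up directly.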
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire an original image and the buried point data corresponding to the original image, wherein the buried point data represents the size and/or position of a region of the original image that the user pays attention to, and is determined according to a zooming operation and/or a moving operation triggered by the user on the original image;
and an analysis module, configured to analyze, according to the original image and the buried point data, the user's attention to each area of the original image.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011127439.2A CN113760153B (en) | 2020-10-20 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113760153A | 2021-12-07
CN113760153B | 2024-10-22
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016091087A1 (en) * | 2014-12-08 | 2016-06-16 | 阿里巴巴集团控股有限公司 | Information presentation method and apparatus and electronic device |
WO2019227920A1 (en) * | 2018-05-31 | 2019-12-05 | 上海掌门科技有限公司 | Method and device for pushing information and presenting information |
CN110852938A (en) * | 2019-10-28 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Display picture generation method and device and storage medium |
CN111738316A (en) * | 2020-06-10 | 2020-10-02 | 北京字节跳动网络技术有限公司 | Image classification method and device for zero sample learning and electronic equipment |
Non-Patent Citations (2)
Title |
---|
HIROMI NEMOTO: "Ultra-eye: UHD and HD images eye tracking dataset", IEEE, 15 December 2014 (2014-12-15) * |
DONG, Tao: "3D Model Compression Based on a Collaborative Optimization Algorithm for Texture Images and Meshes", Science & Technology Information, no. 02, 13 January 2020 (2020-01-13) *
Similar Documents
Publication | Title
---|---
CN109508681B (en) | Method and device for generating human body key point detection model
CN107315824B (en) | Method and device for generating thermodynamic diagram
CN112954450B (en) | Video processing method and device, electronic equipment and storage medium
CN109040960A (en) | A kind of method and apparatus for realizing location-based service
CN110914870B (en) | Annotation generation for image networks
WO2022105740A1 (en) | Video processing method and apparatus, readable medium, and electronic device
CN110619807B (en) | Method and device for generating global thermodynamic diagram
US10515103B2 (en) | Method and system for managing viewability of location-based spatial object
CN111241385B (en) | Information processing method, device, computer system and medium
US11544904B1 (en) | Mesh updates in an extended reality environment
US9792021B1 (en) | Transitioning an interface to a neighboring image
JP2010515968A (en) | Method and system for manipulating graphical images
CN113220381A (en) | Click data display method and device
CN111310086A (en) | Page jump method and device and electronic equipment
CN113837194A (en) | Image processing method, image processing apparatus, electronic device, and storage medium
CN111461965B (en) | Picture processing method and device, electronic equipment and computer readable medium
CN112445394B (en) | Screenshot method and screenshot device
CN113392676A (en) | Multi-target tracking behavior identification method and device
CN110633595B (en) | Target detection method and device by utilizing bilinear interpolation
US20150181288A1 (en) | Video sales and marketing system
CN113760153B (en) | Image processing method and device
US12106419B1 (en) | Live updates in a networked remote collaboration session
US11893675B1 (en) | Processing updated sensor data for remote collaboration
CN116208808A (en) | Video template generation method and device and electronic equipment
CN113760153A (en) | Image processing method and device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant