CN102103641A - Method for adding banner advertisement into user-browsed network image - Google Patents
Method for adding banner advertisement into user-browsed network image
- Publication number
- CN102103641A CN102103641A CN2011100549918A CN201110054991A CN102103641A CN 102103641 A CN102103641 A CN 102103641A CN 2011100549918 A CN2011100549918 A CN 2011100549918A CN 201110054991 A CN201110054991 A CN 201110054991A CN 102103641 A CN102103641 A CN 102103641A
- Authority
- CN
- China
- Prior art keywords
- image
- user
- advertisement
- unit
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a method for adding a banner advertisement into a user-browsed network image. The method comprises the following steps: firstly, executing a visual-similarity image search unit 20 on a user-browsed network image unit 10 and determining the images similar to the image picture 100 in the user-browsed network image unit 10; secondly, executing a user interest description word ranking unit 30 and acquiring user interest information according to different types of time constraint information; thirdly, executing an advertisement ranking and selection unit 40; fourthly, executing an advertisement position selection and linking unit 50, calculating the visual similarity between each banner advertisement and the current image, determining the insertion position of the banner advertisement and adding, at the chosen insertion position, a hyperlink to a more detailed description of the advertisement; and lastly, displaying the final result in an advertisement insertion effect display unit 60.
Description
Technical field
The present invention relates to a method for adding banner advertisements to network images browsed by a user.
Background technology
Adding banner advertisements to the images a user browses on the network is both intuitive and commercially promising: online advertising took in nearly 20 billion dollars in the first half of 2010. Existing methods for recommending advertisements on network images suffer from the following problems: 1) the advertisements are not well targeted at individual users, because advertisement insertion methods on the Internet do not distinguish between user groups; 2) the content of the advertisement icon is not sufficiently correlated with the browsed image; and 3) the advertisements have little affinity with the user, since in many cases they are unrelated to the user's interests.
Chinese patent ZL200710117607.8 discloses a method for ranking web advertisements in which advertisement keywords are matched against the keywords of a web page. Such keyword-matching advertisement ranking has no user targeting and therefore cannot effectively draw the attention of network users to the advertisements. The present invention therefore proposes adding banner advertisements relevant to a user's interests into the images that the user browses on the Internet. This advertising method can attract the user's attention effectively.
Summary of the invention
The object of the invention is to overcome the lack of user targeting in existing methods that add banner advertisements to the network images a user browses, and to propose a banner advertisement insertion method guided by the user and by the content of the network images the user browses.
To achieve the above object, the invention adopts the following technical scheme:
A method for adding banner advertisements to network images browsed by a user comprises the following steps:
First, a visual-similarity image retrieval unit 20 is executed on the network image unit 10 browsed by the user. The network image unit 10 comprises an image picture 100, image text 110 and user ID information 120; the visual-similarity image retrieval unit 20 determines the images similar to the image picture 100 and produces a user-ID-associated image visual similarity ranking 220. Next, a user interest description word ranking unit 30 is executed on the ranking 220 to obtain user interest information under different time constraints. Then an advertisement ranking and selection unit 40 is executed: according to the ranked user interest words, the advertisements in the banner advertisement library are ranked and selected by relevance. Next, an advertisement position selection and linking unit 50 is executed: based on the advertisement ranking of the previous step, the visual similarity between each banner advertisement and the current image is computed, the insertion position of the banner advertisement is determined, and a hyperlink to a more detailed description of the advertisement is added at the chosen insertion position. Finally, the result is shown in an advertisement insertion effect display unit 60.
In the above scheme, the visual-similarity image retrieval unit 20 comprises the following steps. First, a visual feature extraction step 101 is applied to the image picture 100 to extract color, texture and edge features. Next, a visual feature quantization step 102 quantizes the color, texture and edge features with the K-means clustering method. In parallel with steps 101 and 102, a text index is built over the images and image texts downloaded from the network (unit 200), producing a network image text index database 201 keyed by user ID; the downloaded images in unit 200 are also passed through the visual feature extraction step 101 and the feature quantization step 102 to obtain a network image visual feature index database 202 keyed by user ID. Then a TF-IDF-based visual similarity measurement step 210 is performed between the quantized features from step 102 and the visual feature indexes of all images of this user in database 202: the similarity between the user's image picture 100 and each image in database 202 is computed, and the resulting visual similarity scores are sorted to obtain the user-ID-associated image visual similarity ranking 220.
In the step of extracting color, texture and edge features, the color feature is extracted by dividing the original image into 5x5 = 25 equal-sized blocks and extracting a 9-dimensional color moment feature from each block, so the color feature has 225 dimensions. The texture feature uses a scalable wavelet packet texture descriptor with the 'DB2' wavelet basis; the image is divided into 2x2 equal-sized blocks plus one centered block, and the texture feature has 170 dimensions. The edge feature uses a 128-dimensional edge distribution histogram with 16 directions and 8 gradient quantization levels.
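For concreteness, the following Python sketch computes a 128-dimensional edge histogram with 16 direction bins and 8 gradient-magnitude levels, matching the dimensions stated above; the exact binning and normalization used by the invention are not specified, so those details here are assumptions.

```python
import numpy as np

def edge_direction_histogram(gray_image, n_dirs=16, n_mags=8):
    """128-D edge feature: joint histogram over 16 gradient directions and
    8 quantized gradient-magnitude levels, matching the dimensions given
    above. gray_image: HxW array. The binning and normalization details
    are assumptions of this sketch."""
    gy, gx = np.gradient(gray_image.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    dir_bin = np.minimum((ang / (2 * np.pi) * n_dirs).astype(int), n_dirs - 1)
    mag_bin = np.minimum((mag / (mag.max() + 1e-12) * n_mags).astype(int), n_mags - 1)
    hist = np.zeros((n_dirs, n_mags))
    np.add.at(hist, (dir_bin.ravel(), mag_bin.ravel()), 1.0)
    return (hist / hist.sum()).ravel()   # shape (128,)
```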
The user interest description word ranking unit 30 comprises the following steps. According to the user-ID-associated image visual similarity ranking 220, a user image text extraction step is executed to obtain the text information 310 of the images that are visually similar to the currently browsed image. Next, step 320 obtains user interest information under different time constraints; step 320 comprises either a globally time-constrained user interest acquisition method 321 or a recently time-constrained user interest acquisition method 322, one of which is chosen. Finally, a visual-similarity-weighted user interest ranking step 330 describes the user's interest from the text of the relevant images selected in step 321 or step 322.
Compared with existing methods for adding advertisement icons to network images, the method provided by the invention has the beneficial effect that the added advertisements are targeted at the individual user.
Description of drawings
The invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a schematic diagram of the overall steps of the method of the invention.
Fig. 2 is a flowchart of the detailed steps of the visual-similarity image retrieval unit 20 of Fig. 1.
Fig. 3 is a flowchart of the detailed steps of the user interest description word ranking unit 30 of Fig. 1.
Embodiment
Fig. 1 shows the overall steps of the method of the invention for adding a banner advertisement to an image browsed by a user. The flow comprises the network image unit 10 browsed by the Internet user; the visual-similarity image retrieval unit 20 executed on the network image unit 10; the user interest description word ranking unit 30; the advertisement ranking and selection unit 40; the advertisement position selection and linking unit 50; and finally the advertisement insertion effect display unit 60 in which the result is shown.
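As a structural illustration only, the following Python sketch chains the five processing units of Fig. 1 as plain callables; the BrowsedImage container and all function names are hypothetical placeholders introduced for the sketch, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class BrowsedImage:
    """Hypothetical container for the user-browsed network image unit 10."""
    picture: Any   # image picture 100 (e.g. a pixel array)
    text: str      # image text 110
    user_id: str   # user ID information 120

def insert_banner_ad(browsed: BrowsedImage,
                     retrieve_similar: Callable,   # unit 20
                     rank_interest: Callable,      # unit 30
                     rank_ads: Callable,           # unit 40
                     choose_position: Callable,    # unit 50
                     display: Callable):           # unit 60
    """Chain the five processing units of Fig. 1 over one browsed image."""
    similar_ranking = retrieve_similar(browsed.picture, browsed.user_id)
    interest = rank_interest(similar_ranking)
    candidates = rank_ads(interest)
    ad, position, link = choose_position(candidates, browsed)
    return display(browsed.picture, ad, position, link)
```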
The network image unit 10 browsed by the user comprises the image picture 100, the image text 110 and the user ID information 120. The visual-similarity image retrieval unit 20 determines the images similar to the image picture 100; Fig. 2 shows the corresponding flow. First, the visual feature extraction step 101 is applied to the image picture 100 to extract color, texture and edge features. The color feature is obtained by dividing the original image into 5x5 = 25 equal-sized blocks and extracting a 9-dimensional color moment feature from each block, giving 225 dimensions in total. The texture feature uses a scalable wavelet packet texture descriptor with the 'DB2' wavelet basis; the image is divided into 2x2 equal-sized blocks plus one centered block, and the texture feature has 170 dimensions (for details see: X. Qian, G. Liu, D. Guo, Z. Li, Z. Wang, and H. Wang, "Object Categorization using Hierarchical Wavelet Packet Texture Descriptors," in Proc. ISM 2009, pp. 44-51). The edge feature uses a 128-dimensional edge distribution histogram with 16 directions and 8 gradient quantization levels.
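A minimal sketch of the 225-dimensional color feature described above, assuming the image is an HxWx3 array and that the 9 color moments per block are the per-channel mean, standard deviation and third-order moment (the patent does not list the moments explicitly):

```python
import numpy as np

def color_moment_feature(image):
    """225-D color feature for step 101: 5x5 equal blocks, 9 assumed color
    moments (per-channel mean, std, third-order moment) per block.
    image: HxWx3 array; this is a sketch, not the patent's exact code."""
    h, w, _ = image.shape
    feats = []
    for by in range(5):
        for bx in range(5):
            block = image[by * h // 5:(by + 1) * h // 5,
                          bx * w // 5:(bx + 1) * w // 5].reshape(-1, 3).astype(np.float64)
            mean = block.mean(axis=0)
            std = block.std(axis=0)
            skew = np.cbrt(((block - mean) ** 3).mean(axis=0))
            feats.extend(np.concatenate([mean, std, skew]))
    return np.asarray(feats)   # shape (225,)
```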
Next, the visual feature quantization step 102 is executed: after feature extraction, the color moment features, wavelet packet texture features and edge features are each quantized with the K-means clustering method, using codebooks of 50000, 10000 and 50000 codewords respectively. The codebook sizes can be changed as needed in practice; the invention suggests using more than 10000 codewords.
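A sketch of this quantization step using k-means from scikit-learn; the random vectors and the 256-word codebook are placeholders to keep the example runnable, whereas the patent uses codebooks of 50000/10000/50000 words learned on real features.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(features, n_words):
    """Learn a codebook for one feature type by k-means clustering (step 102).
    features: (n_images, dim) matrix."""
    return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(features)

def quantize(features, codebook):
    """Assign each feature vector to the index of its nearest codeword."""
    return codebook.predict(features)

# hypothetical usage with random vectors standing in for 225-D color features
color_feats = np.random.rand(2000, 225)
codebook = build_codebook(color_feats, n_words=256)
visual_words = quantize(color_feats, codebook)
```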
The candidate similar images come from the images downloaded from the network and their image texts (unit 200), that is, image data and per-image tag text downloaded from websites such as Bing, Flickr and Google. A text index is built over the image texts in unit 200, producing the network image text index database 201 keyed by user ID. The images in unit 200 are then passed through the visual feature extraction step 101 and the feature quantization step 102 to obtain the network image visual feature index database 202 keyed by user ID. Then the TF-IDF-based visual similarity measurement step 210 is performed between the quantized features from step 102 and the visual feature indexes of all of this user's images in database 202: the similarity between the user's image picture 100 and each image in database 202 is computed. Assuming the current user has N images, the visual similarity score of any image i is S(i), i = 1~N, with S(i) ∈ [0, 1]. The similarity computation uses the TF-IDF criterion, a well-known method in this field. Finally the visual similarity scores are sorted from high to low to obtain the user-ID-associated image visual similarity ranking 220.
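A minimal illustration of TF-IDF scoring over quantized visual words, assuming each image is represented as a bag of codeword ids; the exact TF-IDF variant used by the invention is not specified, so this sketch uses one common smoothed form with cosine similarity.

```python
import numpy as np

def tfidf_similarity_ranking(query_words, index_words):
    """Rank a user's indexed images by TF-IDF visual similarity (step 210).
    query_words: list of visual-word ids for the browsed picture 100.
    index_words: list of such lists, one per image in index database 202.
    Returns (order, scores); scores S(i) lie in [0, 1] because they are
    cosines of non-negative tf-idf vectors."""
    vocab = sorted({w for doc in [query_words] + index_words for w in doc})
    col = {w: j for j, w in enumerate(vocab)}

    def tf(doc):
        v = np.zeros(len(vocab))
        for w in doc:
            v[col[w]] += 1.0
        return v / max(len(doc), 1)

    docs = np.array([tf(d) for d in index_words])
    q = tf(query_words)
    # smoothed inverse document frequency over the indexed images
    df = (docs > 0).sum(axis=0)
    idf = np.log((1 + len(index_words)) / (1 + df)) + 1.0
    mat, qv = docs * idf, q * idf
    sims = mat @ qv / (np.linalg.norm(mat, axis=1) * np.linalg.norm(qv) + 1e-12)
    return np.argsort(-sims), sims

# hypothetical usage with made-up visual-word ids
order, scores = tfidf_similarity_ranking([3, 7, 7, 42], [[3, 7, 9], [1, 2, 42], [7, 7, 8]])
```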
The user interest description word ranking unit 30 of Fig. 1 ranks the user's interests; the corresponding steps are shown in Fig. 3. Based on the similar image ranking 220 produced by unit 20, a user image text extraction step obtains the text information 310 of the images that are visually similar to the image the user is currently browsing. Step 320 then obtains user interest information under different time constraints: it comprises either the globally time-constrained user interest acquisition method 321, which uses the text of all of the user's relevant images, or the recently time-constrained user interest acquisition method 322, which restricts the user's interest to the current time period. Finally, the visual-similarity-weighted user interest ranking step 330 describes the user's interest from the text of the relevant images selected in step 321 or step 322. Suppose there are M such images, each image i having a visual similarity score S(i), i = 1~M, S(i) ∈ [0, 1], and containing Z_i descriptive text words. Suppose these images contain K distinct words in total, denoted t_1~t_K. If word t_k occurs c times and the scores of the images containing those occurrences are s_1~s_c, then the final user interest degree I_k accumulates these scores:

I_k = s_1 + s_2 + ... + s_c

The final user interest description uses the normalized interest degree.
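The visual-similarity-weighted interest ranking of step 330 can be sketched as follows; the unit-sum normalization is an assumed form, since the patent only states that a normalized interest degree is used.

```python
from collections import defaultdict

def rank_interest_words(image_texts, scores):
    """Visual-similarity-weighted interest ranking (step 330).
    image_texts: one list of description words per similar image;
    scores: the matching visual similarity scores S(i) in [0, 1].
    Every occurrence of a word adds the score of the image containing it,
    giving I_k = s_1 + ... + s_c; the unit-sum normalization below is an
    assumption of this sketch."""
    interest = defaultdict(float)
    for words, s in zip(image_texts, scores):
        for w in words:
            interest[w] += s
    total = sum(interest.values()) or 1.0
    return sorted(((w, v / total) for w, v in interest.items()),
                  key=lambda kv: kv[1], reverse=True)

# hypothetical usage
ranking = rank_interest_words([["beach", "sunset"], ["beach", "surf"], ["city"]],
                              [0.9, 0.7, 0.2])
```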
In the advertisement ranking and advertisement selection unit 40 of Fig. 1, the advertisements in the banner advertisement library are ranked and selected by relevance according to the user interest word ranking produced by unit 30. The similarity measure used for advertisement matching follows the method of the published literature (T. Mei, X.-S. Hua, and S. Li, "Contextual in-image advertising," in Proc. ACM Multimedia, Vancouver, Canada, 2008, pp. 439-448). After this computation, a user relevance score U(a_i) is obtained for each advertisement a_i.
In the advertisement position selection and linking unit 50 of Fig. 1, the visual similarity between each banner advertisement and the current image is computed according to the advertisement ranking produced by unit 40; the color correlation with the image is used as the similarity criterion. The insertion position is selected as follows: the image is first divided into 5x5 equal-sized blocks, and each block is evaluated for texture complexity and content importance in order to find the most suitable insertion position P(x, y, z), where x and y denote the coordinates and z denotes the number of color channels (z = 3 for a color image, z = 1 for a grayscale image). The concrete method may follow the published literature (T. Mei, X.-S. Hua, and S. Li, "Contextual in-image advertising," in Proc. ACM Multimedia, Vancouver, Canada, 2008, pp. 439-448). After the insertion position is determined, the color difference between the advertisement icon and the corresponding insertion region is used as the visual similarity criterion, and for each advertisement a_i a visual similarity score with the currently browsed image picture 100 is obtained as

V(a_i) = exp(-D(a_i))

where D(a_i) denotes the visual (color) difference between advertisement a_i and the local region at the most suitable insertion position P(x, y, z) of the image the user is currently browsing.
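The block-based position selection of unit 50 could look roughly as follows; gradient energy is used here as a crude stand-in for the texture-complexity and content-importance analysis, whose exact criterion the patent takes from Mei et al. 2008.

```python
import numpy as np

def choose_insertion_block(gray_image):
    """Pick the least-textured of the 5x5 blocks as the ad insertion spot.
    gray_image: HxW array. Gradient energy is a crude proxy for the
    texture-complexity / content-importance analysis of unit 50.
    Returns the top-left pixel coordinate (x, y) of the chosen block."""
    h, w = gray_image.shape
    gy, gx = np.gradient(gray_image.astype(np.float64))
    energy = gx ** 2 + gy ** 2
    best_score, best_xy = np.inf, (0, 0)
    for by in range(5):
        for bx in range(5):
            block = energy[by * h // 5:(by + 1) * h // 5,
                           bx * w // 5:(bx + 1) * w // 5]
            score = block.mean()
            if score < best_score:
                best_score, best_xy = score, (bx * w // 5, by * h // 5)
    return best_xy
```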
In addition, the similarity T(a_i) between advertisement a_i and the image text 110 of the currently browsed network image is also taken into account; T(a_i) is computed in the same way as U(a_i) and is not described again here. The final advertisement selection score F(a_i) is the weighted sum of the user relevance score U(a_i), the visual similarity score V(a_i) and the user text relevance T(a_i):

F(a_i) = α·U(a_i) + β·V(a_i) + γ·T(a_i)

where α, β, γ ∈ [0, 1] are weighting coefficients with α + β + γ = 1, and in this embodiment α = 0.7, β = 0.1, γ = 0.2.
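The final weighted score F(a_i) can be written directly from the formulas above; the dictionary layout of the advertisement records is an assumption of this sketch, not part of the patent.

```python
import math

def select_ad(ads, alpha=0.7, beta=0.1, gamma=0.2):
    """Compute F(a_i) = alpha*U(a_i) + beta*V(a_i) + gamma*T(a_i) with
    V(a_i) = exp(-D(a_i)) and return the highest-scoring advertisement.
    Each ad is a dict with keys 'U', 'D' and 'T' (assumed layout)."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    def f(ad):
        return alpha * ad["U"] + beta * math.exp(-ad["D"]) + gamma * ad["T"]
    return max(ads, key=f)

# hypothetical usage
best = select_ad([{"U": 0.8, "D": 0.5, "T": 0.4},
                  {"U": 0.3, "D": 0.1, "T": 0.9}])
```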
The advertisement with the highest score is selected as the advertisement to insert, and a hyperlink to a more detailed description of that advertisement is added at the corresponding insertion position.
In the advertisement ranking and selection unit 40, a small number of alternative advertisements may be selected as the input to the advertisement position selection and linking unit 50; this effectively reduces the processing complexity of the system.
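A trivial sketch of this candidate pre-selection, keeping only the top-n advertisements by user relevance before the more expensive per-ad visual computations of unit 50 (n = 5 is an arbitrary choice for the sketch):

```python
import heapq

def preselect_candidates(ads, relevance, n=5):
    """Forward only the n ads with the highest user relevance U(a_i)
    to the position-selection unit 50."""
    return heapq.nlargest(n, ads, key=relevance)
```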
The final image with the inserted advertisement is shown in the display unit 60.
Claims (4)
1. A method for adding banner advertisements to network images browsed by a user, characterized in that it comprises the following steps:
First, a visual-similarity image retrieval unit (20) is executed on the network image unit (10) browsed by the user, wherein the network image unit (10) comprises an image picture (100), image text (110) and user ID information (120); the visual-similarity image retrieval unit (20) determines the images similar to the image picture (100) and produces a user-ID-associated image visual similarity ranking (220). Next, a user interest description word ranking unit (30) is executed on the ranking (220) to obtain user interest information under different time constraints. Then an advertisement ranking and selection unit (40) is executed: according to the ranked user interest words, the advertisements in the banner advertisement library are ranked and selected by relevance. Next, an advertisement position selection and linking unit (50) is executed: based on the advertisement ranking of the previous step, the visual similarity between each banner advertisement and the current image is computed, the insertion position of the banner advertisement is determined, and a hyperlink to a more detailed description of the advertisement is added at the chosen insertion position. Finally, the result is shown in an advertisement insertion effect display unit (60).
2. The method for adding banner advertisements to network images browsed by a user according to claim 1, characterized in that the visual-similarity image retrieval unit (20) comprises the following steps: first, a visual feature extraction step (101) is applied to the image picture (100) to extract color, texture and edge features; next, a visual feature quantization step (102) quantizes the color, texture and edge features with the K-means clustering method; in parallel with steps (101) and (102), a text index is built over the images and image texts downloaded from the network (200), producing a network image text index database (201) keyed by user ID, and the downloaded images (200) are passed through the visual feature extraction step (101) and the feature quantization step (102) to obtain a network image visual feature index database (202) keyed by user ID; then a TF-IDF-based visual similarity measurement step (210) is performed between the quantized features from step (102) and the visual feature indexes of all of this user's images in the database (202), the visual similarity score between the image picture (100) and each image in the database (202) is computed, and finally the visual similarity scores are sorted to obtain the user-ID-associated image visual similarity ranking (220).
3. The method for adding banner advertisements to network images browsed by a user according to claim 2, characterized in that, in the step of extracting color, texture and edge features, the color feature is extracted by dividing the original image into 5x5 = 25 equal-sized blocks and extracting a 9-dimensional color moment feature from each block, so that the color feature has 225 dimensions; the texture feature uses a scalable wavelet packet texture descriptor with the 'DB2' wavelet basis, the image is divided into 2x2 equal-sized blocks plus one centered block, and the texture feature has 170 dimensions; and the edge feature uses a 128-dimensional edge distribution histogram with 16 directions and 8 gradient quantization levels.
4. The method for adding banner advertisements to network images browsed by a user according to claim 1, characterized in that the user interest description word ranking unit (30) comprises the following steps: according to the user-ID-associated image visual similarity ranking (220), a user image text extraction step is executed to obtain the text information (310) of the images visually similar to the currently browsed image; next, step (320) obtains user interest information under different time constraints, step (320) comprising either a globally time-constrained user interest acquisition method (321) or a recently time-constrained user interest acquisition method (322), one of which is chosen; and finally a visual-similarity-weighted user interest ranking step (330) describes the user's interest from the text of the relevant images selected in step (321) or step (322).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100549918A CN102103641A (en) | 2011-03-08 | 2011-03-08 | Method for adding banner advertisement into user-browsed network image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100549918A CN102103641A (en) | 2011-03-08 | 2011-03-08 | Method for adding banner advertisement into user-browsed network image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102103641A true CN102103641A (en) | 2011-06-22 |
Family
ID=44156411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100549918A Pending CN102103641A (en) | 2011-03-08 | 2011-03-08 | Method for adding banner advertisement into user-browsed network image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102103641A (en) |
-
2011
- 2011-03-08 CN CN2011100549918A patent/CN102103641A/en active Pending
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013063740A1 (en) * | 2011-10-31 | 2013-05-10 | Google Inc. | Selecting images based on textual description |
CN103428539A (en) * | 2012-05-15 | 2013-12-04 | 腾讯科技(深圳)有限公司 | Pushed information publishing method and device |
CN106156063A (en) * | 2015-03-30 | 2016-11-23 | 阿里巴巴集团控股有限公司 | Correlation technique and device for object picture search results ranking |
CN104809251A (en) * | 2015-05-19 | 2015-07-29 | 北京理工大学 | Rapid optimal automatic data specification icon arranging method |
CN104809251B (en) * | 2015-05-19 | 2017-11-28 | 北京理工大学 | A kind of quickly optimal auto arranging method of data type specification icon |
CN105956878A (en) * | 2016-04-25 | 2016-09-21 | 广州出益信息科技有限公司 | Network advertisement pushing method and network advertisement pushing device |
CN107784061A (en) * | 2016-08-24 | 2018-03-09 | 百度(美国)有限责任公司 | It is determined that the method and system and machine readable media of the content genres based on image |
CN109389440A (en) * | 2017-08-02 | 2019-02-26 | 阿里巴巴集团控股有限公司 | The method, apparatus and electronic equipment of data object information are provided |
CN109389440B (en) * | 2017-08-02 | 2022-05-24 | 阿里巴巴集团控股有限公司 | Method and device for providing data object information and electronic equipment |
CN108111897A (en) * | 2017-12-12 | 2018-06-01 | 北京奇艺世纪科技有限公司 | A kind of method and device for showing displaying information in video |
CN109241374A (en) * | 2018-06-07 | 2019-01-18 | 广东数相智能科技有限公司 | A kind of book information library update method and books in libraries localization method |
CN113676775A (en) * | 2021-08-27 | 2021-11-19 | 苏州因塞德信息科技有限公司 | Method for implanting advertisement in video and game by using artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102103641A (en) | Method for adding banner advertisement into user-browsed network image | |
CN107944913B (en) | High-potential user purchase intention prediction method based on big data user behavior analysis | |
US20240078258A1 (en) | Training Image and Text Embedding Models | |
CN102799591B (en) | Method and device for providing recommended word | |
CN107679960B (en) | Personalized clothing recommendation method based on clothing image and label text bimodal content analysis | |
CN104965889B (en) | Content recommendation method and device | |
CN103544216B (en) | The information recommendation method and system of a kind of combination picture material and keyword | |
US9607327B2 (en) | Object search and navigation method and system | |
US9489400B1 (en) | Interactive item filtering using images | |
US20240330361A1 (en) | Training Image and Text Embedding Models | |
CN106202362A (en) | Image recommendation method and image recommendation device | |
CN105718184A (en) | Data processing method and apparatus | |
CN112948575B (en) | Text data processing method, apparatus and computer readable storage medium | |
CN110929138A (en) | Recommendation information generation method, device, equipment and storage medium | |
US11354349B1 (en) | Identifying content related to a visual search query | |
CN112434232A (en) | Internet-based product keyword advertisement putting method and system | |
CN105825396B (en) | Method and system for clustering advertisement labels based on co-occurrence | |
CN111291191B (en) | Broadcast television knowledge graph construction method and device | |
CN107392718A (en) | Method of Commodity Recommendation | |
CN117829911B (en) | AI-driven advertisement creative optimization method and system | |
CN102929975A (en) | Recommending method based on document tag characterization | |
CN107622071A (en) | By indirect correlation feedback without clothes image searching system and the method looked under source | |
CN108470289A (en) | Virtual objects distribution method and equipment based on electric business shopping platform | |
CN102929948B (en) | list page identification system and method | |
CN116911926A (en) | Advertisement marketing recommendation method based on data analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20110622 |