CN111259885B - Wristwatch identification method and device - Google Patents
- Publication number
- CN111259885B (application CN202010131951.8A)
- Authority
- CN
- China
- Prior art keywords
- brand
- wristwatch
- candidate
- frame
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The invention relates to the technical field of wristwatch identification, and discloses a wristwatch identification method and device. The method comprises the following steps: identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model; cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model and submitting them to a server; the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box; the server invoking the watch-model identification model corresponding to the brand of the wristwatch; and identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch. The wristwatch identification method of the technical scheme of the invention improves the speed and accuracy of watch-model identification.
Description
Technical Field
The invention relates to the technical field of wristwatch identification, in particular to a wristwatch identification method and device.
Background
The application of deep-learning-based image recognition technology to wristwatch recognition has basically been limited to the use of a single model: a classification model such as AlexNet, VGG, Inception, MobileNet, EfficientNet or ResNet is applied to the picture, and these models differ from one another in recognition accuracy, in the amount of computation required for training and recognition, and in resource occupancy.
When a classification model is used for wristwatch identification, picture classification models are mature, with well-established schemes for training, recognition and deployment, and they are simple and convenient to use.
However, the application scenario of wristwatch identification means that users cannot be expected to provide pictures that meet the input requirements of the model, and a picture classification model cannot detect the position of the wristwatch in the picture, so identification accuracy drops sharply. Another problem is that a classification model always outputs a result, whether or not there is a watch in the picture, which can lead to confusing results. In addition, there are currently more than 70,000 watch models on the market, and a single classification model can hardly meet the requirement of distinguishing so many categories.
Disclosure of Invention
The invention aims to provide a wristwatch identification method, so as to solve the problem of low wristwatch identification accuracy in the prior art.
The invention is realized in such a way that a wristwatch identification method comprises the following steps:
identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model;
cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model and submitting them to a server;
the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box;
the server invoking the watch-model identification model corresponding to the brand of the wristwatch;
and identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch.
Optionally, the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model comprises:
the scanning model converting the picture into a grayscale image;
extracting shape features and position features of the wristwatch from the grayscale image;
and determining the watch-frame candidate box and the brand candidate box according to the shape features and position features of the wristwatch.
Optionally, the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model further comprises:
when the scanning model does not identify a watch-frame candidate box and a brand candidate box on the picture,
the scanning model performing identification again.
Optionally, before the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model:
the scanning model receives a freshly shot picture;
or, the scanning model receives an uploaded picture.
Optionally, the step of the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box comprises:
when the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box, the server invoking the brand-specific classification model corresponding to the brand of the wristwatch;
when the server does not identify the brand of the wristwatch from the candidate data region of the brand candidate box, the server invoking an overall classification model, and identifying the candidate data region of the watch-frame candidate box through the overall classification model to obtain the watch model of the wristwatch.
Optionally, the step in which the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box comprises:
extracting character features from the candidate data region of the brand candidate box;
invoking a brand classification model;
comparing the character features with the brand characters of the brand classification model;
and obtaining the brand of the wristwatch.
Optionally, the step of extracting character features from the candidate data region of the brand candidate box comprises:
converting the image in the candidate data region of the brand candidate box into a grayscale image;
recognizing the character features in the grayscale image;
and extracting the character features from the grayscale image.
Optionally, the step in which the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box further comprises:
extracting logo features from the candidate data region of the brand candidate box;
invoking a brand classification model;
comparing the logo features with the brand logos of the brand classification model;
and obtaining the brand of the wristwatch.
Optionally, after the step of identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch:
the server sends the watch model of the wristwatch to the user's mobile terminal.
The invention also provides a wristwatch identification device, which comprises a scanning end and a processing end;
the scanning end is used for identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model, cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model, and submitting them to a server at the processing end;
the processing end is used for identifying, through the server, the brand of the wristwatch from the candidate data region of the brand candidate box; the server invokes the brand-specific classification model corresponding to the brand of the wristwatch; and the candidate data region of the watch-frame candidate box is identified through the brand-specific classification model to obtain the watch model of the wristwatch.
Compared with the prior art, in the wristwatch identification method provided by the invention the brand of the wristwatch is first identified from the candidate data region of the brand candidate box, which narrows the identification range for the watch model; the specific watch model is then identified by applying the watch-model identification model corresponding to that brand to the candidate data region of the watch-frame candidate box. The watch-model identification accuracy is high, five percentage points above that of an overall identification model; the method is not limited by the number of categories, and once wristwatches are grouped by brand, the drop in recognition rate caused by having too many categories does not occur. In addition, because the scanning model sends only the candidate data regions cropped from the watch-frame candidate box and the brand candidate box, the whole picture need not be transmitted, which reduces the volume of picture transmission, improves the response speed, and thus improves the identification speed. This solves the problem of low wristwatch identification accuracy in the prior art.
Drawings
FIG. 1 is a flowchart of a wristwatch identification method according to an embodiment of the invention;
fig. 2 is a flowchart of a wristwatch identification method according to another embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The implementation of the present invention will be described in detail below with reference to specific embodiments.
The same or similar reference numerals in the drawings of this embodiment denote the same or similar components. In the description of the invention, it should be understood that orientation or positional terms such as "upper", "lower", "left" and "right" are based on the orientation or positional relationship shown in the drawings and are used only for convenience and simplification of description; they do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore merely illustrative and should not be construed as limiting this patent; their specific meanings can be understood by those skilled in the art according to the circumstances.
The wristwatch identification method provided by the invention improves the speed and accuracy of watch-model identification.
Referring to FIG. 1, a preferred embodiment of the present invention is shown.
A wristwatch identification method comprises the following steps:
S10, identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model;
S20, cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model, and submitting them to a server;
S30, the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box;
S40, the server invoking the watch-model identification model corresponding to the brand of the wristwatch;
S50, identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch.
The watch-frame candidate box is the region where the outline of the wristwatch is located; this region is cropped as the candidate data region of the watch-frame candidate box to facilitate subsequent watch-model identification. The brand candidate box is the region where the brand characters or logo on the wristwatch are located; this region is cropped as the candidate data region of the brand candidate box to facilitate subsequent identification of the wristwatch brand.
In the wristwatch identification process, the brand of the wristwatch is first identified from the candidate data region of the brand candidate box, which narrows the identification range for the watch model; when the specific watch model is identified, the candidate data region of the watch-frame candidate box is identified through the watch-model identification model corresponding to that brand, giving the watch model of the wristwatch. The identification accuracy is high, five percentage points above that of an overall identification model; the method is not limited by the number of categories, and once wristwatches are grouped by brand, the drop in recognition rate caused by having too many categories does not occur. In addition, because the scanning model sends only the candidate data regions cropped from the watch-frame candidate box and the brand candidate box, the whole picture need not be transmitted, which reduces the volume of picture transmission, improves the response speed, and thus improves the identification speed.
Referring to fig. 1 in combination, in an embodiment of the invention, the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model comprises:
the scanning model converting the picture into a grayscale image;
extracting shape features and position features of the wristwatch from the grayscale image;
and determining the watch-frame candidate box and the brand candidate box according to the shape features and position features of the wristwatch.
After the picture is converted to grayscale, detection of the dial contour no longer needs to consider features such as color, which reduces the computational cost of identification; since only shape and position features are extracted, the amount of computation is small, the speed is higher, and the model is smaller than the original model.
Specifically, when the watch-frame candidate box is determined, the shape feature is the outer contour of the wristwatch and the position feature is the position of the wristwatch, and the watch-frame candidate box is obtained from the outer contour and the position. After the watch-frame candidate box is determined, the brand candidate box is determined: the position feature is the position of the brand logo within the dial region (generally, the brand logo is located in the upper center of the dial), and the shape feature is the gray level of the brand logo, which differs from that of the dial. In this way, the watch-frame candidate box and the brand candidate box can be obtained quickly, and the corresponding candidate data regions are transmitted to the server for detailed analysis.
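As a rough illustration of the grayscale-and-contour step, the sketch below converts an RGB picture to grayscale with standard luminance weights and takes the bounding box of darker-than-average pixels as a crude watch-frame candidate box. The weights and the mean threshold are assumptions chosen for illustration, not values specified by the patent.

```python
import numpy as np

def candidate_box(rgb):
    """Crude candidate box: bounding box of pixels darker than the mean gray."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])    # ITU-R BT.601 luminance
    ys, xs = np.nonzero(gray < gray.mean())          # darker-than-background mask
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max())

# A white 8x8 picture with a dark "watch" occupying rows/cols 2..5:
img = np.full((8, 8, 3), 255.0)
img[2:6, 2:6] = 0.0
print(candidate_box(img))  # → (2, 5, 2, 5)
```

A production system would refine this with proper contour extraction, but the principle (locate, then crop, then transmit only the crop) is the same.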
Furthermore, the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model further comprises:
when the scanning model does not identify a watch-frame candidate box and a brand candidate box on the picture,
the scanning model performing identification again.
Often, due to changes in lighting or shaking during shooting, the scanning model cannot immediately identify the watch-frame candidate region and the brand candidate region on the picture, so identification needs to be performed again, i.e. the picture is rescanned, which makes the analysis more accurate. Identification stops when picture shooting is closed.
Accordingly, before the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model:
the scanning model receives a freshly shot picture;
or, the scanning model receives an uploaded picture.
A shot picture is captured in real time, so the user can identify the model of a wristwatch anytime and anywhere; an uploaded picture is one that was taken earlier, so the user can identify the watch model quickly.
Referring to fig. 2 in combination, in an embodiment of the invention, the step of the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box comprises:
when the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box, executing S40, the server invoking the brand-specific classification model corresponding to the brand of the wristwatch;
when the server does not identify the brand of the wristwatch from the candidate data region of the brand candidate box, executing S60, the server invoking an overall classification model, and S70, identifying the candidate data region of the watch-frame candidate box through the overall classification model to obtain the watch model of the wristwatch.
That is, there are cases, generally few, in which the brand of the wristwatch cannot be identified; the picture is then submitted to the overall classification model. That model is trained on all watch models and is relatively large, and its recognition rate is not as high as that of the per-brand models, but less than 2% of the data enters it, so the overall speed is not greatly affected. The added overall classification model further improves the accuracy of wristwatch identification.
In addition, referring to fig. 2 in combination, the step in which the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box comprises:
extracting character features from the candidate data region of the brand candidate box;
invoking a brand classification model;
comparing the character features with the brand characters of the brand classification model;
and obtaining the brand of the wristwatch.
Direct comparison by characters improves the efficiency of brand identification, so the corresponding wristwatch brand can be found conveniently and quickly.
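A minimal stand-in for "comparing the character features with the brand characters" is a normalized lookup against a brand table; the table entries below are invented for illustration and are not from the patent.

```python
# Hypothetical brand table; a real system would hold every supported brand.
BRAND_TABLE = {"omega": "Omega", "rolex": "Rolex"}

def match_brand(recognized_text):
    """Normalize recognized characters and look them up; None if no brand matches."""
    return BRAND_TABLE.get(recognized_text.strip().lower())

print(match_brand("  ROLEX "))  # → Rolex
```

Returning None corresponds to the unrecognized-brand case that the method routes to the overall classification model.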
Specifically, the step of extracting character features from the candidate data region of the brand candidate box comprises:
converting the image in the candidate data region of the brand candidate box into a grayscale image;
recognizing the character features in the grayscale image;
and extracting the character features from the grayscale image.
To facilitate the extraction of character features, the image is converted into a grayscale image; the features whose gray level differs from the background in the grayscale image are the character features. These are extracted and analyzed to obtain the specific content of the characters, for analysis by the subsequent brand classification model.
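The "gray level different from the background" idea can be illustrated with a simple binarization of the brand-region grayscale image; the fixed threshold is an assumption, and a real system would pass the resulting mask on to character recognition.

```python
import numpy as np

def character_mask(gray_region, threshold=128):
    """Boolean mask of pixels dark enough to be brand lettering (assumed threshold)."""
    return gray_region < threshold

region = np.array([[200, 200, 200],
                   [200,  30, 200],   # the dark pixel stands in for a glyph stroke
                   [200, 200, 200]])
print(int(character_mask(region).sum()))  # → 1
```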
In addition, the step in which the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box further comprises:
extracting logo features from the candidate data region of the brand candidate box;
invoking a brand classification model;
comparing the logo features with the brand logos of the brand classification model;
and obtaining the brand of the wristwatch.
That is, when a wristwatch carries a non-text brand mark, the content of character features cannot be obtained; in that case, after the character-feature analysis, the mark is analyzed as a logo feature by the brand classification model. Analyzing character features first and logo features second speeds up the analysis and improves its accuracy.
Referring to fig. 1 in combination, in an embodiment of the invention, after the brand of the wristwatch is obtained, the specific watch model still needs to be analyzed; since the styles of each brand differ, a watch-model identification model needs to be trained for each brand to determine the specific model.
In this embodiment, the step of identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch comprises:
the watch-model identification model separating the candidate data region of the watch-frame candidate box into the wristwatch contour and the wristwatch color;
determining the contour model of the wristwatch according to the wristwatch contour;
and then finding the corresponding watch model within that contour model according to the wristwatch color, thereby obtaining the watch model of the wristwatch.
In this way, the specific model of the wristwatch can be obtained quickly; the operation is fast and the analysis accurate.
Specifically, to further improve the accuracy of watch-model identification, the wristwatch contour can be further divided into a dial contour and a strap contour, and the contour model of the wristwatch is obtained by analyzing the dial contour and the strap contour in turn; likewise, the wristwatch color can be divided into a dial color and a strap color, so that the corresponding watch model is obtained from the contour model, which improves identification accuracy.
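A hedged sketch of the contour-then-color match in this embodiment: candidates are first narrowed by contour class, then the model whose catalog color lies nearest the observed dial color is chosen. The catalog, the class names and the Euclidean color distance are all illustrative assumptions.

```python
import numpy as np

CATALOG = {  # contour class -> [(watch model, reference dial RGB), ...] (invented)
    "round":  [("A-round-black", (10, 10, 10)), ("A-round-blue", (20, 40, 160))],
    "square": [("A-square-silver", (200, 200, 200))],
}

def match_model(contour_class, dial_rgb):
    """Pick the catalog model of this contour class that is closest in color."""
    candidates = CATALOG[contour_class]
    dists = [np.linalg.norm(np.subtract(dial_rgb, ref)) for _, ref in candidates]
    return candidates[int(np.argmin(dists))][0]

print(match_model("round", (15, 35, 150)))  # → A-round-blue
```

Filtering by contour before comparing color keeps the color comparison cheap, since only models with a matching shape are scored.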
In addition, after the candidate data region of the watch-frame candidate box is identified through the watch-model identification model and the watch model of the wristwatch is obtained:
the server sends the watch model of the wristwatch to the user's mobile terminal.
In this way, the user can identify the model of a wristwatch anytime and anywhere over the Internet and receive the corresponding feedback, which improves the user experience.
The invention also provides a wristwatch identification device, which comprises a scanning end and a processing end;
the scanning end is used for identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model, cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model, and submitting them to a server at the processing end;
the processing end is used for identifying, through the server, the brand of the wristwatch from the candidate data region of the brand candidate box; the server invokes the brand-specific classification model corresponding to the brand of the wristwatch; and the candidate data region of the watch-frame candidate box is identified through the brand-specific classification model to obtain the watch model of the wristwatch.
The scanning end is located in the user's mobile terminal; that is, the scanning model can reside in an app on the mobile terminal, and when the user shoots or uploads a picture, the candidate data regions of the watch-frame candidate box and the brand candidate box obtained by the scanning model are sent to the processing end for processing. The processing end is a cloud server: the candidate data regions can be sent to the server over a wireless or wired connection for analysis, which improves identification efficiency, and the data in the server can be updated in time so that wristwatches are identified more accurately. After the wristwatch is identified, the specific watch model is sent back over a wired or wireless connection to the app on the user's mobile terminal, informing the user of the specific model of the wristwatch.
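The reduced transmission volume comes from submitting only the two cropped regions instead of the whole picture. The payload shape and field names below are invented for illustration; any transport (HTTP from the app to the cloud server, for instance) could carry such a payload.

```python
import base64
import json

def build_payload(frame_crop, brand_crop):
    """Serialize only the two cropped candidate data regions, not the full picture."""
    return json.dumps({
        "frame_region": base64.b64encode(frame_crop).decode("ascii"),
        "brand_region": base64.b64encode(brand_crop).decode("ascii"),
    })

payload = build_payload(b"frame-pixels", b"logo-pixels")
print(sorted(json.loads(payload)))  # → ['brand_region', 'frame_region']
```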
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (8)
1. A wristwatch identification method, comprising the following steps:
identifying a watch-frame candidate box and a brand candidate box on the picture through a scanning model;
cropping candidate data regions from the watch-frame candidate box and the brand candidate box with the scanning model and submitting them to a server;
the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box;
the server invoking the watch-model identification model corresponding to the brand of the wristwatch;
and identifying the candidate data region of the watch-frame candidate box through the watch-model identification model to obtain the watch model of the wristwatch;
wherein the watch-frame candidate box is the region where the outline of the wristwatch is located, and the brand candidate box is the region where the brand characters or logo on the wristwatch are located;
the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model comprises:
the scanning model converting the picture into a grayscale image;
extracting shape features and position features of the wristwatch from the grayscale image;
and determining the watch-frame candidate box and the brand candidate box according to the shape features and position features of the wristwatch;
wherein, when the watch-frame candidate box is determined, the shape feature is the outer contour of the wristwatch and the position feature is the position of the wristwatch, and the watch-frame candidate box is obtained from the outer contour and the position; after the watch-frame candidate box is determined, the brand candidate box is determined, the position feature being the position of the brand logo within the dial region and the shape feature being the gray level of the brand logo, which differs from that of the dial;
the step of the server identifying the brand of the wristwatch from the candidate data region of the brand candidate box comprises:
when the server identifies the brand of the wristwatch from the candidate data region of the brand candidate box, the server invoking the brand-specific classification model corresponding to the brand of the wristwatch;
when the server does not identify the brand of the wristwatch from the candidate data region of the brand candidate box, the server invoking an overall classification model, and identifying the candidate data region of the watch-frame candidate box through the overall classification model to obtain the watch model of the wristwatch.
2. The wristwatch identification method of claim 1, wherein the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model further comprises:
when the scanning model does not identify a watch-frame candidate box and a brand candidate box on the picture,
the scanning model performing identification again.
3. The wristwatch identification method of any of claims 1 to 2, wherein before the step of identifying the watch-frame candidate box and the brand candidate box on the picture through the scanning model:
the scanning model receives a freshly shot picture;
or, the scanning model receives an uploaded picture.
4. The wristwatch identification method of claim 1 or 2, wherein the step of the server identifying the brand of the wristwatch from the data candidate region of the brand candidate frame comprises:
extracting text features within the data candidate region of the brand candidate frame;
calling the brand classification model;
comparing the text features with the brand text of the brand classification model; and
obtaining the brand of the wristwatch.
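The comparison of extracted text features against each brand's reference text (claim 4) could be sketched with a simple string-similarity match. The similarity measure, threshold, and data layout below are illustrative assumptions; the patent does not specify them:

```python
import difflib

def match_brand_by_text(text_features, brand_texts, threshold=0.5):
    """Return the brand whose reference text best matches the extracted
    text features, or None when no brand clears the threshold."""
    best_brand, best_score = None, 0.0
    for brand, ref in brand_texts.items():
        score = difflib.SequenceMatcher(
            None, text_features.lower(), ref.lower()).ratio()
        if score > best_score:
            best_brand, best_score = brand, score
    return best_brand if best_score >= threshold else None
```

Returning `None` on a failed match is what would trigger claim 1's fallback to the overall classification model.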
5. The wristwatch identification method of claim 4, wherein the step of extracting text features within the data candidate region of the brand candidate frame comprises:
converting the image in the data candidate region of the brand candidate frame into a grayscale image;
recognizing text features in the grayscale image; and
extracting the text features from the grayscale image.
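The grayscale conversion in claim 5 is a standard luma transform. A dependency-free sketch using the ITU-R BT.601 weights (a common choice for RGB-to-gray conversion; the patent does not name specific weights):

```python
def to_grayscale(rgb_rows):
    """Convert rows of (R, G, B) pixels to grayscale intensities
    using the ITU-R BT.601 luma weights (0.299, 0.587, 0.114)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_rows]
```

Reducing to a single intensity channel suppresses dial colour variation, so the brand logo's differing gray level (the shape feature of claim 1) stands out for the subsequent text recognition step.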
6. The wristwatch identification method of claim 1 or 2, wherein the step of the server identifying the brand of the wristwatch from the data candidate region of the brand candidate frame further comprises:
extracting logo features within the data candidate region of the brand candidate frame;
calling the brand classification model;
comparing the logo features with the brand logos of the brand classification model; and
obtaining the brand of the wristwatch.
7. The wristwatch identification method of any one of claims 1 to 2, wherein after the step of identifying the data candidate region of the watch-style candidate frame through the brand classification model to obtain the watch style of the wristwatch:
the server sends the watch style of the wristwatch to the user's mobile terminal.
8. A wristwatch identification device based on the wristwatch identification method of any one of claims 1 to 7, characterized in that the wristwatch identification device comprises a scanning end and a processing end;
the scanning end is configured to identify the watch-style candidate frame and the brand candidate frame in the picture through a scanning model; the scanning model crops data candidate regions from the watch-style candidate frame and the brand candidate frame and submits them to a server of the processing end;
the processing end is configured such that the server identifies the brand of the wristwatch from the data candidate region of the brand candidate frame; the server calls the brand classification model corresponding to that brand; and the data candidate region of the watch-style candidate frame is identified through the brand classification model to obtain the watch style of the wristwatch.
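The scanning end's cropping step in claim 8 (cutting data candidate regions out of the candidate frames before submitting them to the server) can be sketched as a plain rectangular crop. The frame representation `(top, left, bottom, right)` is our assumption for illustration:

```python
def crop_region(image_rows, frame):
    """Crop a data candidate region from an image (rows of pixels) given
    a candidate frame as (top, left, bottom, right), exclusive bounds."""
    top, left, bottom, right = frame
    return [row[left:right] for row in image_rows[top:bottom]]
```

Sending only the cropped regions, rather than the full picture, is what lets the device split work between a lightweight scanning end and a server-side processing end.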
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010131951.8A CN111259885B (en) | 2020-02-27 | 2020-02-27 | Wristwatch identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259885A CN111259885A (en) | 2020-06-09 |
CN111259885B true CN111259885B (en) | 2023-11-24 |
Family
ID=70953055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010131951.8A Active CN111259885B (en) | 2020-02-27 | 2020-02-27 | Wristwatch identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259885B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101196979A (en) * | 2006-12-22 | 2008-06-11 | 四川川大智胜软件股份有限公司 | Method for recognizing vehicle type by digital picture processing technology |
CN104820831A (en) * | 2015-05-13 | 2015-08-05 | 沈阳聚德视频技术有限公司 | Front vehicle face identification method based on AdaBoost license plate location |
CN108090429A (en) * | 2017-12-08 | 2018-05-29 | 浙江捷尚视觉科技股份有限公司 | Face bayonet model recognizing method before a kind of classification |
CN109190623A (en) * | 2018-09-15 | 2019-01-11 | 闽江学院 | A method of identification projector brand and model |
CN110647630A (en) * | 2019-09-30 | 2020-01-03 | 浙江执御信息技术有限公司 | Method and device for detecting same-style commodities |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101596299B1 (en) * | 2014-01-06 | 2016-02-22 | 현대모비스 주식회사 | Apparatus and Method for recognizing traffic sign board |
CN109086796B (en) * | 2018-06-27 | 2020-12-15 | Oppo(重庆)智能科技有限公司 | Image recognition method, image recognition device, mobile terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111259885A (en) | 2020-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10762387B2 (en) | Method and apparatus for processing image | |
CN112162930B (en) | Control identification method, related device, equipment and storage medium | |
KR101880004B1 (en) | Method and apparatus for identifying television channel information | |
CN105955011B (en) | Intelligent timing method and device | |
US20230119593A1 (en) | Method and apparatus for training facial feature extraction model, method and apparatus for extracting facial features, device, and storage medium | |
KR102002024B1 (en) | Method for processing labeling of object and object management server | |
CN112100431B (en) | Evaluation method, device and equipment of OCR system and readable storage medium | |
CN108011976B (en) | Internet access terminal model identification method and computer equipment | |
CN110275834A (en) | User interface automatization test system and method | |
CN105095919A (en) | Image recognition method and image recognition device | |
CN111475613A (en) | Case classification method and device, computer equipment and storage medium | |
CN104808794A (en) | Method and system for inputting lip language | |
CN109471981B (en) | Comment information sorting method and device, server and storage medium | |
CN113470024A (en) | Hub internal defect detection method, device, equipment, medium and program product | |
CN112115950B (en) | Wine mark identification method, wine information management method, device, equipment and storage medium | |
CN111639629A (en) | Pig weight measuring method and device based on image processing and storage medium | |
CN114120307A (en) | Display content identification method, device, equipment and storage medium | |
CN112150457A (en) | Video detection method, device and computer readable storage medium | |
CN112418214A (en) | Vehicle identification code identification method and device, electronic equipment and storage medium | |
CN115205883A (en) | Data auditing method, device, equipment and storage medium based on OCR (optical character recognition) and NLP (non-line language) | |
CN103530625A (en) | Optical character recognition method based on digital image processing | |
CN112381092A (en) | Tracking method, device and computer readable storage medium | |
CN111259885B (en) | Wristwatch identification method and device | |
CN108108646B (en) | Bar code information identification method, terminal and computer readable storage medium | |
WO2022062027A1 (en) | Wine product positioning method and apparatus, wine product information management method and apparatus, and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||