JP7332220B2 - Machine control method based on object recognition and system therefor - Google Patents
- Publication number
- JP7332220B2 (granted on application JP2022523669A)
- Authority
- JP
- Japan
- Prior art keywords
- machine
- image
- feature
- local
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/235—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/66—Trinkets, e.g. shirt buttons or jewellery items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/68—Food, e.g. fruit or vegetables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/248—Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
- G06V30/2528—Combination of methods, e.g. classifiers, working on the same input data
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Biodiversity & Conservation Biology (AREA)
- Image Analysis (AREA)
- Control Of Washing Machine And Dryer (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Description
This application claims priority to U.S. Patent Application No. 16/680,347, filed November 11, 2019, which is incorporated herein by reference in its entirety.
The functions of the various systems within the home appliance system 101 of FIG. 1B are merely illustrative; other configurations and divisions of the functions are possible. In various embodiments, some functions of a particular subsystem may be implemented in a different subsystem.
110 Appliance A
111 User device A
112 Appliance B
113 User device B
114 Appliance C
115 User device C
120 Training set server
122 Training model server
128 Annotation station
129 Inference set server
190 Network
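The reference numerals above describe the FIG. 1B topology: appliance/user-device pairs on the edge, server-side training and inference components, all joined by a network. The grouping below is a minimal illustrative reading of that list, not part of the claims; all structure beyond the numerals and names is an assumption.

```python
# Reference numerals from FIG. 1B mapped to components. The "edge" /
# "server_side" / "transport" grouping is an illustrative assumption.
SYSTEM = {
    "edge": {
        110: "Appliance A", 111: "User device A",
        112: "Appliance B", 113: "User device B",
        114: "Appliance C", 115: "User device C",
    },
    "server_side": {
        120: "Training set server",
        122: "Training model server",
        128: "Annotation station",
        129: "Inference set server",
    },
    "transport": {190: "Network"},
}

def component_name(ref: int) -> str:
    """Look up a component by its figure reference numeral."""
    for group in SYSTEM.values():
        if ref in group:
            return group[ref]
    raise KeyError(f"unknown reference numeral {ref}")
```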
Claims (8)
- 1. A machine control method based on object recognition, comprising:
  at a first machine having one or more processors, a camera, and a memory:
  capturing one or more images of an unsorted collection of items inside the first machine;
  determining, from the one or more images, one or more item types of the unsorted collection of items; and
  selecting an operation setting of the first machine based on the one or more item types determined for the unsorted collection of items;
  wherein determining the one or more item types of the unsorted collection of items from the one or more images includes:
  dividing a respective image of the one or more images into a respective plurality of sub-regions;
  performing feature detection on the respective plurality of sub-regions of the respective image to obtain a respective plurality of regional feature vectors, wherein a regional feature vector of a sub-region characterizes a plurality of predefined local item features of the sub-region;
  generating an integrated feature vector of the respective image by combining the respective plurality of regional feature vectors; and
  applying a plurality of binary classifiers to the integrated feature vector of the respective image, wherein a respective binary classifier of the plurality of binary classifiers is configured to receive the integrated feature vector and to determine, based on the integrated feature vector of the respective image, whether an item type associated with that binary classifier is present in the respective image.
- 2. The method of claim 1, wherein the first machine moves the unsorted collection of items inside the first machine after capturing a respective one of the one or more images.
- 3. The method of claim 1, wherein the predefined local item features include a plurality of manually identified local item feature labels, and performing feature detection on the respective plurality of sub-regions to obtain the respective plurality of regional feature vectors includes obtaining, via a machine learning model, respective machine-generated features corresponding to the manually identified local item feature labels.
- 4. The method of claim 1, wherein the binary classifiers are support vector machines trained on feature vectors of the plurality of sub-regions generated by a deep learning model.
- 5. The method of claim 4, wherein the feature vectors are obtained from a layer of the deep learning model other than its output layer.
- 6. The method of claim 4, wherein the deep learning model is generated by a training process comprising:
  receiving a training data set;
  dividing the training data set into a plurality of subsets, each subset corresponding to one or more of the plurality of predefined local item features and having a corresponding one or more local item feature labels; and
  training the deep learning model using the training data set with the corresponding local item feature labels.
- 7. A machine, comprising: one or more processors; a camera; and a memory storing instructions which, when executed by the one or more processors, cause the processors to perform the method of any one of claims 1 to 6.
- 8. A computer-readable storage medium storing instructions which, when executed by one or more processors of a machine, cause the processors to perform the method of any one of claims 1 to 6.
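The claimed inference pipeline (divide an image into sub-regions, extract a feature vector per sub-region, concatenate into an integrated vector, and run one binary classifier per item type) can be sketched as follows. Several parts are assumptions for the sake of a self-contained example: the per-region feature detector is a simple mean/std stub standing in for an intermediate layer of a deep model (claims 4-5), and the binary classifiers are threshold lambdas where the patent uses trained support vector machines; the grid size, feature length, and item types are illustrative.

```python
import numpy as np

def split_into_subregions(image: np.ndarray, grid: int = 2):
    """Divide an H x W x C image into grid x grid sub-regions (first claimed step)."""
    h, w = image.shape[:2]
    hs, ws = h // grid, w // grid
    return [image[r * hs:(r + 1) * hs, c * ws:(c + 1) * ws]
            for r in range(grid) for c in range(grid)]

def region_feature_vector(subregion: np.ndarray) -> np.ndarray:
    """Stand-in feature detector: per-channel mean and std of the sub-region.
    In the patent this role is played by a deep learning model layer
    other than the output layer (claim 5)."""
    flat = subregion.reshape(-1, subregion.shape[-1])
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def integrated_feature_vector(image: np.ndarray, grid: int = 2) -> np.ndarray:
    """Combine the regional feature vectors by concatenation (claimed step:
    'generating an integrated feature vector')."""
    return np.concatenate([region_feature_vector(s)
                           for s in split_into_subregions(image, grid)])

def detect_item_types(feature_vec, classifiers):
    """One binary classifier per item type; each decides independently whether
    its item type is present (last claimed step). Real classifiers would be
    per-type SVMs (e.g. sklearn.svm.SVC); the lambdas below are stubs."""
    return {item for item, clf in classifiers.items() if clf(feature_vec)}

image = np.random.default_rng(0).random((64, 64, 3))
fv = integrated_feature_vector(image, grid=2)   # 4 regions x (3 means + 3 stds) = 24 dims
classifiers = {"cotton": lambda v: v.mean() > 0.0,   # always fires on this input
               "silk":   lambda v: v.max() > 10.0}   # never fires on [0,1) data
print(sorted(detect_item_types(fv, classifiers)))    # -> ['cotton']
```

The selected operation setting of the first machine would then be a lookup keyed on the detected item-type set, which the claims leave to the implementation.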
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/680,347 US11048976B2 (en) | 2019-11-11 | 2019-11-11 | Method and system for controlling machines based on object recognition |
US16/680,347 | 2019-11-11 | ||
PCT/CN2020/102857 WO2021093359A1 (en) | 2019-11-11 | 2020-07-17 | Method and system for controlling machines based on object recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2022554127A (ja) | 2022-12-28 |
JP7332220B2 (ja) | 2023-08-23 |
Family
ID=75846688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2022523669A Active JP7332220B2 (ja) | 2019-11-11 | 2020-07-17 | オブジェクト認識に基づく機械制御方法及びそのシステム |
Country Status (5)
Country | Link |
---|---|
US (1) | US11048976B2 (ja) |
EP (1) | EP4028591A4 (ja) |
JP (1) | JP7332220B2 (ja) |
CN (1) | CN114466954B (ja) |
WO (1) | WO2021093359A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210054796A (ko) * | 2019-11-06 | 2021-05-14 | 엘지전자 주식회사 | 지능형 디바이스의 도어 오픈 모니터링 |
US11373095B2 (en) * | 2019-12-23 | 2022-06-28 | Jens C. Jenkins | Machine learning multiple features of depicted item |
CN111507321B (zh) * | 2020-07-01 | 2020-09-25 | 中国地质大学(武汉) | 多输出土地覆盖分类模型的训练方法、分类方法及装置 |
CN112663277A (zh) * | 2020-12-12 | 2021-04-16 | 上海电机学院 | 一种基于图像识别的洗衣系统及控制方法 |
US11205101B1 (en) * | 2021-05-11 | 2021-12-21 | NotCo Delaware, LLC | Formula and recipe generation with feedback loop |
US20220383037A1 (en) * | 2021-05-27 | 2022-12-01 | Adobe Inc. | Extracting attributes from arbitrary digital images utilizing a multi-attribute contrastive classification neural network |
CN114855416B (zh) * | 2022-04-25 | 2024-03-22 | 青岛海尔科技有限公司 | 洗涤程序的推荐方法及装置、存储介质及电子装置 |
CN114627279B (zh) * | 2022-05-17 | 2022-10-04 | 山东微亮联动网络科技有限公司 | 一种快餐菜品定位方法 |
US20230419640A1 (en) * | 2022-05-27 | 2023-12-28 | Raytheon Company | Object classification based on spatially discriminated parts |
US11982661B1 (en) | 2023-05-30 | 2024-05-14 | NotCo Delaware, LLC | Sensory transformer method of generating ingredients and formulas |
US12056443B1 (en) * | 2023-12-13 | 2024-08-06 | nference, inc. | Apparatus and method for generating annotations for electronic records |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017207960A (ja) | 2016-05-19 | 2017-11-24 | 株式会社リコー | 画像解析装置、画像解析方法およびプログラム |
CN109213886A (zh) | 2018-08-09 | 2019-01-15 | 山东师范大学 | 基于图像分割和模糊模式识别的图像检索方法及系统 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5657424A (en) * | 1995-10-31 | 1997-08-12 | Dictaphone Corporation | Isolated word recognition using decision tree classifiers and time-indexed feature vectors |
JP3530363B2 (ja) * | 1997-11-26 | 2004-05-24 | 日本電信電話株式会社 | 認識モデル生成方法および画像認識方法 |
US7319779B1 (en) * | 2003-12-08 | 2008-01-15 | Videomining Corporation | Classification of humans into multiple age categories from digital images |
JP2010092199A (ja) * | 2008-10-07 | 2010-04-22 | Sony Corp | 情報処理装置および方法、プログラム、並びに記録媒体 |
JP4906900B2 (ja) * | 2009-09-24 | 2012-03-28 | ヤフー株式会社 | 画像検索装置、画像検索方法及びプログラム |
US8391602B2 (en) * | 2010-04-08 | 2013-03-05 | University Of Calcutta | Character recognition |
US8406470B2 (en) * | 2011-04-19 | 2013-03-26 | Mitsubishi Electric Research Laboratories, Inc. | Object detection in depth images |
US8718362B2 (en) * | 2012-03-28 | 2014-05-06 | Mitsubishi Electric Research Laboratories, Inc. | Appearance and context based object classification in images |
US8873812B2 (en) * | 2012-08-06 | 2014-10-28 | Xerox Corporation | Image segmentation using hierarchical unsupervised segmentation and hierarchical classifiers |
US9600711B2 (en) * | 2012-08-29 | 2017-03-21 | Conduent Business Services, Llc | Method and system for automatically recognizing facial expressions via algorithmic periocular localization |
EP2929252B1 (en) * | 2012-12-04 | 2018-10-24 | Stork genannt Wersborg, Ingo | Heat treatment device with monitoring system |
US9158992B2 (en) * | 2013-03-14 | 2015-10-13 | Here Global B.V. | Acceleration of linear classifiers |
US9792823B2 (en) * | 2014-09-15 | 2017-10-17 | Raytheon Bbn Technologies Corp. | Multi-view learning in detection of psychological states |
CN106192289A (zh) | 2015-04-30 | 2016-12-07 | 青岛海尔洗衣机有限公司 | 洗衣机控制方法及洗衣机 |
US9443320B1 (en) * | 2015-05-18 | 2016-09-13 | Xerox Corporation | Multi-object tracking with generic object proposals |
US10037456B2 (en) * | 2015-09-04 | 2018-07-31 | The Friedland Group, Inc. | Automated methods and systems for identifying and assigning attributes to human-face-containing subimages of input images |
CN105956512A (zh) | 2016-04-19 | 2016-09-21 | 安徽工程大学 | 一种基于机器视觉的空调温度自动调节系统及方法 |
EP3685235B1 (en) * | 2017-12-30 | 2023-03-29 | Midea Group Co., Ltd. | Food preparation method and system based on ingredient recognition |
US10643332B2 (en) | 2018-03-29 | 2020-05-05 | Uveye Ltd. | Method of vehicle image comparison and system thereof |
CN108385330A (zh) | 2018-04-10 | 2018-08-10 | 深圳市度比智能技术有限公司 | 洗衣装置、系统及方法 |
CN109594288A (zh) | 2018-12-03 | 2019-04-09 | 珠海格力电器股份有限公司 | 一种衣物洗涤方法、系统及洗衣机 |
CN110004664B (zh) * | 2019-04-28 | 2021-07-16 | 深圳数联天下智能科技有限公司 | 衣物污渍识别方法、装置、洗衣机及存储介质 |
CN110331551A (zh) | 2019-05-24 | 2019-10-15 | 珠海格力电器股份有限公司 | 洗衣机的洗涤控制方法、装置、计算机设备和存储介质 |
- 2019
  - 2019-11-11 US US16/680,347 patent/US11048976B2/en active Active
- 2020
  - 2020-07-17 JP JP2022523669A patent/JP7332220B2/ja active Active
  - 2020-07-17 EP EP20886484.3A patent/EP4028591A4/en active Pending
  - 2020-07-17 WO PCT/CN2020/102857 patent/WO2021093359A1/en unknown
  - 2020-07-17 CN CN202080065025.3A patent/CN114466954B/zh active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017207960A (ja) | 2016-05-19 | 2017-11-24 | 株式会社リコー | 画像解析装置、画像解析方法およびプログラム |
CN109213886A (zh) | 2018-08-09 | 2019-01-15 | 山东师范大学 | 基于图像分割和模糊模式识别的图像检索方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
EP4028591A1 (en) | 2022-07-20 |
WO2021093359A1 (en) | 2021-05-20 |
EP4028591A4 (en) | 2022-11-02 |
CN114466954B (zh) | 2022-11-29 |
US11048976B2 (en) | 2021-06-29 |
CN114466954A (zh) | 2022-05-10 |
US20210142110A1 (en) | 2021-05-13 |
JP2022554127A (ja) | 2022-12-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7332220B2 (ja) | オブジェクト認識に基づく機械制御方法及びそのシステム | |
KR102329592B1 (ko) | 재료 인식을 기반으로 하는 음식 준비 방법 및 시스템 | |
JP7239782B2 (ja) | 目標検出モデルのマルチパストレーニングによる機器設定の調整 | |
CN113194792B (zh) | 训练烹饪器具、定位食物以及确定烹饪进度的系统和方法 | |
CN105353634B (zh) | 利用手势识别控制操作的家电设备与方法 | |
US11080918B2 (en) | Method and system for predicting garment attributes using deep learning | |
Takagi et al. | What makes a style: Experimental analysis of fashion prediction | |
CN109643448A (zh) | 机器人系统中的细粒度物体识别 | |
CN106846122B (zh) | 商品数据处理方法和装置 | |
Jiménez | Visual grasp point localization, classification and state recognition in robotic manipulation of cloth: An overview | |
CN107862313B (zh) | 洗碗机及其控制方法和装置 | |
CN112817237A (zh) | 一种烹饪控制方法、装置、设备及存储介质 | |
Lee et al. | Mt-diet: Automated smartphone based diet assessment with infrared images | |
KR20220011450A (ko) | 물품을 식별하는 장치, 서버 및 식별하는 방법 | |
US11790041B2 (en) | Method and system for reducing false positives in object detection neural networks caused by novel objects | |
Morris et al. | Inventory Management of the Refrigerator's Produce Bins Using Classification Algorithms and Hand Analysis | |
Lin et al. | A Study of Automatic Judgment of Food Color and Cooking Conditions with Artificial Intelligence Technology. Processes 2021, 9, 1128 | |
US20240133101A1 (en) | Electronic device for mapping identification information and custom information, and control method thereof | |
CN113584847B (zh) | 循环扇控制方法、系统、中控设备及存储介质 | |
CN110173866B (zh) | 空调器的智能送风方法及装置、空调器 | |
Jiang | Towards Future Smart Kitchen Using AI-Driven Approaches with Multimodal Data | |
CN118015328A (zh) | 衣物存放方法、装置、设备及存储介质 | |
Damen et al. | You-Do, I-Learn: Unsupervised Multi-User egocentric Approach Towards Video-Based Guidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2022-04-21 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523 |
2022-04-21 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621 |
2023-04-04 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131 |
2023-06-09 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523 |
| TRDD | Decision of grant or rejection written | |
2023-07-11 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01 |
2023-08-03 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61 |
| R150 | Certificate of patent or registration of utility model | Ref document number: 7332220; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150 |