AU2014271204A1 - Image recognition of vehicle parts - Google Patents

Image recognition of vehicle parts

Info

Publication number
AU2014271204A1
Authority
AU
Australia
Prior art keywords
target image
vehicle part
vehicle
image
image recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2014271204A
Other versions
AU2014271204B2 (en)
Inventor
Andrew Robert Bates
Ian Keith Bott
George Kyriakopoulos
David Nathan Woolfson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FMP Group Australia Pty Ltd
Original Assignee
FMP Group Australia Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2013901813A external-priority patent/AU2013901813A0/en
Priority claimed from AU2013101043A external-priority patent/AU2013101043A4/en
Application filed by FMP Group Australia Pty Ltd filed Critical FMP Group Australia Pty Ltd
Priority to AU2014271204A priority Critical patent/AU2014271204B2/en
Publication of AU2014271204A1 publication Critical patent/AU2014271204A1/en
Application granted granted Critical
Publication of AU2014271204B2 publication Critical patent/AU2014271204B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure concerns methods, computer programs, a user device and a network device for image recognition of vehicle parts. First, a target image of the vehicle part is obtained (320) from a camera system (240). Image recognition (360) is performed to identify the vehicle part in the target image; this comprises comparing (366) the target image with candidate images to obtain one or more matching results. The one or more matching results are then provided (370) on a user interface (262), where each matching result comprises an image (710) and information (712) of a candidate vehicle part.

Description

Image recognition of vehicle parts

Cross-Reference to Related Applications

The present application claims priority from Australian provisional patent application 2013901813 and Australian innovation patent 2013101043, the contents of which are incorporated herein by reference.

Technical Field

The present disclosure concerns methods, computer programs, a user device and a network device for image recognition of vehicle parts.

Background

Mechanics generally rely on hard copy catalogues of a part manufacturer when ordering vehicle parts. For example, when a vehicle part needs to be replaced, a mechanic generally relies on their knowledge of the vehicle part or manually searches through catalogues to identify it.

Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.

Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.

Summary

There is provided a computer-implemented method for image recognition of a vehicle part on a user device, the method comprising: obtaining a target image of the vehicle part from a camera system; performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and providing the one or more matching results on a user interface, wherein each matching result comprises an image and information of a candidate vehicle part.

The computer-implemented method may further comprise providing a user interface to order the vehicle part from a supplier.

The user device may be positioning-enabled, in which case the method may further comprise: collecting positioning information from a positioning system of the user device; and determining one or more suppliers of the vehicle part that are nearest to the user device based on the positioning information.

The camera system may be a camera system on the user device. In this case, the method may further comprise, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following: an overlay feature that guides the user to capture the vehicle part within a viewport; an orientation feature to enable the camera system when the user device is held at an acceptable orientation; and a settings feature to adapt a setting of the camera system.

The method may further comprise: receiving one or more attributes of the vehicle part in the target image; and filtering the candidate images or matching results based on the one or more attributes. In this case, the one or more attributes may include one or more of: vehicle's make, vehicle's model and a dimension of the vehicle part.
The method may further comprise, prior to performing image recognition, processing the target image by performing one or more of the following: resizing the target image; filtering the target image; cropping the target image to a predetermined size; and estimating a dimension of the vehicle part in the target image.

Performing image recognition may further comprise: extracting one or more visual features of the vehicle part from the target image; and comparing the target image with the candidate images based on the one or more visual features.

Further, performing image recognition may comprise: sending the target image to a server to compare the target image with candidate images; and receiving the one or more matching results from the server or a different server.

There is provided a user device for image recognition of a vehicle part, the device comprising a processor to perform the method described above. The user device may further comprise a camera system to capture the target image and a display to display the user interface.

There is provided a computer program to cause a user device to perform the method described above.

There is provided a computer-implemented method for image recognition of a vehicle part on a network device capable of acting as a server, the method comprising: receiving a target image of the vehicle part from a user device; performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and providing the one or more matching results to the user device, wherein each matching result comprises an image and information of a candidate vehicle part.

There is provided a network device capable of acting as a server for image recognition of a vehicle part, comprising a processor to perform the method described directly above. The network device may further comprise an interface to receive the target image and to provide the one or more matching results.

There is provided a computer program to cause a network device capable of acting as a server to perform the method described directly above.

Brief Description of Drawings

Examples of image recognition of vehicle parts will now be described with reference to the accompanying drawings, in which:

Fig. 1 is a block diagram of an example system for image recognition of vehicle parts;
Fig. 2 is a block diagram of an example structure of an electronic device capable of acting as a user device in Fig. 1;
Fig. 3 is a flowchart of steps performed by an image recognition application on a user device in Fig. 1;
Fig. 4 is an example interface for capturing a target image of a vehicle part;
Fig. 5 is an example interface for providing attributes of a vehicle part;
Fig. 6(a) is an example target image;
Fig. 6(b) is the example target image in Fig. 6(a) after image processing;
Fig. 6(c) is the example target image in Fig. 6(b) after feature extraction;
Fig. 6(d) is an example set of candidate images to which the example target image in Fig. 6(c) is compared when image recognition is performed;
Fig. 7(a) is an example interface for displaying results of image recognition;
Fig. 7(b) is an example interface for filtering the results of image recognition in Fig. 7(a);
Fig. 8(a) is the example interface in Fig. 7(a) after the results are filtered according to Fig. 7(b);
Fig. 8(b) is an example interface for displaying a result;
Fig. 9 is an example interface for displaying supplier information and order placement; and
Fig. 10 is an example structure of a network device capable of acting as a server.

Detailed Description

Fig. 1 is a block diagram of an example image recognition system 100 for vehicle parts. The system 100 comprises an Application Program Interface (API) server 110 and an image recognition server 120 that are in communication with each other and with multiple user devices 142 operated by users 140 over a communications network 150, 152.

The users 140 of the user devices 142 may be mechanics or vehicle repairers who wish to identify vehicle parts such as brake pads, brake rotors, brake shoes, brake drums, loaded calipers, reman bare calipers, semi-loaded calipers, wheel cylinders, clutch master cylinders, slave cylinders, brake hydraulic hoses etc.
To facilitate image recognition of vehicle parts, a software application in the form of an image recognition application 144 is installed on each user device. The user devices 142 communicate with the API server 110 to access image recognition services provided by the image recognition server 120. The API server 110 and image recognition server 120 have access to a data store 130 (either via the communications network 150 as shown or directly) to retrieve various information such as user information 132, vehicle part information 134 and supplier information 136.

In one example, image recognition of a vehicle part includes the following: A target image of the vehicle part is obtained, for example from a camera system of the user device 142. Image recognition is performed to identify the vehicle part in the target image. For example, image recognition may involve comparing the target image with candidate images to obtain one or more best matching results. One or more results are provided on a user interface, each result including an image and information of a candidate vehicle part potentially identifying the vehicle part in the target image. This flow is sketched below by way of example.

Advantageously, the image recognition application 144 facilitates faster and more efficient recognition of vehicle parts. Using the application 144, a user does not have to perform the manual process of searching through hard copy catalogues (which may not be complete or current) to identify the vehicle part.

Since the image recognition application 144 is able to provide access to the latest vehicle part information, it also reduces or removes the need for manufacturers and/or suppliers to print various catalogues, thereby saving costs and effort. The image recognition application 144 may be used conveniently since it is accessible by users 140 anytime and anywhere, e.g. at their workshops or at a crash site.
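By way of illustration only, the flow just described can be summarised in a short Python sketch. The patent publishes no source code, so every name below (Candidate, recognise_part, the injected compare function) is a hypothetical stand-in rather than the claimed implementation:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        part_id: str      # part number, e.g. 'DB1170'
        image_path: str   # library image of the candidate vehicle part
        info: dict        # vehicle's make, model, dimensions, part variations

    def recognise_part(target_image_path, candidates, compare):
        """Score every candidate against the target image and return
        (candidate, score) pairs ranked best-first for display."""
        scored = [(c, compare(target_image_path, c.image_path)) for c in candidates]
        # Highest match probability first, as in the 'recommended' group of Fig. 7(a)
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

Any similarity function can be injected as compare, for example the feature-matching sketch given later in this description.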
User Device 142

Referring now to the block diagram in Fig. 2, an example electronic device 200 capable of acting as the user device 142 will now be explained.

The image recognition application 144 may be implemented on any suitable Internet-capable user device 142, such as a smartphone (e.g. Apple iPhone 3GS, 4, 4S, 5), tablet computer (e.g. Apple iPad), personal digital assistant, desktop computer, laptop computer, or any other suitable device. The image recognition application 144 may be downloaded onto the user device 142. For example, if the user device 142 is an Apple device, the image recognition application 144 may be a downloadable "App" that is available through the Apple App Store (trade marks of Apple, Inc.). Similarly, the image recognition application 144 may be downloaded from the "Blackberry App World" for Blackberry devices (trade marks of Research In Motion Limited), and from the "Android Market" or "Google Play" for Android devices (trade marks of Google, Inc.). The image recognition application 144 may also be pre-programmed on the user device 142. Capable of acting means having the necessary features to perform the functions described. In one example, the user device 142 may be a mobile electronic device.

The electronic device 200 in Fig. 2 comprises one or more processors 202 in communication with a memory interface 204 coupled to memory 210, and a peripherals interface 206. The memory 210 may include random access memory and/or non-volatile memory, such as magnetic disc storage devices etc.

The memory 210 stores various applications 230 including the image recognition application 144; an operating system 212; and executable instructions to perform communications functions 214, graphical user interface processing 216, sensor processing 218, phone-related functions 220, electronic messaging functions 222, web browsing functions 224, camera functions 226, and GPS or navigation functions 228.

The applications 230 implemented on the electronic device 200 include the image recognition application 144, and other applications (not shown for simplicity) such as a web browsing application, an email application, a telephone application, a video conferencing application, a video camera application, a digital camera application, a photo management application, a digital music application, a digital video application, etc.

Sensors, devices and systems can be coupled to the peripherals interface 206 to facilitate various functionalities, such as the following.

Camera system 240 is coupled to an optical sensor 242, such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, to facilitate camera functions.
Positioning system 250 collects geographical location information of the device 142 by employing any suitable positioning technology such as GPS or Assisted GPS (aGPS). GPS generally uses signals from satellites alone, while aGPS additionally uses signals from base stations or wireless access points in poor signal conditions. Positioning system 250 may be integral with the device or provided by a separate GPS-enabled device coupled to the electronic device 142.

Input/Output (I/O) system 260 is coupled to a touch-sensitive display 262 sensitive to haptic and/or tactile contact by a user, and/or other input devices such as buttons. The touch-sensitive display 262 may also comprise a multi-touch-sensitive display that can, for example, detect and process a number of touch points simultaneously. Other touch-sensitive display technologies may also be used, such as a display in which contact is made using a stylus. The terms "touch-sensitive display" and "touch screen" will be used interchangeably throughout the disclosure. In embodiments where user interfaces are designed to work with finger-based contacts and gestures, the device 142 translates finger-based input (which is less precise due to the larger area of finger contact) into more precise pointer- or cursor-based input for performing actions desired by the user 140.

Wireless communications system 264 is designed to allow wireless communications over a network employing suitable communications protocols, standards and technologies such as GPRS, EDGE, WCDMA, OFDMA, Bluetooth, Wireless Fidelity (WiFi), Wi-MAX and Long-Term Evolution (LTE) etc.

Sensor system 268, such as an accelerometer, a light sensor and a proximity sensor, is used to facilitate orientation, lighting and proximity functions, respectively.

Audio system 270 can be coupled to a speaker 272 and microphone 274 to facilitate voice-enabled functions such as telephony functions.

Although one example implementation has been provided here, it will be appreciated that other suitable configurations capable of implementing the image recognition application 144 on the electronic device 200 may be used. It will be appreciated that the image recognition application 144 may support portrait and/or landscape modes.

Target image

Fig. 3 shows an example method performed by the image recognition system 100. According to block 310 in Fig. 3, the image recognition application 144 first provides a user interface to obtain a target image of a vehicle part to be replaced. "Provides" is understood to mean that the image recognition application 144 operates to provide the necessary information to the user device 142 so that the user device 142 can display the user interface on the display 262.

Referring also to Fig. 4, an example user interface 400 is provided to capture a target image 410 of a vehicle part (e.g. brake pad) using the camera system (240 in Fig. 2) of the user device 142. To improve the quality of the target image 410, the user interface 400 may include one or more of the following:

An overlay feature that defines a viewport using a set of horizontal 430 and vertical 432 lines for guiding the user 140 to capture the vehicle part within the viewport. In one example, the viewport may be 120 pixels x 120 pixels. In other embodiments the overlay may not be rectangular in shape, but more in the general shape of the vehicle part.
In this example the overlay would be substantially oval in shape, with the length of the oval lying horizontally. The overlay feature may be selected from a set of predefined overlays based on attributes of the vehicle part and/or vehicle received at blocks 330 and 340 described below.

An orientation feature 440 for guiding the orientation and/or angle of the user device 142 when the target image is taken. This feature 440 relies on the accelerometer in the sensor system (see 268 in Fig. 2), and the user device 142 may be adjusted until a bubble representation 422 appears within the appropriate boundary of a "spirit level" (e.g. within the inner circle as shown). In one example, the capture button 420 will only appear once an acceptable orientation is obtained, e.g. when the user device 142 is held flat.

A settings feature 450 for adapting a setting of the camera system 240, such as a flash setting feature to automatically enable or disable the flash of the camera system to improve the quality of the target image.

Tips 460 for guiding the user 140 during the capture of the target image 410. For example, the tip may request the user 140 to capture the target image 410 against a white background and/or A4 paper (as shown), and/or to move to a brighter spot if the image is too dark. The tip may also request the user 140 to align the vehicle part in the centre of the screen, or inform the user of the particular perspective view that should be captured.

The target image 410 is then captured and stored, such as when a user's touch input is detected on a capture button 420 on the screen 400 (see also block 320 in Fig. 3). Although one target image 410 is shown, it will be appreciated that another image and/or additional information may be requested if no match is found. Multiple candidate images of the same vehicle part may be stored in the data store 130 to improve accuracy. In another example, multiple target images of the vehicle part may be captured from different perspectives (e.g. front and rear). This generally improves the accuracy of the subsequent image processing but makes it more resource intensive because more images are processed.

According to blocks 330 and 340 in Fig. 3, the image recognition application 144 may provide a user interface to obtain one or more attributes of the vehicle part. The attributes may be used to improve the accuracy of the image recognition process. An example interface 500 is shown in Fig. 5, which may be used by the user to provide vehicle information (e.g. make and/or model) and size of the vehicle part (e.g. width and length). A list of vehicle manufacturers and models may be provided to the user for selection. Blocks 330 and 340 are optional, and a user may skip them by selecting 'next'.

Processing of Target Image

According to block 350 in Fig. 3, the target image is processed to facilitate the subsequent image recognition process. This may involve one or more of the following:

Resizing the target image to reduce the data size of the image and make the subsequent image recognition process more efficient. For example, the final image size may be 15 to 35 KB.

Filtering the target image to improve its quality, for example to remove shadows and brighten the target image.

Cropping the target image to a predetermined size, for example to maximise the size of the vehicle part within the viewport, or cropping to be the same as the viewport in Fig. 4.

Estimating one or more attributes of the vehicle part captured in the target image (see the sketch after this list). An example is the size of the vehicle part, which may be estimated if the vehicle part is captured against a background of predetermined size (e.g. A4 paper). Based on the ratio between the vehicle part and the background, the width and/or length of the vehicle part may be estimated.

Block 350 may be performed by the image recognition application 144 without any further input from the user. The processed target image is now ready for image recognition.
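By way of illustration, the size estimate at block 350 follows from simple proportions against the known A4 background (210 mm x 297 mm). The sketch below assumes pixel bounding boxes for the part and the sheet are supplied by an earlier, unspecified segmentation step; the function name is hypothetical:

    # A4 paper provides the known reference dimensions (in millimetres)
    A4_WIDTH_MM, A4_HEIGHT_MM = 210.0, 297.0

    def estimate_part_size_mm(part_px, a4_px):
        """Scale the part's pixel bounding box (width, height) by the
        millimetres-per-pixel ratio of the A4 sheet in the same image."""
        mm_per_px_x = A4_WIDTH_MM / a4_px[0]
        mm_per_px_y = A4_HEIGHT_MM / a4_px[1]
        return part_px[0] * mm_per_px_x, part_px[1] * mm_per_px_y

    # Example: a brake pad spanning 620 x 240 px against an A4 sheet imaged at
    # 1200 x 1700 px comes out at roughly 108 x 42 mm, consistent with the
    # 'DB1170' pad dimensions shown in Fig. 8(b).
    print(estimate_part_size_mm((620, 240), (1200, 1700)))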
Image recognition

According to block 360 in Fig. 3, image recognition is then performed to identify the vehicle part in the target image. In particular, the target image is compared with multiple candidate images 134 stored in the data store 130 shown in Fig. 1 to identify matches.

At block 360, one of the following architectures may be implemented:

(a) "Thin client" architecture

In one example, block 360 may be performed by the user device 142 in conjunction with the API server 110 and image recognition server 120. For example, the target image is first sent to the API server 110 after block 350 in Fig. 3. The API server 110 then provides the target image to the image recognition server 120, which then provides the matching results by sending them directly to the user device 142 or via the API server 110.

Information exchange between the server 110/120 and the user device 142 may be performed in any suitable format, such as eXtensible Markup Language (XML). To facilitate faster image transfer, a 'lazy loading' process may be used where the loading is performed in the background and the user 140 can continue using the application 144.

(b) "Thick client" architecture

Alternatively, block 360 may be performed by the user device 142. In this case, the user device 142 may access candidate images in the data store 130 directly via the communications network 150. The thick client architecture may be implemented if the user device 142 has the processing capability to perform image recognition within an acceptable timeframe. Otherwise, the thin client architecture may be preferred in practice.

Referring also to Fig. 6, an example image recognition process will be explained. In this example, an example target image 610 to be matched is shown in Fig. 6(a), whereas Fig. 6(b) shows a higher quality version 620 of the same image after it is processed at block 350 in Fig. 3.

According to block 362 in Fig. 3, one or more visual features are extracted from the vehicle part captured in the target image 610. In the example in Fig. 6(c), features 630 may relate to the shape, configuration and dimensions of the backing plate and friction pad of the brake pad.

According to block 364 in Fig. 3, a set of candidate images is identified from the data store 130. An example set 640 is shown in Fig. 6(d), which includes candidate images of brake pads that may be matched with the target in Fig. 6(c). The set of candidate images may be filtered based on the attribute(s) provided by the user at blocks 330 and 340 in Fig. 3. For example, if the user has provided a particular vehicle's make and/or model, only candidate images associated with those attributes are identified as candidates. In other examples where blocks 330 and 340 were not performed, the candidate images may be the entire image library.

According to block 366 in Fig. 3, the target image is compared with the set of candidate images to identify the vehicle part in the target image. For example, this may involve comparing the visual features 630 extracted from the vehicle part with visual features 650 of each candidate image in Fig. 6(d). The similarities and/or differences of the features are compared (i.e. 630 vs 650) to obtain one or more matching results. For example, in Fig. 6(d), the most relevant result is indicated at 660.
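The specification leaves the matching algorithm at blocks 362 and 366 open (several algorithm families are listed in the next paragraph). As one concrete but non-authoritative realisation, the sketch below scores similarity using ORB keypoints and brute-force Hamming matching from OpenCV; it is an assumed stand-in, not the claimed method:

    import cv2

    def similarity(target_path, candidate_path):
        """Crude similarity score in [0, 1] between two vehicle part images."""
        orb = cv2.ORB_create()
        img1 = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
        _, des1 = orb.detectAndCompute(img1, None)
        _, des2 = orb.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            return 0.0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        # Fraction of keypoints with a close match, normalised by the smaller
        # descriptor set so the score stays within [0, 1]
        good = [m for m in matches if m.distance < 50]
        return len(good) / min(len(des1), len(des2))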
Although one example is provided, it will be appreciated that any suitable image recognition processes may be used, such as algorithms based on spectral graph techniques to cluster visual features; algorithms based on machine learning (e.g. neural networks, support vector machines); algorithms based on transformations such as FFT-based correlation; colour histograms; or any other suitable algorithm. The image recognition algorithm may include a shape comparison algorithm that enables comparison of the target image to the library of vehicle part images in the database and returns results showing matching parts in order of probability.

The library of images in the data store 130 may be stored according to an image protocol. Multiple images may be stored for a particular vehicle part, for example those taken from different perspectives or close-ups of different sub-parts. The image protocol may include a naming convention, such as 'DBXXXYYY-ZZ.jpg' where 'DBXXX' indicates the part number, 'YYY' indicates extra information (e.g. sub-parts Inner, Outer, Right Hand, Left Hand, Stealth) and 'ZZ' indicates the type of user device used to capture the image (e.g. iPhone 3GS, 4, 4S, 5). The images may be cropped to a particular size (e.g. 420 pixels x 420 pixels). As can be seen, information of the vehicle part captured in each library image is encoded in the protocol. Alternatively or in addition, information of the vehicle part captured in the image may be stored in the data store in an associated manner, such as in metadata of the image or in the same or a related record in the database.

The data store 130 may be optimised to improve the accuracy and speed of image recognition. For example, an index may be built that facilitates fast retrieval of candidate images.
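Reading the 'DBXXXYYY-ZZ.jpg' naming protocol can be sketched as follows. The specification does not pin down the field widths, so the regular expression below is an assumption made purely for illustration:

    import re

    # Assumed pattern: 'DB' part number, optional sub-part suffix, device code
    NAME_PATTERN = re.compile(r"^(DB\d+)([A-Za-z]*)-(\w+)\.jpg$")

    def parse_library_name(filename):
        """Split a library filename into the fields encoded by the protocol."""
        m = NAME_PATTERN.match(filename)
        if m is None:
            return None
        return {
            'part_number': m.group(1),  # e.g. 'DB1170'
            'extra_info': m.group(2),   # e.g. 'Inner', 'Outer', 'LH'
            'device': m.group(3),       # capture device code, e.g. '4S'
        }

    print(parse_library_name('DB1170Inner-4S.jpg'))
    # {'part_number': 'DB1170', 'extra_info': 'Inner', 'device': '4S'}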
Results interface

According to block 370, one or more results are provided to the user 140 on the user device 142. One example interface 700 is shown in Fig. 7(a). Each result represents a candidate vehicle part that best matched the target image; therefore the set of images that form the matching results each potentially identify the vehicle part in the target image. Each result includes a thumbnail image 710 and information 712 of the candidate vehicle part, such as the vehicle part's attributes, being one or more of identifier (part number), vehicle's make, vehicle's model, part variations etc. This information can be extracted from the naming protocol or from information stored in association with the result in the data store 130.

The results may also be ranked according to their percentage of accuracy or true match probability 716, which may be computed based on the similarities and/or differences between the target image and each result. The results may be categorised into different groups based on their percentage of accuracy 716, such as 'recommended' and 'alternative' (not shown for simplicity). For example, the top two results may be categorised as 'recommended' and shown at the top of the screen, and the remaining results categorised as 'alternative'. A user 140 can scroll through the results by providing a scroll gesture on the touch screen interface, for example.

Fig. 7(b) shows an example interface 750 for filtering the results in Fig. 7(a). In particular, the results may be filtered based on attribute(s) of the vehicle part (if not provided at blocks 330 and 340 in Fig. 3). For example, the attributes may be the vehicle's make 752, vehicle's model 754 and any other text-based keywords 756.

Fig. 8(a) shows an example interface 800 on which the filtered result is shown. In this example, only one result matches the attributes provided. Details of the result 810 may be viewed by selecting the result, for example by providing a tap gesture on the 'next' icon 814 on the interface 800 (the same icon 714 is also shown in Fig. 7(a)).

Fig. 8(b) shows an example interface 850 for displaying details of a particular result. Each result specifies one or more of the following: identifier (e.g. 'DB1170'); images (e.g. from different perspectives); vehicle's make and model (e.g. 'Subaru Impreza WRX STI WRX STI 4 Pot Front / 2 Pot Rear 1999-2001'); types of the vehicle part (e.g. 'General CT', '4WD', 'Heavy Duty' and 'Ultimate'); and other information such as dimensions (e.g. '4 pads 108 x 42 x 14 mm'). The result may be saved by selecting a 'save result' button 852 on the interface 850, in which case the information will be saved onto the user device 142.

The interface 850 also allows a user 140 to contact their preferred supplier by selecting the 'contact preferred supplier' button 854. In this case, one or more preferred suppliers may be stored by the image recognition application 144. Once the button 854 is selected, the result will be sent to the preferred supplier (e.g. via email or message) to place an order or make an enquiry.

The interface 850 further allows the user 140 to retrieve supplier information associated with the result displayed. For example, the supplier information (see 136 in Fig. 1) may include contact information (e.g. address, phone number) and/or inventory information (e.g. whether a vehicle part is available and how many are available). The supplier information may be retrieved based on the location information collected using the positioning system (250 in Fig. 2) of the user device 142. In the example in Fig. 8(b), the interface 850 allows the user 140 to find one or more suppliers based on the location of the user device 142 by selecting the 'find the nearest supplier' button 856. In this example, the supplier's location is stored as part of the supplier information 136 in the data store 130.

Fig. 9 shows an example interface 900 for displaying the results of the supplier search. The interface 900 provides a list of suppliers for vehicle part 'DB1170', their distance from the user device 142, contact details and inventory information. In this case, 'Supplier A' is the closest but does not have the vehicle part in stock, while 'Supplier B' is 1 km further away but has the vehicle part in stock.
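The nearest-supplier search can be illustrated by ranking supplier records by great-circle distance from the device's GPS fix. The patent does not prescribe a distance formula, so the haversine ranking and the record layout below are assumptions:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two latitude/longitude points in km."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius of 6371 km

    def nearest_suppliers(device_lat, device_lon, suppliers, limit=5):
        """suppliers: dicts with 'name', 'lat', 'lon' and 'in_stock' keys."""
        return sorted(
            suppliers,
            key=lambda s: haversine_km(device_lat, device_lon, s['lat'], s['lon']),
        )[:limit]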
Each result may be selected using the 'next' button to view the supplier in more detail.

The interface 900 also allows the user 140 of the user device 142 to order the vehicle part from one or more of the suppliers displayed. For example, a supplier may be selected using the 'selection' button 910 before the 'order' button 920 is selected. In this case, the order will be processed and sent to a computing device 162 of the relevant supplier 160 (see Fig. 1).

It will be appreciated that if there are no results found, the user 140 may be presented with an interface with the option of taking a new picture or starting over with a new search.

Results analysis

In one example, target images captured using the image recognition application 144 may be stored in the data store 130. The purpose is to analyse how users 140 use the application 144 to facilitate future developments and identification of application bugs.

The result the user 140 finds most relevant may also be sent to the server 110/120 to further improve the image recognition process. For example, the results may be used as inputs to a supervised learning process to improve the accuracy of the mapping between target images and candidate images in the data store 130. The image recognition process may be reviewed from time to time.

At the same time, supplier information 136 can also be dynamically updated based on information received from suppliers, such as by direct communications or by scraping of the supplier websites.

Server 110/120

Referring to Fig. 10, an example structure of a network device capable of acting as one or both of the servers 110 and 120 in Fig. 1 is shown. The example network device 1000 includes a processor 1010, a memory 1020 and a network interface device 1040 that communicate with each other via bus 1030. The network device 1000 is capable of communicating with the user devices 142 via the network interface device 1040 and a wide area communications network 150, for example, including an input port and an output port of the network interface device 1040.

In the example in Fig. 10, the memory 1020 stores machine-readable instructions 1024 to implement functions of the server 110. Although the data store 130 in Fig. 1 is shown as a separate entity, the information in the data store 130 may be stored in the memory 1020 on the server 110/120.

For example, the various methods, processes and functional units described herein may be implemented by the processor 1010. The term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc. The processes, methods and functional units may all be performed by a single processor 1010 or split between several processors (not shown in Fig. 10 for simplicity); reference in this disclosure or the claims to a 'processor' should thus be interpreted to mean 'one or more processors'.
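For the thin client architecture described earlier, the server-side exchange might look like the sketch below. Flask is used purely for illustration (the specification names no web framework) and find_matches is a hypothetical hook into the image recognition server 120:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def find_matches(target_bytes, attributes):
        """Hypothetical delegate to the image recognition server 120."""
        raise NotImplementedError

    @app.route('/api/recognise', methods=['POST'])
    def recognise():
        target = request.files['image'].read()      # target image from the user device
        attributes = request.form.to_dict()         # optional make/model/size filters
        results = find_matches(target, attributes)  # ranked (part_id, score, info) tuples
        # Each matching result carries an image reference and part information
        return jsonify([{'part_id': p, 'score': s, 'info': i} for p, s, i in results])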
Although one network interface device 1040 is shown in Fig. 10, the processes performed by the network interface device 1040 may be split between several network interface devices. As such, reference in this disclosure to a 'network interface device' should be interpreted to mean 'one or more network interface devices'. The processes, methods and functional units may be implemented as machine-readable instructions executable by one or more processors 1010, hardware logic circuitry of the one or more processors 1010, or a combination thereof.

It should be understood that the computer components, processing units, engines, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow any data flow required for their operations. It is also noted that software instructions or modules can be implemented using various methods. For example, a subroutine unit of code, a software function, an object in an object-oriented programming environment, a computer script, computer code or firmware can be used. The software components and/or functionality may be located on a single device or distributed over multiple devices depending on the application.

It should also be understood that although the terms 'first', 'second' etc. may have been used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user interface could be termed a second user interface, and, similarly, a second user interface could be termed a first user interface, without departing from the scope of the present disclosure. The first user interface and the second user interface may not be the same user interface.

Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase "in one embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment. Unless the context clearly requires otherwise, words using the singular or plural number also include the plural or singular number respectively. It will be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the scope of the invention as broadly described. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive.

Claims (17)

1. A computer-implemented method for image recognition of a vehicle part on a user device, the method comprising: obtaining a target image of the vehicle part from a camera system; performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and providing the one or more matching results on a user interface, wherein each matching result comprises an image and information of a candidate vehicle part.

2. The computer-implemented method of claim 1, further comprising providing a user interface to order the vehicle part from a supplier.

3. The computer-implemented method of claim 1 or 2, wherein the user device is positioning-enabled and the method further comprises: collecting positioning information from a positioning system of the user device; and determining one or more suppliers of the vehicle part that are nearest to the user device based on the positioning information.

4. The computer-implemented method of claim 1, 2 or 3, further comprising, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following: an overlay feature that guides the user to capture the vehicle part within a viewport; an orientation feature to enable the camera system only when the user device is held at an acceptable orientation; and a settings feature to adapt a setting of the camera system.

5. The computer-implemented method of any one of the preceding claims, further comprising: receiving one or more attributes of the vehicle part in the target image; and filtering the candidate images or matching results based on the one or more attributes.

6. The computer-implemented method of claim 5, wherein the one or more attributes include one or more of: vehicle's make, vehicle's model and a dimension of the vehicle part.

7. The computer-implemented method of any one of the preceding claims, wherein the information of the candidate vehicle part includes one or more attributes of: part type, part identifier, vehicle's make, vehicle's model and part variation.

8. The computer-implemented method of any one of the preceding claims, further comprising, prior to performing image recognition, processing the target image by performing one or more of the following: resizing the target image; filtering the target image; cropping the target image to a predetermined size; and estimating a dimension of the vehicle part in the target image.

9. The computer-implemented method of any one of the preceding claims, wherein performing image recognition comprises: extracting one or more visual features of the vehicle part from the target image; and comparing the target image with the candidate images based on the one or more visual features.

10. The computer-implemented method of any one of the preceding claims, wherein performing image recognition comprises: sending the target image to a server to compare the target image with candidate images; and receiving the one or more matching results from the server or a different server.

11. A user device for image recognition of a vehicle part, the device comprising a processor to perform the method according to any one of claims 1 to 10.

12. The user device of claim 11, further comprising a camera system to capture the target image and a display to display the user interface.

13. A computer program to cause a user device to perform the method of image recognition of a vehicle part according to any one of claims 1 to 10.

14. A computer-implemented method for image recognition of a vehicle part on a network device capable of acting as a server, the method comprising: receiving a target image of the vehicle part from a user device; performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and providing the one or more matching results to the user device, wherein each matching result comprises an image and information of a candidate vehicle part.

15. A network device capable of acting as a server for image recognition of a vehicle part, comprising a processor to perform the method according to claim 14.

16. The network device of claim 15, further comprising an interface to receive the target image and to provide the one or more matching results.

17. A computer program to cause a network device capable of acting as a server to perform the method according to claim 14.
AU2014271204A 2013-05-21 2014-05-21 Image recognition of vehicle parts Active AU2014271204B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2014271204A AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
AU2013901813A AU2013901813A0 (en) 2013-05-21 Image recognition of vehicle parts
AU2013101043A AU2013101043A4 (en) 2013-05-21 2013-05-21 Image recognition of vehicle parts
AU2013901813 2013-05-21
AU2013101043 2013-05-21
PCT/AU2014/050046 WO2014186840A1 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts
AU2014271204A AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Publications (2)

Publication Number Publication Date
AU2014271204A1 true AU2014271204A1 (en) 2015-12-03
AU2014271204B2 AU2014271204B2 (en) 2019-03-14

Family

ID=51932634

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2014271204A Active AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Country Status (3)

Country Link
AU (1) AU2014271204B2 (en)
NZ (1) NZ630397A (en)
WO (1) WO2014186840A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10163033B2 (en) 2016-12-13 2018-12-25 Caterpillar Inc. Vehicle classification and vehicle pose estimation

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE1750463A1 (en) 2017-04-20 2018-10-21 Wiretronic Ab Method and computer vision system for handling of an operable tool
JP7005932B2 (en) * 2017-04-28 2022-01-24 富士通株式会社 Search program, search device and search method
KR102023469B1 (en) * 2019-04-02 2019-09-20 이재명 Body parts manufacturing system
CN110659567B (en) * 2019-08-15 2023-01-10 创新先进技术有限公司 Method and device for identifying damaged part of vehicle
CN111310561A (en) * 2020-01-07 2020-06-19 成都睿琪科技有限责任公司 Vehicle configuration identification method and device
CN114663871A (en) * 2022-03-23 2022-06-24 北京京东乾石科技有限公司 Image recognition method, training method, device, system and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060030985A1 (en) * 2003-10-24 2006-02-09 Active Recognition Technologies Inc., Vehicle recognition using multiple metrics
US7751805B2 (en) * 2004-02-20 2010-07-06 Google Inc. Mobile image-based information retrieval system
US9135277B2 (en) * 2009-08-07 2015-09-15 Google Inc. Architecture for responding to a visual query


Also Published As

Publication number Publication date
NZ630397A (en) 2017-06-30
AU2014271204B2 (en) 2019-03-14
WO2014186840A1 (en) 2014-11-27

Similar Documents

Publication Publication Date Title
AU2014271204B2 (en) Image recognition of vehicle parts
JP7058760B2 (en) Image processing methods and their devices, terminals and computer programs
EP3125135B1 (en) Picture processing method and device
US8320644B2 (en) Object detection metadata
CN109087376B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2016101757A1 (en) Image processing method and device based on mobile device
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
CN110909209B (en) Live video searching method and device, equipment, server and storage medium
TWI470549B (en) A method of using an image recognition guide to install an application, and an electronic device
JP7210089B2 (en) RESOURCE DISPLAY METHOD, APPARATUS, DEVICE AND COMPUTER PROGRAM
WO2017107855A1 (en) Picture searching method and device
US20220222831A1 (en) Method for processing images and electronic device therefor
CN105335714A (en) Photograph processing method, device and apparatus
WO2022068719A1 (en) Image display method and apparatus, and electronic device
CN110019907B (en) Image retrieval method and device
US20170200062A1 (en) Method of determination of stable zones within an image stream, and portable device for implementing the method
CN105426904A (en) Photo processing method, apparatus and device
CN110610178A (en) Image recognition method, device, terminal and computer readable storage medium
AU2013101043A4 (en) Image recognition of vehicle parts
EP2800349B1 (en) Method and electronic device for generating thumbnail image
WO2022016803A1 (en) Visual positioning method and apparatus, electronic device, and computer readable storage medium
WO2019075644A1 (en) Portrait photograph searching method, and terminal
JP2023519755A (en) Image registration method and apparatus
CN110942065B (en) Text box selection method, text box selection device, terminal equipment and computer readable storage medium
WO2019174606A1 (en) Image processing method and terminal

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)