WO2014186840A1 - Image recognition of vehicle parts - Google Patents

Image recognition of vehicle parts

Info

Publication number
WO2014186840A1
Authority
WO
WIPO (PCT)
Prior art keywords
target image
vehicle part
image
image recognition
vehicle
Prior art date
Application number
PCT/AU2014/050046
Other languages
French (fr)
Inventor
Andrew Robert Bates
David Nathan Woolfson
Ian Keith Bott
George Kyriakopoulos
Original Assignee
Fmp Group (Australia) Pty Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2013101043A external-priority patent/AU2013101043A4/en
Priority claimed from AU2013901813A external-priority patent/AU2013901813A0/en
Application filed by Fmp Group (Australia) Pty Limited filed Critical Fmp Group (Australia) Pty Limited
Priority to NZ630397A priority Critical patent/NZ630397A/en
Priority to AU2014271204A priority patent/AU2014271204B2/en
Publication of WO2014186840A1 publication Critical patent/WO2014186840A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces

Definitions

  • The various methods, processes and functional units described herein may be implemented by the processor 1010.
  • the term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate array etc.
  • the processes, methods and functional units may all be performed by a single processor 1010 or split between several processors (not shown in Fig. 10 for simplicity); reference in this disclosure or the claims to a 'processor' should thus be interpreted to mean 'one or more processors'.
  • Although one network interface device 1040 is shown in Fig. 10, processes performed by the network interface device 1040 may be split between several network interface devices. As such, reference in this disclosure to a 'network interface device' should be interpreted to mean 'one or more network interface devices'.
  • the processes, methods and functional units may be implemented as machine-readable instructions executable by one or more processors 1010, hardware logic circuitry of the one or more processors 1010 or a combination thereof.
  • a first user interface could be termed a second user interface, and similarly a second user interface could be termed a first user interface; the first user interface and the second user interface may not be the same user interface.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure concerns methods, computer programs, a user device and a network device for image recognition of vehicle parts. First, a target image of the vehicle part is obtained (320) from a camera system (240). Image recognition (360) is performed to identify the vehicle part in the target image, which comprises comparing (366) the target image with candidate images to obtain one or more matching results. The one or more matching results are then provided (370) on a user interface (262). Each matching result comprises an image (710) and information (712) of a candidate vehicle part.

Description

Image recognition of vehicle parts
Cross-Reference to Related Applications
The present application claims priority from Australian provisional patent application 2013901813 and Australian innovation patent 2013101043, the contents of which are incorporated herein by reference.
Technical Field
The present disclosure concerns methods, computer programs, a user device and a network device for image recognition of vehicle parts.
Background
Mechanics generally rely on hard copy catalogues of a part manufacturer when ordering vehicle parts. For example, when a vehicle part needs to be replaced, a mechanic generally relies on their knowledge of the vehicle part or manually searches through a catalogue to identify the vehicle part.
Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Summary
There is provided a computer-implemented method for image recognition of a vehicle part on a user device, the method comprising:
obtaining a target image of the vehicle part from a camera system;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and
providing the one or more matching results on a user interface, wherein each matching result comprises an image and information of a candidate vehicle part.
The computer-implemented method may further comprise providing a user interface to order the vehicle part from a supplier.
The user device may be positioning-enabled, in which case the method may further comprise:
collecting positioning information from a positioning system of the user device; and
determining one or more suppliers of the vehicle part that are nearest to the user device, based on the positioning information.
The camera system may be a camera system on the user device. In this case, the method may further comprise, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following:
an overlay feature that guides the user to capture the vehicle part within a viewport; an orientation feature to enable the camera system when the user device is held at an acceptable orientation; and
a settings feature to adapt a setting of the camera system.
The method may further comprise:
receiving one or more attributes of the vehicle part in the target image; and filtering the candidate images or matching results based on the one or more attributes. In this case, the one or more attributes may include one or more of: the vehicle's make, the vehicle's model and a dimension of the vehicle part.
The method may further comprise, prior to performing image recognition, processing the target image by performing one or more of the following:
resizing the target image;
filtering the target image;
cropping the target image to a predetermined size; and
estimating a dimension of the vehicle part in the target image.
Performing image recognition may further comprise:
extracting one or more visual features of the vehicle part from the target image; and
comparing the target image with the candidate images based on the one or more visual features.
Further, performing image recognition may comprise:
sending the target image to a server to compare the target image with candidate images; and
receiving the one or more matching results from the server or a different server.
There is provided a user device for image recognition of a vehicle part, the device comprising a processor to perform the method described above. The user device may further comprise a camera system to capture the target image and a display to display the user interface.
There is provided a computer program to cause a user device to perform the method described above.
There is provided a computer-implemented method for image recognition of a vehicle part on a network device capable of acting as a server, the method comprising:
receiving a target image of the vehicle part from a user device;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and
providing the one or more results to the user device, wherein each result comprises an image and information of a candidate vehicle part.
There is provided a network device capable of acting as a server for image recognition of a vehicle part comprising a processor to perform the method described directly above. The network device may further comprise an interface to receive the target image and to provide the one or more matching results.
There is provided a computer program to cause a network device capable of acting as a server to perform the method described directly above.
Brief Description of Drawings
Examples of image recognition of vehicle parts will now be described with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of an example system for image recognition of vehicle parts;
Fig. 2 is a block diagram of an example structure of an electronic device capable of acting as a user device in Fig. 1;
Fig. 3 is a flowchart of steps performed by an image recognition application on a user device in Fig. 1;
Fig. 4 is an example interface for capturing a target image of a vehicle part;
Fig. 5 is an example interface for providing attributes of a vehicle part;
Fig. 6(a) is an example target image;
Fig. 6(b) is the example target image in Fig. 6(a) after image processing;
Fig. 6(c) is the example target image in Fig. 6(b) after feature extraction;
Fig. 6(d) is an example set of candidate images to which the example target image in Fig. 6(c) is compared when image recognition is performed;
Fig. 7(a) is an example interface for displaying results of image recognition;
Fig. 7(b) is an example interface for filtering the results of image recognition in Fig. 7(a);
Fig. 8(a) is the example interface in Fig. 7(a) after the results are filtered according to Fig. 7(b);
Fig. 8(b) is an example interface for displaying a result;
Fig. 9 is an example interface for displaying supplier information and order placement; and
Fig. 10 is an example structure of a network device capable of acting as a server.
Detailed Description
Fig. 1 is a block diagram of an example system 100 for image recognition of vehicle parts. The system 100 comprises an Application Program Interface (API) server 110 and an image recognition server 120 that are in communication with each other and with multiple user devices 142 operated by users 140 over a communications network 150, 152.
The users 140 of the user devices 142 may be mechanics or vehicle repairers who wish to identify vehicle parts such as brake pads, brake rotors, brake shoes, brake drums, loaded calipers, reman bare calipers, semi-loaded calipers, wheel cylinders, clutch master cylinders, slave cylinders, brake hydraulic hoses etc. To facilitate image recognition of vehicle parts, a software application in the form of an image recognition application 144 is installed on each user device. The user devices 142 communicate with the API server 110 to access image recognition services provided by the image recognition server 120. The API server 110 and image recognition server 120 have access to a data store 130 (either via the communications network 150 as shown or directly) to retrieve various information such as user information 132, vehicle part information 134 and supplier information 136. In one example, image recognition of a vehicle part includes the following:
A target image of the vehicle part is obtained, for example from a camera system of the user device 142.
Image recognition is performed to identify the vehicle part in the target image. For example, image recognition may involve comparing the target image with candidate images to obtain one or more best matching results.
One or more results are provided on a user interface, each result including an image and information of a candidate vehicle part potentially identifying the vehicle part in the target image. Advantageously, the image recognition application 144 facilitates faster and more efficient recognition of vehicle parts. Using the application 144, a user does not have to perform the manual process of searching through hard copy catalogues (which may not be complete or current) to identify the vehicle part. Since the image recognition application 144 is able to provide access to the latest vehicle part information, this also reduces or removes the need for manufacturers and/or suppliers to print various catalogues, thereby saving cost and effort. The image recognition application 144 may be used conveniently since it is accessible by users 140 anytime and anywhere, e.g. at their workshops or at a crash site.
User Device 142
Referring now to the block diagram in Fig. 2, an example electronic device 200 capable of acting as the user device 142 will now be explained.
The image recognition application 144 may be implemented on any suitable Internet-capable user device 142, such as a smartphone (e.g. Apple iPhone 3GS, 4S, 4, 5), tablet computer (e.g. Apple iPad), personal digital assistant, desktop computer, laptop computer, and any other suitable device. The image recognition application 144 may be downloaded onto the user device 142. For example, if the user device 142 is an Apple device, the image recognition application 144 may be a downloadable "App" that is available through the Apple App Store (trade marks of Apple, Inc). Similarly, the image recognition application 144 may be downloaded from the "Blackberry App World" for Blackberry devices (trade marks of Research In Motion Limited), and from the "Android Market" or "Google Play" for Android devices (trade marks of Google, Inc.). The image recognition application 144 may also be pre-programmed on the user device 142.
'Capable of acting' means having the necessary features to perform the functions described. In one example, the user device 142 may be a mobile electronic device. The electronic device 200 in Fig. 2 comprises one or more processors 202 in communication with a memory interface 204 coupled to memory 210, and a peripherals interface 206. The memory 210 may include random access memory and/or non-volatile memory, such as magnetic disc storage devices etc. The memory 210 stores various applications 230 including the image recognition application 144; an operating system 212; and executable instructions to perform communications functions 214; graphical user interface processing 216; sensor processing 218; phone-related functions 220; electronic messaging functions 222; web browsing functions 224; camera functions 226; and GPS or navigation functions 228.
The applications 230 implemented on the electronic device 200 include the image recognition application 144, and other applications (not shown for simplicity) such as a web browsing application, an email application, a telephone application, a video conferencing application, a video camera application, a digital camera application, a photo management application, a digital music application, a digital video application, etc. Sensors, devices and systems can be coupled to the peripherals interface 206 to facilitate various functionalities, such as the following.
Camera system 240 is coupled to an optical sensor 242, such as a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, to facilitate camera functions. Positioning system 250 collects geographical location information of the device 142 by employing any suitable positioning technology such as GPS or Assisted GPS (aGPS). GPS generally uses signals from satellites alone, while aGPS additionally uses signals from base stations or wireless access points in poor signal conditions. Positioning system 250 may be integral with the device or provided by a separate GPS-enabled device coupled to the electronic device 142.
Input/Output (I/O) system 260 is coupled to a touch-sensitive display 262 sensitive to haptic and/or tactile contact by a user, and/or other input devices such as buttons. The touch-sensitive display 262 may also comprise a multi-touch sensitive display that can, for example, detect and process a number of touch points simultaneously. Other touch-sensitive display technologies may also be used, such as a display in which contact is made using a stylus. The terms "touch-sensitive display" and "touch screen" will be used interchangeably throughout the disclosure. In embodiments where user interfaces are designed to work with finger-based contact and gestures, the device 142 translates finger-based input (which is less precise due to the larger area of finger contact) into more precise pointer- or cursor-based input for performing actions desired by the user 140.
Wireless communications system 264 is designed to allow wireless communications over a network employing suitable communications protocols, standards and technologies such as GPRS, EDGE, WCDMA, OFDMA, Bluetooth, Wireless Fidelity (WiFi) or Wi-MAX and Long-Term Evolution (LTE), etc.
Sensor system 268, such as an accelerometer, a light sensor and a proximity sensor, is used to facilitate orientation, lighting and proximity functions, respectively.
Audio system 270 can be coupled to a speaker 272 and a microphone to facilitate voice-enabled functions such as telephony functions. Although one example implementation has been provided here, it will be appreciated that other suitable configurations capable of implementing the image recognition application 144 on the electronic device 200 may be used. It will be appreciated that the image recognition application 144 may support portrait and/or landscape modes.
Target image
Fig. 3 shows an example method performed by the image recognition system 100. According to block 310 in Fig. 3, the image recognition application 144 first provides a user interface to obtain a target image of a vehicle part to be replaced. 'Provides' is understood to mean that the image recognition application 144 operates to provide the necessary information to the user device 142 so that the user device 142 can display the user interface on the display 262.
Referring also to Fig. 4, an example user interface 400 is provided to capture a target image 410 of a vehicle part (e.g. brake pad) using the camera system (240 in Fig. 2) of the user device 142. To improve the quality of the target image 410, the user interface 400 may include one or more of the following:
An overlay feature that defines a viewport using a set of horizontal 430 and vertical 432 lines for guiding the user 140 to capture the vehicle part within the viewport. In one example, the viewport may be 120 pixels x 120 pixels. In other embodiments the overlay may not be rectangular in shape, but more in the general shape of the vehicle part; in this example the overlay would be substantially oval in shape with the length of the oval lying horizontally. The overlay feature may be selected from a set of predefined overlays based on attributes of the vehicle part and/or vehicle received at blocks 330 and 340 described below.
An orientation feature 440 for guiding the orientation and/or angle of the user device 142 when the target image is taken. This function 440 relies on the accelerometer in the sensor system (see 268 in Fig. 2), and the user device 142 may be adjusted until a bubble representation 422 appears within the appropriate boundary of a "spirit level" (e.g. within the inner circle as shown). In one example, the capture button 420 will only appear once the acceptable orientation is obtained, e.g. when the user device 142 is held flat (a sketch of this check follows this list of features). The tip may inform the user of the particular perspective view that should be captured. A settings feature 450 for adapting a setting of the camera system 240, such as a flash setting feature to automatically enable or disable the flash setting of the camera system to improve the quality of the target image. A tip 460 for guiding the user 140 during the capture of the target image 410. For example, the tip may request the user 140 to capture the target image 410 against a white background and/or A4 paper (as shown) and/or to move to a brighter spot if the image is too dark. The tip may also request the user 140 to align the vehicle part in the centre of the screen.
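As a rough illustration of the orientation feature 440, the check below gates the capture button on the device tilt reported by the accelerometer. This is a minimal sketch rather than the patent's implementation; the function names and the 10-degree tolerance are assumptions.

```python
import math

def is_held_flat(ax: float, ay: float, az: float, tolerance_deg: float = 10.0) -> bool:
    """True when gravity is aligned with the device z-axis to within
    `tolerance_deg`, i.e. the device is held near-horizontal."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        return False
    tilt = math.degrees(math.acos(abs(az) / g))  # angle off the z-axis
    return tilt <= tolerance_deg

# The capture button (420) is shown only at an acceptable orientation.
print(is_held_flat(0.1, 0.2, -9.79))  # device flat on a bench -> True
print(is_held_flat(4.0, 1.0, -8.50))  # tilted ~26 degrees -> False
```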
The target image 410 is then captured and stored, such as when a user's touch input is detected on a capture button 420 on the screen 400 (see also block 320 in Fig. 3). Although one target image 410 is shown, it will be appreciated that another image and/or additional information may be requested if no match is found. Multiple candidate images of the same vehicle part may be stored in the data store 130 to improve accuracy. In another example, multiple target images of the vehicle part may be captured from different perspectives (e.g. front and rear). This generally improves the accuracy of the subsequent image processing, but makes it more resource-intensive because more images are processed.
According to blocks 330 and 340 in Fig. 3, the image recognition application 144 may provide a user interface to obtain one or more attributes of the vehicle part. The attributes may be used to improve the accuracy of the image recognition process. An example interface 500 is shown in Fig. 5, which may be used by the user to provide vehicle information (e.g. make and/or model) and the size of the vehicle part (e.g. width and length). A list of vehicle manufacturers and models may be provided to the user for selection. Blocks 330 and 340 are optional, and a user may skip them by selecting 'next'.
Processing of Target Image
According to block 350 in Fig. 3, the target image is processed to facilitate the subsequent image recognition process. This may involve one or more of the following:
Resizing the target image to reduce the data size of the image and make the subsequent image recognition process more efficient. For example, the final image size may be 15 to 35 KB. Filtering the target image to improve its quality, for example to remove shadows and brighten the target image. Cropping the target image to a predetermined size, for example to maximise the size of the vehicle part within the viewport, or cropping to be the same as the viewport in Fig. 4.
Estimating one or more attributes of the vehicle part captured in the target image. An example is the size of the vehicle part, which may be estimated if the vehicle part is captured against a background of predetermined size (e.g. A4 paper). Based on the ratio between the vehicle part and the background, the width and/or length of the vehicle part may be estimated. Block 350 may be performed by the image recognition application 144 without any further input from the user. The processed target image is now ready for image recognition.
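The size estimation just described follows directly from the known dimensions of the A4 background (210 mm x 297 mm). A minimal sketch, assuming the pixel bounding boxes of the sheet and the part have already been located:

```python
# A4 paper is 210 mm x 297 mm. If the sheet spans a known number of
# pixels in the target image, part dimensions follow from the ratio.
A4_WIDTH_MM, A4_HEIGHT_MM = 210.0, 297.0

def estimate_part_size_mm(part_px, sheet_px):
    """part_px / sheet_px are (width, height) bounding boxes in pixels;
    returns the estimated (width, height) of the part in millimetres."""
    return (part_px[0] * A4_WIDTH_MM / sheet_px[0],
            part_px[1] * A4_HEIGHT_MM / sheet_px[1])

# Illustrative numbers: a sheet spanning 700 x 990 px and a pad spanning
# 360 x 140 px give roughly 108 x 42 mm.
w, h = estimate_part_size_mm((360, 140), (700, 990))
print(f"estimated part size: {w:.0f} x {h:.0f} mm")
```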
Image recognition
According to block 360 in Fig. 3, image recognition is then performed to identify the vehicle part in the target image. In particular, the target image is compared with multiple candidate images 134 stored in the data store 130 shown in Fig. 1 to identify matches. At block 360, one of the following architectures may be implemented:
(a) "Thin client" architecture
In one example, block 360 may be performed by the user device 142 in conjunction with the API server 110 and image recognition server 120. For example, the target image is first sent to the API server 110 after block 350 in Fig. 3. The API server 110 then provides the target image to the image recognition server 120, which then provides the matching results by sending them directly to the user device 142 or via the API server 110. Information exchange between the servers 110/120 and the user device 142 may be performed in any suitable format, such as eXtensible Markup Language (XML). To facilitate faster image transfer, a 'lazy loading' process may be used where the loading process is performed in the background and the user 140 can continue using the application 144.
(b) "Thick client" architecture
Alternatively, block 360 may be performed by the user device 142. In this case, the user device 142 may access candidate images in the data store 130 directly via the communications network 150. The thick client architecture may be implemented if the user device 142 has the processing capability to perform image recognition within an acceptable timeframe. Otherwise, the thin client architecture may be preferred in practice.
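A sketch of the thin-client exchange under stated assumptions: the endpoint URL and the XML layout below are invented for illustration (the disclosure only says that the target image is uploaded and that XML is one suitable format), and the third-party `requests` library stands in for whatever HTTP stack the application uses.

```python
import xml.etree.ElementTree as ET
import requests  # assumed third-party HTTP client

API_URL = "https://api.example.com/recognise"  # hypothetical API server endpoint

def recognise_part(image_path, make=None, model=None):
    """Upload the processed target image (thin client, block 360) and
    parse the ranked matches out of an assumed XML response such as
    <results><match part="DB1170" score="0.93"/></results>."""
    data = {k: v for k, v in {"make": make, "model": model}.items() if v}
    with open(image_path, "rb") as f:
        resp = requests.post(API_URL, files={"target": f}, data=data, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    return [(m.get("part"), float(m.get("score"))) for m in root.findall("match")]
```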
Referring also to Fig. 6, an example image recognition process will be explained. In this example, an example target image 610 to be matched is shown in Fig. 6(a), whereas Fig. 6(b) shows a higher quality version 620 of the same image after it is processed at block 350 in Fig. 3.
According to block 362 in Fig. 3, one or more visual features are extracted from the vehicle part captured in the target image 610. In the example in Fig. 6(c), features 630 may relate to the shape, configuration and dimensions of the backing plate and friction pad of the brake pad.
According to block 364 in Fig. 3, a set of candidate images is identified from the data store 130. An example set 640 is shown in Fig. 6(d), which includes candidate images of brake pads that may be matched with the target in Fig. 6(c).
The set of candidate images may be filtered based on the attribute(s) provided by the user at blocks 330 and 340 in Fig. 3. For example, if the user has provided a particular vehicle's make and/or model, only candidate images associated with those attributes are identified as candidate images. In other examples where blocks 330 and 340 were not performed, the candidate images may be the entire image library.
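Candidate filtering by the optional attributes reduces to a simple predicate over the library; a minimal sketch (record field names are assumptions):

```python
def filter_candidates(candidates, make=None, model=None):
    """Narrow the candidate set using the optional attributes from blocks
    330/340; with no attributes the whole image library is searched."""
    return [c for c in candidates
            if (make is None or c["make"] == make)
            and (model is None or c["model"] == model)]

library = [
    {"part": "DB1170", "make": "Subaru", "model": "Impreza"},
    {"part": "DB1234", "make": "Toyota", "model": "Corolla"},
]
print(filter_candidates(library, make="Subaru"))  # -> only DB1170 remains
```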
According to block 366 in Fig. 3, the target image is compared with the set of candidate images to identify the vehicle part in the target image. For example, this may involve comparing the visual features 630 extracted from the vehicle part with visual features 650 of each candidate image in Fig. 6(d). The similarities and/or differences of the features are compared (i.e. 630 vs 650) to obtain one or more matching results. For example, in Fig. 6(d), the most relevant result is indicated at 660. Although one example is provided, it will be appreciated that any suitable image recognition processes may be used, such as algorithms based on spectral graph techniques to cluster visual features; algorithms based on machine learning (e.g. neural networks, support vector machines); algorithms based on transformations such as FFT-based correlation algorithms; colour histograms; or any other suitable algorithm. The image recognition algorithm may include a shape comparison algorithm that enables comparison of the target image to the library of vehicle part images in the database and returns results showing matching parts in order of probability.
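The disclosure does not prescribe a particular shape comparison algorithm, so the sketch below shows one plausible stand-in using OpenCV contour matching: the dominant outline (e.g. the backing plate) is extracted from each image and candidates are ranked by `cv2.matchShapes` distance, smaller being more similar.

```python
import cv2  # OpenCV, standing in for an unspecified shape matcher

def largest_contour(path):
    """Extract the dominant outline from an image of a dark part shot
    against a light background (e.g. the white/A4 backdrop above)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)  # assumes at least one contour

def rank_candidates(target_path, candidate_paths):
    """Rank candidates by contour similarity; a smaller matchShapes
    distance means a closer shape match, so the best match comes first."""
    target = largest_contour(target_path)
    scores = [(p, cv2.matchShapes(target, largest_contour(p),
                                  cv2.CONTOURS_MATCH_I1, 0.0))
              for p in candidate_paths]
    return sorted(scores, key=lambda s: s[1])
```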
The library of images in the data store 130 may be stored according to an image protocol. Multiple images may be stored for a particular vehicle part, for example those taken from different perspectives or close-ups of different sub-parts. The image protocol may include a naming convention, such as 'DBXXX_YYY_ZZ.jpg', where 'DBXXX' indicates the part number, 'YYY' indicates extra information (e.g. sub-parts Inner, Outer, Right Hand, Left Hand, Stealth) and 'ZZ' indicates the type of user device used to capture the image (e.g. iPhone 3GS, 4, 4S, 5). The images may be cropped to a particular size (e.g. 420 pixels x 420 pixels). As can be seen, information of the vehicle part captured in the image in the library is encoded in the protocol. Alternatively or in addition, information of the vehicle part captured in the image may be stored in the data store in an associated manner, such as metadata of the image or in the same or a related record in the database.
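A sketch of decoding the naming convention; the exact field widths and character classes are assumptions, since the disclosure only gives the 'DBXXX_YYY_ZZ.jpg' template:

```python
import re

# 'DBXXX_YYY_ZZ.jpg' as described above; the character classes here are
# guesses for illustration (e.g. allowing a space for 'Right Hand').
NAME_RE = re.compile(r"^(DB\d+)_([A-Za-z ]+)_([A-Za-z0-9]+)\.jpg$")

def parse_library_name(filename):
    m = NAME_RE.match(filename)
    if not m:
        raise ValueError(f"not in library naming convention: {filename}")
    part_number, extra, device = m.groups()
    return {"part_number": part_number, "sub_part": extra, "device": device}

print(parse_library_name("DB1170_Inner_4S.jpg"))
# {'part_number': 'DB1170', 'sub_part': 'Inner', 'device': '4S'}
```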
The data store 130 may be optimised to improve accuracy and speed of image recognition. For example, an index may be built that facilitates fast retrieval of candidate images.
Results interface
According to block 370, one or more results are provided to the user 140 on the user device 142. One example interface 700 is shown in Fig. 7(a). Each result represents a candidate vehicle part that best matched the target image. Therefore the set of images that form the matching results each potentially identify the vehicle part in the target image. Each result includes a thumbnail image 710 and information 712 of the candidate vehicle part, such as the vehicle part's attributes, being one or more of: identifier (part number), vehicle's make, vehicle's model, part variations etc. This information can be extracted from the naming protocol or extracted from information stored in association with the result in the data store 130.
The results may also be ranked according to their percentage of accuracy or true match probability 716, which may be computed based on the similarities and/or differences between the target image and each result. The results may be categorised into different groups based on their percentage of accuracy 716, such as 'recommended' and 'alternative' (not shown for simplicity). For example, the top two results may be categorised as 'recommended' and shown at the top of the screen, and the remaining results categorised as 'alternative' (a sketch of this grouping follows this paragraph). A user 140 can scroll through the results by providing a scroll gesture on the touch screen interface, for example. Fig. 7(b) shows an example interface 750 for filtering the results in Fig. 7(a). In particular, the results may be filtered based on attribute(s) of the vehicle part (if not provided at blocks 330 and 340 in Fig. 3). For example, the attributes may be the vehicle's make 752, vehicle's model 754 and any other text-based keywords 756. Fig. 8(a) shows an example interface 800 on which the filtered result is shown. In this example, only one result matches the attributes provided. Details of the result 810 may be viewed by selecting the result, for example by providing a tap gesture on the 'next' icon 814 on the interface 800 (the same icon 714 is also shown in Fig. 7(a)). Fig. 8(b) shows an example interface 850 for displaying details of a particular result. Each result specifies one or more of the following: identifier (e.g. 'DB1170'); images (e.g. from different perspectives); vehicle's make and model (e.g. 'Subaru Impreza WRX STI 4 Pot Front / 2 Pot Rear 1999-2001'); types of the vehicle part (e.g. 'General CT', '4WD', 'Heavy Duty' and 'Ultimate'); and other information such as dimensions (e.g. '4 pads 108 x 42 x 14 mm'). The result may be saved by selecting a 'save result' button 852 on the interface 850, in which case the information will be saved onto the user device 142.
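The ranking and grouping logic just described reduces to a sort and a cut-off. A minimal sketch, with the two-result cut-off taken from the example in the text and the score field name assumed:

```python
def categorise(results, top_n=2):
    """Sort matches by accuracy and split them into the 'recommended'
    and 'alternative' groups shown in Fig. 7(a); top_n follows the
    two-result example in the text."""
    ranked = sorted(results, key=lambda r: r["accuracy"], reverse=True)
    return {"recommended": ranked[:top_n], "alternative": ranked[top_n:]}

matches = [
    {"part": "DB1170", "accuracy": 0.93},
    {"part": "DB2010", "accuracy": 0.40},
    {"part": "DB1171", "accuracy": 0.71},
]
print(categorise(matches))  # DB1170 and DB1171 recommended, DB2010 alternative
```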
The interface 850 also allows a user 140 to contact their preferred supplier by selecting the 'contact preferred supplier' button 854. In this case, one or more preferred suppliers may be stored by the image recognition application 144. Once the button 854 is selected, the result will be sent to the preferred supplier (e.g. via email or message) to place an order or make an enquiry.
The interface 850 further allows the user 140 to retrieve supplier information associated with the result displayed. For example, the supplier information (see 136 in Fig. 1) may include contact information (e.g. address, phone number) and/or inventory information (e.g. whether a vehicle part is available, and how many are available).
The supplier information may be retrieved based on the location information collected using the positioning system (250 in Fig. 2) of the user device 142. In the example in Fig. 8(b), the interface 850 allows the user 140 to find one or more suppliers based on the location of the user device 142 by selecting the 'find the nearest supplier' button 856. In this example, the supplier's location is stored as part of the supplier information 136 in the data store 130.
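Assuming each supplier record stores a latitude/longitude pair, the nearest-supplier search could be sketched with a great-circle distance calculation; the field names below are illustrative only.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(a))

    def nearest_suppliers(device_lat, device_lon, suppliers, limit=5):
        """Rank suppliers by distance from the user device's reported position."""
        ranked = sorted(
            suppliers,
            key=lambda s: haversine_km(device_lat, device_lon, s['lat'], s['lon']),
        )
        return ranked[:limit]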
Fig. 9 shows an example interface 900 for displaying the results of the supplier search. The interface 900 provides a list of suppliers for vehicle part 'DB1170', and their distance from the user device 142, contact details and inventory information. In this case, 'Supplier A' is the closest, but does not have the vehicle part in stock. 'Supplier B' is 1 km further away but has the vehicle part. Each result may be selected using the 'next' button to view the supplier in more detail.
The interface 900 also allows the user 140 of the user device 142 to order the vehicle part from one or more of the suppliers displayed. For example, a supplier may be selected using the 'selection' button 910 provided before the 'order' button 920 is selected. In this case, the order will be processed and sent to a computing device 162 of the relevant supplier 160 (see Fig. 1).
It will be appreciated that if no results are found, the user 140 may be presented with an interface with the option of taking a new picture or starting over with a new search.
Results analysis
In one example, target images captured using the image recognition application 144 may be stored in the data store 130. The purpose is to analyse how users 140 use the application 144 to facilitate future developments and identification of application bugs. The result the user 140 finds most relevant may also be sent to the server 110/120 to further improve the image recognition process. For example, the results may be used as inputs to a supervised learning process to improve the accuracy of the mapping between target images and candidate images in the data store 130. The image recognition process may be reviewed from time to time.
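A minimal sketch of capturing such feedback as labelled examples for a later supervised learning process could be as follows; the record fields and storage format are hypothetical.

    import json
    import time

    def log_feedback(log_path, target_image_id, chosen_part_number,
                     presented_part_numbers):
        """Append one labelled example: which candidate the user judged
        correct for a given target image. Accumulated records can later
        serve as training data to refine the image-to-candidate mapping."""
        record = {
            'timestamp': time.time(),
            'target_image_id': target_image_id,
            'chosen_part': chosen_part_number,          # the label
            'presented_candidates': presented_part_numbers,
        }
        with open(log_path, 'a') as f:
            f.write(json.dumps(record) + '\n')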
At the same time, supplier information 136 can also be dynamically updated based on information received from suppliers, such as by direct communications or by scraping of the supplier websites.
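As a purely illustrative sketch of the scraping path, a periodic job might fetch a supplier's stock page and extract a quantity; the URL, page structure and row pattern below are entirely hypothetical.

    import re
    import urllib.request

    def fetch_stock_level(url, part_number):
        """Fetch a (hypothetical) supplier stock page and extract the quantity
        listed next to a part number, e.g. a row like 'DB1170: 12 in stock'."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode('utf-8', errors='replace')
        match = re.search(rf'{re.escape(part_number)}\D*(\d+)\s*in stock', html)
        return int(match.group(1)) if match else None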
Server 110/120
Referring to Fig. 10, an example structure of a network device capable of acting as one or both of the servers 110 and 120 in Fig. 1 is shown. The example network device 1000 includes a processor 1010, a memory 1020 and a network interface device 1040 that communicate with each other via a bus 1030. The network device 1000 is capable of communicating with the user devices 142 via the network interface device 1040 and a wide area communications network, for example, including an input port and an output port of the network interface device 1040. In the example in Fig. 10, the memory 1020 stores machine-readable instructions 1024 to implement functions of the server 110. Although the data store 130 in Fig. 1 is shown as a separate entity, the information in the data store 130 may be stored in the memory 1020 on the server 110/120.
For example, the various methods, processes and functional units described herein may be implemented by the processor 1010. The term 'processor' is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, programmable gate array etc. The processes, methods and functional units may all be performed by a single processor 1010 or split between several processors (not shown in Fig. 10 for simplicity); reference in this disclosure or the claims to a 'processor' should thus be interpreted to mean 'one or more processors'.
Although one network interface device 1040 is shown in Fig. 10, processes performed by the network interface device 1040 may be split between several network interface devices. As such, reference in this disclosure to a 'network interface device' should be interpreted to mean 'one or more network interface devices'. The processes, methods and functional units may be implemented as machine-readable instructions executable by one or more processors 1010, hardware logic circuitry of the one or more processors 1010 or a combination thereof.
It should be understood that computer components, processing units, engines, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow any data flow required for their operations. It is also noted that software instructions or modules can be implemented using various methods. For example, a subroutine unit of code, a software function, an object in an object-oriented programming environment, a computer script, computer code or firmware can be used. The software components and/or functionality may be located on a single device or distributed over multiple devices depending on the application. It should also be understood that although the terms 'first', 'second' etc. may have been used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user interface could be termed a second user interface, and, similarly, a second user interface could be termed a first user interface, without departing from the scope of the present disclosure. The first user interface and second user interface may not be the same user interface.
Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in one embodiment" appearing in various places throughout the specification are not necessarily all referring to the same embodiment. Unless the context clearly requires otherwise, words using singular or plural number also include the plural or singular number respectively. It will be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims

1. A computer-implemented method for image recognition of a vehicle part on a user device, the method comprising:
obtaining a target image of the vehicle part from a camera system;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and
providing the one or more matching results on a user interface, wherein each matching result comprises an image and information of a candidate vehicle part.
2. The computer-implemented method of claim 1, further comprising providing a user interface to order the vehicle part from a supplier.

3. The computer-implemented method of claim 1 or 2, wherein the user device is positioning-enabled and the method further comprises:
collecting positioning information from a positioning system of the user device; and
determining one or more suppliers of the vehicle part that are nearest to the user device based on the positioning information.
4. The computer-implemented method of claim 1, 2 or 3, further comprising, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following:
an overlay feature that guides the user to capture the vehicle part within a viewport;
an orientation feature to enable the camera system only when the user device is held at an acceptable orientation; and
a settings feature to adapt a setting of the camera system.
5. The computer-implemented method of any one of the preceding claims, further comprising:
receiving one or more attributes of the vehicle part in the target image; and
filtering the candidate images or matching results based on the one or more attributes.
6. The computer-implemented method of claim 5, wherein the one or more attributes include one or more of: vehicle's make, vehicle's model and a dimension of the vehicle part.

7. The computer-implemented method of any one of the preceding claims, wherein the information of the candidate vehicle part includes one or more attributes of: part type, part identifier, vehicle's make, vehicle's model and part variation.
8. The computer-implemented method of any one of the preceding claims, further comprising, prior to performing image recognition, processing the target image by performing one or more of the following:
resizing the target image;
filtering the target image;
cropping the target image to a predetermined size; and
estimating a dimension of the vehicle part in the target image.
9. The computer-implemented method of any one of the preceding claims, wherein performing image recognition comprises:
extracting one or more visual features of the vehicle part from the target image; and
comparing the target image with the candidate images based on the one or more visual features.
10. The computer-implemented method of any one of the preceding claims, wherein performing image recognition comprises:
sending the target image to a server to compare the target image with candidate images; and
receiving the one or more matching results from the server or a different server.

11. A user device for image recognition of a vehicle part, the device comprising a processor to perform the method according to any one of claims 1 to 10.
12. The user device of claim 11, further comprising a camera system to capture the target image and a display to display the user interface.
13. A computer program to cause a user device to perform the method of image recognition of a vehicle part according to any one of claims 1 to 10.
14. A computer-implemented method for image recognition of a vehicle part on a network device capable of acting as a server, the method comprising:
receiving a target image of the vehicle part from a user device;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results; and
providing the one or more matching results to the user device, wherein each result comprises an image and information of a candidate vehicle part.
15. A network device capable of acting as a server for image recognition of a vehicle part, comprising a processor to perform the method according to claim 14.
16. The network device of claim 15, further comprising an interface to receive the target image and to provide the one or more matching results.
17. A computer program to cause a network device capable of acting as a server to perform the method according to claim 14.
PCT/AU2014/050046 2013-05-21 2014-05-21 Image recognition of vehicle parts WO2014186840A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
NZ630397A NZ630397A (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts
AU2014271204A AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
AU2013101043 2013-05-21
AU2013101043A AU2013101043A4 (en) 2013-05-21 2013-05-21 Image recognition of vehicle parts
AU2013901813 2013-05-21
AU2013901813A AU2013901813A0 (en) 2013-05-21 Image recognition of vehicle parts

Publications (1)

Publication Number Publication Date
WO2014186840A1 true WO2014186840A1 (en) 2014-11-27

Family

ID=51932634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2014/050046 WO2014186840A1 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Country Status (3)

Country Link
AU (1) AU2014271204B2 (en)
NZ (1) NZ630397A (en)
WO (1) WO2014186840A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10163033B2 (en) 2016-12-13 2018-12-25 Caterpillar Inc. Vehicle classification and vehicle pose estimation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005041071A1 (en) * 2003-10-24 2005-05-06 Active Recognition Technologies, Inc. Vehicle recognition using multiple metrics
US20060240862A1 (en) * 2004-02-20 2006-10-26 Hartmut Neven Mobile image-based information retrieval system
WO2011017557A1 (en) * 2009-08-07 2011-02-10 Google Inc. Architecture for responding to a visual query

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018194508A1 (en) * 2017-04-20 2018-10-25 Wiretronic Ab Method and computer vision system for handling of an operable tool
CN110574039A (en) * 2017-04-20 2019-12-13 瓦尔卓尼克公司(瑞典) method and computer vision system for processing operational tools
US10964052B2 (en) 2017-04-20 2021-03-30 Wiretronic Ab Method and computer vision system for handling of an operable tool
CN110574039B (en) * 2017-04-20 2023-06-23 瓦尔卓尼克公司(瑞典) Method and computer vision system for processing an operable tool
EP3396566A1 (en) * 2017-04-28 2018-10-31 Fujitsu Limited Method, information processing apparatus and program
KR102023469B1 (en) * 2019-04-02 2019-09-20 이재명 Body parts manufacturing system
CN110659567A (en) * 2019-08-15 2020-01-07 阿里巴巴集团控股有限公司 Method and device for identifying damaged part of vehicle
CN111310561A (en) * 2020-01-07 2020-06-19 成都睿琪科技有限责任公司 Vehicle configuration identification method and device
WO2023178930A1 (en) * 2022-03-23 2023-09-28 北京京东乾石科技有限公司 Image recognition method and apparatus, training method and apparatus, system, and storage medium

Also Published As

Publication number Publication date
AU2014271204B2 (en) 2019-03-14
NZ630397A (en) 2017-06-30
AU2014271204A1 (en) 2015-12-03

Similar Documents

Publication Publication Date Title
AU2014271204B2 (en) Image recognition of vehicle parts
EP3125135B1 (en) Picture processing method and device
JP7058760B2 (en) Image processing methods and their devices, terminals and computer programs
US8320644B2 (en) Object detection metadata
CN105094760B (en) A kind of picture indicia method and device
CN109189879B (en) Electronic book display method and device
WO2016101757A1 (en) Image processing method and device based on mobile device
WO2021169132A1 (en) Imaging processing method and apparatus, electronic device, and storage medium
WO2019105457A1 (en) Image processing method, computer device and computer readable storage medium
US20160253298A1 (en) Photo and Document Integration
WO2010024992A1 (en) Image tagging user interface
EP2319008A2 (en) Tagging images with labels
WO2015172359A1 (en) Object search method and apparatus
EP3260998A1 (en) Method and device for setting profile picture
US9633444B2 (en) Method and device for image segmentation
GB2499385A (en) Automated notification of images with changed appearance in common content
US11531702B2 (en) Electronic device for generating video comprising character and method thereof
CN110909209A (en) Live video searching method and device, equipment, server and storage medium
WO2022068719A1 (en) Image display method and apparatus, and electronic device
CN107239207A (en) Photo display methods and device
CN110019907B (en) Image retrieval method and device
WO2022016803A1 (en) Visual positioning method and apparatus, electronic device, and computer readable storage medium
CN108009273B (en) Image display method, image display device and computer-readable storage medium
CN116546274B (en) Video segmentation method, selection method, synthesis method and related devices
AU2013101043A4 (en) Image recognition of vehicle parts

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14801831

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2014271204

Country of ref document: AU

Date of ref document: 20140521

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 14801831

Country of ref document: EP

Kind code of ref document: A1