AU2014271204B2 - Image recognition of vehicle parts - Google Patents

Image recognition of vehicle parts

Info

Publication number
AU2014271204B2
Authority
AU
Australia
Prior art keywords
vehicle part
target image
image
image recognition
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2014271204A
Other versions
AU2014271204A1 (en)
Inventor
Andrew Robert Bates
Ian Keith Bott
George Kyriakopoulos
David Nathan Woolfson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FMP Group Australia Pty Ltd
Original Assignee
FMP Group Australia Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2013101043A external-priority patent/AU2013101043A4/en
Priority claimed from AU2013901813A external-priority patent/AU2013901813A0/en
Application filed by FMP Group Australia Pty Ltd filed Critical FMP Group Australia Pty Ltd
Priority to AU2014271204A priority Critical patent/AU2014271204B2/en
Publication of AU2014271204A1 publication Critical patent/AU2014271204A1/en
Application granted granted Critical
Publication of AU2014271204B2 publication Critical patent/AU2014271204B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure concerns methods, computer programs, a user device and a network device for image recognition of vehicle parts. First, a target image of the vehicle part is obtained (320) from a camera system (240). Image recognition (360) is then performed to identify the vehicle part in the target image. This comprises comparing (366) the target image with candidate images to obtain one or more matching results. The one or more results are then provided (370) on a user interface (262). Each matching result comprises an image (710) and information (712) of a candidate vehicle part.

Description

Image recognition of vehicle parts
Cross-Reference to Related Applications
The present application claims priority from Australian provisional patent application
2013901813 and Australian innovation patent 2013101043, the contents of which are incorporated herein by reference.
Technical Field
The present disclosure concerns methods, computer programs, user device and network device for image recognition of vehicle parts.
Background
Mechanics generally rely on hard copy catalogues of a part manufacturer when ordering vehicle parts. For example, when a vehicle part needs to be replaced, mechanics generally rely on their knowledge of the vehicle part or manually search through catalogues to identify it.
Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
Throughout this specification the word 'comprise', or variations such as 'comprises' or 'comprising', will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Summary
There is provided a computer-implemented method for providing attributes of a candidate vehicle part using image recognition of a vehicle part on a user device, the method comprising:
obtaining a target image of the vehicle part from a camera system;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate
images to obtain one or more matching results, wherein performing image recognition comprises extracting one or more visual features of the vehicle part from the target image and comparing the target image with the candidate images based on the one or more visual features; and providing the one or more matching results on a user interface, wherein each matching result comprises an image of a candidate vehicle part; and providing attributes of the candidate vehicle part.
The computer-implemented method may further comprise providing a user interface to order the vehicle part from a supplier.
The user device may be positioning-enabled, in which case the method may further comprise:
collecting positioning information from a positioning system of the user device; and determining one or more suppliers of the vehicle part that are nearest to the user device based on the positioning information.
The camera system may be a camera system on the user device. In this case, the method may further comprise, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following:
an overlay feature that guides the user to capture the vehicle part within a viewport; an orientation feature to enable the camera system when the user device is held at an acceptable orientation; and a settings feature to adapt a setting of the camera system.
The method may further comprise:
receiving one or more attributes of the vehicle part in the target image; and filtering the candidate images or matching results based on the one or more attributes. In this case, the one or more attributes may include one or more of: vehicle’s make, vehicle’s model and a dimension of the vehicle part.
The method may further comprise, prior to performing image recognition, processing the target image by performing one or more of the following:
resizing the target image;
filtering the target image;
cropping the target image to a predetermined size; and estimating a dimension of the vehicle part in the target image.
Further, performing image recognition may comprise:
sending the target image to a server to compare the target image with candidate images; and receiving the one or more matching results from the server or a different server.
There is provided a user device for image recognition of a vehicle part, the device comprises a processor to perform the method described above. The user device may further comprise a camera system to capture the target image and a display to display the user interface.
There is provided a computer program to cause a user device to perform the method described above.
There is provided a method for providing attributes of a candidate vehicle part using image recognition of a vehicle part on a network device capable of acting as a server, the method comprising:
receiving a target image of the vehicle part from a user device;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results, wherein performing image recognition comprises extracting one or more visual features of the vehicle part from the target image and comparing the target image with the candidate images based on the one or more visual features; and providing the one or more results to the user device, wherein each result comprises an image; and providing attributes of the candidate vehicle part.
There is provided a network device for providing attributes of a candidate vehicle part using image recognition of a vehicle part comprising a processor to perform the method described directly above. The network device may further comprise an interface to receive the target image and to provide the one or more matching results.
There is provided a computer program to cause a network device to perform the method described directly above.
Brief Description of Drawings
Examples of image recognition of vehicle parts will now be described with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of an example system for image recognition of vehicle parts;
Fig. 2 is a block diagram of an example structure of an electronic device capable of acting as a user device in Fig. 1;
Fig. 3 is a flowchart of steps performed by an image recognition application on a user device in Fig. 1;
Fig. 4 is an example interface for capturing a target image of a vehicle part;
Fig. 5 is an example interface for providing attributes of a vehicle part;
Fig. 6(a) is an example target image;
Fig. 6(b) is the example target image in Fig. 6(a) after image processing;
Fig. 6(c) is the example target image in Fig. 6(b) after feature extraction;
Fig. 6(d) is an example set of candidate images to which the example target image in Fig. 6(c) is compared when image recognition is performed;
Fig. 7(a) is an example interface for displaying results of image recognition;
Fig. 7(b) is an example interface for filtering the results of image recognition in Fig. 7(a);
Fig. 8(a) is the example interface in Fig. 7(a) after the results are filtered according to Fig. 7(b);
Fig. 8(b) is an example interface for displaying a result;
Fig. 9 is an example interface for displaying supplier information and order placement; and
Fig. 10 is an example structure of a network device capable of acting as a server.
Detailed Description
Fig. 1 is a block diagram of an example system 100 for image recognition of vehicle parts. The system 100 comprises an Application Program Interface (API) server 110 and an image recognition server 120 that are in communication with each other and with multiple user devices 142 operated by users 140 over a communications network 150, 152.
The users 140 of the user devices 142 may be mechanics or vehicle repairers who wish to identify vehicle parts such as brake pads, brake rotors, brake shoes, brake drums, loaded caliper, reman bare caliper, semi loaded caliper, wheel cylinder, clutch master cylinder, slave cylinder, brake hydraulic hose etc.
To facilitate image recognition of vehicle parts, a software application in the form of an image recognition application 144 is installed on each user device. The user devices 142 communicate with the API server 110 to access image recognition services provided by the image recognition server 120. The API server 110 and image recognition server 120 have access to a data store 130 (either via the communications network 150 as shown or directly) to retrieve various information such as user information 132, vehicle part information 134 and supplier information 136.
In one example, image recognition of a vehicle part includes the following:
A target image of the vehicle part is obtained, for example from a camera system of the user device 142.
Image recognition is performed to identify the vehicle part in the target image.
For example, image recognition may involve comparing the target image with candidate images to obtain one or more best matching results.
One or more results are provided on a user interface, each result including an image and information of a candidate vehicle part potentially identifying the vehicle part in the target image.
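By way of illustration only, the three steps above can be sketched in code. The sketch below is not the patented implementation: the data structures, the set-based features and the Jaccard-style similarity are all assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    image_path: str      # library image of the candidate vehicle part
    part_info: dict      # e.g. part number, vehicle make/model, variations
    features: frozenset  # pre-extracted visual features (placeholder)

@dataclass
class MatchResult:
    image_path: str
    part_info: dict
    score: float         # similarity between target and candidate

def identify_part(target_features: frozenset, candidates, top_n=5):
    """Score every candidate against the target's features and return the
    best matches for display (image plus information per result)."""
    def similarity(c: Candidate) -> float:
        # Placeholder metric: overlap ratio of feature sets (Jaccard index).
        union = target_features | c.features
        return len(target_features & c.features) / len(union) if union else 0.0

    results = [MatchResult(c.image_path, c.part_info, similarity(c))
               for c in candidates]
    results.sort(key=lambda r: r.score, reverse=True)
    return results[:top_n]
```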
Advantageously, the image recognition application 144 facilitates faster and more efficient recognition of vehicle parts. Using the application 144, a user does not have to perform the manual process of searching through hard copy catalogues (which may not be complete or current) to identify the vehicle part.
Since the image recognition application 144 is able to provide access to the latest vehicle part information, this also reduces or removes the need for manufacturers and/or suppliers to print various catalogues, thereby saving cost and effort. The image recognition application 144 may be used conveniently since it is accessible by users 140 anytime and anywhere, e.g. at their workshops or at a crash site.
User Device 142
Referring now to the block diagram in Fig. 2, an example electronic device 200 capable of acting as the user device 142 will now be explained.
The image recognition application 144 may be implemented on any suitable Internet-capable user device 142, such as a smartphone (e.g. Apple iPhone 3GS, 4S, 4, 5), tablet computer (e.g. Apple iPad), personal digital assistant, desktop computer, laptop computer, and any other suitable device. The image recognition application 144 may be downloaded onto the user device 142. For example, if the user device 142 is an Apple device, the image recognition application 144 may be a downloadable “App” that is available through the Apple App Store (trade marks of Apple, Inc). Similarly, the image recognition application 144 may be downloaded from the “Blackberry App World” for Blackberry devices (trade marks of Research In Motion Limited), and from the “Android Market” or “Google Play” for Android devices (trade marks of Google, Inc.). The image recognition application 144 may also be pre-programmed on the user device 142.
'Capable of acting' means having the necessary features to perform the functions described. In one example, the user device 142 may be a mobile electronic device. The electronic device 200 in Fig. 2 comprises one or more processors 202 in communication with a memory interface 204 coupled to memory 210, and a peripherals interface 206. The memory 210 may include random access memory and/or nonvolatile memory, such as magnetic disc storage devices etc. The memory 210 stores various applications 230 including the image recognition application 144; an operating system 212; and executable instructions to perform communications functions 214; graphical user interface processing 216; sensor processing 218; phone-related functions 220; electronic messaging functions 222; web browsing functions 224; camera functions 226; and GPS or navigation functions 228.
The applications 230 implemented on the electronic device 200 include the image recognition application 144, and other applications (not shown for simplicity) such as a web browsing application, an email application, a telephone application, a video conferencing application, a video camera application, a digital camera, a photo management application, a digital music application, a digital video application, etc.
Sensors, devices and systems can be coupled to the peripherals interface 206 to facilitate various functionalities, such as the following.
Camera system 240 is coupled to an optical sensor 242, such as a charged coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, to facilitate camera functions.
Positioning system 250 collects geographical location information of the device
142 by employing any suitable positioning technology such as GPS or Assisted GPS (aGPS). GPS generally uses signals from satellites alone, while aGPS additionally uses signals from base stations or wireless access points in poor signal conditions. Positioning system 250 may be integral with the device or provided by a separate GPS-enabled device coupled to the electronic device 142.
Input/Output (I/O) system 260 is coupled to a touch-sensitive display 262 sensitive to haptic and/or tactile contact via a user, and/or other input devices such as buttons. The touch-sensitive display 262 may also comprise a multi-touch sensitive display that can, for example, detect and process a number of touch points simultaneously. Other touch-sensitive display technologies may also be used, such as a display in which contact is made using a stylus. The terms “touch-sensitive display” and “touch screen” will be used interchangeably throughout the disclosure. In embodiments where user interfaces are designed to work with finger-based contacts and gestures, the device 142 translates finger-based input (which is less precise due to the larger area of finger contact) into more precise pointer- or cursor-based input for performing actions desired by the user 140.
Wireless communications system 264 is designed to allow wireless communications over a network employing suitable communications protocols, standards and technologies such as GPRS, EDGE, WCDMA, OFDMA, Bluetooth, Wireless Fidelity (WiFi), WiMAX and Long-Term Evolution (LTE) etc.
Sensor system 268, such as an accelerometer, a light sensor and a proximity sensor, is used to facilitate orientation, lighting and proximity functions, respectively.
Audio system 270 can be coupled to a speaker 272 and microphone 274 to facilitate voice-enabled functions such as telephony functions.
Although one example implementation has been provided here, it will be appreciated that other suitable configurations capable of implementing the image recognition application 144 on the electronic device 200 may be used. It will be appreciated that the image recognition application 144 may support portrait and/or landscape modes.
Target image
Fig. 3 shows an example method performed by the image recognition system 100. According to block 310 in Fig. 3, the image recognition application 144 first provides a user interface to obtain a target image of a vehicle part to be replaced. 'Provides' is understood to mean that the image recognition application 144 operates to provide the necessary information to the user device 142 so that the user device 142 can display the user interface on the display 262.
Referring also to Fig. 4, an example user interface 400 is provided to capture a target image 410 of a vehicle part (e.g. brake pad) using the camera system (240 in Fig. 2) of the user device 142. To improve the quality of the target image 410, the user interface 400 may include one or more of the following:
An overlay feature that defines a viewport using a set of horizontal 430 and vertical 432 lines for guiding the user 140 to capture the vehicle part within the viewport. In one example, the viewport may be 120 pixels x 120 pixels. In other embodiments the overlay may not be rectangular in shape, but more in the general shape of the vehicle part. In this example the overlay would be substantially oval in shape, with the length of the oval lying horizontally. The overlay feature may be selected from a set of predefined overlays based on attributes of the vehicle part and/or vehicle received at blocks 330 and 340 described below.
An orientation feature 440 for guiding the orientation and/or angle of the user device 142 when the target image is taken. This function 440 relies on the accelerometer in the sensor system (see 268 in Fig. 2) and the user device 142 may be adjusted until a bubble representation 422 appears within the appropriate boundary of a “spirit level” (e.g. within the inner circle as shown). In one
example, the capture button 420 will only appear once the acceptable orientation is obtained, e.g. when the user device 142 is held flat (see the sketch after this list). The tip may inform the user of the particular perspective view that should be captured.
A settings feature 450 for adapting a setting of the camera system 240, such as a flash setting feature to automatically enable or disable the flash setting of the camera system to improve the quality of the target image.
Tips 460 for guiding the user 140 during the capture of the target image 410. For example, the tip may be to request the user 140 to capture the target image 410 against a white background and/or A4 paper (as shown) and/or to move to a brighter spot if the image is too dark. The tip may be also to request the user 140 to align the vehicle part in the centre of the screen.
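For illustration, the orientation feature described above (enabling capture only when the device is held flat) can be approximated from a raw accelerometer reading. The function and the 5-degree tolerance below are assumptions, not values from the patent.

```python
import math

def is_held_flat(ax: float, ay: float, az: float,
                 max_tilt_deg: float = 5.0) -> bool:
    """Return True when the accelerometer reading (in any consistent unit,
    e.g. g) indicates the device is lying flat, i.e. gravity lies almost
    entirely along the device's z-axis."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude == 0.0:
        return False
    # Angle between the device z-axis and the gravity vector.
    tilt = math.degrees(math.acos(min(1.0, abs(az) / magnitude)))
    return tilt <= max_tilt_deg

# e.g. show the capture button 420 only while is_held_flat(ax, ay, az) is True
```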
The target image 410 is then captured and stored, such as when a user’s touch input is detected on a capture button 420 on the screen 400 (see also block 320 in Fig. 3). Although one target image 410 is shown, it will be appreciated that another image and/or additional information may be requested if no match is found. Multiple candidate images of the same vehicle part may be stored in the data store 130 to improve accuracy. In another example, multiple target images of the vehicle part may be captured from different perspectives (e.g. front and rear). This generally improves the accuracy of the subsequent image processing, but makes it more resource-intensive because more images are processed.
According to blocks 330 and 340 in Fig. 3, the image recognition application 144 may provide a user interface to obtain one or more attributes of the vehicle part. The attributes may be used to improve the accuracy of the image recognition process. An example interface 500 is shown in Fig. 5, which may be used by the user to provide vehicle information (e.g. make and/or model) and size of the vehicle part (e.g. width and length). A list of vehicle manufacturers and models may be provided to the user for selection. Blocks 330 and 340 are optional, and a user may skip them by selecting ‘next’.
Processing of Target Image
According to block 350 in Fig. 3, the target image is processed to facilitate the subsequent image recognition process. This may involve one or more of the following:
Resizing the target image to reduce the data size of the image and make the subsequent image recognition process more efficient. For example, the final image size may be 15 to 35 KB.
Filtering the target image to improve its quality, for example to remove shadows and brighten the target image.
Cropping the target image to a predetermined size, for example to maximize the size of the vehicle part within the viewport, or cropping to be the same as the viewport in Fig. 4.
Estimating one or more attributes of the vehicle part captured in the target image. An example is the size of the vehicle part, which may be estimated if the vehicle part is captured against a background of predetermined size (e.g. A4 paper).
Based on the ratio between the vehicle part and the background, the width and/or length of the vehicle part may be estimated.
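A minimal sketch of block 350 follows, assuming OpenCV is available. The 420-pixel viewport and the A4 background follow the examples in this description, but the specific filtering (lightness-channel equalisation) and central cropping are illustrative choices, not the actual implementation.

```python
import cv2  # OpenCV, assumed available for this sketch

A4_WIDTH_MM = 210  # width of the predetermined A4 background

def preprocess(path: str, viewport=(420, 420)):
    """Resize, filter and crop a target image as in block 350 (a sketch)."""
    img = cv2.imread(path)

    # Resize: scale so the longer side matches the viewport,
    # reducing the data size for the later recognition step.
    scale = max(viewport) / max(img.shape[:2])
    img = cv2.resize(img, None, fx=scale, fy=scale)

    # Filter: equalise the lightness channel to brighten the image
    # and soften shadows.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = cv2.equalizeHist(lab[:, :, 0])
    img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Crop: keep a central region of the predetermined viewport size.
    h, w = img.shape[:2]
    vh, vw = viewport
    y0, x0 = max(0, (h - vh) // 2), max(0, (w - vw) // 2)
    return img[y0:y0 + vh, x0:x0 + vw]

def estimate_width_mm(part_px: float, a4_width_px: float) -> float:
    """Estimate a part dimension from its pixel ratio to the A4 sheet."""
    return A4_WIDTH_MM * part_px / a4_width_px
```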
Block 350 may be performed by the image recognition application 144 without any further input from the user. The processed target image is now ready for image recognition.
Image recognition
According to block 360 in Fig. 3, image recognition is then performed to identify the vehicle part in the target image. In particular, the target image is compared with multiple candidate images 134 stored in the data store 130 shown in Fig. 1 to identify matches.
At block 360, one of the following architectures may be implemented:
(a) “Thin client” architecture
In one example, block 360 may be performed by the user device 142 in conjunction with the API server 110 and image recognition server 120. For example, the target image is first sent to the API server 110 after block 350 in Fig. 3. The API server 110 then provides the target image to the image
recognition server 120, which then provides the matching results by sending them directly to the user device 142 or via the API server 110.
Information exchange between the server 110/120 and the user device 142 may be performed in any suitable format, such as Extensible Markup Language (XML). To facilitate faster image transfer, a ‘lazy loading’ process may be used where the loading process is performed in the background and the user 140 can continue using the application 144.
(b) “Thick client” architecture
Alternatively, block 360 may be performed by the user device 142. In this case, the user device 142 may access candidate images in the data store 130 directly via the communications network 150. The thick client architecture may be implemented if the user device 142 has the processing capability to perform image recognition within an acceptable timeframe. Otherwise, the thin client architecture may be preferred in practice.
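For illustration, the thin client round trip of architecture (a) might resemble the following sketch. The endpoint URL, field names and use of HTTP are hypothetical; the description above only requires that the target image is sent to the API server 110 and results are returned (e.g. as XML).

```python
import requests  # assumed HTTP client for this sketch

API_URL = "https://api.example.com/v1/recognise"  # hypothetical endpoint

def recognise_remotely(image_path, attributes=None):
    """Thin-client flow: upload the processed target image to the API
    server and return the matching results (e.g. an XML document)."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            files={"target": f},
            data=attributes or {},  # optional make/model/size filters
            timeout=30,
        )
    response.raise_for_status()
    return response.text  # parsed by the application into result entries
```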
Referring also to Fig. 6, an example image recognition process will be explained. In this example, an example target image 610 to be matched is shown in Fig. 6(a) whereas Fig. 6(b) shows a higher quality version 620 of the same image after it is processed at block 350 in Fig. 3.
According to block 362 in Fig. 3, one or more visual features are extracted from the vehicle part captured in the target image 610. In the example in Fig. 6(c), features 630 may relate to the shape, configuration and dimensions of the backing plate and friction pad of the brake pad.
According to block 364 in Fig. 3, a set of candidate images are identified from the data store 130. An example set 640 is shown in Fig. 6(d), which includes candidate images of brake pads that may be matched with the target in Fig. 6(c).
The set of candidate images may be filtered based on the attribute(s) provided by the user at blocks 330 and 340 in Fig. 3. For example, if the user has provided a particular vehicle’s make and/or model, only candidate images associated with those attributes are identified as candidate images. In other examples where blocks 330 and 340 were not performed, the candidate images may be the entire image library.
According to block 366 in Fig. 3, the target image is compared with the set of candidate images to identify the vehicle part in the target image. For example, this may involve comparing the visual features 630 extracted from the vehicle part with visual features 650 of each candidate image in Fig. 6(d). The similarities and/or differences of the features are compared (i.e. 630 vs 650) to obtain one or more matching results. For example, in Fig. 6(d), the most relevant result is indicated at 660.
Although one example is provided, it will be appreciated that any suitable image recognition processes may be used, such as algorithms based on spectral graph techniques to cluster visual features; algorithms based on machine learning (e.g. neural network, support vector machine); algorithms based on transformations such as FFT-based correlation algorithms; colour histograms; or any other suitable algorithm. The image recognition algorithm may include a shape comparison algorithm that compares the target image to the library of vehicle part images in the database and returns results showing matching parts in order of probability.
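As a concrete instance of one of the techniques named above, the sketch below ranks candidate images by colour-histogram correlation using OpenCV. It is illustrative only; a production system matching parts such as brake pads would likely weight shape features more heavily, as in the example of Fig. 6.

```python
import cv2

def hist_signature(img):
    """A simple visual-feature vector: a normalised 2-D hue/saturation
    histogram of the image."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    return hist

def rank_candidates(target_img, candidates):
    """Rank (name, image) candidate pairs by histogram correlation with
    the target; higher scores indicate a closer match."""
    target = hist_signature(target_img)
    scored = [(name, cv2.compareHist(target, hist_signature(img),
                                     cv2.HISTCMP_CORREL))
              for name, img in candidates]
    return sorted(scored, key=lambda s: s[1], reverse=True)
```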
The library of images in the data store 130 may be stored according to an image protocol. Multiple images may be stored for a particular vehicle part, for example those taken from different perspectives or close-ups of different sub-parts. The image protocol may include a naming convention, such as ‘DBXXX_YYY_ZZ.jpg’ where ‘DBXXX’ indicates the part number, ‘YYY’ indicates extra information (e.g. sub-parts Inner, Outer, Right Hand, Left Hand, Stealth) and ‘ZZ’ indicates the type of user device used to capture the image (e.g. iPhone 3GS, 4, 4S, 5). The images may be cropped to a particular size (e.g. 420 pixels x 420 pixels). As can be seen, information of the vehicle part captured in the image in the library is encoded in the protocol. Alternatively or in addition, information of the vehicle part captured in the image may be stored in the data store in an associated manner, such as metadata of the image or in the same or related record in the database.
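A sketch of decoding the naming convention is given below. Only the ‘DBXXX_YYY_ZZ.jpg’ pattern comes from the description; the regular expression and the example filename are assumptions.

```python
import re

# 'DBXXX_YYY_ZZ.jpg': part number, extra info, capturing device.
FILENAME_RE = re.compile(r"^(DB[^_]+)_([^_]+)_([^_.]+)\.jpg$")

def parse_library_filename(name: str) -> dict:
    """Decode the naming convention into the encoded part information,
    e.g. 'DB1170_Inner_4S.jpg' (an invented example filename)."""
    match = FILENAME_RE.match(name)
    if match is None:
        raise ValueError(f"not a library image name: {name}")
    part_number, extra_info, device = match.groups()
    return {"part": part_number, "sub_part": extra_info, "device": device}
```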
The data store 130 may be optimised to improve accuracy and speed of image recognition. For example, an index may be built that facilitates fast retrieval of candidate images.
Results interface
According to block 370, one or more results are provided to the user 140 on the user device 142. One example interface 700 is shown in Fig. 7(a). Each result represents a candidate vehicle part that best matched the target image. Therefore the set of images that form the matching results each potentially identify the vehicle part in the target image. Each result includes a thumbnail image 710 and information 712 of the candidate vehicle part, such as one or more of the vehicle part’s attributes: identifier (part number), vehicle’s make, vehicle’s model, part variations etc. This information can be extracted from the naming protocol or extracted from information stored associated with the result in the data store 130.
The results may also be ranked according to their percentage of accuracy or true match probability 716, which may be computed based on the similarities and/or differences between the target image and each result. The results may be categorised into different groups based on their percentage of accuracy 716, such as ‘recommended’ and ‘alternative’ (not shown for simplicity). For example, the top two results may be categorised as ‘recommended’ and shown at the top of the screen, and the remaining results categorised as ‘alternative’. A user 140 can scroll through the results by providing a scroll gesture on the touch screen interface, for example.
Fig. 7(b) shows an example interface 750 for filtering the results in Fig. 7(a). In particular, the results may be filtered based on attribute(s) of the vehicle part (if not provided at blocks 330 and 340 in Fig. 3). For example, the attributes may be the vehicle’s make 752, vehicle’s model 754 and any other text-based keywords 756.
Fig. 8(a) shows an example interface 800 on which the filtered result is shown. In this example, only one result matches the attributes provided. Details of the result 810 may be viewed by selecting the result, for example, by providing a tap gesture on the ‘next’ icon 814 on the interface 800 (the same icon 714 is also shown in Fig. 7(a)).
Fig. 8(b) shows an example interface 850 for displaying details of a particular result. Each result specifies one or more of the following: identifier (e.g. ‘DB1170’); images (e.g. from different perspectives); vehicle’s make and model (e.g. ‘Subaru Impreza WRX STI 4 Pot Front / 2 Pot Rear 1999-2001’); types of the vehicle part (e.g. ‘General CT’, ‘4WD’, ‘Heavy Duty’ and ‘Ultimate’); and other information such as dimensions (e.g. ‘4 pads 108 x 42 x 14 mm’). The result may be saved by selecting a
‘save result’ button 852 on the interface 850, in which case the information will be saved onto the user device 142.
The interface 850 also allows a user 140 to contact their preferred supplier by selecting the ‘contact preferred supplier’ button 854. In this case, one or more preferred suppliers may be stored by the image recognition application 144. Once the button 854 is selected, the result will be sent to the preferred supplier (e.g. via email or message) to place an order or make an enquiry.
The interface 850 further allows the user 140 to retrieve supplier information associated with the result displayed. For example, the supplier information (see 136 in Fig. 1) may include contact information (e.g. address, phone number) and/or inventory information (e.g. whether a vehicle part is available, how many are available).
The supplier information may be retrieved based on the location information collected using the positioning system (250 in Fig. 2) of the user device 142. In the example in Fig. 8(b), the interface 850 allows the user 140 to find one or more suppliers based on the location of the user device 142 by selecting the ‘find the nearest supplier’ button 856. In this example, the supplier’s location is stored as part of the supplier information 136 in the data store 130.
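For illustration, the ‘find the nearest supplier’ computation could be a straightforward great-circle sort, assuming each supplier record in the supplier information 136 carries a latitude/longitude pair (an assumption; the description does not specify the storage format).

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def nearest_suppliers(device_lat, device_lon, suppliers, n=5):
    """Sort supplier records (assumed to carry 'lat'/'lon' keys) by
    distance from the user device's position."""
    return sorted(
        suppliers,
        key=lambda s: haversine_km(device_lat, device_lon,
                                   s["lat"], s["lon"]),
    )[:n]
```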
Fig. 9 shows an example interface 900 for displaying the results of the supplier search. The interface 900 provides a list of suppliers for vehicle part ‘DB1170’, and their distance from the user device 142, contact details and inventory information. In this case, ‘Supplier A’ is the closest, but does not have the vehicle part in stock. ‘Supplier B’ is 1 km further away but has the vehicle part. Each result may be selected using the ‘next’ button to view the supplier in more detail.
The interface 900 also allows the user 140 of the user device 142 to order the vehicle part from one or more of the suppliers displayed. For example, a supplier may be selected using the ‘selection’ button 910 provided before the ‘order’ button 920 is selected. In this case, the order will be processed and sent to a computing device 162 of the relevant supplier 160 (see Fig. 1).
It will be appreciated that if there are no results found, the user 140 may be presented with an interface with the option of taking a new picture or starting over with a new search.
Results analysis
In one example, target images captured using the image recognition application 144 may be stored in the data store 130. The purpose is to analyse how users 140 use the application 144 to facilitate future developments and identification of application bugs.
The result the user 140 finds most relevant may also be sent to the server 110/120 to further improve the image recognition process. For example, the results may be used as inputs to a supervised learning process to improve the accuracy of the mapping between target images and candidate images in the data store 130. The image recognition process may be reviewed from time to time.
At the same time, supplier information 136 can also be dynamically updated based on information received from suppliers, such as by direct communications or by scraping the supplier websites.
Server 110/120
Referring to Fig. 10, an example structure of a network device capable of acting as either one or both of the servers 110 and 120 in Fig. 1 is shown. The example network device 1000 includes a processor 1010, a memory 1020 and a network interface device 1040 that communicate with each other via bus 1030. The network device 1000 is capable of communicating with the user devices 142 via the network interface device 1040 and a wide area communications network 150, for example, including an input port and an output port of the network interface device 1040.
In the example in Fig. 10, the memory 1020 stores machine-readable instructions 1024 to implement functions of the server 110. Although the data store 130 in Fig. 1 is shown as a separate entity, the information in the data store 130 may be stored in the memory 1020 on the server 110/120.
For example, the various methods, processes and functional units described herein may be implemented by the processor 1010. The term ‘processor’ is to be interpreted broadly to include a CPU, processing unit, ASIC, logic unit, or programmable gate
array etc. The processes, methods and functional units may all be performed by a single processor 1010 or split between several processors (not shown in Fig. 10 for simplicity); reference in this disclosure or the claims to a ‘processor’ should thus be interpreted to mean ‘one or more processors’.
Although one network interface device 1040 is shown in Fig. 10, processes performed by the network interface device 1040 may be split between several network interface devices. As such, reference in this disclosure to a ‘network interface device’ should be interpreted to mean ‘one or more network interface devices’. The processes, methods and functional units may be implemented as machine-readable instructions executable by one or more processors 1010, hardware logic circuitry of the one or more processors 1010 or a combination thereof.
It should be understood that computer components, processing units, engines, software modules, functions and data structures described herein may be connected directly or indirectly to each other in order to allow any data flow required for their operations. It is also noted that software instructions or modules can be implemented using various methods. For example, a subroutine unit of code, a software function, an object in an object-oriented programming environment, a computer script, computer code or firmware can be used. The software components and/or functionality may be located on a single device or distributed over multiple devices depending on the application.
It should also be understood that although the terms ‘first’, ‘second’ etc. may have been used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first user interface could be termed a second user interface, and, similarly, a second user interface could be termed a first user interface, without departing from the scope of the present disclosure. The first user interface and second user interface may not be the same user interface.
Reference in the specification to ‘one embodiment’ or ‘an embodiment’ of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase ‘in one embodiment’ appearing in various places throughout the specification are not necessarily all referring to the same embodiment. Unless the context clearly requires otherwise, words using singular or
plural number also include the plural or singular number respectively. It will be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims (15)

1. A computer-implemented method for providing attributes of a candidate vehicle part using image recognition of a vehicle part on a user device, the method comprising:
obtaining a target image of the vehicle part from a camera system;
performing image recognition to identify the vehicle part in the target image, wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results, wherein performing image recognition comprises extracting one or more visual features of the vehicle part from the target image and comparing the target image with the candidate images based on the one or more visual features;
providing the one or more matching results on a user interface, wherein each matching result comprises an image of a candidate vehicle part; and providing attributes of the candidate vehicle part.
2. The computer-implemented method of claim 1, further comprising providing a user interface to order the vehicle part from a supplier.
3. The computer-implemented method of claim 1 or 2, wherein the user device is positioning-enabled and the method further comprises:
collecting positioning information from a positioning system of the user device; and determining one or more suppliers of the vehicle part that are nearest to the user device based on the positioning information.
4. The computer-implemented method of claim 1, 2 or 3, further comprising, prior to obtaining the target image from the camera system, providing a user interface comprising one or more of the following:
an overlay feature that guides the user to capture the vehicle part within a viewport;
an orientation feature to enable the camera system only when the user device is held at an acceptable orientation; and a settings feature to adapt a setting of the camera system.
5. The computer-implemented method of any one of the preceding claims, further comprising:
receiving one or more attributes of the vehicle part in the target image; and filtering the candidate images or matching results based on the one or more attributes.
6. The computer-implemented method of claim 5, wherein the one or more attributes include one or more of vehicle’s make, vehicle’s model and a dimension of the vehicle part.
7. The computer-implemented method of any one of the preceding claims, wherein the attributes of the candidate vehicle part include one or more of: part type, part identifier, vehicle’s make, vehicle’s model and part variation.
8. The computer-implemented method of any one of the preceding claims, further comprising, prior to performing image recognition, processing the target image by
performing one or more of the following:
resizing the target image;
filtering the target image;
cropping the target image to a predetermined size; and estimating a dimension of the vehicle part in the target image.
9. The computer-implemented method of any one of the preceding claims, wherein performing image recognition comprises:
sending the target image to a server to compare the target image with candidate images; and
receiving the one or more matching results from the server or a different server.
10. A user device for image recognition of a vehicle part, the device comprises a processor to perform the method according to any one of claims 1 to 9.
11. The user device of claim 10, further comprising a camera system to capture the target image and a display to display the user interface.
12. Computer program to cause a user device to perform the method of image recognition of a vehicle part according to any one of claims 1 to 9.
13. A network device for providing attributes of a candidate vehicle part using image recognition of a vehicle part comprising a processor to:
receive a target image of the vehicle part from a user device;
perform image recognition to identify the vehicle part in the target image,
wherein image recognition comprises comparing the target image with candidate images to obtain one or more matching results, wherein performing image recognition comprises extracting one or more visual features of the vehicle part from the target image and comparing the target image with the candidate images based on the one or more visual features; and
provide the one or more matching results to the user device, wherein each result comprises an image of a candidate vehicle part; and provide attributes of the candidate vehicle part.
14. The network device of claim 13, further comprising an interface to receive the target image and to provide the one or more matching results.
15. Computer program to cause a network device to operate according to claim 13.
AU2014271204A 2013-05-21 2014-05-21 Image recognition of vehicle parts Active AU2014271204B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2014271204A AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
AU2013101043 2013-05-21
AU2013101043A AU2013101043A4 (en) 2013-05-21 2013-05-21 Image recognition of vehicle parts
AU2013901813 2013-05-21
AU2013901813A AU2013901813A0 (en) 2013-05-21 Image recognition of vehicle parts
PCT/AU2014/050046 WO2014186840A1 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts
AU2014271204A AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Publications (2)

Publication Number Publication Date
AU2014271204A1 AU2014271204A1 (en) 2015-12-03
AU2014271204B2 true AU2014271204B2 (en) 2019-03-14

Family

ID=51932634

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2014271204A Active AU2014271204B2 (en) 2013-05-21 2014-05-21 Image recognition of vehicle parts

Country Status (3)

Country Link
AU (1) AU2014271204B2 (en)
NZ (1) NZ630397A (en)
WO (1) WO2014186840A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10163033B2 (en) 2016-12-13 2018-12-25 Caterpillar Inc. Vehicle classification and vehicle pose estimation
SE1750463A1 (en) * 2017-04-20 2018-10-21 Wiretronic Ab Method and computer vision system for handling of an operable tool
JP7005932B2 (en) * 2017-04-28 2022-01-24 富士通株式会社 Search program, search device and search method
KR102023469B1 (en) * 2019-04-02 2019-09-20 이재명 Body parts manufacturing system
CN110659567B (en) * 2019-08-15 2023-01-10 创新先进技术有限公司 Method and device for identifying damaged part of vehicle
CN111310561B (en) * 2020-01-07 2024-10-01 成都睿琪科技有限责任公司 Vehicle configuration identification method and device
CN114663871A (en) * 2022-03-23 2022-06-24 北京京东乾石科技有限公司 Image recognition method, training method, device, system and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011017557A1 (en) * 2009-08-07 2011-02-10 Google Inc. Architecture for responding to a visual query

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060030985A1 (en) * 2003-10-24 2006-02-09 Active Recognition Technologies Inc., Vehicle recognition using multiple metrics
US7751805B2 (en) * 2004-02-20 2010-07-06 Google Inc. Mobile image-based information retrieval system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011017557A1 (en) * 2009-08-07 2011-02-10 Google Inc. Architecture for responding to a visual query

Also Published As

Publication number Publication date
NZ630397A (en) 2017-06-30
AU2014271204A1 (en) 2015-12-03
WO2014186840A1 (en) 2014-11-27

Similar Documents

Publication Publication Date Title
AU2014271204B2 (en) Image recognition of vehicle parts
JP7058760B2 (en) Image processing methods and their devices, terminals and computer programs
EP3125135B1 (en) Picture processing method and device
JP5734910B2 (en) Information providing system and information providing method
JP6392991B2 (en) Spatial parameter identification method, apparatus, program, recording medium, and terminal device using image
WO2016101757A1 (en) Image processing method and device based on mobile device
CN109189879B (en) Electronic book display method and device
CN108898082B (en) Picture processing method, picture processing device and terminal equipment
EP2677501A2 (en) Apparatus and method for changing images in electronic device
WO2018184260A1 (en) Correcting method and device for document image
CN104049861A (en) Electronic device and method of operating the same
GB2499385A (en) Automated notification of images with changed appearance in common content
WO2022068719A1 (en) Image display method and apparatus, and electronic device
WO2017107855A1 (en) Picture searching method and device
CN106570078A (en) Picture classification display method and apparatus, and mobile terminal
CN105335714A (en) Photograph processing method, device and apparatus
TWI798912B (en) Search method, electronic device and non-transitory computer-readable recording medium
WO2019109887A1 (en) Image processing method, electronic device, and computer readable storage medium
WO2022016803A1 (en) Visual positioning method and apparatus, electronic device, and computer readable storage medium
CN110019907B (en) Image retrieval method and device
CN108009273B (en) Image display method, image display device and computer-readable storage medium
US11238622B2 (en) Method of providing augmented reality contents and electronic device therefor
US20170076427A1 (en) Methods and devices for outputting a zoom sequence
AU2013101043A4 (en) Image recognition of vehicle parts
US20200310747A1 (en) Processing audio data

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)