WO2022110106A1 - Code scanning method and device - Google Patents

Code scanning method and device (一种扫码方法及装置)

Info

Publication number
WO2022110106A1
WO2022110106A1 (PCT/CN2020/132645)
Authority
WO
WIPO (PCT)
Prior art keywords
image
vehicle
coding pattern
target coding
target
Prior art date
Application number
PCT/CN2020/132645
Other languages
English (en)
French (fr)
Inventor
王庆文
王振阳
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20962995.5A priority Critical patent/EP4246369A4/en
Priority to CN202080004157.5A priority patent/CN112585613A/zh
Priority to PCT/CN2020/132645 priority patent/WO2022110106A1/zh
Publication of WO2022110106A1 publication Critical patent/WO2022110106A1/zh
Priority to US18/325,837 priority patent/US20230325619A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712Fixed beam scanning
    • G06K7/10722Photodetector array or CCD scanning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14131D bar codes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1465Methods for optical code recognition the method including quality enhancement steps using several successive scans of the optical code
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects

Definitions

  • the invention relates to the field of intelligent cockpits, in particular to a vehicle code scanning method and device.
  • Code scanning is a non-contact interaction method in which one party provides a coded pattern containing specific information, and the other party scans that pattern with an optical recognition device, obtains specific instructions through network communication, and performs the corresponding operations to complete the code-scanning interaction.
  • Embodiments of the present invention provide a code scanning method, a control device, and a computer-readable medium based on a vehicle-mounted camera. They address the problem that scanning codes with smart terminal devices such as mobile phones in driving scenarios usually requires unlocking the device and opening the corresponding application, which is complicated to operate, takes a long time, distracts the driver's attention, and easily causes problems such as traffic congestion. The method improves the efficiency of code-scanning operations and driving safety in driving scenarios.
  • a first aspect of the embodiments of the present invention provides a code scanning method, including:
  • acquiring a first image collected by a vehicle-mounted camera, where the first image contains a captured target coding pattern; parsing the target coding pattern; and, when no parsing information is obtained by parsing the target coding pattern in the first image, parsing the target coding pattern through a second image, where the second image is an image collected by the vehicle-mounted camera whose collection time is within a first preset time of the collection time of the first image;
  • That the first image collected by the vehicle-mounted camera contains the captured target coding pattern means that a captured target coding pattern exists in the first image, not necessarily a captured target coding pattern that can be parsed.
  • Parseable means that the information corresponding to the two-dimensional code can be recovered from the pattern. Because the vehicle may be driving quickly and the camera angle may be unfavourable, the captured target coding pattern contained in the first image may be incomplete, unclear, or distorted, and may not reproduce the original target coding pattern clearly and completely. As a result, the captured target coding pattern contained in the first image collected by the vehicle-mounted camera may not be parseable: when the first image containing the captured target coding pattern is parsed, an incomplete, low-definition, or severely distorted target coding pattern can cause the parsing to fail.
  • parsing the target coding pattern through the second image may be:
  • In a driving scene, especially when the vehicle is moving, it is difficult for the vehicle-mounted camera to collect a parseable target coding pattern in a single shot, so missed detections occur easily.
  • In this embodiment, as long as the first image collected by the vehicle-mounted camera contains the target coding pattern, whether complete or not, it enters the parsing stage, where operations such as integrity checking and alignment of the target coding pattern can be performed to reduce the missed-detection rate.
  • Around the moment when the first image obtained by the vehicle-mounted camera is detected to contain the target coding pattern, the vehicle-mounted camera may well collect an image containing a parseable target coding pattern. Therefore, when parsing the target coding pattern in the first image obtained by the vehicle-mounted camera yields no parsing information, continuing to parse the target coding pattern through a second image obtained by the vehicle-mounted camera whose collection time is within the first preset time of the collection time of the first image can improve the success rate of parsing the target coding pattern.
  • parsing the target coding pattern through the second image may be:
  • sampling, at a second preset time interval, the image data collected within the first preset time from the moment when the vehicle-mounted camera collects the first image, to obtain a second image;
  • when no parsing information is obtained from the current second image, performing the next sampling on the image data within the first preset time from the moment when the vehicle-mounted camera collects the first image, again at the second preset time interval.
  • Sampling may proceed from the collection time of the first image toward earlier times, toward later times, or alternately toward earlier and later times; if computing resources permit, sampling toward earlier and later times may also be performed simultaneously.
  • the parsing of the target coding pattern through the second image may be:
  • The target coding patterns in the second images are parsed according to the sorted order; the target coding pattern in one second image may be parsed at a time, and, if computing resources permit, the target coding patterns in several second images may be parsed simultaneously.
  • the parsing information refers to the valid target information carried by the target coding pattern, which is usually a link pointing to some content or interface.
  • When no parsing information is obtained by parsing the target coding pattern in the second image, the target coding pattern may be parsed through a third image collected by the vehicle-mounted camera after the vehicle state is changed.
  • Changing the vehicle state may be prompting the driver to adjust the vehicle state, or the vehicle adjusting its state automatically.
  • The vehicle state may be one or more of the position, speed, and orientation of the vehicle, and the shooting angle of the vehicle-mounted camera.
  • If no parsing information is obtained by parsing the target coding pattern in the second image, the imaging of the target coding pattern in the first image and the second image is poor enough to make parsing difficult.
  • Adaptively adjusting the vehicle state, such as the position, speed, and orientation of the vehicle and the shooting angle of the vehicle-mounted camera, can improve the imaging of the target coding pattern and raise the parsing success rate.
  • the change of the vehicle state can be to prompt the driver to adjust the vehicle state.
  • The vehicle state can also be adjusted automatically by the vehicle, reducing the distraction of the driver's attention and improving driving safety.
  • The time or number of attempts for parsing the target coding pattern through the third images collected by the vehicle-mounted camera may be limited, for example by setting a time threshold or a count threshold. When the time spent or the number of attempts at parsing the target coding pattern through the third images reaches the threshold, a failure is returned directly.
  • Otherwise, the vehicle state might be changed continuously and third images collected and parsed continuously, which may waste considerable time; in some extreme cases the target coding pattern in the third image cannot be parsed successfully no matter how the vehicle state is adjusted. Limiting the time or number of attempts at parsing the third images collected by the vehicle-mounted camera therefore allows the parsing operation to be stopped in time when parsing takes too long, and the driver can be prompted to interact in other ways, which improves efficiency and saves time.
  • Parsing the target coding pattern includes:
  • detecting the integrity of the target coding pattern, which may be based on the positioning icons in the target coding pattern;
  • the integrity of the target coding pattern can also be detected based on the position of the target coding pattern.
  • the aligning the target coding pattern includes: aligning the target coding pattern based on a positioning icon in the target coding pattern.
  • As long as the first image obtained by the vehicle-mounted camera contains the target coding pattern, whether complete or not, it enters the parsing stage. If the target coding pattern is incomplete or severely distorted, the parsing information cannot be obtained. Therefore, in the parsing stage, the integrity of the target coding pattern is detected first; if the target coding pattern is incomplete, the image is discarded and the next image is parsed. When a complete target coding pattern is detected, the target coding pattern is aligned; if the target coding pattern is too distorted to be aligned, the image is discarded and the next image is parsed.
  • the alignment step can also improve the success rate of analyzing the target coding pattern and reduce the missed detection rate.
  • security verification may also be performed before a corresponding operation is performed.
  • the security verification may be to confirm the willingness to perform the corresponding operation, for example, to confirm the willingness to perform the corresponding operation through one or more ways of voice response, operating the vehicle, gesture response, and head gesture response.
  • the operator's identity can also be confirmed, for example, the operator's identity can be confirmed through one or more of face recognition, iris recognition, fingerprint recognition, and voice recognition.
  • The driver or passenger can confirm the willingness to perform the corresponding operation, which ensures that the operation to be performed has been approved by the driver or passenger and can greatly improve security.
  • the human-computer interaction mode of voice response, vehicle operation, gesture response, and head posture response is easy to operate, which can reduce the distraction of the driver's attention and improve the safety of driving.
  • the identity of the operator can also be confirmed to further improve the security.
  • The target coding pattern may be a two-dimensional code or a one-dimensional barcode.
  • That the first image collected by the vehicle-mounted camera includes a target coding pattern may be:
  • that the first image collected by the vehicle-mounted camera includes a coding pattern with a specific geometric feature, where the specific geometric feature is a feature that distinguishes the target coding pattern from other coding patterns.
  • The geometric feature includes one or more of a border, a shading, a background color, and an aspect ratio.
  • Some geometric features can be added to the target coding pattern to be identified to distinguish the target coding pattern from other coding patterns, thereby reducing the probability of detecting and analyzing wrong coding patterns and reducing the interference of irrelevant coding patterns.
  • Acquiring the first image collected by the vehicle-mounted camera may be: obtaining the first image captured by the vehicle-mounted camera when a trigger condition is met.
  • The trigger condition includes one or more of: receiving an operator's instruction, the vehicle speed being lower than a threshold, reaching a specific geographical location, and a command sent over the network.
  • If the target coding patterns in the images collected by the vehicle-mounted camera were collected and parsed continuously, this could interfere with the driver's normal driving.
  • collecting and parsing the target coding information when the user does not want it may lead to some misoperations.
  • With a trigger condition, the operator can turn on the acquisition of the target coding pattern only when a code-scanning operation is required, which reduces the interference of irrelevant coding patterns, reduces the probability of misoperation of the vehicle, reduces the interference with the driver's normal driving, and improves safety.
  • a second aspect of the embodiments of the present invention provides a vehicle-mounted device, including:
  • a processor coupled to a memory, where the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the code scanning method of the first aspect or any one of the sixteen possible implementation manners of the first aspect is implemented.
  • The vehicle-mounted device provided in the second aspect of the embodiment of the present invention may be installed in the vehicle in any form, including as a pre-installed (factory-fitted) vehicle-mounted product or a post-installed (aftermarket) vehicle-mounted product.
  • In a first possible implementation manner of the second aspect of the embodiments of the present invention, the vehicle-mounted device can be integrated inside the vehicle-mounted camera; the vehicle-mounted camera then performs the code scanning according to the first aspect or any one of the sixteen possible implementation manners of the first aspect and transmits the parsing information obtained by parsing the target coding pattern to the central control system, thereby achieving all of the above beneficial effects.
  • In combination with the second aspect of the embodiment of the present invention, in a second possible implementation manner of the second aspect, the vehicle-mounted device may be integrated in the central control system of the vehicle; the central control system acquires the images collected by the vehicle-mounted camera, implements the code scanning method described in the first aspect or any one of the sixteen possible implementation manners of the first aspect, and achieves all of the above-mentioned beneficial effects.
  • a third aspect of the embodiments of the present invention provides another vehicle-mounted device, including:
  • Vehicle camera used to collect images
  • a processor coupled to a memory, where the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the code scanning method of the first aspect or any one of the sixteen possible implementation manners of the first aspect is implemented.
  • The vehicle-mounted device provided by the third aspect of the embodiments of the present invention may be installed in the vehicle in any form, including as a pre-installed (factory-fitted) vehicle-mounted product or a post-installed (aftermarket) vehicle-mounted product.
  • The vehicle-mounted device integrates a vehicle-mounted camera, a memory, and a processor, and can complete the acquisition, detection, and parsing of the target coding pattern on its own, implementing the code scanning method of the first aspect or any one of the sixteen possible implementation manners of the first aspect and realizing all of the above beneficial effects.
  • a fourth aspect of the embodiments of the present invention provides a vehicle, including:
  • Vehicle camera used to collect images
  • Display module used to display the content corresponding to the parsing information obtained by parsing the target coding pattern
  • Input module used to receive operator input
  • a processor coupled to a memory, where the memory stores program instructions, and when the program instructions stored in the memory are executed by the processor, the code scanning method of the first aspect or any one of the sixteen possible implementation manners of the first aspect is implemented.
  • a fourth aspect of the embodiments of the present invention provides a vehicle, including a vehicle-mounted camera, a memory, a processor, a display module, and an input module.
  • The vehicle can implement the code scanning method described in the first aspect or any one of the sixteen possible implementation manners of the first aspect.
  • The content corresponding to the parsing information obtained by parsing the target coding pattern is displayed through the display module, and the operator's input is received through the input module, which enables interaction with the operator after the scanning operation and allows the parsing information obtained by parsing the target coding pattern to be handled in time, improving the efficiency of the code-scanning operation and achieving all of the above beneficial effects.
  • a fifth aspect of the embodiments of the present invention provides a computer-readable medium, where the computer-readable medium includes a program which, when executed on a computer, enables the computer to implement the code scanning method described in the first aspect or any one of the first fifteen possible implementation manners of the first aspect.
  • FIG. 1 is a schematic diagram of a code scanning method provided by an embodiment of the present invention.
  • FIG. 2A is a two-dimensional code pattern provided by an embodiment of the present invention.
  • FIG. 2B is a two-dimensional barcode pattern provided by an embodiment of the present invention.
  • FIG. 2C is a one-dimensional barcode pattern provided by an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the relationship between the time when the vehicle-mounted camera collects the second image and the time when the first image is collected according to an embodiment of the present invention.
  • FIG. 4A is an implementation manner of parsing a target coding pattern based on a second image provided by an embodiment of the present invention.
  • FIG. 4B is another implementation manner of parsing a target coding pattern based on a second image provided by an embodiment of the present invention.
  • FIG. 5 is a method for parsing a target coding pattern in an image obtained by a vehicle-mounted camera provided by an embodiment of the present invention.
  • FIG. 6A is a schematic diagram of a vehicle-mounted camera capturing an image containing a target coding pattern according to an embodiment of the present invention.
  • FIG. 6B is another schematic diagram of a vehicle-mounted camera capturing an image containing a target coding pattern according to an embodiment of the present invention.
  • FIGS. 7A-7D are schematic diagrams of target coding patterns with geometric features provided by an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a vehicle-mounted device provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of another vehicle-mounted device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a vehicle provided by an embodiment of the present invention.
  • Smart terminal devices such as mobile phones are used as optical identification devices.
  • In order to successfully parse the information in the coding pattern, the vehicle-mounted camera needs to collect a coding pattern that meets the parsing conditions.
  • In driving-related scenarios, however, the vehicle is often in a driving state, and it is difficult for the in-vehicle camera to collect a coding pattern that meets the parsing conditions in a single shot.
  • the vehicle may have missed the best position to capture the coding pattern, resulting in the failure of the scanning code interaction process.
  • a code scanning method that is more suitable for driving scenes and vehicle-mounted cameras is proposed to solve the problem that it is difficult to collect and analyze coding patterns in a driving scene, and it is difficult to analyze successfully at one time.
  • a code scanning method 100 is provided, and the method includes the following steps.
  • Step 101 Obtain a first image collected by a vehicle-mounted camera, where the first image includes a captured target coding pattern;
  • Step 102 Parse the target coding pattern
  • Step 103 When no parsing information is obtained by parsing the target coding pattern in the first image, parse the target coding pattern through the second image.
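  • As an illustration only (not the embodiment's actual implementation), the Python sketch below outlines the flow of steps 101-103, using OpenCV's QR decoder as a stand-in parser and assuming hypothetical helpers get_frame_at and sample_second_images that return frames from the vehicle camera's buffer.

```python
import cv2

detector = cv2.QRCodeDetector()  # stand-in parser for the target coding pattern

def try_parse(frame):
    """Return the decoded string, or None when no parsing information is obtained."""
    if frame is None:
        return None
    data, _points, _straight = detector.detectAndDecode(frame)
    return data or None

def scan_method_100(get_frame_at, sample_second_images, t0):
    """Steps 101-103: parse the first image; on failure, fall back to second images.

    get_frame_at(t)          -> frame collected at time t (hypothetical buffer access)
    sample_second_images(t0) -> iterable of frames collected within the first preset
                                time around t0 (hypothetical sampler)
    """
    first_image = get_frame_at(t0)                 # step 101
    info = try_parse(first_image)                  # step 102
    if info is not None:
        return info
    for second_image in sample_second_images(t0):  # step 103
        info = try_parse(second_image)
        if info is not None:
            return info
    return None                                    # no parsing information obtained
```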
  • a panoramic surveillance image system usually arranges a wide-angle or fisheye camera at the front, rear, left, and right of the vehicle to combine a 360° surround view effect.
  • a first image captured by a vehicle-mounted camera is acquired.
  • The vehicle-mounted camera may reuse an existing camera at a suitable location that performs other functions. For example, the cameras on the two sides of the vehicle can be used to collect images on both sides of the vehicle, and the camera at the front of the vehicle can be used to collect images in front of the vehicle, and so on.
  • the multiplexed vehicle camera can determine the start time of the collection function according to the original function, as long as it is ensured that the vehicle camera is in the ON state when the code scanning operation needs to be performed.
  • This embodiment of the present invention does not limit this: the camera may be switched on automatically when the vehicle control system is switched on, or switched on with the driver's authorization.
  • a dedicated camera dedicated to the scanning operation of the vehicle may also be installed at a suitable location.
  • the collection function of the special camera can be automatically turned on after the vehicle control system is turned on, or it can be turned on through some trigger conditions when the code scanning operation is required. The trigger conditions of the special camera collection function will be introduced in detail below.
  • the images collected by the vehicle-mounted camera often have a certain degree of distortion.
  • the image collected by the vehicle-mounted camera can be subjected to distortion correction.
  • the algorithm integrated by the vehicle camera can perform distortion correction while collecting the image, and directly obtain the image after distortion correction, or the processor can perform distortion correction on the acquired image collected by the vehicle camera to obtain the image after distortion correction.
  • This embodiment of the present invention places no restriction on the manner in which distortion correction is performed on the images collected by the vehicle-mounted camera.
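  • The embodiment does not prescribe how distortion correction is performed; as a minimal sketch, assuming the camera matrix and distortion coefficients are known from an offline calibration (the numeric values below are placeholders), OpenCV can be used as follows.

```python
import cv2
import numpy as np

def undistort_frame(frame, camera_matrix, dist_coeffs):
    """Correct lens distortion in a frame from the vehicle-mounted camera."""
    h, w = frame.shape[:2]
    # Refine the camera matrix so the corrected image keeps a useful field of view.
    new_matrix, _roi = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, (w, h), alpha=0)
    return cv2.undistort(frame, camera_matrix, dist_coeffs, None, new_matrix)

# Placeholder calibration values for illustration only:
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3
```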
  • the target coding pattern considered in this embodiment of the method is a coding pattern that contains specific information and is expected to be collected and parsed during the scanning code interaction process.
  • the target encoding pattern may be a two-dimensional code pattern (such as a QR Code) as shown in FIG. 2A
  • the target encoding pattern may be a two-dimensional barcode pattern (such as PDF417) as shown in FIG. 2B
  • the target coding pattern can also be a one-dimensional barcode pattern (eg Code128) as shown in FIG. 2C.
  • The fact that the first image collected by the vehicle-mounted camera contains the captured target coding pattern means that a captured target coding pattern exists in the first image, not necessarily a captured target coding pattern that can be parsed.
  • Parseable means that the information corresponding to the two-dimensional code can be recovered from the pattern. Because the vehicle may be driving quickly and the camera angle may be unfavourable, the captured target coding pattern contained in the first image may be incomplete, unclear, or distorted, and may not reproduce the original target coding pattern clearly and completely. As a result, the captured target coding pattern contained in the first image collected by the vehicle-mounted camera may not be parseable: when the first image containing the captured target coding pattern is parsed, an incomplete, low-definition, or severely distorted target coding pattern can cause the parsing to fail.
  • a pattern recognition algorithm may be used to detect the target coding pattern in the first image acquired by the vehicle-mounted camera.
  • the detection of target coding patterns is regarded as a target detection task, and algorithms such as deep neural networks, support vector machines, and template matching are trained by collecting and labeling samples containing target coding patterns.
  • some incomplete target coding patterns and severely distorted target coding patterns can be added when training the target detection algorithm.
  • the trained algorithm can detect whether there is a target coding pattern in the image collected by the vehicle camera, and can also detect the existence of the target coding pattern when the target coding pattern is incomplete or severely distorted.
  • an independent target detection algorithm can be trained according to the above method, which is dedicated to the detection of target coding patterns.
  • If the vehicle-mounted camera itself has functions that perform target detection and run continuously, or if the operating scenes of its target detection function cover the scene of the code-scanning operation, the target coding pattern can be integrated as a detection class into the existing target detection algorithms available to the vehicle-mounted camera.
  • the function of detecting whether the image acquired by the vehicle-mounted camera contains the target coding pattern may be enabled simultaneously with the acquisition function of the vehicle-mounted camera, and run continuously. In other embodiments, when the acquisition function of the vehicle-mounted camera is enabled, the detection function of the target coding pattern may be enabled in a certain triggering manner, which will be described in detail below.
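  • The embodiment leaves the detector open (a deep neural network, support vector machine, template matching, or an existing in-vehicle detector). Purely as a minimal stand-in for QR-code targets, OpenCV's built-in detector can report whether a frame contains a candidate pattern and where it lies; this is not the trained detector described above.

```python
import cv2

detector = cv2.QRCodeDetector()

def contains_target_pattern(frame):
    """Return (found, corner_points) for a candidate QR-style target coding pattern.

    This is detection only: the pattern may still be incomplete or too distorted
    to parse, which is exactly the case steps 102-103 are designed to handle.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # frame assumed to be BGR
    found, points = detector.detect(gray)
    return found, points
```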
  • step 102 of the embodiment of the present invention the target encoding pattern in the first image collected by the vehicle-mounted camera is analyzed to obtain analysis information in the target encoding pattern.
  • In step 102 of the embodiment of the present invention, because the target coding pattern in the first image collected by the vehicle-mounted camera may be incomplete, too small, severely distorted, and so on, it may be difficult to parse the target coding pattern in the first image; that is, no parsing information can be obtained by parsing the first image obtained by the vehicle-mounted camera.
  • the parsing information refers to the valid target information carried by the target coding pattern, which is usually the content or interface obtained after accessing certain links.
  • In step 103 of the embodiment of the present invention, when no parsing information is obtained by parsing the target coding pattern in the first image obtained by the vehicle-mounted camera, a second image containing the target coding pattern can be obtained and the target coding pattern can be parsed based on the second image.
  • The probability of capturing a parseable target coding pattern is highest around the moment when an image collected by the vehicle-mounted camera is detected to contain the target coding pattern. Therefore, when parsing the target coding pattern in the first image acquired by the vehicle-mounted camera does not yield parsing information, the target coding pattern can be parsed through the second image, which is an image collected by the vehicle-mounted camera whose collection time is within the first preset time of the collection time of the first image.
  • For example, t0 is the time when the vehicle-mounted camera collects the first image and t1 is the first preset time; the second image may be any image captured by the vehicle-mounted camera within the period from t0-t1 to t0+t1.
  • the first preset time can be set by the system, and different first preset times can be set according to different vehicle speeds.
  • the first preset time when the vehicle speed is 30km/h can be set as 5 seconds, when the vehicle speed is greater than 30km/h, the first preset time is appropriately shortened, and when the vehicle speed is less than 30km/h, the first preset time is appropriately extended.
  • the first preset time can be set by the user.
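  • As a sketch of the speed-dependent setting described above (the 5-second value at 30 km/h comes from the example; the inverse-proportional scaling rule and the clamping limits are assumptions), the first preset time might be computed as follows.

```python
def first_preset_time(speed_kmh, base_speed=30.0, base_window=5.0,
                      min_window=1.0, max_window=10.0):
    """Window t1 (seconds) around t0: 5 s at 30 km/h, shorter when faster,
    longer when slower. The scaling rule and limits are illustrative only."""
    if speed_kmh <= 0:
        return max_window
    t1 = base_window * base_speed / speed_kmh
    return max(min_window, min(max_window, t1))

# first_preset_time(30) -> 5.0, first_preset_time(60) -> 2.5, first_preset_time(15) -> 10.0
```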
  • In step 103, when no parsing information is obtained by parsing the target coding pattern in the first image, the target coding pattern is parsed by using the second image.
  • the second preset time can be set by the system, and different second preset times can be set according to different vehicle speeds.
  • the second preset time when the vehicle speed is 30km/h can be set as 0.1 second, when the vehicle speed is greater than 30km/h, the second preset time is appropriately shortened, and when the vehicle speed is less than 30km/h, the second preset time is appropriately extended.
  • the second preset time can be set by the user.
  • In one implementation, sampling starts toward the times before t0: taking the second preset time t2 as the time interval, a second image A1 is sampled at a time before t0 and the target coding pattern in A1 is parsed. If no parsing information is obtained from A1, sampling continues at interval t2 toward earlier times to obtain a second image A2, whose target coding pattern is parsed, and then A3, A4, and so on in sequence.
  • If no parsing information can be obtained from the images sampled before time t0, sampling turns to the times after t0: a second image B1 is sampled with the second preset time t2 as the time interval and the target coding pattern in B1 is parsed. If no parsing information is obtained from B1, sampling continues at interval t2 after t0 to obtain a second image B2, whose target coding pattern is parsed, and then B3, B4, and so on in sequence. Once the parsing information in the target coding pattern is successfully obtained, the parsing process stops.
  • the second image may be sampled and analyzed in the above order, or the second image may be sampled and analyzed in other order.
  • Alternatively, the images after time t0 may be sampled and parsed first and the images before time t0 afterwards. That is, taking the second preset time t2 as the time interval, a second image B1 is sampled at a time after t0 and the target coding pattern in B1 is parsed. If no parsing information is obtained from B1, sampling continues at interval t2 after t0 to obtain a second image B2, whose target coding pattern is parsed, and so on.
  • If no parsing information can be obtained from the images sampled after time t0, sampling turns to the times before t0: a second image A1 is sampled with the second preset time t2 as the time interval and its target coding pattern is parsed; if no parsing information is obtained, sampling continues at interval t2 before t0 to obtain a second image A2, and then A3, A4, and so on in sequence.
  • The second images may also be sampled alternately on the two sides of t0 and parsed as they are obtained. For example, taking the second preset time t2 as the time interval, a second image B1 is sampled at a time after t0 and the target coding pattern in B1 is parsed. If no parsing information is obtained from B1, a second image A1 is sampled at interval t2 at a time before t0 and its target coding pattern is parsed.
  • If no parsing information is obtained from A1 either, sampling returns at interval t2 to the times after t0 to obtain a second image B2, whose target coding pattern is parsed, and so on, alternating between the two sides. Once the parsing information in the target coding pattern is successfully parsed, the parsing process stops.
  • If computing resources permit, the time t0 when the vehicle-mounted camera collects the first image can also be taken as the starting point, the second images can be sampled forward and backward simultaneously, and the target coding patterns in the sampled second images can be parsed. For example, taking the second preset time t2 as the time interval, second images A1 and B1 are first sampled and the target coding patterns in A1 and B1 are parsed. If no parsing information is obtained from A1 and B1, sampling continues at interval t2 to obtain second images A2 and B2, whose target coding patterns are then parsed, and so on.
  • The sampling manner of the second image is not limited to the above manners; other common sampling manners, and sampling manners that those skilled in the art can obtain without creative effort, are also possible.
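  • The sampling orders described above (backward first, forward first, alternating, or both sides at once) can be sketched as a generator of sampling times; fetching frames by timestamp is assumed to be handled elsewhere.

```python
def second_image_times(t0, t1, t2, order="alternate"):
    """Yield sampling times within [t0 - t1, t0 + t1] at interval t2.

    order: "backward"  (A1, A2, ..., then B1, B2, ...),
           "forward"   (B1, B2, ..., then A1, A2, ...),
           "alternate" (B1, A1, B2, A2, ...),
           "both"      (pairs (A_k, B_k) sampled simultaneously).
    """
    n = int(t1 // t2)
    before = [t0 - k * t2 for k in range(1, n + 1)]  # A1, A2, ...
    after = [t0 + k * t2 for k in range(1, n + 1)]   # B1, B2, ...
    if order == "backward":
        yield from before + after
    elif order == "forward":
        yield from after + before
    elif order == "alternate":
        for a, b in zip(before, after):
            yield b
            yield a
    elif order == "both":
        for a, b in zip(before, after):
            yield (a, b)
```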
  • In another implementation, the images collected by the vehicle-mounted camera whose collection times are within the first preset time of the first image, that is, the images collected between time t0-t1 and time t0+t1, are sampled with the second preset time t2 as the time interval to obtain multiple second images (A1, A2, ..., An-1, An, B1, B2, ..., Bn-1, Bn, etc.). The multiple second images obtained by sampling are sorted, and the target coding patterns in the second images are parsed in sequence according to the sorted order. The second images may be parsed one by one in the sorted order, or, if computing resources permit, several second images may be parsed at the same time.
  • Once the parsing information is successfully obtained, the parsing of the target coding pattern is stopped.
  • the index related to the imaging effect and the difficulty of analysis can be used as the sorting rule.
  • For example, the multiple second images may be sorted according to evaluation indicators such as exposure and sharpness. It should be understood that the sorting method is not limited in this embodiment, and other sorting indicators commonly used in the art are also possible.
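  • Exposure and sharpness, mentioned above as possible sorting indicators, could for example be approximated by mean brightness and the variance of the Laplacian; the weighting below is an arbitrary illustration rather than the embodiment's actual indicator.

```python
import cv2

def sharpness(gray):
    """Variance of the Laplacian: higher means sharper."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def exposure_score(gray, target_mean=128.0):
    """1.0 when mean brightness is near mid-grey, lower when over- or under-exposed."""
    return 1.0 - abs(float(gray.mean()) - target_mean) / target_mean

def sort_second_images(frames, w_sharp=1.0, w_expo=100.0):
    """Order frames from easiest to hardest to parse according to this rough score."""
    def score(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return w_sharp * sharpness(gray) + w_expo * exposure_score(gray)
    return sorted(frames, key=score, reverse=True)
```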
  • the instructions in the parsed information can include any action that the onboard control system can perform.
  • operations may include: scanning a code to pay, performing vehicle identity authentication, registering vehicle information in a given system, downloading a given in-vehicle application, and other operations are also possible.
  • a method 500 for analyzing information in a target coding pattern through an image including a target coding pattern collected by a vehicle-mounted camera is provided.
  • the method may include the following steps.
  • Step 501 Detect the position of the target coding pattern
  • Step 502 Detect the integrity of the target coding pattern
  • Step 503 Align the target coding pattern
  • Step 504 Parse the target coding pattern.
  • step 501 detects the position of the target coding pattern.
  • Detecting the position of the target coding pattern can reuse the algorithm used when detecting the target coding pattern in step 102.
  • a relatively simple algorithm such as a sliding window can also be used to detect the position of the target coding pattern. It should be understood that other algorithms commonly used in the art that can implement the detection of the position of the target coding pattern are also possible.
  • the rectangular area containing the target coding pattern can be cut out for use in subsequent steps.
  • the position of the target coding pattern can be used to detect the integrity of the target coding pattern, or the step of detecting the position of the target coding pattern can be skipped, and the integrity of the target coding pattern can be detected by other methods.
  • the integrity of the target coding pattern is detected.
  • the integrity of the target coding pattern can be judged based on the positioning icon.
  • In the two-dimensional code pattern shown in FIG. 2A, the three nested-square ("回"-shaped) icons located at the upper left, upper right, and lower left are the positioning icons of the two-dimensional code; when all three positioning icons are detected, the two-dimensional code pattern can be considered complete.
  • In the two-dimensional barcode pattern shown in FIG. 2B, the black bars and lines on the left and right are the front and rear positioning icons of the two-dimensional barcode, and their detection can be realized with edge detection operators such as Sobel; when both the front and rear positioning icons are detected, the two-dimensional barcode pattern can be considered complete.
  • In the one-dimensional barcode pattern shown in FIG. 2C, the vertical lines on the left and right are the front and rear positioning icons; when both are detected, the one-dimensional barcode pattern can be considered complete (a one-dimensional barcode does not require vertical integrity detection).
  • The integrity of the target coding pattern can also be judged based on its position: for example, when the target coding pattern touches the edge of the image, it is considered incomplete, and when it has no intersection with the image edges, it is considered complete. It should be understood that other methods commonly used in the art for detecting the integrity of the target coding pattern are also possible.
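  • The position-based completeness check is simple to sketch: if the detected pattern's bounding box touches the image border, treat it as incomplete. (An icon-based check would instead count the detected positioning icons and is not shown here.)

```python
import numpy as np

def is_complete_by_position(points, image_shape, margin=2):
    """points: Nx2 corners of the detected pattern; image_shape: (h, w, ...).

    Returns False when the pattern's bounding box touches (or nearly touches)
    the image border, following the position-based check described above.
    """
    if points is None or len(points) == 0:
        return False
    pts = np.asarray(points).reshape(-1, 2)
    h, w = image_shape[:2]
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return (x_min > margin and y_min > margin and
            x_max < w - 1 - margin and y_max < h - 1 - margin)
```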
  • the prior art cannot obtain analytical information from an incomplete target coding pattern. Therefore, if it is detected that the target coding pattern is incomplete, the image is discarded and the next image containing the target coding pattern is directly parsed.
  • an alignment operation can be performed on the target coding pattern to improve the parsing success rate of the target coding pattern.
  • The target coding pattern can be aligned based on the positioning icons: the positions of the positioning icons in the target coding pattern are first identified, and the positioning icons are then transformed to specific positions through an affine transformation, so as to obtain an upright, centered, appropriately sized target coding pattern.
  • the deep neural network, support vector machine, template matching and other algorithms can be trained by collecting and labeling samples containing the location and size of the positioning icon. The trained algorithm can detect the position and size of the positioning icon. Then, the alignment of the target coding pattern is realized. It should be understood that other methods commonly used in the art to achieve target coding pattern alignment are also possible.
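  • For a QR code, one way to realise the alignment described above is to map the three finder-pattern centres onto canonical positions with an affine transform; the centre coordinates are assumed to come from whichever positioning-icon detector is used, and the output size is arbitrary.

```python
import cv2
import numpy as np

def align_qr_by_finder_centers(image, centers, out_size=330, margin=30):
    """Align a QR code given the centres of its three finder patterns.

    centers: [(x, y) top-left, (x, y) top-right, (x, y) bottom-left] in image
    coordinates, in that order. Returns an upright, centred crop of side out_size.
    """
    src = np.float32(centers)
    dst = np.float32([[margin, margin],                # top-left finder
                      [out_size - margin, margin],     # top-right finder
                      [margin, out_size - margin]])    # bottom-left finder
    m = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(image, m, (out_size, out_size))
```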
  • As in step 502, if the perspective or projective distortion of the target coding pattern is too severe for it to be aligned, the image is discarded and the next image containing the target coding pattern is parsed directly.
  • Step 504 parses the complete, aligned target coding pattern.
  • Algorithms commonly used in the field, such as global binarization and hybrid binarization, can be used to convert the black and white cells of the target coding pattern into binary numbers, which are then decoded into character information, for example a link to some object. If the target coding pattern is parsed successfully, the corresponding instruction in the parsing information is obtained; if parsing fails, that is, the parsing information in the target coding pattern cannot be obtained, a parsing-failure status is returned.
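  • As a sketch of step 504, again with OpenCV's QR decoder standing in for the embodiment's decoder, a global (Otsu) threshold can be tried first and an adaptive threshold afterwards as a rough analogue of the hybrid binarization mentioned above.

```python
import cv2

detector = cv2.QRCodeDetector()

def parse_aligned_pattern(aligned_gray):
    """Try to decode an aligned, 8-bit grayscale target coding pattern.

    Returns the decoded text, or None as the parsing-failure status."""
    _, global_bin = cv2.threshold(aligned_gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    adaptive_bin = cv2.adaptiveThreshold(aligned_gray, 255,
                                         cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 31, 2)
    for candidate in (aligned_gray, global_bin, adaptive_bin):
        data, _points, _straight = detector.detectAndDecode(candidate)
        if data:
            return data   # the parsing information, e.g. a link
    return None
```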
  • the state of the vehicle can be appropriately changed, and the target coding pattern can be parsed through the third image collected by the vehicle-mounted camera after the vehicle state is changed.
  • the purpose of changing the vehicle state is to adapt the possible vehicle states so that the vehicle camera can obtain a better acquisition effect. For example, the position, speed, orientation of the vehicle, and the shooting angle of the in-vehicle camera can be adjusted adaptively.
  • For example, when the vehicle speed is too high for the target coding pattern to be collected clearly, the vehicle speed can be reduced appropriately; when the distance between the vehicle and the target coding pattern is too small, so that the collected target coding pattern is incomplete, the distance between the vehicle and the target coding pattern can be increased appropriately; and when the orientation of the vehicle and/or the shooting angle of the on-board camera causes serious distortion of the collected target coding pattern, the orientation of the vehicle and/or the shooting angle of the on-board camera can be adjusted in a direction that favours image collection.
  • The above aspects can be adjusted at the same time, or only one aspect at a time; if the target coding pattern in the image obtained by the vehicle-mounted camera still cannot be parsed after one aspect has been adjusted, another aspect is adjusted. It should be understood that, in order to obtain a better collection effect, other adaptive adjustments of the vehicle state are also possible.
  • the driver may be prompted to manually adjust the above-mentioned vehicle state to obtain a better collection effect.
  • the automatic driving function can control the vehicle to automatically complete the above adjustment under the condition that there is no safety risk.
  • The time or the number of attempts for parsing the target coding pattern through the third images collected by the vehicle-mounted camera may be limited. For example, a threshold may be set; when the time spent or the number of attempts reaches the set threshold, failure is returned directly. At this point, the driver can be prompted to interact in other ways to improve efficiency.
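  • A limit on the time or number of third-image attempts, as described above, might look like the loop below; adjust_vehicle_state, capture_third_image, and try_parse are hypothetical hooks into the vehicle, the camera, and the parser, and the thresholds are arbitrary.

```python
import time

def parse_with_vehicle_adjustment(try_parse, adjust_vehicle_state, capture_third_image,
                                  max_attempts=5, max_seconds=30.0):
    """Repeatedly adjust the vehicle state and parse third images, but give up once
    either the attempt count or the time budget is exhausted."""
    deadline = time.monotonic() + max_seconds
    for _ in range(max_attempts):
        if time.monotonic() > deadline:
            break
        adjust_vehicle_state()                  # prompt the driver or adjust automatically
        info = try_parse(capture_third_image())
        if info is not None:
            return info
    return None   # return failure; the driver can be prompted to interact another way
```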
  • a step of security authentication may be added before the corresponding operation is performed according to the instructions in the parsing information, so as to improve security and prevent malicious attacks.
  • Interaction methods that interfere little with the driving operation and are simple to perform can be adopted, and the willingness to perform the corresponding operation can be confirmed through interaction with the driver or other persons in the vehicle, for example by means of a voice response, operating the vehicle, a gesture response, or a head-gesture response. It should be understood that other human-computer interaction methods commonly used in the art are also possible.
  • identity authentication may be added to the security authentication. The identity of the operator can be verified by verifying some specific human characteristics. For example, the operator's identity is verified by methods such as face recognition, iris recognition, fingerprint recognition, and voice recognition. It should be understood that other methods commonly used in the art for verifying the operator's identity are also possible.
  • the party providing the target coding pattern should consider factors such as the position, orientation, and viewing angle of the vehicle camera, and choose a position that can be completely and clearly identified to arrange the target coding pattern.
  • In the code-scanning process there is a position by which, at the latest, the collection and parsing of the target coding pattern must be completed as the vehicle passes it; this position is called the scanning position.
  • the target code pattern can be sprayed on the ground in front of the scanning position, and the image containing the target code pattern in front of the vehicle can be captured by a vehicle-mounted camera located in front of the vehicle.
  • the target coding pattern sprayed on the ground should have the appropriate size for better acquisition.
  • the length and width of the target code pattern sprayed on the ground should be greater than 60 cm.
  • the target code pattern can be sprayed or posted on a sign near the scanning position.
  • the sign can be an independent sign or a sign shared with other information.
  • On-board cameras on the left and right sides of the vehicle capture images containing the target-encoded pattern located on the sign.
  • the target coding pattern sprayed on or posted on the sign should also be of the right size for better capture.
  • the length and width of the target code pattern sprayed or posted on the sign should be greater than 40 cm.
  • the target coding pattern can be distinguished from other coding patterns by adding some geometric features to the target coding pattern.
  • the geometric features may include any features that enable the target coding pattern to be differentiated from other coding patterns, such as borders, shading, background color, aspect ratio, and the like.
  • Figure 7A shows the target coding pattern after adding a frame
  • Figure 7B shows the target coding pattern after adding shading
  • Figure 7C shows the target coding pattern after adding background color
  • The trained target detection algorithm can then identify only target coding patterns that contain the specific geometric features, avoiding interference from other coding patterns.
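  • A simple post-detection filter on one such geometric feature, the aspect ratio, is sketched below; the expected ratio and tolerance are assumptions, and a border or background-colour feature would be checked similarly by sampling pixels around the detected box.

```python
import numpy as np

def has_target_aspect_ratio(points, expected_ratio=1.0, tol=0.2):
    """Accept a detected code only when its bounding box matches the expected
    width/height ratio of the target coding pattern."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 2)
    w = float(pts[:, 0].max() - pts[:, 0].min())
    h = float(pts[:, 1].max() - pts[:, 1].min())
    return h > 0 and abs(w / h - expected_ratio) <= tol
```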
  • the vehicle-mounted camera dedicated to the scanning operation may be turned on when the vehicle-mounted control system is turned on, or may be turned on by setting a certain trigger condition.
  • the function of detecting whether the image obtained by the vehicle camera contains the target coding pattern can be enabled simultaneously with the acquisition function of the vehicle camera.
  • If an on-board camera that performs other functions is reused to perform the scanning operation, the function of detecting whether the image obtained by the on-board camera contains the target coding pattern can be turned on together with the acquisition function of the on-board camera, so that the target coding pattern can be captured at any time.
  • the detection function of the target coding pattern can be enabled by setting a certain trigger condition.
  • For example, the detection function can be triggered when the operator operates the vehicle, makes a specific gesture, issues a voice command, and so on.
  • A speed threshold can also be set, so that the detection function of the target coding pattern is triggered when the vehicle's driving speed falls below the threshold. A triggering method based on geographic location information can likewise be used: locations where code scanning may be required, such as gas stations, toll stations, and charging piles, are preset on the map, and the detection function of the target coding pattern is triggered when the vehicle arrives at a preset location.
  • a network-based triggering method can also be used, and the party providing the target coding pattern issues a code scanning instruction through the network to trigger the detection function of the target coding pattern. It should be understood that other common triggering methods that can be obtained by those skilled in the art without creative efforts are also possible.
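  • The trigger conditions (operator instruction, speed threshold, preset locations, network command) could be combined as below; the speed threshold, the 200 m geofence radius, and the input helpers are assumptions made for illustration.

```python
import math

def near_preset_location(position, preset_locations, radius_m=200.0):
    """True when the vehicle is within radius_m of any preset scan location.
    position and preset_locations are (x, y) coordinates in metres in a local frame."""
    px, py = position
    return any(math.hypot(px - x, py - y) <= radius_m for x, y in preset_locations)

def should_enable_code_detection(operator_request, speed_kmh, position,
                                 preset_locations, network_scan_command,
                                 speed_threshold_kmh=20.0):
    """Enable target-coding-pattern detection when any trigger condition is met."""
    return bool(operator_request
                or speed_kmh < speed_threshold_kmh
                or near_preset_location(position, preset_locations)
                or network_scan_command)
```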
  • a vehicle-mounted device 800 is provided.
  • the in-vehicle device 800 includes: a memory 801 and a processor 802 (wherein the number of processors 802 in the in-vehicle device 800 may be one or more, and one processor is taken as an example in FIG. 8 ).
  • Memory 801 may include read-only memory and random access memory, and provides instructions and data to processor 802 .
  • a portion of memory 801 may also include non-volatile random access memory (NVRAM).
  • The memory 801 stores operation instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, where the operation instructions may include various operation instructions for implementing various operations.
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 802 or implemented by the processor 802 .
  • the processor 802 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 802 or an instruction in the form of software.
  • The above-mentioned processor 802 may be a general-purpose processor, a digital signal processor (DSP), a microprocessor, or a microcontroller, and may further include an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 802 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 801, and the processor 802 reads the information in the memory 801, and completes the steps of the above method in combination with its hardware.
  • the in-vehicle device 800 may be installed in the vehicle in any form.
  • the in-vehicle device 800 may be a pre-installed in-vehicle product, for example, installed by the vehicle manufacturer (OEM) before the vehicle leaves the factory.
  • the in-vehicle device 800 may be a post-installed (aftermarket) in-vehicle product, for example, installed through a channel such as a 4S dealership after the vehicle leaves the factory.
  • the vehicle-mounted device 800 can be integrated inside the vehicle-mounted camera to form a smart camera; the camera completes image acquisition and the above-mentioned code scanning method, and transmits the parsing information obtained by parsing the target coding pattern to the central control unit, which then completes subsequent operations such as displaying the parsing information and handling user operations.
  • alternatively, the in-vehicle device 800 may be integrated in the central control system of the vehicle; the central control system acquires the images captured by the in-vehicle camera, performs the above-mentioned code scanning operation, and completes subsequent operations such as displaying the parsing information and handling user operations.
  • the vehicle-mounted device 900 includes: a vehicle-mounted camera 901 , a memory 902 and a processor 903 .
  • the in-vehicle cameras used to capture images may reuse existing cameras at suitable locations that perform other functions.
  • for example, the cameras on the two sides can capture images on the two sides of the vehicle, and the camera at the front of the vehicle can capture images in front of the vehicle.
  • the multiplexed vehicle camera can determine the start time of the collection function according to the original function, as long as it is ensured that the vehicle camera is in the ON state when the code scanning operation needs to be performed.
  • This embodiment of the present invention does not limit this; for example, the acquisition function may be switched on automatically when the vehicle control system is switched on, or it may be switched on after the driver authorizes it.
  • a dedicated camera for the vehicle's code scanning operation may also be installed at a suitable location. The acquisition function of the dedicated camera can be turned on automatically after the vehicle control system is turned on, or it can be turned on through certain trigger conditions when a code scanning operation is required, for example, by the operator operating the head unit, making a specific gesture, or issuing a voice command.
  • a speed threshold can be set, and when the vehicle's driving speed is lower than the threshold, the detection function of the target coding pattern is triggered.
  • a triggering method based on geographic location information can also be used: locations where code scanning may be required, such as gas stations, toll stations and charging piles, are preset on the map, and the detection function of the target coding pattern is triggered when the vehicle arrives at one of the preset locations.
  • a network-based triggering method can also be used, in which the party providing the target coding pattern issues a code-scanning instruction over the network to trigger the detection function of the target coding pattern. It should be understood that other common triggering methods obtainable by those skilled in the art without creative effort are also possible.
  • the in-vehicle device 900 integrates an in-vehicle camera, a memory and a processor, and can serve as a smart camera that completes image acquisition and the code scanning operation and transmits the parsing information obtained by parsing the target coding pattern to the central control unit; the central control system then completes follow-up operations such as displaying the parsing information and handling user operations.
  • a vehicle 1000 which specifically includes the following modules:
  • Vehicle-mounted camera 1001 configured to capture images.
  • the use and triggering conditions of the vehicle-mounted camera are as described in the vehicle-mounted device 900 , which will not be repeated here.
  • Display module 1004 configured to display the parsing information obtained by parsing the target coding pattern
  • Input module 1005 configured to receive an operator's input, and then determine an operation to be performed on the parsing information.
  • the display module 1004 includes any device in the cockpit of the vehicle that can realize a display function, such as an on-board screen, a head-up display system, a rear screen, and the like. It should be understood that other display devices commonly used in cabins are also possible.
  • the display module 1004 displays the parsing information obtained by parsing the target coding pattern, and transmits the information to the driver and/or passengers.
  • the input module 1005 includes any device in the vehicle cabin that can realize human-computer interaction functions, such as a touch screen, a voice receiving system, a camera, and other sensors.
  • the user's instruction about which operation to perform in response to the indication in the parsing information is received through human-computer interaction.
  • instructions can be conveyed through voice response, operating the in-vehicle head unit, gesture response, head-pose response, and the like. It should be understood that other human-computer interaction methods and devices commonly used in the vehicle cockpit are also possible.
  • the vehicle 1000 collects images through the vehicle-mounted camera 1001 to obtain an image containing the captured target coding pattern; the above-mentioned code scanning method is implemented through the memory 1002 and the processor 1003, and the parsing information is obtained by parsing the target coding pattern in the collected image; the content or interface corresponding to the parsing information is displayed to the operator through the display module; and the operator's operation on that content or interface is obtained through the input module, thereby completing the processing of the parsing information.
  • the vehicle 1000 can thus centrally implement the entire process from acquisition through parsing to processing, as sketched below.
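  • The following Python sketch illustrates, under heavy simplification, how that acquisition–parsing–display–input loop might be wired together using OpenCV's built-in QR decoder. The console print and prompt merely stand in for the display module 1004 and the input module 1005, and every function name here is illustrative rather than part of the disclosed design.

```python
import cv2

def parse_code(frame):
    """Try to decode a QR code in one frame; return the decoded text or None."""
    text, _, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    return text or None

def scan_and_confirm(camera_index=0, max_frames=120):
    """Capture frames, parse the first decodable code, then ask the operator to confirm."""
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            info = parse_code(frame)
            if info:
                # Display module: printed here; in a vehicle this would go to the HMI screen.
                print("Parsed information:", info)
                # Input module: a console prompt stands in for voice/gesture/touch confirmation.
                return input("Proceed with this operation? [y/N] ").strip().lower() == "y"
    finally:
        cap.release()
    return False
```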
  • Embodiments of the present invention also provide a computer-readable storage medium. From the description of the above implementations, those skilled in the art can clearly understand that the present application can be implemented by software plus the necessary general-purpose hardware; of course, it can also be implemented by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function completed by a computer program can easily be implemented by corresponding hardware, and the specific hardware structures used to implement the same function can also vary, for example analog circuits, digital circuits or dedicated circuits. However, for this application, a software implementation is the better implementation in most cases.
  • Based on this understanding, the technical solutions of the present application, or the part thereof that contributes to the prior art, can essentially be embodied in the form of a software product.
  • the computer software product is stored in a readable storage medium, such as a computer floppy disk, USB flash drive, removable hard disk, ROM, RAM, magnetic disk or optical disc, and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the various embodiments of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that a computer can store, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVD), or semiconductor media (e.g., a solid-state drive (SSD)), and the like.

Abstract

A code scanning method based on a vehicle-mounted camera, relating to the field of smart cockpits. The method includes: capturing a first image through the vehicle-mounted camera; parsing a target coding pattern contained in the first image; and, when no parsing information is obtained, selecting from the image data acquired by the vehicle-mounted camera other images that are temporally close to the first image and parsing them in an attempt to obtain the parsing information in the target coding pattern. The code scanning method based on a vehicle-mounted camera can solve the problems of scanning codes with a smart terminal device in driving scenarios, namely complex operation, long time consumption, a low scanning success rate while the vehicle is moving, driver distraction caused by manual operation of the terminal, and a tendency to cause traffic congestion, thereby improving the efficiency of code scanning operations in driving scenarios and the safety of driving.

Description

一种扫码方法及装置 技术领域
本发明涉及智能座舱领域,尤其涉及一种车辆扫码方法及装置。
背景技术
扫码是一种非接触的交互方式,在该交互方式中,一方提供含有特定信息的编码图案,另一方使用光学识别设备扫描前者提供的含有特定信息的编码图案,通过网络通信获取特定指令并执行相应操作,完成扫码交互流程。
在驾驶场景中,也存在很多扫码交互的操作。目前,在驾驶相关的扫码交互场景中,主要以手机等智能终端设备作为光学识别设备,采集并解析目标编码图案中的特定信息。而使用手机等智能终端设备扫码往往需要解锁并打开对应的应用软件,操作复杂,耗时较长,分散驾驶员注意力,容易引起交通阻塞等情况。若车辆正处于坡道等特殊路况,采用手机等智能终端设备进行扫码操作容易分散驾驶员注意力,进而造成溜车等安全隐患。
发明内容
本发明实施例提供一种基于车载摄像头的扫码方法、控制装置及计算机可读介质,以解决因驾驶场景中使用手机等智能终端设备扫码往往需要解锁并打开对应的应用软件,操作复杂,耗时较长,分散驾驶员注意力,容易引起交通阻塞等问题,提升驾驶场景中扫码操作的效率和驾驶的安全性。
本发明实施例第一方面提供一种扫码方法,包括:
获取车载摄像头采集的第一图像,该第一图像包含拍摄到的目标编码图案;
解析该目标编码图案;
当解析该目标编码图案没有获得解析信息时,通过第二图像解析所述目标编码图案,该第二图像为车载摄像头采集的图像,其采集时间与第一图像的采集时间的间隔在第一预设时间内;
其中,车载摄像头采集的第一图像中包含拍摄到的目标编码图案是指该第一图像中存在拍摄到的目标编码图案,而不是存在拍摄到的可解析的目标编码图案。这里,可解析是指能够从图案中解析出二维码对应的信息,由于车辆是处于快速行驶状态的,以及摄像头的角度问题,所述第一图像中所包括的拍摄到的目标编码图案可能存在不完整或者不清晰或者畸变等问题,不一定能够清晰完整的再现原本的目标编码图案。这就导致车载摄像头采集的第一图像中包含的拍摄到的目标编码图案不一定是可解析的,因此,可能存在对包含拍摄到目标编码图案的第一图像进行解析时,由于第一图像中的目标编码图案不完整、清晰度较低、畸变情况较严重等问题,导致解析失败的情况。
其中,通过第二图像解析所述目标编码图案,可以是:
每次解析一个第二图像中的目标编码图像,在没有获取到解析信息的情况下,继续解析下一个第二图像;或者
在运算条件允许的情况下,同时解析多个第二图像中的目标编码图像。
在驾驶场景中,特别是在车辆行驶的过程中,很难一次采集到可解析的目标编码图像,因此容易造成漏检的问题。在本发明的实施例中,只要车载摄像头采集的第一图像包含目标编码图案,无论其是否完整,是否能够解析,均进入解析阶段,可以在解析阶段对目标编码图案进行完整性的检验及对齐等操作,以降低漏检率。
在检测到车载摄像头获取的第一图像中包含目标编码图案的时刻的临近时间前后,车载摄像头是有可能采集到包含能够解析的目标编码图案的图像,因此,在解析车载摄像头获取的第一图像中的目标编码图案没有获取到解析信息时,通过车载摄像头获取的,采集时间与第一图像的采集时间的间隔在第一预设时间内的第二图像继续解析该目标编码图案,能够提高目标编码图案解析的成功率。
结合第一方面,在第一方面第一种可能的实现方式中,通过第二图像解析目标编码图案,可以是:
根据第二预设时间间隔对距离车载摄像头采集第一图像的时刻在第一预设时间内的图像数据进行采样,获取一个第二图像;
通过该第二图像解析目标编码图案;
当解析该第二图像中的目标编码图案没有获取到解析信息,根据第二预设时间间隔对距离车载摄像头采集第一图像的时刻在第一预设时间内的图像数据进行下一次采样。
由于越靠近检测到车载摄像头获取的第一图像中包含目标编码图案的时刻,车载摄像头采集到包含能够解析的目标编码图案的图像的几率越大,因此,在对图像数据进行采样时,可以先从第一图像的采集时刻开始向该时刻以前的时刻进行采样,也可以先从第一图像的采集时刻开始向该时刻以后的时刻进行采样,还可以从第一图像的采集时刻开始交替向该时刻以前和以后的时刻进行采样,在运算条件允许的情况下,还可以从第一图像的采集时刻开始同时向该时刻以前和以后的时刻进行采样。通过上述第二图像的解析方式,能够提高解析效率,缩短扫码操作的时间。
结合第一方面,在第一方面第二种可能的实现方式中,所述通过第二图像解析所述目标编码图案,可以是:
根据第二预设时间间隔对距离车载摄像头采集第一图像的时刻在第一预设时间内的图像数据进行采样,得到多个第二图像;
根据曝光度或清晰度对所述多个第二图像进行排序;
按照排序顺序解析第二图像中的目标编码图案;
当获取到解析信息时,停止解析目标编码图案。
其中,按照排序顺序解析第二图像中的目标编码图案,可以每次解析一个第二图像中的目标编码图案,在运算条件允许的情况下,也可以同时解析多个第二图像中的目标编码图案。
通过对多个第二图像进行排序,并按照排序顺序解析所述第二图像中的所述目标编码图案,能够优先解析成像效果较好的图像,提高解析效率,缩短扫码操作的时间。
结合第一方面或第一方面前两种可能的实现方式中的任意一种,在第一方面第三种可能的实现方式中,当解析所述第二图像中的所述目标编码图案获取到解析信息,根据所述 解析信息中的指示执行相应操作。解析信息是指目标编码图案所携带的有效目标信息,通常为指向某些内容或者界面的链接。
结合第一方面或第一方面前三种可能的实现方式中的任意一种,在第一方面第四种可能的实现方式中,当解析第二图像中的目标编码图案没有获取到解析信息时,可以通过改变车辆状态后,车载摄像头采集的第三图像解析该目标编码图案。
结合第一方面第四种可能的实现方式,在第一方面第五种可能的实现方式中,所述改变车辆状态,可以是:提示驾驶员调整所述车辆状态或由车辆自动调整所述车辆状态。
结合第一方面第四种可能的实现方式或第一方面第五种可能的实现方式,在第一方面第六种可能的实现方式中,所述车辆状态,可以是:车辆的位置、速度、朝向和车载摄像头的拍摄角度中的一种或几种。
若解析第二图像中的目标编码图案没有获取到解析信息,说明第一图像和第二图像中目标编码图案的成像效果都不是很好,导致其难以进行解析。此时对车辆状态,例如:车辆的位置、速度、朝向和车载摄像头的拍摄角度等进行适应性调整,能够改善该目标编码图案的成像效果,提高解析成功率。车辆状态的改变,可以是提示驾驶员调整所述车辆状态,对于具有一定自动驾驶功能的车辆,在不存在安全隐患的前提下,也可以由车辆自动调整所述车辆状态,减小对驾驶员注意力的分散,提高驾驶的安全性。
结合第一方面第四种可能的实现方式或第一方面第五种可能的实现方式或第一方面第六种可能的实现方式,在第一方面第七种可能的实现方式中,可以对通过车载摄像头采集的第三图像对目标编码图案进行解析的时间或者次数进行限制,例如,设置时间阈值或次数阈值。当通过车载摄像头采集的第三图像对目标编码图案进行解析的时间或者次数达到阈值时,直接返回失败。
在不设置停止条件的情况下,在成功获取到解析信息前,可能会不断改变汽车的状态,并不断采集第三图像进行解析。该过程可能会浪费大量的时间,甚至在一些极端情况下,无论如何调整车辆状态,均无法成功解析第三图像中的目标编码图案。因此,对解析车载摄像头采集的第三图像的时间或者次数进行一定的限制能够在解析时间较长的情况下,及时停止解析操作,提示驾驶员采用其他方式进行交互,提高了效率,节约了时间。
结合第一方面或第一方面前七种可能的实现方式中的任意一种,在第一方面第八种可能的实现方式中,所述解析目标编码图案,包括:
检测目标编码图案的完整性;
当检测到完整的目标编码图案,对齐该目标编码图案;
解析该目标编码图案。
结合第一方面第八种可能的实现方式,在第一方面第九种可能的实现方式中,所述检测目标编码图案的完整性,可以是基于目标编码图案中的定位图标检测该目标编码图案的完整性,也可以是基于目标编码图案的位置检测该目标编码图案的完整性。
结合第一方面第八种可能的实现方式,在第一方面第十种可能的实现方式中,所述对齐目标编码图案,包括:基于目标编码图案中的定位图标对齐该目标编码图案。
在本发明实施例的第一方面中,只要车载摄像头获取的第一图像中包含目标编码图案,无论其是否完整,是否能够解析,均进入解析阶段,若该目标编码图案不完整或者畸变情 况较为严重,则无法获取到解析信息。因此,在解析阶段,首先检测该目标编码图案的完整性,若该目标编码图案不完整,则放弃该图像,解析下一张图像。当检测到完整的目标编码图案时,对齐该目标编码图案,若该目标编码图案畸变较为严重导致无法对齐,则放弃该图像,解析下一张图像。通过完整性的检测和对齐两个步骤,能够提前排除掉目标编码图案无法解析的图像,提高解析效率,缩短扫码操作的时间。另外,对齐步骤还能够提高所述目标编码图案解析的成功率,降低漏检率。
结合第一方面第三种可能的实现方式,在第一方面第十一种可能的实现方式中,还可以在执行相应操作前进行安全验证。
在车载摄像头持续采集图像的过程中,可能会采集并解析到一些无关的、不符合驾驶员意愿的编码图案信息,甚至有可能采集并解析到一些包含恶意信息的编码图案,因此,在根据解析信息中的指示执行相应操作之前进行安全验证,能够极大的提高安全性。
安全验证可以是对执行相应操作的意愿进行确认,例如,通过语音应答、操作车机、手势响应、和头部姿态响应中的一种或几种方式对执行相应操作的意愿进行确认。在一些情况下,还可以对操作者的身份进行确认,例如,通过人脸识别、虹膜识别、指纹识别、语音识别中的一种或几种方式对操作者的身份进行确认。
通过采取语音应答、操作车机、手势响应、和头部姿态响应的人机交互方式,能够实现司机或乘客对执行相应操作的意愿的确认,保证所要执行的操作是被司机或乘客认可的,能够极大的提高安全性。语音应答、操作车机、手势响应、和头部姿态响应的人机交互方式操作简单,能够减少对驾驶员注意力的分散,提高驾驶的安全性。另外,在执行一些安全等级较高的操作,例如:进行较大金额的支付时,除了对支付意愿进行确认外,还可以对操作者的身份进行确认,以进一步提高安全性。
结合第一方面或第一方面前十一种可能的实现方式中的任意一种,在第一方面第十二种可能的实现方式中,目标编码图案可以是二维码和一维条形码。
结合第一方面或第一方面前十二种可能的实现方式中的任意一种,在第一方面第十三种可能的实现方式中,车载摄像头采集的第一图像包含目标编码图案,可以是:车载摄像头采集的第一图像包含具有特定几何特征的编码图案,该特定几何特征为使目标编码图案区别于其他编码图案的特征。
结合第一方面第十三种可能的实现方式,在第一方面第十四种可能的实现方式中,所述几何特征包括:边框、底纹、背景颜色和宽高比中的一种或几种。
在实际的应用场景中,可能想要检测并解析其中一些包含特定信息的编码图案,而不想受到其他编码图案的干扰。可以通过在需要识别的目标编码图案中加入一些几何特征,以区分目标编码图案和其他的编码图案,减少检测并解析错误编码图案的几率,减小无关编码图案的干扰。
结合第一方面或第一方面前十四种可能的实现方式中的任意一种,在第一方面第十五种可能的实现方式中,获取车载摄像头采集的第一图像,可以为:满足触发条件时,获取车载摄像头采集的第一图像
结合第一方面第十五种可能的实现方式,在第一方面第十六种可能的实现方式中,所述触发条件,包括:接收到操作者的指令、车速低于阈值时、到达特定地理位置、和接收 到通过网络发送的指令中的一种或几种。
在实际应用场景中,可能存在众多的编码图案信息,若不断采集并解析车载摄像头采集的图像中的目标编码图案,可能会对驾驶员的正常驾驶产生一定的干扰。另外,在用户不希望的时候进行目标编码信息的采集和解析,可能导致一些误操作。通过设置触发条件,能够使操作者在需要进行扫码操作的时候再打开目标编码图案的采集功能,减小了无关编码图案的干扰,降低了车辆误操作的几率,也减小了对驾驶员正常驾驶的干扰,提高了安全性。
本发明实施例第二方面提供一种车载装置,包括:
处理器和存储器,处理器和存储器耦合,存储器存储有程序指令,当存储器存储的程序指令被处理器执行时实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法。
本发明实施例第二方面提供的车载装置可以任何形式安装在车内,包括前装的车载产品和后装的车载产品。
结合本发明实施例第二方面,在本发明实施例第二方面第一种可能的实现方式中,该车载装置可集成在车载摄像头内部,由车载摄像头实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法,并将解析目标编码图案获取到的解析信息传递给中心控制系统,并实现上述所有有益效果。
结合本发明实施例第二方面,在本发明实施例第二方面第二种可能的实现方式中,该车载装置可集成在车辆的中心控制系统中,由中心控制系统获取车载摄像头采集的图像,实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法,并实现上述所有有益效果。
本发明实施例第三方面提供另一种车载装置,包括:
车载摄像头:用于采集图像;
处理器和存储器,处理器和存储器耦合,存储器存储有程序指令,当存储器存储的程序指令被处理器执行时实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法。
本发明实施例第三方面提供的车载装置可以任何形式安装在车内,包括前装的车载产品和后装的车载产品。
本发明实施例第三方面提供的车载装置集成车载摄像头、存储器和处理器,能够一体化完成目标编码图案的采集、检测、和解析步骤,实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法,并实现上述所有有益效果。
本发明实施例第四方面提供一种车辆,包括:
车载摄像头:用于采集图像;
显示模块:用于显示解析目标编码图案所获取的解析信息对应的内容;
输入模块:用于接收操作者的输入;
处理器和存储器,处理器和存储器耦合,存储器存储有程序指令,当存储器存储的程序指令被处理器执行时实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法。
本发明实施例第四方面提供一种车辆,包括车载摄像头、存储器、处理器、显示模块和输入模块,在采集、检测目标编码图案,并成功获取到解析信息后,实现第一方面或第一方面前十六种可能的实现方式中的任意一种所述的扫码方法。在此基础上,通过显示模块显示解析目标编码图案所获取的解析信息对应的内容,通过输入模块接收操作者的输入,能够实现扫码操作后与操作者的交互过程,及时处理解析目标编码图案获取的解析信息,提高扫码操作的效率,并实现上述所有有益效果。
本发明实施例第五方面提供一种计算机可读介质,该计算机可读介质包括程序,当其在计算机上运行时,使得计算机实现第一方面或第一方面前十五种可能的实现方式中的任意一种所述的扫码方法。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对现有技术中以及本发明实施例描述中所需要使用的附图作简单地介绍。
图1是本发明实施例提供的一种扫码方法的示意图;
图2A是本发明实施例提供的一种二维码图案;
图2B是本发明实施例提供的一种二维条形码图案;
图2C是本发明实施例提供的一种一维条形码图案;
图3是本发明实施例提供的车载摄像头采集第二图像的时刻与采集第一图像的时刻的关系示意图;
图4A是本发明实施例提供的一种基于第二图像解析目标编码图案的实施方式;
图4B是本发明实施例提供的另一种基于第二图像解析目标编码图案的实施方式;
图5是本发明实施例提供的一种解析车载摄像头获取的图像中的目标编码图案的方法;
图6A是本发明实施例提供的一种车载摄像头采集包含目标编码图案图像的示意图;
图6B是本发明实施例提供的一种车载摄像头采集包含目标编码图案图像的示意图;
图7A-图7D是本发明实施例提供的具有几何特征的目标编码图案示意图;
图8是本发明实施例提供的一种车载装置的一种结构示意图;
图9是本发明实施例提供的另一种车载装置的一种结构示意图;
图10是本发明实施例提供的一种车辆的一种结构示意图;
具体实施方式
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
下面结合附图,对本申请的实施例进行描述。本领域普通技术人员可知,随着技术的 发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
以手机等智能终端设备作为光学识别设备,采集并解析编码图案的操作中往往需要通过调整智能终端的位置和角度,使采集到的图像中的编码图案平直、大小合适、且位置居中,这样才能成功解析到编码图案中的信息。但是在驾驶相关的场景中,车辆往往处于行驶状态,车载摄像头很难一次就采集到符合解析条件的编码图案。当解析车载摄像头采集的包含编码图案的图像失败后,再次进行采集操作时,车辆可能已经错过了采集编码图案的最佳位置,导致扫码交互过程失败。本发明实施例中针对驾驶场景中,编码图案的采集及解析较为困难,难以一次解析成功的问题,提出了一种更适合于驾驶场景和车载摄像头的扫码方法。
请参阅图1,在本发明一个实施例中,提供一种扫码方法100,该方法包含如下步骤。
步骤101:获取车载摄像头采集的第一图像,该第一图像包含拍摄到的目标编码图案;
步骤102:解析该目标编码图案;
步骤103:当解析第一图像中的目标编码图案没有获得解析信息时,通过第二图像解析该目标编码图案;
目前的车辆多装有车载摄像头装置,对车外的环境进行监控。例如,全景式监控影像系统(AVM,Around View Monitor)通常在车辆的前后左右各布置一颗广角或鱼眼摄像头,以拼合出360°环视效果。在本发明实施例的步骤101中,获取车载摄像头采集的第一图像,在一种实施方式中,该车载摄像头可复用合适位置的现有的执行其他功能的摄像头,例如,可利用两侧的摄像头采集位于车辆两侧的图像,可利用车辆前方的摄像头采集位于车辆前方的图像等。复用的车载摄像头可根据原有功能决定采集功能的开启时刻,只要保证需要执行扫码操作时,车载摄像头处于开启状态即可,本发明实施例不做限制,例如,车载摄像头的采集功能可在车辆控制系统开启后自动开启,也可以在驾驶员授权启动后开启。在另一种实施方式中,也可在合适的位置安装专用于车辆扫码操作的专用摄像头。专用摄像头的采集功能可在车辆控制系统开启后自动开启,也可以在需要进行扫码操作时通过一些触发条件开启,下文将对专用摄像头采集功能的触发条件进行详细介绍。
车载摄像头采集的图像往往具有一定程度的畸变,在一种实施方式中,为了便于检测和解析车载摄像头获取的图像中的目标编码图案,可对车载摄像头采集到的图像进行畸变矫正,具体地,可由车载摄像头集成的算法在采集图像的同时进行畸变矫正,直接获取经过畸变矫正后的图像,也可由处理器对获取的车载摄像头采集的图像进行畸变矫正,得到畸变矫正后的图像,本发明实施例对于对车载摄像头采集的图像进行畸变矫正的方式不做限制。
本方法实施例中考虑的目标编码图案为包含特定信息,在扫码交互的过程中希望被采集并解析的编码图案。在一些实施例中,目标编码图案可以是如图2A所示的二维码图案(如QR Code),在另一些实施例中,目标编码图案可以是如图2B所示的二维条形码图案(如PDF417),在另一些实施例中,目标编码图案还可以是如图2C所示的一维条形码图案(如Code128)。
车载摄像头采集的第一图像中包含拍摄到的目标编码图案是指该第一图像中存在拍摄到的目标编码图案,而不是存在拍摄到的可解析的目标编码图案。这里,可解析是指能够 从图案中解析出二维码对应的信息,由于车辆是处于快速行驶状态的,以及摄像头的角度问题,所述第一图像中所包括的拍摄到的目标编码图案可能存在不完整或者不清晰或者畸变等问题,不一定能够清晰完整的再现原本的目标编码图案。这就导致车载摄像头采集的第一图像中包含的拍摄到的目标编码图案不一定是可解析的,因此,可能存在对包含拍摄到目标编码图案的第一图像进行解析时,由于第一图像中的目标编码图案不完整、清晰度较低、畸变情况较严重等问题,导致解析失败的情况。
为了获取车载摄像头采集的包含拍摄到的目标编码图案的第一图像,需要对车载摄像头采集的图像进行检测,检测车载摄像头采集的图像中是否包含目标编码图案。现有的编码图案检测算法多是基于规则的,这类算法要求编码图案平直、大小合适、位置居中。对于车辆,特别是行驶中的车辆而言,直接采集到满足上述要求的编码图像是比较困难的。因此,不宜直接采用这类算法。为克服上述困难,在一种实施方式中,可采用模式识别的算法对车载摄像头获取的第一图像中的目标编码图案进行检测。即,将目标编码图案的检测视为目标检测任务,通过采集、标注含有目标编码图案的样本,训练深度神经网络、支持向量机、模板匹配等算法。为了提高目标检测算法的鲁棒性,降低漏检率,可以在训练目标检测算法时,加入一些不完整的目标编码图案和畸变较为严重的目标编码图案。训练好的算法能够检测车载摄像头采集的图像中是否存在目标编码图案,在目标编码图案不完整或者畸变较严重的情况下也能够检测到目标编码图案的存在。
在一种实施方式中,可以按照上述方法训练一个独立的目标检测算法,专用于目标编码图案的检测。在另一种实施方式中,若车载摄像头本身具备一些需要进行目标检测的功能,且该功能是持续运行的,或者其目标检测功能的运行场景能够覆盖扫码操作的场景,则可以将目标编码图案作为一类新的目标整合到可用于车载摄像头已有的目标检测算法中。
在一些实施方式中,检测车载摄像头获取的图像中是否包含目标编码图案的功能可以与车载摄像头的采集功能同时开启,并持续运行。在另一些实施方式中,在车载摄像头的采集功能开启的情况下,可以通过一定的触发方式开启目标编码图案的检测功能,下文将进行详细介绍。
在本发明实施例的步骤102中,对车载摄像头采集的第一图像中的目标编码图案进行解析以获取目标编码图案中的解析信息。下文将对解析过程中的各个步骤进行详细描述。
在本发明实施例的步骤102中,由于车载摄像头采集的第一图像中的目标编码图案可能不完整,也可能存在尺寸过小、畸变严重等问题,导致第一图像中的目标编码图案难以进行解析,即,通过解析车载摄像头获取的第一图像无法获取到解析信息。解析信息是指目标编码图案所携带的有效目标信息,通常为访问某些链接后得到的内容或者界面等。
在本发明实施例的步骤103中,当解析车载摄像头获取的第一图像中的目标编码图像没有获取到解析信息时,可以通过获取包含该目标编码图案的第二图像,并基于第二图像解析该目标编码图案。一般而言,在检测到车载摄像头采集的图像中包含目标编码图案的时刻的前后采集到包含可解析的目标编码图案的概率是最高的。因此,当解析车载摄像头获取的第一图像中的目标编码图像没有获取到解析信息时,可以通过第二图像解析该目标编码图案,该第二图像为车载摄像头采集的与第一图像的时间间隔在第一预设时间内的图 像。
请参阅图3,在本发明的一种实施方式中,t0为车载摄像头采集第一图像的时刻,t1为第一预设时间,第二图像可以是车载摄像头在t0-t1至t0+t1时间段内采集的图像。在一种实施方式中,第一预设时间可以由系统设置,并可根据不同的车速设置不同的第一预设时间,例如,可将车速为30km/h时的第一预设时间设置为5秒,当车速大于30km/h时,适当缩短第一预设时间,当车速小于30km/h时,适当延长第一预设时间。在另一种实施方式中,第一预设时间可由用户进行设置。
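As a minimal sketch only: the paragraph above gives 5 s at 30 km/h and says the first preset time should be shortened at higher speeds and lengthened at lower speeds, without prescribing a formula. The inverse-proportional scaling below, the function name and the clamping are all illustrative assumptions.

```python
def first_preset_time_s(speed_kmh: float,
                        ref_speed_kmh: float = 30.0,
                        ref_window_s: float = 5.0) -> float:
    """Scale the sampling window t1 inversely with vehicle speed (illustrative rule only)."""
    speed_kmh = max(speed_kmh, 1.0)   # avoid division by zero when the vehicle is stopped
    return ref_window_s * ref_speed_kmh / speed_kmh

print(first_preset_time_s(30.0))  # 5.0 s at the reference speed
print(first_preset_time_s(60.0))  # 2.5 s: a shorter window at higher speed
```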
在步骤103中,当解析所述第一图像中的所述目标编码图案没有获得解析信息时,通过第二图像解析所述目标编码图案。请参阅图4A,在一种实施方式中,以车载摄像头采集第一图像的时刻t0为起点,以第二预设时间t2为时间间隔,向t0以前的时刻进行采样,得到一个第二图像A1,解析第二图像A1中的目标编码图案。在一种实施方式中,第二预设时间可以由系统设置,并可根据不同的车速设置不同的第二预设时间,例如,可将车速为30km/h时的第二预设时间设置为0.1秒,当车速大于30km/h时,适当缩短第二预设时间,当车速小于30km/h时,适当延长第二预设时间。在另一种实施方式中,第二预设时间可由用户进行设置。
若解析第二图像A1中的目标编码图案未能获得解析信息,则以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,得到一个第二图像A2,并对第二图像A2中的目标编码图案进行解析。当解析第二图像A2中的目标编码图案仍没有获取到解析信息时,继续以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,依次得到第二图像A3,A4等。一旦成功获取到目标编码图案中的解析信息,停止所述解析过程。若在t0时刻以前的时刻采样得到的图像中均未能获取到解析信息,则转向t0时刻以后的时刻,以第二预设时间t2为时间间隔采样得到一个第二图像B1,解析第二图像B1中的目标编码图案。若解析第二图像B1中的目标编码图案没有获取到解析信息,则以第二预设时间t2为时间间隔,向t0以后的时刻再次进行采样,得到一个第二图像B2,并对第二图像B2中的目标编码图案进行解析。当解析第二图像B2中的目标编码图案仍没有获取到解析信息时,继续以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,依次得到第二图像B3,B4等。一旦成功解析目标编码图案中的解析信息,则停止所述解析过程。
应当理解,依照本实施例示例的方法,可以按照上述顺序对第二图像进行采样和解析,也可以按照其他顺序对第二图像进行采样和解析,例如,在另一种实施方式中,可以先对t0时刻以后的图像进行采样并解析,再对t0时刻以前的图像进行采样并解析,即,可以先以第二预设时间t2为时间间隔,向t0以后的时刻进行采样,得到一个第二图像B1,解析第二图像B1中的目标编码图案。若解析第二图像B1中的目标编码图案没有获取到解析信息,则以第二预设时间t2为时间间隔,向t0以后的时刻再次进行采样,得到一个第二图像B2,并对第二图像B2中的目标编码图案进行解析。当解析第二图像B2中的目标编码图案仍没有获取到解析信息时,继续以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,依次得到第二图像B3,B4等。一旦成功解析目标编码图案中的解析信息,则停止所述解析过程。若在t0时刻以后的时刻采样得到的图像中均未能获取到解析信息,则转向t0时刻以前的时刻,以第二预设时间t2为时间间隔采样得到一个第二图像A1,解析第二 图像A1中的目标编码图案。若解析第二图像A1中的目标编码图案未能获得解析信息,则以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,得到一个第二图像A2,并对第二图像A2中的目标编码图案进行解析。当解析第二图像A2中的目标编码图案仍没有获取到解析信息时,继续以第二预设时间t2为时间间隔,向t0以前的时刻再次进行采样,依次得到第二图像A3,A4等。一旦成功获取到目标编码图案中的解析信息,停止所述解析过程。
在另一种实施方式中,可以采取左右侧交替的方式采样得到第二图像,并对采样得到的第二图像进行解析。可以先以第二预设时间t2为时间间隔,向t0以后的时刻进行采样,得到一个第二图像B1,解析第二图像B1中的目标编码图案。若解析第二图像B1中的目标编码图案没有获取到解析信息,以第二预设时间t2为时间间隔,向t0以前的时刻进行采样,得到一个第二图像A1,解析第二图像A1中的目标编码图案。若解析第二图像A1中的目标编码图案未能获得解析信息,则以第二预设时间t2为时间间隔,向t0以后的时刻再次进行采样,得到一个第二图像B2,并对第二图像B2中的目标编码图案进行解析。当解析第二图像B2中的目标编码图案仍没有获取到解析信息时,继续按照该顺序依次采样得到第二图像A2、B3、A3、B4、A4等,并依次解析每一个第二图像。一旦成功获取到目标编码图案中的解析信息,停止所述解析过程。还可以按照上述左右交叉的顺序先向t0以前的时刻进行采样,依次采样得到第二图像A1、B1、A2、B2、A3、B3、A4、B4等,并依次解析每一个第二图像。一旦成功获取到目标编码图案中的解析信息,停止所述解析过程。
在另一种实施方式中,在运算条件允许的情况下还可以以车载摄像头采集第一图像的时刻t0为起点,向前向后同时取样得到第二图像,并对第二图像中的目标编码图案进行解析。例如,首先以第二预设时间t2为时间间隔,采样得到第二图像A1和B1,对第二图像A1和B1中的目标编码图案进行解析。若解析第二图像A1和B1中的目标编码图案没有获取到解析信息,继续以第二预设时间t2为时间间隔,采样得到第二图像A2和B2,对第二图像A2和B2中的目标编码图案进行解析。以此类推,一旦成功获取到目标编码图案中的解析信息,停止所述解析过程。应当理解,对第二图像的采样方式不限于上述方式,其他常用的采样方式和本领域技术人员不付出创造性劳动就能得到的采样方式也是可能的。
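A minimal Python sketch of one alternating sampling order described above (offsets +t2, -t2, +2*t2, -2*t2, ... limited to ±t1 around the acquisition time t0 of the first image). `frame_at` and `try_decode` are hypothetical callables standing in for the camera's image buffer and the parsing step; they are not defined by the disclosure.

```python
def alternating_offsets(t1: float, t2: float):
    """Yield sampling offsets +t2, -t2, +2*t2, -2*t2, ... within [-t1, +t1]."""
    for k in range(1, int(t1 / t2) + 1):
        yield +k * t2
        yield -k * t2

def parse_around(frame_at, try_decode, t0: float, t1: float, t2: float):
    """frame_at(t) returns the frame nearest time t; try_decode(frame) returns text or None."""
    for offset in alternating_offsets(t1, t2):
        info = try_decode(frame_at(t0 + offset))
        if info:
            return info   # stop as soon as the parsing information is obtained
    return None
```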
请参阅图4B,在另一种实施方式中,以第二预设时间t2为时间间隔,对车载摄像头采集的与第一图像时间间隔在第一预设时间内的图像进行采样,即,对车载摄像头在t0-t1时刻至t0+t1时刻之间采集的第二图像进行采样,同时得到多个第二图像(A1,A2,An-1,An,B1,B2,Bn-1,Bn-1等)。对采样得到的多个第二图像进行排序,按照排序顺序依次对第二图像中的目标编码图案进行解析。可以根据排序顺序依次对每一个第二图像进行解析,在运算条件允许的情况下,也可以同时对多个第二图像进行解析。一旦获取到目标编码图案中的解析信息时,停止解析该目标编码图案。在对采样得到的多个第二图像进行排序时,可以采用成像效果、解析的难易程度相关的指标作为排序规则,例如,根据曝光度和清晰度等评价指标对多个第二图像进行排序。应当理解,本方法实施例对排序的方法不做限定,其他本领域常用的排序指标也是可能的。
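As one hedged illustration of "sorting by exposure or clarity", the sketch below ranks the candidate second images by the variance of the Laplacian, a common sharpness proxy that the text itself does not mandate, and decodes them best-first until one succeeds.

```python
import cv2
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian as a sharpness score (an assumed metric, not from the disclosure)."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def parse_best_first(frames, try_decode):
    """Sort candidate frames from sharpest to blurriest and decode until one succeeds."""
    scores = [sharpness(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)) for f in frames]
    for i in sorted(range(len(frames)), key=lambda i: scores[i], reverse=True):
        info = try_decode(frames[i])
        if info:
            return info
    return None
```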
在成功获取到目标编码图案中的解析信息后,可根据解析信息中的指示执行相应的操作。解析信息中的指示可以包括,车载控制系统可以执行的任何操作。例如,操作可以包 括:进行扫码支付操作,进行车辆身份认证、在给定系统中进行车辆信息登记,下载给定的车载应用,其他操作也是可能的。
请参阅图5,在本发明一个实施例中,提供一种通过车载摄像头采集的包含目标编码图案的图像解析目标编码图案中的信息的方法500,该方法可包含如下步骤。
步骤501:检测目标编码图案的位置;
步骤502:检测目标编码图案的完整性;
步骤503:对齐目标编码图案;
步骤504:解析目标编码图案。
其中,步骤501对目标编码图案的位置进行检测,在一种实现方式中,检测目标编码图案的位置可以复用在步骤102中检测目标编码图案时采用的算法,在另一种实现方式中,也可以重新训练类似的目标检测算法专用于检测目标编码图案的位置。另外,由于在步骤102中检测目标编码图案时已经具备了目标编码图案位置的先验,在另一种实现方式中,还可以利用滑窗等较为简单的算法检测目标编码图案的位置。应该理解,其他本领域常用的,可以实现检测目标编码图案的位置的算法也是可能的。在完成目标编码图案位置的检测后,可以将含有目标编码图案的矩形区域裁剪出来,供后续的步骤使用。目标编码图案的位置可用于检测目标编码图案的完整性,也可跳过检测目标编码图案位置的步骤,采用其他方式检测目标编码图案的完整性。
步骤502中对目标编码图案的完整性进行检测,在一种实施方式中,目标编码图案的完整性可以基于定位图标进行判断,当检测到目标编码图案上的全部定位图标时,认为目标编码图案是完整的。例如,在如图2A所示的二维码图案中,位于左上、右上和左下的三个“回”形图标为二维码的定位图标,当检测到三个完整的“回”形图标时,可认为该二维码图案是完整的;在如图2B所示的二维条形码图案中,位于左边和右边的黑色方框和线条为二维条码的前后定位图标,前后定位图标的完整性可通过Sobel等边缘检测算子实现,检测到前后定位图标的4个边缘,且其交点(定位图标的顶点)在图像内部时,可认为该二维条形码图案是完整的;在如图2C所示的一维条形码图案中,位于左边和右边的部分竖线为前后定位图标,在横向检测到所有定位图标时,可认为该一维条形码图案时完整的(一维条形码无需检测纵向完整性)。在另一种实施方式中,目标编码图案的完整性还可以基于其位置进行判断,例如,当目标编码图案位于图像边缘时,认为该目标编码图案不完整,当目标编码图案的边界与图像的边缘无交点时,认为该目标编码图案是完整的。应该理解,其他本领域常用的,可以实现检测目标编码图案完整性的方法也是可能的。
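A minimal sketch of the position-based completeness check described above (the alternative of counting finder patterns is omitted): the detected bounding box of the code must not touch the image border. The margin value and the box format are assumptions.

```python
def is_complete_by_position(code_box, image_shape, margin_px=2):
    """code_box is (x, y, w, h); image_shape is (height, width). True if the box stays inside."""
    x, y, w, h = code_box
    img_h, img_w = image_shape[:2]
    return (x >= margin_px and y >= margin_px and
            x + w <= img_w - margin_px and
            y + h <= img_h - margin_px)

print(is_complete_by_position((50, 40, 200, 200), (480, 640)))   # True: fully inside the frame
print(is_complete_by_position((600, 40, 200, 200), (480, 640)))  # False: clipped at the right edge
```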
现有技术无法从不完整的目标编码图案中解析得到解析信息,因此,若检测到目标编码图案不完整时,则放弃该图像,直接解析下一张包含该目标编码图案的图像。
由于车载摄像头的拍摄角度、成像视角等原因,车载摄像头拍摄的图像往往存在一定的透视或投影效应,因此,在步骤503中可以对目标编码图案进行对齐操作,以提高目标编码图案的解析成功率。在一种实现方式中,可以基于定位图标对目标编码图案进行对齐,即,首先识别目标编码图案中定位图标的位置,然后通过仿射变换,将定位图标变换到特定的位置,从而获得平直、居中、大小合适的目标编码图案。在对目标编码图案进行对齐时,可通过采集并标注含有定位图标位置和尺寸的样本,训练深度神经网络、支持向量机、 模板匹配等算法,训练好的算法能够检测定位图标的位置和尺寸,进而实现目标编码图案的对齐。应该理解,其他本领域常用的,可以实现目标编码图案对齐的方法也是可能的。
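A hedged sketch of finder-pattern-based alignment for a QR-style code. The disclosure leaves the alignment algorithm open; here the fourth corner is approximated by parallelogram completion from the three finder-pattern centres (an assumption) and the region is warped to a flat, centred square with a perspective transform in OpenCV.

```python
import cv2
import numpy as np

def align_qr(image, finder_centers, out_size=330, quiet=33):
    """finder_centers: centres of the top-left, top-right and bottom-left finder patterns."""
    tl, tr, bl = (np.float32(p) for p in finder_centers)
    br = tr + bl - tl                           # parallelogram estimate of the fourth corner
    src = np.float32([tl, tr, bl, br])
    dst = np.float32([[quiet, quiet], [out_size - quiet, quiet],
                      [quiet, out_size - quiet], [out_size - quiet, out_size - quiet]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_size, out_size))
```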
透视或投影效应较为严重的目标编码图案可能无法解析得到解析信息,因此,在步骤502中,若目标编码图案的透视或投影相应较为严重而无法实现对齐,则放弃该图像,直接解析下一张包含该目标编码图案的图像。
步骤504对完整的,对齐后的目标编码图案进行解析。对于完整的,对齐后的目标编码图案,可采用本领域常用的算法,例如,全局二值化、混合二值化等算法,将目标编码图案中的黑色和白色小格转换成二进制数,进而解码为字符信息,该字符信息可能为指向某一对象的链接等。若成功解析目标编码图案,则获取解析信息中的相应指示,若解析目标编码图案失败,即,未能成功获取目标编码图案中的解析信息,则返回解析失败的状态。
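As a non-authoritative sketch of this final decoding step, using Otsu's global threshold as one concrete stand-in for the global/hybrid binarisation mentioned above, and OpenCV's bundled QR decoder in place of the unspecified module-to-bits decoding:

```python
import cv2

def decode_aligned(aligned_bgr):
    """Binarise the aligned pattern and try to decode it; return the text or None on failure."""
    gray = cv2.cvtColor(aligned_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text, _, _ = cv2.QRCodeDetector().detectAndDecode(binary)
    return text or None
```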
在一些实施例中,当通过第二图像未能成功获取到目标编码图案中的解析信息时,可以适当改变车辆的状态,并通过车辆状态改变后,车载摄像头采集的第三图像解析目标编码图案。改变车辆状态是为了通过对可能的车辆状态进行适应性调整,以使得车载摄像头获得更好的采集效果。例如,可以对车辆的位置、速度、朝向、车载摄像头的拍摄角度等进行适应性调整。具体地,当车速过快导致目标编码图案成像的清晰度较低时,可以适当降低车速;当车辆距离目标编码图案的距离过小,导致采集的目标编码图案不完整时,可以适当增大车辆与目标编码图案的距离;当车辆的朝向和/或车载摄像头的拍摄角度导致采集到的目标编码图案的畸变情况较为严重时,可以将车辆的朝向和/或车载摄像头的拍摄角度向有益于图像采集的方向调整。在对车辆状态进行调整时,可以同时对上述多个方面进行调整,也可以每次仅调整一个方面,当解析对该方面进行调整后车载摄像头获取的图像中的目标编码图案仍没有获取到解析信息时,对另一个方面进行调整。应当理解,为了得到更好的采集效果,其他对车辆状态进行的适应性调整也是可能的。
在一种实施方式中,可以提示驾驶员手动调整上述车辆状态,以获得更好的采集效果。在另一种实施方式中,如果车辆具备一定的自动驾驶功能,还可以在不存在安全风险的情况下,由自动驾驶功能控制车辆自动完成上述调整。
在通过车载摄像头获取的第三图像解析目标编码图案的过程中,在成功获取到解析信息前,可能需要不断的对汽车状态进行改变,进而不断的获取第三图像进行解析。该过程可能会持续很长的时间,并对车辆的正常行驶造成一定的干扰。甚至在一些极端情况下,无论如何调整车辆状态,均无法成功解析第三图像中的目标编码图案。因此,在一种实施方式中,可以对通过车载摄像头采集的第三图像解析目标编码图案的时间或次数进行一定的限制,例如,设置一定的阈值,当通过车载摄像头采集的第三图像解析目标编码图案的时间或次数达到所设置的阈值时,直接返回失败。此时,可提示驾驶员通过其他方式进行交互操作,以提高效率。
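A minimal sketch of the attempt/time limit described above. `capture_frame`, `try_decode` and `adjust_vehicle` are hypothetical callbacks, and the particular limits are placeholders rather than values taken from the disclosure.

```python
import time

def parse_with_retry(capture_frame, try_decode, adjust_vehicle,
                     max_attempts=5, timeout_s=30.0):
    """Repeatedly adjust the vehicle state and re-parse, but stop after a count or time budget."""
    start = time.monotonic()
    for attempt in range(max_attempts):
        if time.monotonic() - start > timeout_s:
            break
        adjust_vehicle(attempt)          # e.g. prompt the driver or adjust speed/heading/camera angle
        info = try_decode(capture_frame())
        if info:
            return info
    return None                          # caller falls back to another interaction method
```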
在一些实施例中,可以在根据解析信息中的指示执行相应操作之前加入安全认证的步骤,以提高安全性,防范恶意攻击。在一种实施方式中,可以采取一些对驾驶操作的干扰较小、操作简单的交互方式,通过与驾驶员或其他车内人员的交互,对执行相应操作的意愿进行确认,例如,可以通过语音应答、操作车机、手势响应、头部姿态响应等方式对执行相应操作的意愿进行确认。应当理解,其他本领域常用的人机交互方式也是可能的。在 另一种实施方式中,为了进一步提高安全性,在一些风险较高的操作中,可以在安全认证中加入身份认证。可以通过验证一些具有专一性的人体特征,对操作者的身份进行验证。例如,通过人脸识别、虹膜识别、指纹识别、语音识别等方法对操作者的身份进行验证。应当理解,其他本领域常用的,对操作者身份进行验证的方法也是可能的。
提供目标编码图案的一方在布置目标编码图案时,应该考虑车载摄像头的位置、朝向和视角等因素,选择能够被完整地、清晰地识别的位置布置目标编码图案。在扫码过程中,存在这样一个位置,车辆最晚在经过该位置时,完成目标编码图案的采集和解析,称该位置为扫码位置。请参见图6A,在一些实施方式中,可将目标编码图案喷涂于扫码位置前方的地面上,可通过位于车辆前方的车载摄像头采集车辆前方包含目标编码图案的图像。喷涂于地面上的目标编码图案应该具有合适的尺寸,以获得更好的采集效果。一般而言,喷涂于地面上的目标编码图案的长和宽均应该大于60厘米。请参见图6B,在另一些实施方式中,可将目标编码图案喷涂或张贴在扫码位置附近的标牌上,该标牌可以是独立的标牌,也可以是与其他信息公用的标牌,可通过位于车辆左右两侧的车载摄像头采集位于标牌上的包含目标编码图案的图像。喷涂或张贴在标牌上的目标编码图案也应该具有合适的尺寸,以获得更好的采集效果。一般而言,喷涂或张贴在标牌上的目标编码图案的长和宽均应该大于40厘米。
在一些实施例中,可以通过在目标编码图案上加入一些几何特征,使得目标编码图案能够区别于其他编码图案。该几何特征可以包括任何能够使得目标编码图案能够区别于其他编码图案的特征,例如,边框、底纹、背景颜色和宽高比等。如图7A所示为加入边框后的目标编码图案,如图7B所示为加入底纹后的目标编码图案,如图7C所示为加入背景颜色后的目标编码图案,如图7D所示为改变了宽高比后的目标编码图案。应当理解,其他本领域常用的,能够使目标编码图案区别于其他编码图案的几何特征也是可能的。通过在检测目标编码图案的算法的训练数据中加入上述特定几何特征,使得训练好的目标检测算法可以仅识别包含特定几何特征的目标编码图案,避免其他编码图案的干扰。
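As one small, hedged example of using a geometric feature to separate the target coding pattern from other codes, the sketch below filters detections by a distinctive width-to-height ratio; the expected ratio and tolerance are invented for illustration and are not taken from the disclosure.

```python
def has_target_geometry(box, expected_ratio=1.6, tol=0.2):
    """box is (x, y, w, h); accept it only if w/h is close to the distinctive expected ratio."""
    x, y, w, h = box
    return h > 0 and abs(w / h - expected_ratio) <= tol * expected_ratio

print(has_target_geometry((0, 0, 160, 100)))  # True: ratio 1.6 matches the expected geometry
print(has_target_geometry((0, 0, 100, 100)))  # False: a plain square code is rejected
```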
步骤101中,专用于扫码操作的车载摄像头可以在车载控制系统开启时就开启,也可以通过设置一定的触发条件开启。在采用专用车载摄像头执行扫码操作时,检测车载摄像头获取的图像中是否包含目标编码图案的功能可以与车载摄像头的采集功能同时开启。在复用执行其他功能的车载摄像头执行扫码操作时,检测车载摄像头获取的图像中是否包含目标编码图案的功能可以在车载摄像头的采集功能开启时就开启,以时时捕获目标编码图案,在车载摄像头的采集功能开启的前提下,可以通过设置一定的触发条件开启目标编码图案的检测功能。例如,由操作者操作车机、做出特定手势、发出语音指令等。考虑到驾驶相关的扫码场景通常都是在停车或者低速行驶条件下进行的,在另一种实施方式中,可以设定一个速度阈值,当车辆的行驶速度低于该阈值时,触发目标编码图案检测功能。在另一种实施方式中,可以采用基于地理位置信息的触发方式,在地图中预设可能需要进行扫码的地点,例如,加油站、收费站、充电桩等,在车辆到达预设的地点时触发目标编码图案的检测功能。在另一种实施方式中,还可以采用基于网络的触发方式,由提供目标编码图案的一方通过网络下达扫码指令,触发目标编码图案的检测功能。应当理解,其他常见的,本领域技术人员不需要付出创造性劳动就可以得到的触发方式也是可能的。
基于前述实施例相同的技术构思,请参阅图8,在本发明一个实施例中,提供一种车载装置800。具体的,车载装置800包括:存储器801和处理器802(其中车载装置800中的处理器802的数量可以是一个或多个,图8中以一个处理器为例)。存储器801可以包括只读存储器和随机存取存储器,并向处理器802提供指令和数据。
存储器801的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。存储器801存储有处理器和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。
上述本申请实施例揭示的方法可以应用于处理器802中,或者由处理器802实现。处理器802可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器802中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器802可以是通用处理器、数字信号处理器(digital signal processing,DSP)、微处理器或微控制器,还可进一步包括专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。该处理器802可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器801,处理器802读取存储器801中的信息,结合其硬件完成上述方法的步骤。
车载装置800可以以任何形式安装在车内,在一些实施方式中,车载装置800可以为前装的车载产品,例如,在出厂前由整机厂商进行配套安装。在另一些实施方式中,车载装置800可以为后装的车载产品,例如,在出场后由4S店等渠道进行安装。
另外,在一些实施方式中,车载装置800可集成在车载摄像头的内部,构成智能摄像头,由车载摄像头完成图像采集及上述扫码方法,并将解析目标编码图案获取到的解析信息传递给中心控制单元,由中心控制系统完成解析信息的显示及用户操作等后续操作。
在一些实施方式中,车载装置800可集成在车辆的中心控制系统中,由中心控制系统获取车载摄像头采集的图像,进行上述扫码操作,并完成解析信息的显示及用户操作等后续操作。
需要说明的是,对于处理器802执行图1至图7所示方法实施例中描述的各个步骤及过程的具体实现方式以及带来的有益效果,均可以参考图1至图7对应的各个方法实施例中的叙述,此处不再一一赘述。
基于前述实施例相同的技术构思,请参阅图9,在本发明另一个实施例中,提供另一种车载装置900。具体的,车载装置900包括:车载摄像头901,存储器902和处理器903。
在一种实施方式中,用于采集图像的车载摄像头可复用合适位置的现有的执行其他功能的摄像头,例如,可利用两侧的摄像头采集位于车辆两侧的图像,可利用车辆前方的摄像头采集位于车辆前方的图像等。复用的车载摄像头可根据原有功能决定采集功能的开启 时刻,只要保证需要执行扫码操作时,车载摄像头处于开启状态即可,本发明实施例不做限制,例如,车载摄像头的采集功能可在车辆控制系统开启后自动开启,也可以在驾驶员授权启动后开启。在另一种实施方式中,也可在合适的位置安装专用于车辆扫码操作的专用摄像头。专用摄像头的采集功能可在车辆控制系统开启后自动开启,也可以在需要进行扫码操作时通过一些触发条件开启。
例如,由操作者操作车机、做出特定手势、发出语音指令等。考虑到驾驶相关的扫码场景通常都是在停车或者低速行驶条件下进行的,在另一种实施方式中,可以设定一个速度阈值,当车辆的行驶速度低于该阈值时,触发目标编码图案检测功能。在另一种实施方式中,可以采用基于地理位置信息的触发方式,在地图中预设可能需要进行扫码的地点,例如,加油站、收费站、充电桩等,在车辆到达预设的地点时触发目标编码图案的检测功能。在另一种实施方式中,还可以采用基于网络的触发方式,由提供目标编码图案的一方通过网络下达扫码指令,触发目标编码图案的检测功能。应当理解,其他常见的,本领域技术人员不需要付出创造性劳动就可以得到的触发方式也是可能的。
存储器901和处理器902的具体实现方式如车载装置800所述,此处不再一一赘述。
车载装置900集成车载摄像头、存储器和处理器,能够作为智能摄像头完成图像的采集及扫码操作,并将解析目标编码图案获取到的解析信息传递给中心控制单元,由中心控制系统完成解析信息的显示及用户操作等后续操作。
基于前述实施例相同的技术构思,请参阅图10,在本发明另一个实施例中,提供一种车辆1000,具体包含如下模块:
车载摄像头1001:配置为采集图像,车载摄像头的使用和触发条件如车载装置900所述,此处不再一一赘述。
存储器1002和处理器1003,存储器901和处理器902的具体实现方式如车载装置800所述,此处不再一一赘述。
显示模块1004:配置为显示解析目标编码图案获取的解析信息;
输入模块1005:配置为接收操作者的输入,进而确定对解析信息进行的操作。
显示模块1004包括车辆座舱内任何可以实现显示功能的装置,例如:车机屏、抬头显示系统、后排屏幕等。应当理解,其他的座舱内常用的显示装置也是可能的。显示模块1004将解析目标编码图案获取到的解析信息进行显示,将信息传递给驾驶员和/或乘客。
输入模块1005包括车辆座舱内任何可以实现人机交互功能的装置,例如,触摸屏、语音接收系统、摄像头、其他传感器等。通过人机交互接收用户对于解析信息中的指示执行何种操作的指令。例如,可以通过语音应答、操作车机、手势响应、头部姿态响应等方式传达指令。应当理解,其他车辆座舱内常用的人机交互方式和实现人机交互功能的装置也是可能的。
车辆1000通过车载摄像头1001采集图像,获取包含拍摄到的目标编码图案的图像;通过存储器1002和处理器1003实现上述扫码方法,通过解析车载摄像头1001采集的图像中的目标编码图案获取解析信息;通过显示模块为操作者显示解析信息对应的内容或界面;通过输入模块获取操作者对解析信息对应的内容或界面的操作,进而完成对解析信息的处理。车辆1000能够集中实现从采集、解析到处理的全部流程。
本发明实施例还提供一种计算机可读存储介质,通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CLU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、ROM、RAM、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。当存储介质中的与扫码方法对应的计算机程序指令被电子设备读取或被执行时,可以实现图1-图7所示方法实施例中描述的各个步骤及过程,此处不再一一赘述。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。
可以理解,在本申请所提供的几个实施例中,所揭露的方法、车载装置、车辆和计算机可读存储介质,可以通过其它的方式实现。例如,以上所描述的装置的实施例仅仅是示意性的,具体实施时可以有多种实现方式。
可以理解,本发明实施例所述的方法中的步骤可以根据实际需要进行顺序调整、合并和删减。相应地,本发明实施例所述的控制装置中存储器中的程序指令所实现的功能也可以根据实际需要进行顺序调整、合并和删减。
以上所揭露的仅为本发明的优选实施例而已,当然不能以此来限定本发明之权利范围,本领域普通技术人员可以理解实现上述实施例的全部或部分流程,并依本发明权利要求所作的等同变化,仍属于发明所涵盖的范围。

Claims (15)

  1. 一种扫码方法,其特征在于,包括:
    获取车载摄像头采集的第一图像,所述第一图像包含拍摄到的目标编码图案;
    通过所述第一图像解析所述目标编码图案;
    当解析所述目标编码图案没有获取到解析信息,通过第二图像解析所述目标编码图案,所述第二图像为所述车载摄像头采集的图像,所述第二图像的采集时间与所述第一图像的采集时间的间隔在第一预设时间内。
  2. 如权利要求1所述的方法,其特征在于,当解析所述第二图像中的所述目标编码图案获取到解析信息,根据所述解析信息中的指示执行相应操作。
  3. 如权利要求1所述的方法,其特征在于,当解析所述第二图像中的所述目标编码图案没有获取到解析信息,所述方法还包括:通过第三图像解析所述目标编码图案,所述第三图像为车辆状态改变后,所述车载摄像头采集的图像。
  4. 如权利要求3所述的方法,其特征在于,所述车辆状态改变,包括:提示驾驶员调整所述车辆状态或由车辆自动调整所述车辆状态。
  5. 如权利要求1-4任一所述的方法,其特征在于,所述解析所述目标编码图案,包括:
    检测所述目标编码图案的完整性;
    当检测到所述目标编码图案完整,对齐所述目标编码图案;
    解析所述目标编码图案。
  6. 如权利要求1-5任一所述的方法,其特征在于,所述通过第二图像解析所述目标编码图案,包括:
    根据第二预设时间间隔对距离所述车载摄像头采集所述第一图像的时刻在所述第一预设时间内的图像数据进行采样,获取一个第二图像;
    通过所述一个第二图像解析所述目标编码图案;
    当解析所述一个第二图像中的所述目标编码图案获取到解析信息,继续根据所述第二预设时间间隔对距离所述车载摄像头采集所述第一图像的时刻在所述第一预设时间内的图像数据进行下一次采样。
  7. 如权利要求1-5任一所述的方法,其特征在于,所述通过第二图像解析所述目标编码图案,包括:
    根据第二预设时间间隔对距离所述车载摄像头采集所述第一图像的时刻在所述第一预设时间内的图像数据进行采样,得到多个第二图像;
    根据曝光度或清晰度对所述多个第二图像进行排序;
    按照排序顺序解析所述第二图像中的所述目标编码图案;
    当获取到所述解析信息时,停止解析所述目标编码图案。
  8. 如权利要求1-7所述的方法,其特征在于,所述获取车载摄像头采集的第一图像,具体为:满足触发条件时,获取车载摄像头采集的第一图像。
  9. 如权利要求8所述的方法,其特征在于,所述触发条件,包括:接收到操作者的指令、车速低于阈值时、到达特定地理位置、和接收到通过网络发送的指令中的一种或几种。
  10. 如权利要求1-9所述的方法,其特征在于,所述目标编码图案,包括:二维码和一维条形码。
  11. 如权利要求1-10任一所述的方法,其特征在于,所述第一图像包含目标编码图案,具体为:所述第一图像中包含具有特定几何特征的编码图案,所述特定几何特征为使所述目标编码图案区别于其他编码图案的特征。
  12. 如权利要求11所述的方法,其特征在于,所述特定几何特征包括:边框、底纹、背景颜色和宽高比中的一种或几种。
  13. 一种车载装置,其特征在于,包括:
    处理器和存储器,所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时实现权利要求1-12中任一项所述的扫码方法。
  14. 一种车辆,其特征在于,包括:
    车载摄像头:用于采集图像;
    显示模块:用于显示解析目标编码图案所获取的解析信息对应的内容;
    输入模块:用于接收操作者的输入。
    处理器和存储器,所述处理器和存储器耦合,所述存储器存储有程序指令,当所述存储器存储的程序指令被所述处理器执行时实现权利要求1-12中任一项所述的扫码方法。
  15. 一种计算机可读存储介质,其特征在于,包括程序,当其在计算机上运行时,使得计算机执行如权利要求1至12中任一项所述的扫码方法。
PCT/CN2020/132645 2020-11-30 2020-11-30 一种扫码方法及装置 WO2022110106A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20962995.5A EP4246369A4 (en) 2020-11-30 2020-11-30 CODE SCANNING METHOD AND APPARATUS
CN202080004157.5A CN112585613A (zh) 2020-11-30 2020-11-30 一种扫码方法及装置
PCT/CN2020/132645 WO2022110106A1 (zh) 2020-11-30 2020-11-30 一种扫码方法及装置
US18/325,837 US20230325619A1 (en) 2020-11-30 2023-05-30 Code scanning method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/132645 WO2022110106A1 (zh) 2020-11-30 2020-11-30 一种扫码方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/325,837 Continuation US20230325619A1 (en) 2020-11-30 2023-05-30 Code scanning method and apparatus

Publications (1)

Publication Number Publication Date
WO2022110106A1 true WO2022110106A1 (zh) 2022-06-02

Family

ID=75145412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132645 WO2022110106A1 (zh) 2020-11-30 2020-11-30 一种扫码方法及装置

Country Status (4)

Country Link
US (1) US20230325619A1 (zh)
EP (1) EP4246369A4 (zh)
CN (1) CN112585613A (zh)
WO (1) WO2022110106A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115130491A (zh) * 2022-08-29 2022-09-30 荣耀终端有限公司 一种自动扫码方法和终端
CN115240400A (zh) * 2022-07-01 2022-10-25 一汽解放汽车有限公司 车辆位置识别方法和装置、车辆位置输出方法和装置

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200100481A (ko) * 2019-02-18 2020-08-26 삼성전자주식회사 생체 정보를 인증하기 위한 전자 장치 및 그의 동작 방법
CN113635804B (zh) * 2021-08-03 2023-05-26 安徽产业互联数据智能创新中心有限公司 一种车辆充电管理方法及系统
CN113428042A (zh) * 2021-08-03 2021-09-24 安徽产业互联数据智能创新中心有限公司 一种车辆充电管理方法、系统、充电桩及车载终端
CN114613022A (zh) * 2022-01-28 2022-06-10 中国第一汽车股份有限公司 一种车载缴费系统及方法
CN114723766A (zh) * 2022-04-14 2022-07-08 润芯微科技(江苏)有限公司 一种二维码提取与展示的方法及电子设备
WO2024022394A1 (zh) * 2022-07-28 2024-02-01 华为技术有限公司 一种支付方法及相关装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537085A (zh) * 2018-03-07 2018-09-14 阿里巴巴集团控股有限公司 一种扫码图像识别方法、装置以及设备
US20190080186A1 (en) * 2017-09-12 2019-03-14 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic light state recognizing method and apparatus, computer device and readable medium
CN110147695A (zh) * 2019-05-29 2019-08-20 郑州天迈科技股份有限公司 一种用于公交站点识别的站点标志牌和公交站点的识别系统
CN110727269A (zh) * 2019-10-09 2020-01-24 陈浩能 车辆控制方法及相关产品
CN111428663A (zh) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 红绿灯状态的识别方法、装置、电子设备和存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9836635B2 (en) * 2014-10-09 2017-12-05 Cognex Corporation Systems and methods for tracking optical codes
CN106886275B (zh) * 2015-12-15 2020-03-20 比亚迪股份有限公司 车载终端的控制方法、装置以及车辆
CN106919610B (zh) * 2015-12-28 2020-12-22 中国移动通信集团公司 车联网数据处理方法、系统及服务器
CN108256376A (zh) * 2018-01-09 2018-07-06 佛山科学技术学院 一种车内扫码识别系统及其扫码识别方法
CN109920266A (zh) * 2019-02-20 2019-06-21 武汉理工大学 一种智能车辆定位方法
CN210667732U (zh) * 2019-08-10 2020-06-02 可可若器(北京)信息技术有限公司 一种计算机直接读取的指示标示

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190080186A1 (en) * 2017-09-12 2019-03-14 Baidu Online Network Technology (Beijing) Co., Ltd. Traffic light state recognizing method and apparatus, computer device and readable medium
CN108537085A (zh) * 2018-03-07 2018-09-14 阿里巴巴集团控股有限公司 一种扫码图像识别方法、装置以及设备
CN110147695A (zh) * 2019-05-29 2019-08-20 郑州天迈科技股份有限公司 一种用于公交站点识别的站点标志牌和公交站点的识别系统
CN110727269A (zh) * 2019-10-09 2020-01-24 陈浩能 车辆控制方法及相关产品
CN111428663A (zh) * 2020-03-30 2020-07-17 北京百度网讯科技有限公司 红绿灯状态的识别方法、装置、电子设备和存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4246369A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240400A (zh) * 2022-07-01 2022-10-25 一汽解放汽车有限公司 车辆位置识别方法和装置、车辆位置输出方法和装置
CN115240400B (zh) * 2022-07-01 2023-11-07 一汽解放汽车有限公司 车辆位置识别方法和装置、车辆位置输出方法和装置
CN115130491A (zh) * 2022-08-29 2022-09-30 荣耀终端有限公司 一种自动扫码方法和终端
CN115130491B (zh) * 2022-08-29 2023-01-31 荣耀终端有限公司 一种自动扫码方法和终端

Also Published As

Publication number Publication date
US20230325619A1 (en) 2023-10-12
CN112585613A (zh) 2021-03-30
EP4246369A4 (en) 2024-02-28
EP4246369A1 (en) 2023-09-20

Similar Documents

Publication Publication Date Title
WO2022110106A1 (zh) 一种扫码方法及装置
US20210312214A1 (en) Image recognition method, apparatus and non-transitory computer readable storage medium
WO2021047187A1 (zh) 基于人脸识别进行车辆缴费的方法、相关设备及存储介质
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
CN106384513B (zh) 一种基于智能交通的套牌车捕捉系统及方法
KR20210041039A (ko) 이미지 처리 방법 및 장치, 전자 기기 및 기억 매체
CN105528607A (zh) 区域提取方法、模型训练方法及装置
CN113870550B (zh) 基于边缘计算的区域异常检测方法和系统
CN114495299A (zh) 一种车辆停车自动缴费方法、系统及可读存储介质
CN111627057A (zh) 一种距离测量方法、装置及服务器
CN112686252A (zh) 一种车牌检测方法和装置
CN106327876B (zh) 一种基于行车记录仪的套牌车捕捉系统及方法
US11709914B2 (en) Face recognition method, terminal device using the same, and computer readable storage medium
CN113313115B (zh) 车牌属性识别方法及装置、电子设备和存储介质
CN109948618A (zh) 一种远距离车牌识别的终端、系统和方法
CN112560683A (zh) 一种翻拍图像识别方法、装置、计算机设备及存储介质
CN110598704B (zh) 基于深度学习的车牌识别无感支付系统
CN111898540A (zh) 车道线检测方法、装置、计算机设备及计算机可读存储介质
US20220309809A1 (en) Vehicle identification profile methods and systems at the edge
KR20200126743A (ko) 차량 출입 통제 시스템 및 그 방법
CN112435475B (zh) 一种交通状态检测方法、装置、设备及存储介质
KR20160043197A (ko) 에지에 바운더리 코드를 포함하는 차량 번호판, 번호판의 에지에 제공되는 바운더리 코드를 이용한 차량 번호 인식 장치, 시스템, 및 그 방법
CN113780083A (zh) 一种手势识别方法、装置、设备及存储介质
CN113327337A (zh) 一种确保车辆正常缴费出场的方法
CN106201400A (zh) 一种车载输入视频显示控制装置及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20962995

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020962995

Country of ref document: EP

Effective date: 20230612

NENP Non-entry into the national phase

Ref country code: DE