CN111433779A - System and method for identifying road characteristics

Info

Publication number: CN111433779A
Application number: CN201880002003.5A
Authority: CN (China)
Prior art keywords: images, training, road, feature, identification
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 高钰舒, 许鹏飞, 韩佳彤
Current Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee: Beijing Didi Infinity Technology and Development Co Ltd
Application filed by Beijing Didi Infinity Technology and Development Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads, of traffic signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A method for road feature recognition may include acquiring an image associated with a road segment, wherein the image is associated with an identification requirement. The method may further include processing the image to identify a road feature within the road segment using a trained feature recognition model associated with the identification requirement, wherein obtaining the trained feature recognition model includes: acquiring a first set of images; acquiring at least two training images based on the first set of images, wherein the training images are associated with the identification requirement; labeling the road feature in the at least two training images; and training an initial feature recognition model using the training images including the labeled road feature to generate the trained feature recognition model.

Description

System and method for identifying road characteristics
Technical Field
The present application relates to image processing, and more particularly, to systems and methods for identifying road features in an image.
Background
The development of the Internet has brought greater demand for navigation services. In some cases, an electronic navigation map may recommend a route and/or a parking location for a user (e.g., a driver). However, in conventional processes for determining recommended routes and/or parking locations, certain road features that affect the user's selection are not considered correctly or in a timely manner. For example, a fence or a no-parking sign may be temporarily or permanently installed, so that a previously available parking location is no longer available. Conventional map updates may take too long to incorporate these fences into the map promptly, which may result in a poor user experience. Accordingly, it is desirable to provide systems and methods that can efficiently and timely identify road features in road segments and provide route and/or parking suggestions based on the identified road features.
Disclosure of Invention
According to a first aspect of the present application, a system for identifying road features is provided. The system may include one or more storage media and one or more processors configured to communicate with the one or more storage media. The one or more storage media may contain a set of instructions. When executing the instructions, the one or more processors may be directed to perform the following operations. The one or more processors may acquire an image relating to a road segment, wherein the image is associated with an identification requirement. The one or more processors may identify a road feature within the road segment by processing the image using a trained feature recognition model associated with the identification requirement. The trained feature recognition model may be provided by the following process: acquiring a first set of images; acquiring at least two training images based on the first set of images, wherein the training images are associated with the identification requirement; labeling the road feature in the at least two training images; and training an initial feature recognition model using the training images containing the labeled road feature to generate the trained feature recognition model.
In some embodiments, the identification requirement may include at least one of a target identification accuracy, an identification condition, or an application scenario of the road feature identification.
In some embodiments, the target recognition accuracy may be associated with an intersection over union (IoU).
In some embodiments, labeling the road feature in the at least two training images may include: comparing the target recognition accuracy with an accuracy threshold; in response to determining that the target recognition accuracy is greater than the accuracy threshold, labeling a shape and a location of the road feature associated with the identification requirement in the at least two training images; and in response to determining that the target recognition accuracy is equal to or less than the accuracy threshold, labeling an area of the road feature associated with the identification requirement in the at least two training images.
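A minimal sketch of this accuracy-dependent labeling choice is given below. It assumes polygon outlines for the fine (shape and location) labels and axis-aligned bounding boxes for the coarse (area) labels; the function name, label format, and the 0.7 threshold are illustrative assumptions, not values defined by this application.

```python
# Hypothetical sketch of the accuracy-dependent labeling strategy described above;
# names, label formats, and the 0.7 threshold are illustrative assumptions.
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def make_label(target_accuracy: float,
               outline: List[Point],
               accuracy_threshold: float = 0.7) -> Dict:
    """Return a training label for one road feature in one training image."""
    if target_accuracy > accuracy_threshold:
        # High target accuracy: keep the full shape and location (polygon outline).
        return {"type": "polygon", "points": outline}
    # Otherwise a coarse region is sufficient: collapse the outline to a bounding box.
    xs, ys = zip(*outline)
    return {"type": "bbox", "box": (min(xs), min(ys), max(xs), max(ys))}
```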
In some embodiments, acquiring at least two training images based on the first set of images may include: determining whether an image in the first set of images meets the identification requirement; and in response to determining that the images in the first set of images do not meet the identification requirements, processing the images in the first set of images to meet the identification requirements.
In some embodiments, processing images in the first set of images to satisfy the identification requirement may include: changing the brightness of the images in the first set of images; changing the color of the images in the first set of images; rotating the images in the first set of images; or changing the viewing angle of the images in the first set of images, as illustrated in the sketch below.
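The following sketch applies these four adjustments as simple whole-image transforms with Pillow (assumed version 9.1 or later for the enum names). It is a minimal illustration under those assumptions, not the implementation described in this application, and the parameter values are arbitrary.

```python
# Hypothetical augmentation sketch using Pillow (>= 9.1); parameter values are illustrative.
from PIL import Image, ImageEnhance

def adjust_to_requirement(img: Image.Image,
                          brightness: float = 1.3,   # >1 brightens, <1 darkens
                          color: float = 0.8,        # <1 desaturates
                          rotation_deg: float = 5.0, # small in-plane rotation
                          shear: float = 0.1) -> Image.Image:
    """Derive a training image that better matches a given identification requirement."""
    out = ImageEnhance.Brightness(img).enhance(brightness)   # change brightness
    out = ImageEnhance.Color(out).enhance(color)             # change color saturation
    out = out.rotate(rotation_deg, expand=True)              # rotate the image
    # Approximate a viewing-angle change with an affine shear transform.
    out = out.transform(out.size, Image.Transform.AFFINE,
                        (1.0, shear, 0.0, 0.0, 1.0, 0.0),
                        resample=Image.Resampling.BILINEAR)
    return out

# Usage: augmented = adjust_to_requirement(Image.open("road_segment.jpg"))
```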
In some embodiments, the training feature recognition model may be further provided by the following process: acquiring a second set of images associated with the identification requirements; and determining whether the training feature recognition model satisfies the recognition requirements based on the second set of images.
In some embodiments, the one or more processors may acquire one or more additional images related to the road segment, wherein the additional images are associated with the identification requirement. The one or more processors may identify the road feature within the road segment using a trained feature recognition model associated with the recognition requirements.
In some embodiments, the one or more processors may update a map in the user terminal by highlighting the road feature on the map.
In some embodiments, the one or more processors may send a message to a user terminal instructing the user terminal to display an alert related to the road feature.
According to another aspect of the present application, a method for identifying road characteristics may include one or more of the following operations. The one or more processors may acquire an image relating to a road segment, wherein the image is associated with an identification requirement. The one or more processors may identify road features within the road segment by processing images using a training feature recognition model associated with the recognition requirements. The training feature recognition model may be provided by the following process. Acquiring a first set of images; acquiring at least two training images based on the first set of images; marking the road features in the at least two training images; an initial feature recognition model is trained using the training image containing the labeled road features to generate a training feature recognition model. The training images may be associated with the recognition requirements.
According to another aspect of the present application, a system for identifying road features may include an image acquisition module to acquire an image associated with a road segment, wherein the image is associated with an identification requirement. The system may include a feature recognition module to identify road features within the road segment by processing an image using a trained feature recognition model associated with the recognition requirements. The training feature recognition model may be provided by the following process. Acquiring a first set of images; acquiring at least two training images based on the first set of images; marking the road features in the at least two training images; an initial feature recognition model is trained using the training image containing the labeled road features to generate a training feature recognition model. The training images may be associated with the recognition requirements.
According to another aspect of the present application, a non-transitory computer-readable medium may include at least one set of instructions. The at least one set of instructions may be executable by one or more processors of a computer server. The one or more processors may acquire an image relating to a road segment, wherein the image is associated with an identification requirement. The one or more processors may identify road features within the road segment by processing images using a training feature recognition model associated with the recognition requirements. The training feature recognition model may be provided by the following process. Acquiring a first set of images; acquiring at least two training images based on the first set of images; marking the road features in the at least two training images; an initial feature recognition model is trained using the training image containing the labeled road features to generate a training feature recognition model. The training images may be associated with the recognition requirements.
Additional features will be set forth in the description which follows, and in part will be readily apparent to those skilled in the art from that description or recognized by practicing the embodiments or the examples described herein. The features of the present application may be achieved by practice or use of various aspects of the methods, instrumentalities and combinations discussed in detail in the following examples.
Drawings
The present application is further described in terms of exemplary embodiments. The embodiments will be further explained by means of the detailed drawing description. The described embodiments are non-limiting exemplary embodiments in which like reference numerals represent similar structures in at least two views of the drawings and wherein:
FIG. 1 is an exemplary schematic diagram of a road feature identification system shown in accordance with some embodiments of the present application;
FIG. 2 is an exemplary diagram illustrating hardware and/or software components of a computing device that may implement a processing engine according to some embodiments of the present application;
FIG. 3 is an exemplary diagram illustrating hardware and/or software components of a mobile device that may implement a user terminal according to some embodiments of the present application;
FIG. 4 is an exemplary block diagram of a processing engine shown in accordance with some embodiments of the present application;
FIG. 5 is an exemplary flow diagram illustrating identifying road features in an image according to some embodiments of the present application; and
FIG. 6 is an exemplary flow diagram illustrating the generation of a trained feature recognition model according to some embodiments of the present application.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the application and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present application. Thus, the present application is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprise" and "include" merely indicate that the expressly identified integers, devices, acts, features, steps, elements, operations, and/or components are included and do not constitute an exclusive list; a method or device may also contain other integers, devices, acts, features, steps, elements, operations, components, and/or groups thereof.
These and other features and characteristics of the present application, as well as the methods of operation and functions of the related structures and the details of manufacture and economies of manufacture, will become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this application. It should be understood that the drawings are for illustrative purposes only and are not intended to limit the scope of the present disclosure in any way. The drawings are not drawn according to real scale.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be expressly understood that the operations in the flowcharts may be performed out of order. Rather, various steps may be processed in reverse order or simultaneously. One or more other operations may also be added to, or one or more operations may be removed from, these flowcharts.
The systems and methods of the present application may be applied to various transportation systems, including terrestrial, marine, aerospace, etc., or any combination thereof. The vehicles of the transportation system may include taxis, private cars, trailers, buses, trains, bullet trains, high speed railways, subways, ships, airplanes, space vehicles, hot air balloons, unmanned vehicles, bicycles, tricycles, motorcycles, and the like, or any combination thereof. The system and method of the present application may be applied to taxi calls, driver services, delivery services, carpooling, bus services, takeaway services, driver renting, vehicle renting, bicycle sharing services, train services, subway services, regular bus services, location services, map services, and the like.
In the process of identifying road features (e.g., road blocks, traffic signs, fences, traffic markings, etc.) in an image, different images associated with different road segments may have different identification requirements, such as the type of road feature to be identified, the identification accuracy of the road feature to be identified, etc. To this end, the systems and methods herein may use different training feature recognition models to recognize road features in different images with different recognition requirements.
In generating a training feature recognition model corresponding to a specific recognition requirement, at least two training images satisfying the specific recognition requirement need to be acquired. If the computer processor is unable to obtain training images from existing images that meet certain recognition requirements, more images need to be taken. Alternatively, rather than retaking a new image that satisfies a particular recognition requirement, one or more existing images may be modified to obtain a training image that satisfies the particular recognition requirement; this approach may reduce the cost of image acquisition.
FIG. 1 is an exemplary schematic diagram of a road feature identification system shown in accordance with some embodiments of the present application. The road feature identification system 100 may include a server 110, a network 120, a user terminal 140, a storage device 150, and a positioning system 160.
In some embodiments, the server 110 may be a single server or a group of servers. The set of servers can be centralized or distributed (e.g., the servers 110 can be a distributed system). In some embodiments, the server 110 may be local or remote. For example, server 110 may access information and/or data stored in user terminal 140 or storage device 150 via network 120. Also for example, server 110 may be directly connected to user terminal 140, and/or storage device 150 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform. For example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, inter-cloud, multi-cloud, and the like, or any combination thereof. In some embodiments, the server 110 may be implemented on a computing device 200 having one or more components described in FIG. 2 of the present application.
In some embodiments, the server 110 may include a processing engine 112. The processing engine 112 may perform one or more functions described herein by processing information and/or data. For example, the processing engine 112 may identify road features within road segments by processing images using a trained feature recognition model. In some embodiments, the processing engine 112 may include one or at least two processing engines (e.g., a single-chip processing engine or a multi-chip processing engine). Merely by way of example, the processing engine 112 may include one or more hardware processors, such as a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination of the above examples.
The network 120 may facilitate the exchange of information and/or data. In some embodiments, information and/or data may be sent to or from other components in the road feature identification system 100 (e.g., the server 110, the user terminal 140, the storage device 150, and the positioning system 160) via the network 120. For example, the processing engine 112 may retrieve images related to road segments from the storage device 150 via the network 120. In some embodiments, the network 120 may be any one of, or a combination of, a wired network or a wireless network.
In some embodiments, the user terminal 140 may be associated with a user (e.g., a driver, passenger, or courier) of the road feature recognition system 100. The user terminal 140 may include a mobile device 140-1, a tablet computer 140-2, a laptop computer 140-3, a built-in device 140-4 in a vehicle, or the like, or any combination thereof. In some embodiments, the mobile device 140-1 may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a smart appliance control device, a smart monitoring device, a smart television, a smart video camera, a smart television receiver, a smart speaker, or the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, smart clothing, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant (PDA), a gaming device, a navigation device, a POS device, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include Google Glass™, RiftCon™, Fragments™, Gear VR™, or the like. In some embodiments, the built-in device 140-4 in the vehicle may include an on-board computer, an on-board television, a tachograph, or the like. In some embodiments, the user terminal 140 may be a device having positioning technology for locating the user and/or the location of the user terminal 140.
In some embodiments, the user terminal 140 may communicate with other positioning devices (e.g., the positioning system 160) to determine the location of the user and/or the user terminal 140. In some embodiments, the user terminal 140 may send location information to the server 110.
Storage device 150 may store data and/or instructions. In some embodiments, the storage device 150 may store data obtained from the user terminal 140 and/or the server 110. For example, the storage device 150 may store an image acquired from the user terminal 140. In some embodiments, storage device 150 may store data and/or instructions that server 110 uses to perform or use to perform the exemplary methods described in this application. For example, the storage device 150 may store instructions for processing images to identify road features within a road segment using a trained road feature recognition model, which may be executed by the processing engine 112. In some embodiments, storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like. Exemplary removable memory may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tape, and the like. Exemplary volatile read and write memories can include Random Access Memory (RAM). Exemplary RAM may include Dynamic RAM (DRAM), double-data-rate synchronous dynamic RAM (DDR SDRAM), Static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitance RAM (Z-RAM), and the like. Exemplary ROMs may include Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. In some embodiments, the storage device 150 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, inter-cloud, multi-cloud, and the like, or any combination thereof.
In some embodiments, a storage device 150 may be connected to the network 120 to communicate with one or more components (e.g., the server 110, the user terminal 140, the positioning system 160) in the road feature recognition system 100. One or more components of the road feature identification system 100 may access data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be directly connected to or in communication with one or more components in the road feature recognition system 100 (e.g., the server 110, the user terminal 140, the positioning system 160). In some embodiments, the storage device 150 may be part of the server 110.
The positioning system 160 may determine position information associated with an object (e.g., the user terminal 140). In some embodiments, the positioning system 160 may be a Global Positioning System (GPS), a Global Navigation Satellite System (GLONASS), a COMPASS navigation system, a BeiDou navigation satellite system, a Galileo positioning system, a Quasi-Zenith Satellite System (QZSS), or the like. The position may be in the form of coordinates, such as a latitude coordinate and a longitude coordinate. The positioning system 160 may include one or more satellites, such as a satellite 160-1, a satellite 160-2, and a satellite 160-3. The satellites 160-1 through 160-3 may determine such information independently or collectively. The positioning system 160 may transmit such information to the network 120 or the user terminal 140 via a wireless connection.
FIG. 2 is an exemplary diagram illustrating hardware and/or software components of a computing device that may implement processing engine 112 according to some embodiments of the present application. As shown in fig. 2, computing device 200 may include a processor 210, memory 220, input/output 230, and communication ports 240.
In accordance with the techniques described herein, the processor 210 (e.g., logic circuitry) may execute computer instructions (e.g., program code) and perform the functions of the processing engine 112. For example, the processor 210 may include an interface circuit 210-a and a processing circuit 210-b therein. The interface circuit may be used to receive electronic signals (not shown in fig. 2) from the bus, where the electronic signals encode/include configuration data and/or instructions for processing by the processing circuit. The processing circuitry may perform logical computations and then determine conclusions, results, and/or instructions encoded as electronic signals. The interface circuit may then send an electronic signal from the processing circuit over the bus.
In some embodiments, the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of performing one or more functions, or the like, or any combination thereof.
For illustration only, only one processor is depicted in the computing device 200. It should be noted, however, that the computing device 200 in the present application may also include at least two processors, and thus operations and/or method steps described in the present application as performed by one processor may also be performed by at least two processors, jointly or separately. For example, if the processors of the computing device 200 perform steps A and B in the present application, it should be understood that steps A and B may also be performed jointly or separately by two different processors of the computing device 200 (e.g., a first processor performing step A and a second processor performing step B, or a first processor and a second processor jointly performing steps A and B).
The memory 220 may store data/information from the user terminal 140, the storage device 150, and/or any other component of the road feature identification system 100. In some embodiments, memory 220 may include mass storage, removable storage, volatile read-write storage, read-only memory (ROM), the like, or any combination thereof. For example, the mass storage may include magnetic disks, optical disks, solid state drives, and the like. The removable memory may include a flash memory drive, a floppy disk, an optical disk, a memory card, a compact disk, a magnetic tape, etc. The volatile read-write memory may include Random Access Memory (RAM). The RAM may include Dynamic RAM (DRAM), double-data-rate synchronous dynamic RAM (DDR SDRAM), Static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitance RAM (Z-RAM), and the like. The ROM may include Masked ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), compact disk ROM (CD-ROM), digital versatile disk ROM, and the like. For example, the memory 220 may store a program run by the processing engine 112 that may be used to identify road features within a road segment by processing images using a trained feature recognition model.
The input/output 230 may input and/or output signals, data, information, etc. In some embodiments, the input/output 230 may enable interaction between a user and the processing engine 112. In some embodiments, the input/output 230 may include an input device and an output device.
The communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communication. The communication port 240 may establish a connection between the processing engine 112 and the user terminal 140, the positioning system 160, or the storage device 150. The connection may be a wired connection, a wireless connection, any other communication connection that may enable data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone line, or the like, or any combination thereof. The wireless connection may include, for example, a Bluetooth™ connection, or the like. In some embodiments, the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, or the like.
Fig. 3 is an exemplary diagram illustrating hardware and/or software components of a mobile device that may implement the user terminal 140 according to some embodiments of the present application. As shown in FIG. 3, the mobile device 300 may include a communication platform 310, a display 320, a graphics processor (GPU) 330, a central processing unit (CPU) 340, an input/output interface 350, a memory card 360, and a memory 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 300. In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™, etc.) and one or more applications 380 may be loaded from the memory 390 into the memory card 360 for execution by the CPU 340. The application 380 (e.g., a taxi-hailing application) may include a browser or any other suitable mobile application for receiving and presenting information related to transportation services or other information from the processing engine 112. User interaction with the information flow may be accomplished via the input/output interface 350 and provided to the processing engine 112 and/or other components of the road feature recognition system 100 via the network 120. For example, the road features may be displayed on the user terminal 140 through the display 320 after being transmitted to a service requester. As another example, a service provider may input an image associated with a road segment via the input/output interface 350.
To implement the various modules, units, and functions thereof described herein, a computer hardware platform may be used as the hardware platform for one or more of the components described herein. A computer with user interface components may be used to implement a Personal Computer (PC) or any other type of workstation or terminal device. The computer, when suitably programmed, can act as a server.
Those skilled in the art will appreciate that when a component of the road feature recognition system 100 performs a function, the component may perform the function via electrical and/or electromagnetic signals. For example, when processing engine 112 processes a task, such as making a determination or identifying information, processing engine 112 may execute logic circuitry in its processor to process the task. When the processing engine 112 receives data (e.g., one or more images) from the user terminal 140, the processor of the processing engine 112 may receive an electrical signal encoding/containing the data. The processor of processing engine 112 may receive electrical signals through one or more information exchange ports. If the user terminal 140 communicates with the processing engine 112 over a wired network, the information exchange port may be physically connected to a cable. If the user terminal 140 is in communication with the processing engine 112 via a wireless network, the information exchange port of the processing engine 112 may be one or more antennas that may convert electrical signals into electromagnetic signals. In an electronic device, such as user terminal 140 and/or server 110, when a processor of the electronic device processes instructions, the processor sends the instructions and/or performs actions, which are conducted via electrical signals. For example, when a processor retrieves or stores data from a storage medium (e.g., storage device 150), it may send electrical signals to a read/write device of the storage medium, which may read or write the structured data in the storage medium. The structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device. Herein, an electrical signal may refer to one electrical signal, a series of electrical signals, and/or at least two discrete electrical signals.
FIG. 4 is an exemplary block diagram of a processing engine shown in accordance with some embodiments of the present application. The processing engine 112 may include an image acquisition module 410, a feature recognition module 420, and a model acquisition module 430.
The image acquisition module 410 may be used to acquire images associated with road segments. As used herein, a road segment may refer to a portion of a road (e.g., a highway, a street). The location and/or length of the road segment may be predetermined by the road feature recognition system 100 or may be adjusted according to the particular use and/or intended purpose of the image.
In some embodiments, the user terminal 140 may establish communication (e.g., wireless communication) with the server 110 over the network 120 using an application (e.g., application 380 in fig. 3) installed in the user terminal 140. The application may be associated with the road feature recognition system 100. For example, the application may be a taxi cab application associated with the road feature recognition system 100. If one or more images are captured by the camera of the user terminal 140 while the application is running, the application may instruct the user terminal 140 to send the captured one or more images to the storage device 150 and/or the server 110 (e.g., the processing engine 112). In some embodiments, the image acquisition module 410 may acquire images from an electronic device (e.g., the user terminal 140) in real-time. In some embodiments, the image acquisition module 410 may acquire images from a storage medium (e.g., storage device 150, storage device 220 of processing engine 112). In some embodiments, the images may be taken by other devices, such as a tachograph. In some embodiments, one or more images may be extracted from a video captured by the user terminal 140. In some embodiments, the image may be taken by a camera that may or may not belong to the user terminal 140. For example, the images may be extracted from a video taken by a tachograph, wherein, in some embodiments, the video may be used for at least two purposes (e.g., recording driving experience and monitoring changes in road characteristics).
The feature identification module 420 may be used to identify road features within the road segment. In some embodiments, the feature recognition module 420 may identify road features within the road segment by processing the image using a trained feature recognition model.
In some embodiments, a road feature may refer to one or more target objects that belong to the same category associated with the road segment in the image. For example, the target object may include a fence, a sign, a traffic light, a vehicle, a street light, an overpass, a building, a traffic sign, and the like. In some embodiments, vehicle parking may be affected by blocking and/or restricting access to certain areas in the road segment. For example, the road feature may be a fence, a road fence, or a restricted-passage traffic marking. In some embodiments, correct and timely identification of road features (e.g., road features that affect parking) is important to the user experience of online-to-offline services.
In some embodiments, the image may be associated with an identification requirement. As used herein, an identification requirement may refer to a set of at least two identification requirements that identify a road feature in an image. For example, the identification requirements may include identification categories, identification conditions associated with the image, application scenarios for identifying road features in the image, target identification accuracy for identifying road features in the image, or the like, or any combination thereof. In some embodiments, the identification category may refer to a category of road features in the image that need to be identified. The recognition conditions associated with the image may include light conditions (e.g., the image shows a relatively brighter or darker environment), field of view conditions, rotation conditions, resolution conditions, zoom conditions, weather conditions (e.g., the image is taken on a rainy, sunny, or foggy day), traffic conditions (e.g., the image shows a congested or clear road segment), and the like, or any combination thereof. Exemplary application scenarios for identifying road features in an image may include updating maps, generating high precision maps, navigating, determining traffic conditions, recommending parking locations (e.g., getting-on and getting-off locations in a taxi service) to a user (e.g., a driver), and the like.
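As a concrete illustration, an identification requirement of this kind could be represented as in the sketch below. The dataclass and its field names are illustrative assumptions made for this example, not structures defined by this application.

```python
# Hypothetical representation of an identification requirement; all names are
# illustrative assumptions rather than structures defined in this application.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdentificationRequirement:
    category: str                    # e.g., "fence", "sign", "traffic_marking"
    target_accuracy: float           # target IoU for the identified feature, e.g., 0.9
    light_condition: Optional[str] = None       # e.g., "bright", "dark"
    weather_condition: Optional[str] = None     # e.g., "rainy", "sunny", "foggy"
    application_scenario: Optional[str] = None  # e.g., "map_update", "parking_recommendation"

# Example: identify fences with IoU >= 0.9 in images taken in a bright environment.
requirement = IdentificationRequirement(
    category="fence",
    target_accuracy=0.9,
    light_condition="bright",
    application_scenario="parking_recommendation",
)
```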
In some embodiments, the target recognition accuracy for identifying road features in the image may be represented by an intersection over union (IoU) between the real road feature and the identified road feature. The real road feature may refer to the road feature actually shown in the image. The feature identification module 420 may identify a road feature in the image by generating a prediction window in the image; the content of the prediction window may be taken as the identified road feature. The IoU between the real road feature and the identified road feature may be determined based on the intersection area and the union area of the real road feature and the identified road feature. For example, the IoU between the real road feature and the identified road feature may be determined by the following formula (1):
IOU_P = Area(T ∩ I_1) / Area(T ∪ I_1)        (1)

where IOU_P denotes the IoU between the real road feature and the identified road feature, Area(T) denotes the area of the real road feature in the image, and Area(I_1) denotes the area of the identified road feature in the image.
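For axis-aligned bounding boxes, formula (1) can be computed as in the following sketch; the box representation and function name are assumptions made for illustration only.

```python
# Hypothetical IoU computation for axis-aligned bounding boxes (x_min, y_min, x_max, y_max);
# this mirrors formula (1): intersection area divided by union area.
from typing import Tuple

Box = Tuple[float, float, float, float]

def iou(true_box: Box, predicted_box: Box) -> float:
    ax1, ay1, ax2, ay2 = true_box
    bx1, by1, bx2, by2 = predicted_box
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area = Area(T) + Area(I_1) - intersection.
    area_t = (ax2 - ax1) * (ay2 - ay1)
    area_i = (bx2 - bx1) * (by2 - by1)
    union = area_t + area_i - inter
    return inter / union if union > 0 else 0.0

# Example: iou((10, 10, 50, 50), (30, 30, 70, 70)) == 400 / 2800, roughly 0.14.
```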
In some embodiments, if two or more road features are to be identified in the image, the identification requirement may include different target recognition accuracies associated with the two or more road features. For example, suppose there are two types of road features (e.g., fences and signs) in the image. In the identification requirement, the target recognition accuracy for identifying fences may be 0.9, and the target recognition accuracy for identifying signs may be 0.5.
In some embodiments, if two or more road features are to be identified in the image, the feature recognition module 420 may use a single trained feature recognition model to identify the two or more road features. Alternatively, the feature recognition module 420 may use different trained feature recognition models to identify the two or more road features separately. For example, suppose two types of road features (e.g., fences and signs) are included in the image, the target recognition accuracy for identifying fences is 0.9, and the target recognition accuracy for identifying signs is 0.5. In some embodiments, the feature recognition module 420 may identify the two road features using a single trained feature recognition model, wherein the training categories of the training requirement of that model include fences and signs, the training recognition accuracy associated with fences may be greater than or equal to 0.9, and the training recognition accuracy associated with signs may be greater than or equal to 0.5. Alternatively, the feature recognition module 420 may identify fences using a first trained feature recognition model, wherein the training category of its training requirement includes fences and its training recognition accuracy may be greater than or equal to 0.9, and identify signs using a second trained feature recognition model, wherein the training category of its training requirement includes signs and its training recognition accuracy may be greater than or equal to 0.5.
In some embodiments, after identifying road features in the road segment by processing the image using the trained feature recognition model, the feature recognition module 420 may generate recognition results related to identifying the road features. The recognition results associated with recognizing the road feature may include recognizing a location of the road feature, recognizing a size of the road feature, recognizing a shape of the road feature, recognizing a likelihood that the road feature may belong to a certain category (e.g., a sign or traffic marking), identifying a number of one or more target objects associated with the road feature (e.g., a number of vehicles in the road segment, a number of fences in the road segment), and the like, or any combination thereof. For example, identifying a location of a road feature may indicate identifying the road feature as being located on the left or right side of the road segment. As another example, identifying a road feature may include at least two fences. The location at which the road feature is identified may be indicative of a distance between any two adjacent fences of the at least two fences. As another example, the identifying road feature may be a fence comprising at least two bars perpendicular to the ground. The shape of the identified road feature may represent the distance between any two adjacent bars.
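For concreteness, a recognition result of the kind enumerated above might be represented as in the following sketch; the dictionary keys and values are illustrative assumptions, not a format defined by this application.

```python
# Hypothetical example of a recognition result for one identified road feature;
# keys and values are illustrative only.
recognition_result = {
    "category": "fence",               # identified category of the road feature
    "probability": 0.93,               # likelihood the feature belongs to this category
    "location": "left",                # side of the road segment where the feature lies
    "bbox": (0.10, 0.35, 0.35, 0.60),  # normalized bounding box (x_min, y_min, x_max, y_max)
    "object_count": 4,                 # e.g., number of fences detected in the road segment
}
```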
The model acquisition module 430 may be used to pre-generate the training feature recognition model and store the training feature recognition model in a storage medium (e.g., the storage device 150, the memory 220 of the processing engine 112). When the feature recognition module 420 recognizes road features in an image, the model acquisition module 430 may acquire a training feature recognition model from a storage medium. In some embodiments, the model acquisition module 430 may generate the training feature recognition model on-line while the feature recognition module 420 is recognizing road features in the image. In some embodiments, the third party device may pre-generate the training feature recognition model and store the training feature recognition model locally or in a storage medium (e.g., storage device 150, memory 220 of processing engine 112) of the feature recognition system 100. When the feature recognition module 420 recognizes road features in an image, the model acquisition module 430 may acquire the training feature recognition model from a storage medium of the feature recognition system 100 or a third party device. In some embodiments, when the feature recognition module 420 is recognizing road features in an image, the third-party device may generate a training feature recognition model online and send the training feature recognition model to the model acquisition module 430. More details on generating the trained feature recognition model may be found elsewhere in this application (e.g., FIG. 6).
In some embodiments, the training feature recognition model may be associated with training requirements. The training requirements for training the feature recognition model may refer to the function of training the feature recognition model. For example, the training requirements may include training categories, training conditions, application scenarios for training the feature recognition model, training recognition accuracy for training the feature recognition model, and the like, or any combination thereof. The training class may refer to a class of road features that the training feature recognition model may recognize. Exemplary training conditions may include lighting conditions (e.g., the training feature recognition model may identify road features in images taken in relatively bright or dark environments), field of view conditions, rotation conditions, resolution conditions, scaling conditions, weather conditions (e.g., the training feature recognition model may identify road features in images taken on rainy, sunny, or foggy days), traffic conditions (e.g., the training feature recognition model may identify road features in images showing congested or clear road segments), and the like, or any combination thereof. Example application scenarios for training the feature recognition model may include updating maps, generating high-precision maps, navigating, determining traffic conditions, recommending parking locations for a user (e.g., a driver) (e.g., an getting-on location and a getting-off location in a taxi service), and so forth. The training recognition accuracy may refer to an intersection-to-parallel ratio between the real road feature and the road feature recognized by the training feature recognition model.
In some embodiments, if the training categories of the training requirement include two or more categories, the training requirement may include different training recognition accuracies associated with the two or more categories. For example, suppose the training categories of the training requirement include fences and signs. In the training requirement, the training recognition accuracy for identifying fences may be 0.9, and the training recognition accuracy for identifying signs may be 0.5.
In some embodiments, the model acquisition module 430 may acquire the training feature recognition model according to training requirements associated with the recognition requirements. For example, if the target recognition accuracy of the recognition requirement is 0.92, the model acquisition module 430 may acquire or generate a training feature recognition model with a training recognition accuracy of greater than or equal to 0.92 of the training requirement. For another example, if a relatively bright environment is displayed in the acquired image (e.g., an average of pixel intensities in the image is greater than an intensity threshold), the model acquisition module 430 may acquire or generate a training feature recognition model whose training requirements of lighting conditions are associated with the relatively bright environment. As another example, if the recognition class of the recognition requirement is a fence, the model acquisition module 430 may acquire or generate a training feature recognition model whose training class of the training requirement includes a fence.
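The selection step described above could look like the sketch below, which picks a stored trained model whose training requirement covers a given identification requirement. The registry structure and matching rules are illustrative assumptions, not the module's actual implementation.

```python
# Hypothetical model-selection sketch: pick a trained model whose training requirement
# covers the identification requirement. Names and matching rules are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrainedModel:
    categories: List[str]          # training categories, e.g., ["fence"]
    recognition_accuracy: float    # training recognition accuracy (IoU)
    light_condition: Optional[str] = None

def select_model(models: List[TrainedModel],
                 category: str,
                 target_accuracy: float,
                 light_condition: Optional[str] = None) -> Optional[TrainedModel]:
    for model in models:
        if (category in model.categories
                and model.recognition_accuracy >= target_accuracy
                and (light_condition is None
                     or model.light_condition == light_condition)):
            return model
    return None  # no suitable model stored; one may need to be trained (see FIG. 6)

# Example: a requirement of "fence" with target IoU 0.92 would only match a stored
# model trained to an accuracy of at least 0.92.
```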
In some embodiments, the image acquisition module 410 and the feature recognition module 420 may be combined into a single module that acquires images associated with road segments and identifies road features within the road segments by processing the images using a trained feature recognition model.
It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Many variations and modifications may be made to the teachings of the present application by those of ordinary skill in the art in light of the present disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. For example, the processing engine 112 may further include a memory module (not shown in FIG. 4). The storage module may be used to store data generated by any process performed by any component in the processing engine 112. As another example, each portion of processing engine 112 may correspond to a respective memory module. In addition, the components of the processing engine 112 may share a memory module.
FIG. 5 is an exemplary flow diagram illustrating identifying road features in an image according to some embodiments of the present application. In some embodiments, the flow 500 may be applied in the system 100 as shown in fig. 1. For example, flow 500 may be stored as instructions in a storage medium (e.g., storage device 150 or memory 220 of processing engine 112) and may be invoked and/or executed by server 110 (e.g., processing engine 112 of server 110, processor 210 of processing engine 112, or one or more modules in processing engine 112 shown in fig. 4). The operations of the flow/method described below are merely exemplary. In some embodiments, one or more additional operations not described may be added and/or one or more operations described herein may be deleted in the implementation of the processes/methods. Further, the order in which the operations of flow 500 are illustrated in FIG. 5 and described below is not intended to be limiting. The process 500 may be used to identify some road characteristics.
In 510, the image acquisition module 410 (or the processing engine 112, and/or the interface circuit 210-a) may acquire an image associated with the road segment. As used herein, the road segment may refer to a portion of a road (e.g., highway, street). The location and/or length of the road segment may be predetermined by the road feature recognition system 100 or may be adjusted according to the particular use and/or intended purpose of the image.
In some embodiments, the user terminal 140 may establish communication (e.g., wireless communication) with the server 110 over the network 120 using an application (e.g., application 380 in fig. 3) installed in the user terminal 140. An application may be associated with the feature recognition system 100. For example, the application may be taxi-taking software associated with the feature recognition system 100. If the application is running and the camera of the user terminal 140 captures one or more images, the application may instruct the user terminal 140 to send the captured one or more images to the storage device 150 and/or the server 110 (e.g., the processing engine 112). In some embodiments, the image acquisition module 410 may acquire images from an electronic device (e.g., the user terminal 140) in real-time. In some embodiments, the image acquisition module 410 may acquire images from a storage medium (e.g., storage device 150, storage device 220 of processing engine 112). In some embodiments, one or more images may be extracted from a video captured by the user terminal 140. In some embodiments, the image may be taken by a camera that may or may not belong to the user terminal 140. For example, the images may be extracted from a video taken by a tachograph, wherein, in some embodiments, the video may be used for at least two purposes (e.g., recording driving experience and monitoring changes in road characteristics).
In 520, the feature identification module 420 (or the processing engine 112, and/or the flow circuit 210-b) may identify road features within the road segment. In some embodiments, the feature recognition module 420 may identify road features within the road segment by processing the image using a trained feature recognition model.
In some embodiments, a road feature may refer to one or more target objects that belong to the same category associated with the road segment in the image. For example, the target object may include a fence, a sign, a traffic light, a vehicle, a street light, an overpass, a building, a traffic sign, and the like. In some embodiments, vehicle parking may be affected by blocking and/or restricting access to certain areas in the road segment. For example, the road feature may be a fence, a road fence, or a restricted-passage traffic marking. In some embodiments, correct and timely identification of road features (e.g., road features that affect parking) is important to the user experience of online-to-offline services.
In some embodiments, the image may be associated with an identification requirement. As used herein, an identification requirement may refer to a set of at least two identification requirements that identify a road feature in an image. For example, the identification requirements may include identification categories, identification conditions associated with the image, application scenarios for identifying road features in the image, target identification accuracy for identifying road features in the image, or the like, or any combination thereof. In some embodiments, the identification category may refer to a category of road features in the image that need to be identified. The recognition conditions associated with the image may include light conditions (e.g., the image shows a relatively brighter or darker environment), field of view conditions, rotation conditions, resolution conditions, zoom conditions, weather conditions (e.g., the image is taken on a rainy, sunny, or foggy day), traffic conditions (e.g., the image shows a congested or clear road segment), and the like, or any combination thereof. Exemplary application scenarios may include updating maps, generating high-precision maps, navigating, determining traffic conditions, recommending parking locations (e.g., getting-on and getting-off locations in a taxi service) to a user (e.g., a driver), and so forth.
The target recognition accuracy for identifying road features in the image may be represented by an intersection over union (IoU) between the real road feature and the identified road feature. The real road feature may refer to the road feature actually shown in the image. The feature identification module 420 may identify a road feature in the image by generating a prediction window in the image; the content of the prediction window may be taken as the identified road feature. The IoU between the real road feature and the identified road feature may be determined based on the intersection area and the union area of the real road feature and the identified road feature. For example, the IoU between the real road feature and the identified road feature may be determined by the following formula (1):
$$\mathrm{IOU}_P = \frac{\mathrm{Area}(T \cap I_1)}{\mathrm{Area}(T \cup I_1)} \tag{1}$$

wherein $\mathrm{IOU}_P$ represents the intersection over union between the real road feature and the identified road feature; $\mathrm{Area}(T)$ represents the area of the real road feature in the image; and $\mathrm{Area}(I_1)$ represents the area of the identified road feature in the image.
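As an illustration only (treating the real road feature and the prediction window as axis-aligned rectangles is an assumption; the present application does not fix the geometry), the IOU of formula (1) could be computed as follows:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus intersection.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Real road feature T vs. identified road feature I1 (prediction window).
print(iou((10, 10, 110, 60), (30, 20, 130, 70)))  # roughly 0.47
```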
In some embodiments, if two or more road features are identified in the image, the identification requirement may include different target identification accuracies associated with the two or more road features. For example, suppose there are two categories of road features (e.g., fences and signs) in the image that need to be identified. In the identification requirement, the target identification accuracy for identifying fences may be 0.9, and the target identification accuracy for identifying signs may be 0.5.
In some embodiments, prior to 520, the feature identification module 420 may obtain the identification requirement. For example, an operator of the server 110 may enter the identification requirement (e.g., via the input/output 230). The feature identification module 420 may receive the operator input related to the identification requirement. As another example, after the image acquisition module 410 acquires an image, the feature recognition module 420 may automatically process the image to obtain the light conditions associated with the image. The light conditions may be used as a parameter of the identification requirement.
In some embodiments, the training feature recognition model may be generated online or offline. In some embodiments, the training feature recognition model may be generated by the processing engine 112 (e.g., the model acquisition module 430) or a third party device in communication with the feature recognition system 100. In some embodiments, the model acquisition module 430 may pre-generate the training feature recognition model and store the training feature recognition model in a storage medium (e.g., the storage device 150, the memory 220 of the processing engine 112). When the feature recognition module 420 recognizes road features in an image, the model acquisition module 430 may acquire a training feature recognition model from a storage medium. In some embodiments, when the feature recognition module 420 recognizes road features in an image, the model acquisition module 430 may generate a training feature recognition model online. In some embodiments, the third party device may pre-generate the training feature recognition model and store the training feature recognition model locally or in a storage medium (e.g., storage device 150, memory 220 of processing engine 112) of the feature recognition system 100. When the feature recognition module 420 recognizes road features in an image, the model acquisition module 430 may acquire the training feature recognition model from a storage medium of the feature recognition system 100 or a third party device. In some embodiments, when the feature recognition module 420 recognizes road features in an image, the third party device may generate a training feature recognition model online and send the training feature recognition model to the model acquisition module 430. More details on generating the trained feature recognition model may be found elsewhere in this application (e.g., FIG. 6).
In some embodiments, the training feature recognition model may be associated with training requirements. The training requirements of the training feature recognition model may refer to the functions that the training feature recognition model is intended to perform. For example, the training requirements may include training categories, training conditions, application scenarios of the training feature recognition model, a training recognition accuracy of the training feature recognition model, and the like, or any combination thereof. The training category may refer to a category of road features that the training feature recognition model can recognize. Exemplary training conditions may include light conditions (e.g., the training feature recognition model may identify road features in images taken in relatively bright or dark environments), field of view conditions, rotation conditions, resolution conditions, zoom conditions, weather conditions (e.g., the training feature recognition model may identify road features in images taken on rainy, sunny, or foggy days), traffic conditions (e.g., the training feature recognition model may identify road features in images showing congested or clear road segments), and the like, or any combination thereof. Exemplary application scenarios of the training feature recognition model may include updating maps, generating high-precision maps, navigating, determining traffic conditions, recommending parking locations (e.g., getting-on and getting-off locations in a taxi service) to a user (e.g., a driver), and so forth. The training recognition accuracy may refer to an intersection over union (IOU) between the real road feature and the road feature recognized by the training feature recognition model.
In some embodiments, if the training categories of the training requirements include two or more categories, the training requirements may include different training recognition accuracies associated with the two or more categories. For example, suppose the training categories of the training requirements include fences and signs. In the training requirements, the training recognition accuracy for recognizing fences may be 0.9, and the training recognition accuracy for recognizing signs may be 0.5.
In some embodiments, the model acquisition module 430 may acquire the training feature recognition model according to training requirements associated with the recognition requirements. For example, if the target recognition accuracy of the recognition requirement is 0.92, the model acquisition module 430 may acquire or generate a training feature recognition model whose training recognition accuracy is greater than or equal to 0.92. As another example, if a relatively bright environment is displayed in the captured image (e.g., the average brightness of the pixels in the image is greater than a brightness threshold), the model acquisition module 430 may acquire or generate a training feature recognition model whose training requirements include light conditions associated with a relatively bright environment. As another example, if the recognition category of the recognition requirement is a fence, the model acquisition module 430 may acquire or generate a training feature recognition model whose training categories of the training requirements include a fence.
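A minimal sketch of this matching logic follows; the dictionary keys, model names, and threshold values are illustrative assumptions rather than part of the present application:

```python
def matches(training_req, identification_req):
    """Return True if a model's training requirements cover an identification requirement.

    Both arguments are plain dicts with hypothetical keys 'categories',
    'conditions', and 'accuracy' (per-category IOU values).
    """
    # Every category that must be identified has to be a trained category.
    if not set(identification_req["categories"]) <= set(training_req["categories"]):
        return False
    # The identification conditions (e.g., light) must be among the trained conditions.
    if not set(identification_req["conditions"]) <= set(training_req["conditions"]):
        return False
    # The training recognition accuracy must reach the target accuracy per category.
    return all(training_req["accuracy"].get(cat, 0.0) >= acc
               for cat, acc in identification_req["accuracy"].items())

models = [
    {"name": "model_a", "categories": ["fence"], "conditions": ["bright"], "accuracy": {"fence": 0.95}},
    {"name": "model_b", "categories": ["fence", "sign"], "conditions": ["bright", "dark"],
     "accuracy": {"fence": 0.9, "sign": 0.6}},
]
need = {"categories": ["fence"], "conditions": ["bright"], "accuracy": {"fence": 0.92}}
print([m["name"] for m in models if matches(m, need)])  # ['model_a']
```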
In some embodiments, after identifying road features within the road segment by processing the image using the training feature recognition model, the feature recognition module 420 may generate a recognition result related to the identified road feature. The recognition result related to the identified road feature may include a location of the identified road feature, a size of the identified road feature, a shape of the identified road feature, a likelihood that the identified road feature belongs to a certain category (e.g., a sign or a traffic marking), a number of one or more target objects associated with the identified road feature (e.g., a number of vehicles in the road segment, a number of fences in the road segment), and the like, or any combination thereof. For example, the location of the identified road feature may indicate that the identified road feature is located on the left or right side of the road segment. As another example, the identified road feature may include at least two fences, and the location of the identified road feature may indicate a distance between any two adjacent fences of the at least two fences. As another example, the identified road feature may be a fence comprising at least two bars perpendicular to the ground, and the shape of the identified road feature may represent a distance between any two adjacent bars.
In some embodiments, if the image includes two or more road features that need to be identified, the feature recognition module 420 may use a single training feature recognition model to identify the two or more road features. Alternatively, the feature recognition module 420 may use different training feature recognition models to separately recognize the two or more road features. For example, suppose two categories of road features (e.g., fences and signs) are included in the image, the target recognition accuracy for recognizing fences is 0.9, and the target recognition accuracy for recognizing signs is 0.5. In some embodiments, the feature recognition module 420 may recognize the two road features using a single training feature recognition model, wherein the training categories of the training requirements of the training feature recognition model may include fences and signs, the training recognition accuracy associated with fences may be greater than or equal to 0.9, and the training recognition accuracy associated with signs may be greater than or equal to 0.5. Alternatively, the feature recognition module 420 may use a first training feature recognition model to recognize the fences, wherein the training categories of the training requirements of the first training feature recognition model may include fences, and the training recognition accuracy of the training requirements may be greater than or equal to 0.9. The feature recognition module 420 may recognize the signs using a second training feature recognition model, wherein the training categories of the training requirements of the second training feature recognition model may include signs, and the training recognition accuracy of the training requirements may be greater than or equal to 0.5.
In some embodiments, the processing engine 112 may obtain at least two images associated with the road segment. The at least two images may be associated with the identification requirement. The processing engine 112 may process the images individually or simultaneously to identify road features within the road segment based on the process 500. In some embodiments, the at least two images may be selected and/or processed from images acquired by a user terminal or another type of device (e.g., a tachograph). In some embodiments, the at least two images may be selected and/or processed from images acquired from different terminals and/or devices. For example, a plurality of drivers may be arranged to record a road segment (or to photograph the road segment) as they pass by, using the devices associated with the plurality of drivers. The captured images may be used in the process 500 to identify road features. In some embodiments, the presence, location, and/or dimensions of the road features identified among the at least two images may mutually verify one another. In some embodiments, the presence, location, and/or dimensions of the road features identified among the at least two images may be contradictory. If a conflict occurs, a selection process may be performed to further filter the images to obtain more reasonable results. Alternatively, if a conflict arises, the identification requirement (e.g., a higher target accuracy, stricter identification conditions, and/or fewer application scenarios) may be adjusted to produce more reasonable results.
In some embodiments, the identified road features contained by the image may be used to generate a high precision map. By way of example only, the user terminal 140 (e.g., smartphone, tachograph) may capture an image and transmit the captured image to the server 110 (e.g., processing engine 112) in real-time or at some point in time after capture. The server 110 may process the image to identify road features based on the process 500. In some embodiments, an existing electronic map may be updated by adding the identified road features. As one or more identified road features are added, the accuracy of the map may be improved.
In some embodiments, images containing identified road features may be used for navigation. By way of example only, the user terminal 140 (e.g., a smartphone, a tachograph) may capture images and send the captured images to the server 110 (e.g., the processing engine 112) in real time. The server 110 may process the images to identify a traffic sign and/or a traffic line based on the process 500. For example, if the server 110 identifies a no-U-turn traffic sign in an image, the server 110 may send a message related to the no-U-turn traffic sign to the user terminal 140.
In some embodiments, the image including the identified road feature may be used to determine traffic conditions for the road segment (e.g., a congested road segment or a clear road segment). For example, the user terminal 140 (e.g., a smartphone, a tachograph) may capture an image and transmit the captured image to the server 110 in real time or at some point in time after the capture. The server 110 may process the image to identify vehicles and/or traffic lights within the road segment based on the process 500. The server 110 may determine whether the road segment is congested based on the number of vehicles in the road segment. For example, when it is determined that the number of vehicles in the road segment is greater than the number threshold, the server 110 may determine that the road segment is congested at a current or previous point in time. When it is determined that the number of vehicles in the road segment is less than or equal to the number threshold, the server 110 may determine that the road segment is clear. In some embodiments, the server 110 may determine the cause of the congestion for the road segment based on the identified traffic lights. For example, when it is determined that the traffic light in the road segment is red, the server 110 may determine that the traffic light may cause the road segment to be congested. When the traffic light for the road segment is determined to be green or the road segment is determined to have no traffic light, the server 110 may determine that the congestion for the road segment may be caused by an event (e.g., a traffic accident, a concert, or inclement weather). The server 110 may transmit information describing whether the section is congested and the reason for the congestion to the user terminal 140 and instruct the user terminal 140 to display the message. For example, the server 110 may instruct the user terminal 140 to display the blocked road segments using red and the clear road segments using green on the electronic map. For another example, the server 110 may instruct the user terminal 140 to display the cause of the road section congestion in the form of one or a combination of text, picture, voice, video, and the like on the electronic map.
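The congestion determination described above reduces to a count threshold check plus a traffic light status check. A minimal sketch follows; the threshold value and the returned labels are illustrative assumptions:

```python
def assess_congestion(vehicle_count, traffic_light_state, count_threshold=20):
    """Classify a road segment and guess a congestion cause from identified features.

    traffic_light_state: "red", "green", or None if no traffic light was identified.
    """
    if vehicle_count <= count_threshold:
        return {"status": "clear", "cause": None}
    if traffic_light_state == "red":
        cause = "traffic light"
    else:
        # Green light or no light identified: congestion likely caused by an event
        # (e.g., traffic accident, concert, inclement weather).
        cause = "event"
    return {"status": "congested", "cause": cause}

print(assess_congestion(vehicle_count=35, traffic_light_state="red"))
print(assess_congestion(vehicle_count=8, traffic_light_state=None))
```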
In some embodiments, images containing identified road features may be used to recommend a parking location (e.g., a getting-on location or a getting-off location in a taxi service) to a user (e.g., a driver). By way of example only, the server 110 may determine an initial parking location. By performing the flow 500, the server 110 (e.g., the processing engine 112) may obtain (e.g., from a storage medium or in real time from the user terminal 140) one or more images containing the initial parking location and determine whether any road features (e.g., fences, no-parking signs) exist in the images that render the initial parking location no longer suitable for parking. If it is determined that there is no road feature that makes the initial parking location no longer suitable for parking, the server 110 may recommend the initial parking location to the user terminal 140. If it is determined that there is at least one road feature that makes the initial parking location no longer suitable for parking, the server 110 may determine a new parking location or send a message to the user terminal 140 to cause the user terminal 140 to display a warning about the at least one road feature that makes the initial parking location no longer suitable for parking.
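As a sketch only (the category names and message format are illustrative assumptions), the parking decision described above could be expressed as:

```python
BLOCKING_FEATURES = {"fence", "no_parking_sign", "restricted_traffic_marking"}

def check_parking_location(identified_features):
    """Return a recommendation for an initial parking location.

    identified_features: iterable of category names identified in images
    that contain the initial parking location.
    """
    blockers = sorted(set(identified_features) & BLOCKING_FEATURES)
    if not blockers:
        return {"recommend": True, "warning": None}
    return {"recommend": False,
            "warning": f"location unsuitable for parking: {', '.join(blockers)}"}

print(check_parking_location(["street_light", "vehicle"]))    # recommended
print(check_parking_location(["fence", "no_parking_sign"]))   # warning issued
```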
In some embodiments, the image including the identified road features may be used to update a map (e.g., an electronic map) in the user terminal 140 by highlighting one or more new road features (e.g., fences, traffic signs, traffic lights, street lights, elevated roads, buildings, traffic markings, etc.). By way of example only, by performing flow 500, server 110 may obtain (e.g., from a storage medium or in real-time from user terminal 140) at least two images and identify one or more road features based on the images. The server 110 may compare the identified road features with road features already present in the map and highlight new road features in the map based on the comparison.
It should be noted that the foregoing is provided for illustrative purposes only and is not intended to limit the scope of the present application. Many variations and modifications may be made to the teachings of the present application by those of ordinary skill in the art in light of the present disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
FIG. 6 is an exemplary flow diagram illustrating the generation of a training feature recognition model according to some embodiments of the present application. In some embodiments, the process 600 may be implemented in the feature recognition system 100 as shown in FIG. 1. For example, the flow 600 may be stored in the form of instructions in a storage medium (e.g., the storage device 150 or the memory 220 of the processing engine 112) and may be invoked and/or executed by the server 110 (e.g., the processing engine 112 of the server 110, the processor 210 of the processing engine 112, or one or more modules in the processing engine 112 shown in FIG. 4). The operations of the flow 600 described below are merely exemplary. In some embodiments, one or more additional operations not described may be added, and/or one or more of the operations described herein may be omitted, when performing the flow 600. Further, the order of the operations in the flow 600 as shown in FIG. 6 and described below is not limiting. In some embodiments, the training feature recognition model described at 520 in FIG. 5 may be obtained based on the flow 600.
In 610, the model acquisition module 430 (or the processing engine 112, and/or the processing circuit 210-b) may acquire a first set of images associated with at least two road segments. In some embodiments, the model retrieval module 430 may retrieve the first set of images from a storage medium (e.g., storage device 150, memory 220 of processing engine 112).
In 620, the model acquisition module 430 (or the processing engine 112, and/or the processing circuitry 210-b) may acquire at least two training images based on the first set of images.
The training images may be associated with the training requirements. For example, if the light conditions of the training requirements are associated with a relatively bright environment, the training images may show a relatively bright environment. As another example, if the training category of the training requirements is a traffic marking, one or more traffic markings may be included in at least one training image.
In some embodiments, the model acquisition module 430 may select at least two training images from the first set of images. In some embodiments, if the computer processor (e.g., processing engine 112) is unable to acquire training images associated with the training requirements from existing images (e.g., the first set of images), the model acquisition module 430 may modify at least one image in the first set of images to acquire the training images, rather than taking new images associated with the training requirements, which may reduce the cost of image acquisition.
In some embodiments, modifying at least one image in the first set of images to obtain the training image may include the following operations. The model acquisition module 430 may determine whether the images in the first set of images satisfy the training requirements. For example, if the light conditions required for training are associated with a relatively bright environment, the model acquisition module 430 may determine whether the images in the first set of images show a relatively bright environment. If it is determined that the images in the first set of images do not meet the training requirements, the model acquisition module 430 may process the images in the first set of images to meet the training requirements. Processing the images in the first set of images to meet the training requirements may include changing a brightness of the images in the first set of images, changing a color of the images in the first set of images, rotating the images in the first set of images, changing a perspective of the images in the first set of images, and the like or any combination thereof. For example, the light conditions required for training may be associated with a relatively bright environment. The images in the first set of images show a relatively dark environment. The model acquisition module 430 may increase the brightness and/or change the color of the images in the first set of images and determine the processed images with increased brightness and/or changed color as training images.
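A minimal sketch of such modifications, using the Pillow library, is shown below; the specific transforms and their magnitudes are illustrative assumptions, not values prescribed by the present application:

```python
from PIL import Image, ImageEnhance

def adapt_to_bright_requirement(path, brightness_factor=1.8, rotation_deg=5):
    """Brighten (and optionally rotate) an image so it better matches a
    training requirement associated with a relatively bright environment."""
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(brightness_factor)  # raise brightness
    img = ImageEnhance.Color(img).enhance(1.2)                     # mildly adjust color
    img = img.rotate(rotation_deg, expand=True)                    # small rotation variant
    return img

# Usage (illustrative): adapt_to_bright_requirement("road_segment.jpg").save("road_segment_bright.jpg")
```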
In 630, the model acquisition module 430 (or the processing engine 112, and/or the processing circuitry 210-b) may label road features in at least two training images. In some embodiments, the road feature may be manually marked, automatically marked, or semi-automatically marked.
In some embodiments, the model acquisition module 430 may label road features in at least two training images based on training requirements. For example, if the training category of the training requirement is a fence, the model acquisition module 430 may mark a fence in the at least two training images.
In some embodiments, the model acquisition module 430 may label road features in at least two training images based on the training recognition accuracy of the training requirements.
In some embodiments, the model acquisition module 430 may compare the training recognition accuracy with an accuracy threshold (e.g., 0.5, 0.6, 0.7, 0.8, or 0.9). If the comparison indicates that the training recognition accuracy is greater than the accuracy threshold, the model acquisition module 430 may label the shape and location of the road feature in the at least two training images. If the comparison indicates that the training recognition accuracy is less than or equal to the accuracy threshold, the model acquisition module 430 may label an area containing the road feature in the at least two training images.
In some embodiments, the intersection ratio between the marked road features and the real road features may be equal to the training recognition accuracy in the training requirements.
In some embodiments, the model acquisition module 430 may determine positive samples and negative samples based on the labeled road features in the training images. The model acquisition module 430 may determine at least two bounding boxes in a training image. The positive samples and negative samples may be determined based on an IOU between a bounding box in the training image and the labeled road feature. The IOU between the labeled road feature and the bounding box may be equal to the ratio of the intersection area to the union area of the labeled road feature and the bounding box. For example, if the IOU between the labeled road feature and the bounding box is greater than or equal to an IOU threshold (e.g., 0.3, 0.4, 0.5, 0.6, 0.7, or 0.8), the bounding box may be determined to be a positive sample. If the IOU between the labeled road feature and the bounding box is less than the IOU threshold, the bounding box may be determined to be a negative sample.
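A minimal sketch of this positive/negative split, assuming axis-aligned boxes and an illustrative IOU threshold of 0.5:

```python
def iou(a, b):
    """IOU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def split_samples(labeled_feature_box, candidate_boxes, iou_threshold=0.5):
    """Assign each bounding box to positive or negative samples by its IOU
    with the labeled road feature."""
    positives, negatives = [], []
    for box in candidate_boxes:
        (positives if iou(labeled_feature_box, box) >= iou_threshold else negatives).append(box)
    return positives, negatives

pos, neg = split_samples((10, 10, 110, 60), [(20, 10, 120, 60), (200, 200, 260, 240)])
print(len(pos), len(neg))  # 1 1
```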
In 640, the model acquisition module 430 (or the processing engine 112, and/or the processing circuitry 210-b) may train the initial feature recognition model using the training images containing the labeled road features to generate a training feature recognition model associated with the training requirements. For example, the initial feature recognition model may include a convolutional neural network model, an adaptive boosting (AdaBoost) model, a gradient boosting decision tree model, and the like, or any combination thereof. In some embodiments, the model acquisition module 430 may input the labeled training images (e.g., the positive samples and negative samples) into the initial feature recognition model to generate the training feature recognition model.
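By way of a hedged illustration only — the present application names convolutional neural network models among the possible initial models but does not prescribe a framework — a fine-tuning step on labeled training images might look like the following PyTorch/torchvision sketch (a recent torchvision is assumed):

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_initial_model(num_classes):
    """Detection model used here as a stand-in for the initial feature recognition model."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train_one_epoch(model, data_loader, optimizer, device):
    """One pass over training images with labeled road features (boxes + class labels)."""
    model.train()
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)   # detection losses returned in training mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Usage (illustrative): model = build_initial_model(num_classes=3)  # background + fence + sign
```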
In some embodiments, after generating the training feature recognition model associated with the training requirements, the model acquisition module 430 may test the training feature recognition model to determine whether the training feature recognition model satisfies the training requirements (e.g., determine whether the training feature recognition model may identify the training class with a training recognition accuracy in the training requirements).
The model acquisition module 430 may acquire a second set of images associated with the training requirements of the training feature recognition model. For example, if the training conditions of the training requirements are associated with a relatively bright environment, the second set of images may show a relatively bright environment. As another example, if the training categories of the training requirements include a fence, at least one image in the second set of images may include one or more fences. In some embodiments, the model acquisition module 430 may acquire the second set of images from a storage medium (e.g., the storage device 150, the memory 220 of the processing engine 112). In some embodiments, the second set of images may be different from the first set of images.
The model acquisition module 430 may determine whether the training feature recognition model satisfies the training requirements based on the second set of images. In some embodiments, road feature classes in the second set of images that are consistent with training classes in the training requirements may be labeled manually, automatically or semi-automatically. The process of marking road features in the second set of images may be based on the description at 630. The model obtaining module 430 may input the second group of images with the marked road features into the training feature recognition model, obtain recognition results of the second group of images, and determine whether the training feature recognition model can achieve the training recognition accuracy based on the recognition results of the second group of images.
In some embodiments, the model obtaining module 430 may determine whether the training feature recognition model can achieve the training recognition accuracy based on an intersection over union (IOU) between the labeled road features and the recognized road features in the second set of images, an accuracy rate, a recall rate, or the like, or any combination thereof.
In some embodiments, the model acquisition module 430 may determine whether the training feature recognition model is able to achieve the training recognition accuracy based on an average of the intersection ratios between the labeled road features and the recognized road features in the second set of images. If the average value is greater than or equal to an average threshold (e.g., 0.5, 0.6, 0.7, 0.8, 0.9), the model acquisition module 430 may determine that the training feature recognition model may achieve the training recognition accuracy. If the average is less than the average threshold, the model acquisition module 430 may determine that the training feature recognition model cannot achieve the training recognition accuracy.
In some embodiments, the intersection ratio between the labeled road features and the identified road features of the images in the second set of images may be determined based on the intersection area and the union area between the labeled road features and the identified road features of the images in the second set of images. For example, the intersection ratio between the marked road features and the identified road features of the images in the second set of images may be determined based on equation (2) below:
$$\mathrm{IOU}_T = \frac{\mathrm{Area}(M \cap I_2)}{\mathrm{Area}(M \cup I_2)} \tag{2}$$

wherein $\mathrm{IOU}_T$ represents the intersection over union between the labeled road features and the identified road features in the second set of images; $\mathrm{Area}(M)$ represents the area of the labeled road feature in the second set of images; and $\mathrm{Area}(I_2)$ represents the area of the identified road feature in the second set of images.
In some embodiments, the model acquisition module 430 may determine whether the accuracy rate is greater than an accuracy rate threshold. If the accuracy is greater than or equal to an accuracy threshold (e.g., 0.5, 0.6, 0.7, 0.8, 0.9), the model acquisition module 430 may determine that the training feature recognition model is capable of achieving the training recognition accuracy. If the accuracy rate is less than the accuracy rate threshold, the model acquisition module 430 may determine that the training feature recognition model cannot achieve the training recognition accuracy.
In some embodiments, the model acquisition module 430 may determine whether the recall is greater than a recall threshold. If the recall is greater than or equal to the recall threshold (e.g., 0.5, 0.6, 0.7, 0.8, 0.9), the model acquisition module 430 may determine that the training feature recognition model may achieve the training recognition accuracy. If the recall rate is less than the recall rate threshold, the model acquisition module 430 may determine that the training feature recognition model cannot achieve the training recognition accuracy.
In some embodiments, the identified road features in the second set of images may fall into four types: true positive examples, true negative examples, false positive examples, and false negative examples. A true positive example may refer to an identified road feature that belongs to a training category and is predicted by the training feature recognition model to belong to the training category. A true negative example may refer to an identified road feature that does not belong to a training category and is predicted by the training feature recognition model not to belong to the training category. A false positive example may refer to an identified road feature that does not belong to a training category but is predicted by the training feature recognition model to belong to the training category. A false negative example may refer to an identified road feature that belongs to a training category but is predicted by the training feature recognition model not to belong to the training category. The model acquisition module 430 may determine the accuracy rate and the recall rate according to the numbers of true positive, true negative, false positive, and false negative examples. For example, the model acquisition module 430 may determine the accuracy rate and the recall rate according to equations (3) and (4) below:
$$R_{acc} = \frac{TP}{TP + FP} \tag{3}$$

$$R_{recall} = \frac{TP}{TP + FN} \tag{4}$$

wherein $R_{acc}$ represents the accuracy rate; $TP$ represents the number of true positive examples; $FP$ represents the number of false positive examples; $R_{recall}$ represents the recall rate; and $FN$ represents the number of false negative examples.
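The accuracy rate and recall rate of equations (3) and (4) follow directly from the four counts; a minimal sketch, with illustrative example counts:

```python
def precision_recall(tp, fp, fn):
    """Accuracy rate R_acc and recall rate R_recall from equations (3) and (4)."""
    r_acc = tp / (tp + fp) if (tp + fp) else 0.0
    r_recall = tp / (tp + fn) if (tp + fn) else 0.0
    return r_acc, r_recall

# Example: 80 true positives, 10 false positives, 20 false negatives.
print(precision_recall(80, 10, 20))  # (0.888..., 0.8)
```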
In some embodiments, if the training requirements include different training recognition accuracies for recognizing different training classes, the model acquisition module 430 may separately determine whether the training feature recognition model is capable of achieving each training recognition accuracy according to the above description.
It should be noted that the foregoing description is provided for illustrative purposes only, and is not intended to limit the scope of the present application. Many variations and modifications may be made to the teachings of the present application by those of ordinary skill in the art in light of the present disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. For example, the process 600 in the present application may be performed by other devices, such as third party devices in communication with the feature recognition system 100.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing disclosure is by way of example only, and is not intended to limit the present application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, this application uses specific language to describe embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
A computer readable signal medium may comprise a propagated data signal with computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, and the like, or any suitable combination. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer readable signal medium may be propagated over any suitable medium, including radio, electrical cable, fiber optic cable, radio frequency signals, or the like, or any combination of the preceding.
Computer program code required for operation of portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages, and the like.
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.

Claims (31)

1. A road feature identification system, comprising:
at least one storage medium comprising a set of instructions;
at least one processor in communication with the at least one storage medium, wherein the at least one processor, when executing the instructions, is directed to cause the system to:
acquiring an image relating to a road segment, wherein the image is associated with an identification requirement; and
processing the image to identify road features within the road segment by using a training feature recognition model associated with the recognition requirements,
wherein obtaining the training feature recognition model comprises:
acquiring a first set of images;
acquiring at least two training images based on the first set of images, wherein the training images are associated with the recognition requirements;
marking the road features in the at least two training images; and
training an initial feature recognition model using the training images containing the labeled road features to generate the training feature recognition model.
2. The system of claim 1, wherein the identification requirement comprises at least one of an object identification accuracy, an identification condition, or an application scenario of the road feature.
3. The system of claim 2, wherein the target recognition accuracy is associated with an intersection over union (IOU).
4. The system of claim 2 or 3, wherein the labeling the road features in the at least two training images comprises:
comparing the target identification precision with a precision threshold;
if the target recognition accuracy is greater than the accuracy threshold value as a result of the comparison, marking the shape and the position of the road feature associated with the recognition requirement in the at least two training images; and
if the target identification precision is smaller than or equal to the precision threshold value as a result of comparison, marking an area containing the road feature associated with the identification requirement in the at least two training images.
5. The system of any of claims 1-4, wherein the acquiring the at least two training images based on the first set of images comprises:
determining whether one of the images in the first set of images satisfies the identification requirement; and
processing said one of said first set of images to satisfy said identification requirement if said one of said first set of images does not satisfy said identification requirement.
6. The system of claim 5, wherein said processing said one of said first set of images to satisfy said identification requirement comprises:
changing the brightness of said one of said first set of images;
changing the color of said one of the images in the first set of images;
rotating the one of the images in the first set of images; or
Changing a perspective of the one of the images in the first set of images.
7. The system of any of claims 1-6, wherein obtaining the trained feature recognition model further comprises:
acquiring a second set of images associated with the identification requirements; and
determining whether the training feature recognition model satisfies the recognition requirements based on the second set of images.
8. The system of any one of claims 1-7, wherein the at least one processor is instructed to cause the system to:
acquiring one or more additional images relating to the road segment, wherein the additional images are associated with the identification requirement; and
identifying the road feature within the road segment using the training feature recognition model associated with the recognition requirement.
9. The system of any one of claims 1-8, wherein the at least one processor is instructed to cause the system to:
updating a map in a user terminal by highlighting the road feature on the map of the user terminal.
10. The system of any one of claims 1-8, wherein the at least one processor is instructed to cause the system to:
and sending information to a user terminal, and instructing the user terminal to display a warning related to the road characteristic.
11. A road feature recognition method implemented on a computing device having at least one storage device and at least one processor, comprising:
acquiring an image relating to a road segment, wherein the image is associated with an identification requirement; and
processing the image to identify road features within the road segment by using a training feature recognition model associated with the recognition requirements;
wherein obtaining the training feature recognition model comprises:
acquiring a first set of images;
acquiring at least two training images based on the first set of images, wherein the training images are associated with the recognition requirements;
marking the road features in the at least two training images; and
training an initial feature recognition model using the training images containing the labeled road features to generate the training feature recognition model.
12. The method of claim 11, wherein the recognition requirement comprises at least one of an object recognition accuracy, a recognition condition, or an application scenario of the road feature.
13. The method of claim 12, wherein the target recognition accuracy is associated with an intersection over union (IOU).
14. The method of claim 12 or 13, wherein the labeling the road features in the at least two training images comprises:
comparing the target identification precision with a precision threshold;
if the target recognition accuracy is greater than the accuracy threshold value as a result of the comparison, marking the shape and the position of the road feature associated with the recognition requirement in the at least two training images; and
if the target identification precision is smaller than or equal to the precision threshold value as a result of comparison, marking an area containing the road feature associated with the identification requirement in the at least two training images.
15. The method of any of claims 11-14, wherein the acquiring the at least two training images based on the first set of images comprises:
determining whether one of the images in the first set of images satisfies the identification requirement; and
processing said one of said first set of images to satisfy said identification requirement if said one of said first set of images does not satisfy said identification requirement.
16. The method of claim 15, wherein said processing said one of said first set of images to satisfy said identification requirement comprises:
changing the brightness of said one of said first set of images;
changing the color of said one of the images in the first set of images;
rotating the one of the images in the first set of images; or
Changing a perspective of the one of the images in the first set of images.
17. The method of any of claims 11-16, wherein the obtaining the training feature recognition model further comprises:
acquiring a second set of images associated with the identification requirements; and
determining whether the training feature recognition model satisfies the recognition requirements based on the second set of images.
18. The method of any one of claims 11-17, further comprising:
acquiring one or more additional images relating to the road segment, wherein the additional images are associated with the identification requirement; and
identifying the road feature within the road segment using the training feature recognition model associated with the recognition requirement.
19. The method of any one of claims 11-18, further comprising:
updating a map in a user terminal by highlighting the road feature on the map of the user terminal.
20. The method of any one of claims 11-18, further comprising:
and sending information to a user terminal, and instructing the user terminal to display a warning related to the road characteristic.
21. A road feature identification system, comprising:
an image acquisition module for acquiring an image relating to a road segment, wherein the image is associated with an identification requirement; and
a feature recognition module to process the image to identify road features within the road segment by using a training feature recognition model associated with the recognition requirements,
wherein obtaining the training feature recognition model comprises:
acquiring a first set of images;
acquiring at least two training images based on the first set of images, wherein the training images are associated with the recognition requirements;
marking the road features in the at least two training images; and
training an initial feature recognition model using the training images containing the labeled road features to generate the training feature recognition model.
22. The system of claim 21, wherein the identification requirement comprises at least one of an object identification accuracy, an identification condition, or an application scenario of the road feature.
23. The system of claim 22, wherein the target recognition accuracy is associated with an intersection over union (IOU).
24. The system of claim 22 or 23, wherein the labeling the road features in the at least two training images comprises:
comparing the target identification precision with a precision threshold;
if the target recognition accuracy is greater than the accuracy threshold value as a result of the comparison, marking the shape and the position of the road feature associated with the recognition requirement in the at least two training images; and
if the target identification precision is smaller than or equal to the precision threshold value as a result of comparison, marking an area containing the road feature associated with the identification requirement in the at least two training images.
25. The system of any of claims 21-24, wherein the acquiring the at least two training images based on the first set of images comprises:
determining whether one of the images in the first set of images satisfies the identification requirement; and
processing said one of said first set of images to satisfy said identification requirement if said one of said first set of images does not satisfy said identification requirement.
26. The system of claim 25, wherein said processing said one of said first set of images to satisfy said identification requirement comprises:
changing the brightness of said one of said first set of images;
changing the color of said one of the images in the first set of images;
rotating the one of the images in the first set of images; or
Changing a perspective of the one of the images in the first set of images.
27. The system of any of claims 21-26, wherein obtaining the trained feature recognition model further comprises:
acquiring a second set of images associated with the identification requirements; and
determining whether the training feature recognition model satisfies the recognition requirements based on the second set of images.
28. The system according to any one of claims 21-27,
the image acquisition module is also used for acquiring one or more additional images related to the road segment, wherein the additional images are associated with the identification requirement; and
the feature recognition module is further to recognize the road feature within the road segment using the trained feature recognition model associated with the recognition requirement.
29. The system of any of claims 21-28, wherein the feature identification module is further to:
updating a map in a user terminal by highlighting the road feature on the map of the user terminal.
30. The system of any of claims 21-28, wherein the feature identification module is further to:
and sending information to a user terminal, and instructing the user terminal to display a warning related to the road characteristic.
31. A computer-readable medium comprising at least one set of instructions for identifying road features, wherein the at least one set of instructions, when executed by one or more processors of a computing device, cause the one or more processors to perform a method comprising:
acquiring an image relating to a road segment, wherein the image is associated with an identification requirement; and
processing the image to identify road features within the road segment by using a training feature recognition model associated with the recognition requirements,
wherein obtaining the training feature recognition model comprises:
acquiring a first set of images;
acquiring at least two training images based on the first set of images, wherein the training images are associated with the recognition requirements;
marking the road features in the at least two training images; and
training an initial feature recognition model using the training images containing the labeled road features to generate the training feature recognition model.