CN110012351B - Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system - Google Patents

Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system

Info

Publication number
CN110012351B
CN110012351B (application CN201910290166.4A)
Authority
CN
China
Prior art keywords
vehicle
video frame
information
frame image
tag data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910290166.4A
Other languages
Chinese (zh)
Other versions
CN110012351A (en)
Inventor
肖月庭
李扬
阳光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Tatfook Technology Co Ltd
Original Assignee
Shenzhen Tatfook Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tatfook Technology Co Ltd
Priority to CN201910290166.4A
Publication of CN110012351A
Application granted
Publication of CN110012351B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/025 Services making use of location information using location based information parameters
    • H04W4/027 Services making use of location information using location based information parameters using movement velocity, acceleration information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tag data acquisition method applied at the vehicle end. The method acquires a video frame image captured by the current vehicle; acquires vehicle information of other vehicles within a preset area around the current vehicle; and, from the vehicle information and the video frame image, determines the tag data matched with each surrounding vehicle in the video frame image and generates an image set carrying the tag data. In this way a large amount of image data can be labelled in real time, providing plentiful and credible training data for a deep learning system, enlarging the available data set and substantially helping to improve the accuracy of the deep learning system. Moreover, because the tag data are produced at the vehicle end, where the amount of data to be received is small, the tag data are formed quickly, the data can be processed in real time, and the difficulty the cloud would otherwise face in processing data affected by time offsets is reduced. The application further provides a memory, a terminal, a vehicle and an Internet of Vehicles system having the above technical advantages.

Description

Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system
Technical Field
The invention relates to the technical field of deep learning, and in particular to a tag data acquisition method, a memory, a terminal, a vehicle and an Internet of Vehicles system.
Background
Deep learning is a branch of machine learning based on representation learning of data. An observation (for example, an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a set of edges, regions of particular shape, and so on. Some tasks (for example, face recognition or facial expression recognition) are easier to learn from examples when a suitable representation is used. One benefit of deep learning is that efficient unsupervised or semi-supervised feature learning and hierarchical feature extraction algorithms replace hand-crafted feature engineering. Supervised learning, however, requires tag (label) data.
Tagged data is difficult to obtain for deep learning systems: in practice it can only be produced manually or semi-automatically by humans, and is therefore very expensive. How to obtain a large amount of tagged data is thus a technical problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a tag data acquisition method, a memory, a terminal, a vehicle and an Internet of vehicles system, which can provide a large amount of data with tags for a deep learning system.
In order to solve the above technical problem, the present invention provides a tag data obtaining method, applied to a vehicle, including:
acquiring a video frame image acquired by a current vehicle;
acquiring vehicle information of other vehicles in a preset area around the current vehicle;
and determining label data matched with each peripheral vehicle in the video frame image according to the vehicle information and the video frame image, and generating an image set with the label data.
Optionally, the determining, according to the vehicle information and the video frame image, tag data matched with each neighboring vehicle in the video frame image includes:
analyzing image information in the video frame image to acquire first speed information corresponding to the surrounding vehicle at a first preset moment;
according to the vehicle information, determining second speed information corresponding to the surrounding vehicle at the first preset time;
and determining label data matched with each surrounding vehicle in the video frame image by using the first speed information and the second speed information.
Optionally, the determining, according to the vehicle information, second speed information corresponding to the nearby vehicle at the first preset time includes:
and when the time sequence data in the vehicle information does not contain the first preset moment, calculating the second speed information corresponding to the surrounding vehicle at the first preset moment by interpolation.
Optionally, the determining, according to the vehicle information and the video frame image, tag data matched with each neighboring vehicle in the video frame image further includes:
and determining label data matched with each surrounding vehicle in the video frame image by using the speed information, the video frame image and the vehicle color information in the vehicle information.
Optionally, the determining, according to the vehicle information and the video frame image, tag data matched with each neighboring vehicle in the video frame image includes:
and when the matching cannot be carried out by using the speed information and the color information, determining label data matched with each peripheral vehicle in the video frame image by using the vehicle type information and/or the license plate number information.
Optionally, the generating the image set with the tag data includes:
generating a set of video frame images with tag data at different moments;
or further processing the video frame images carrying the tag data at different moments, cutting each vehicle out of the video frame images and making the resulting single-vehicle images into separate image sets.
Optionally, after the generating the image set with the tag data, the method further includes:
and sending the image set with the tag data to a cloud server.
The present invention also provides a memory having stored thereon a computer program which, when executed by a processor, performs the steps of any of the above described tag data acquisition methods.
The present invention also provides a vehicle-mounted terminal, including a memory and a processor, the memory storing a computer program that implements any of the above-described tag data acquisition methods when executed by the processor.
The invention also provides a vehicle which comprises a vehicle body and the vehicle-mounted terminal, wherein the vehicle-mounted terminal is installed on the vehicle body.
The invention also provides an Internet of Vehicles system which comprises a cloud server and at least two vehicles as described above.
The tag data acquisition method provided by the invention is applied at the vehicle end: it acquires the video frame image captured by the current vehicle; acquires vehicle information of other vehicles within a preset area around the current vehicle; and, from the vehicle information and the video frame image, determines the tag data matched with each surrounding vehicle in the video frame image and generates an image set carrying the tag data. In this way a large amount of image data can be labelled in real time, which provides plentiful and credible training data for a deep learning system, enlarges the available data set and substantially helps to improve the accuracy of the deep learning system. Moreover, because the tag data are produced at the vehicle end, where the amount of data to be received is small, the tag data are formed quickly, the data can be processed in real time, and the difficulty the cloud would otherwise face in processing data affected by time offsets is reduced. The application further provides a memory, a terminal, a vehicle and an Internet of Vehicles system having the above technical advantages.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
FIG. 1 is a flow chart of one embodiment of the tag data acquisition method according to the present invention;
FIG. 2 is a flowchart of another embodiment of the tag data acquisition method according to the present invention;
FIG. 3 is a schematic diagram of an embodiment of the tag data acquisition method according to the present invention;
FIG. 4 is a flowchart illustrating an embodiment of matching a tag and an image according to vehicle information and a video frame image;
FIG. 5 is a flowchart illustrating another embodiment of matching labels and images according to vehicle information and video frame images in this embodiment;
fig. 6 is a block diagram of a tag data acquiring apparatus according to an embodiment of the present invention;
fig. 7 is a block diagram of a structure of a vehicle-mounted terminal according to an embodiment of the present disclosure;
fig. 8 is a block diagram of a vehicle networking system provided in an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the disclosure, the invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a flowchart of a specific embodiment of a tag data acquisition method according to the present invention, which is applied to a vehicle. The method specifically comprises the following steps:
step S101: acquiring a video frame image acquired by a current vehicle;
step S102: acquiring vehicle information of other vehicles in a preset area around the current vehicle;
as a specific implementation, the video frame image captured in step S101 may be further processed to analyze the vehicle information of the surrounding vehicle, in this case, step S102 follows step S101. As another specific embodiment, the vehicle information of the nearby vehicle may be passively acquired by broadcasting by the nearby vehicle, in which case there is no restriction in the order between step S101 and step S102.
The basic vehicle information comprises any one or any combination of the following: vehicle color information, sending time information, vehicle type information, license plate number information, position information and speed information. Of course, other information may also be included; no limitation is intended here. The position information of the vehicle may be acquired from a high-precision map or by lidar, and concretely may take the form of time-position sequence data, i.e. the vehicle's position sampled against real time. Other information, such as the vehicle type, license plate number, speed and color, may be obtained from sensors mounted on or in the vehicle. The video frame image may be captured by a camera, either one fixedly installed on the vehicle or the camera of a mobile phone; this does not affect the implementation of the invention.
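Collected together, one such record of a vehicle's information can be pictured as a small data structure. The sketch below is illustrative only; the field names and types are assumptions rather than anything prescribed by the embodiment:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VehicleInfo:
    """One record of a vehicle's basic information (illustrative field names)."""
    plate: str                              # license plate number
    model: str                              # vehicle type / model
    color: str                              # body color
    send_time: float                        # sending time of the record, in seconds
    speed: float                            # speed at send_time, metres per second
    position: Tuple[float, float]           # (x, y) from a high-precision map or lidar
    track: List[Tuple[float, float, float]] = field(default_factory=list)
    # accumulated time-position sequence data: (time, x, y) samples from past records
```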
Step S103: and determining label data matched with each peripheral vehicle in the video frame image according to the vehicle information and the video frame image, and generating an image set with the label data.
The vehicle matches the acquired vehicle information against the video frame image and determines which surrounding vehicle each vehicle appearing in the image corresponds to, that is, which tag data it matches. Once the tag data matched with each surrounding vehicle is determined, the corresponding tag data is added to the video frame image and an image set carrying the tag data is generated. For example, on the picture processed by car A, car B and car C may be framed out, with their corresponding tag data displayed alongside.
In the embodiment of the present application, each vehicle may automatically establish a local network in the preset area around itself; when other vehicles enter the preset area around the current vehicle, they automatically join this local network. The current vehicle can pack its own information and broadcast it to the other vehicles that have entered its local network, and those vehicles automatically receive the broadcast information. When a vehicle leaves the surrounding preset area, it automatically exits the local network and no longer receives the current vehicle's broadcast information.
As a specific embodiment, the local network may be established over a wireless local area network or over LTE-V, but other ways of implementing local communication are also possible; the invention is not limited to these two.
The vehicles share information by broadcasting: in broadcast mode, a vehicle can broadcast tag data representing its own related information. As a specific embodiment, the tag data may include related characteristic information such as the current vehicle speed, current position, vehicle model, license plate number and color. Other vehicles in the vicinity receive the broadcast information through the local network, and these received broadcasts constitute credible tagged information.
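As a non-authoritative sketch of the packing-and-broadcasting step, assuming an ordinary IP local network as the transport (the embodiment equally allows LTE-V or other local communication) and reusing the VehicleInfo record sketched above:

```python
import json
import socket
from dataclasses import asdict

LOCAL_NET_PORT = 37020   # hypothetical port used by this sketch, not part of the disclosure

def broadcast_vehicle_info(info: VehicleInfo) -> None:
    """Pack the current vehicle's own information and broadcast it on the local network."""
    payload = json.dumps(asdict(info)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", LOCAL_NET_PORT))

def collect_broadcasts(window: float = 0.1) -> list:
    """Passively receive broadcast records from surrounding vehicles,
    waiting up to `window` seconds for each datagram."""
    records = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", LOCAL_NET_PORT))
        sock.settimeout(window)
        try:
            while True:
                data, _ = sock.recvfrom(65535)
                records.append(json.loads(data.decode("utf-8")))
        except socket.timeout:
            pass
    return records
```

The transport, port and serialisation here are deployment choices; only the idea of packing the vehicle's own information and receiving the packs of the others matters for the method.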
Referring to fig. 2, a flowchart of a method provided by the present invention specifically includes:
step S201: receiving broadcast information sent by other vehicles around the current vehicle in a preset area, wherein the broadcast information comprises vehicle information representing the related information of the vehicles around the current vehicle;
as shown in the schematic diagram of the method provided in fig. 3, the vehicle a, the vehicle B, the vehicle C, and the vehicle D all broadcast, so that all vehicles in the local network range corresponding to the vehicles can receive the broadcast information of the vehicles. The tag data may specifically include strong related information including the color, speed, and current time of the vehicle, and other related information including the type of vehicle, the number plate, and the like. Strongly relevant information is used to quickly determine what kind, e.g. car, is. Other relevant information is used to determine the fine categories, such as vehicle type, size, etc.
The current vehicle receives broadcast information sent by other vehicles around, and then tag data of other vehicles can be obtained.
Step S202: acquiring a video frame image acquired by a current vehicle;
A preset camera device is used to capture the scene within the preset field of view of the current vehicle, and the captured video frame image is acquired. As a specific implementation, the camera may be fixed at the front of the vehicle, a terminal (for example mobile phone) camera may be used, or the image may be acquired with a radar system mounted on the vehicle; none of this affects the implementation of the invention.
Step S203: according to the broadcast information and the video frame image, determining label data matched with each peripheral vehicle in the video frame image, and generating an image set with the label data;
After the broadcast information and the video frame image are acquired, and since the broadcast information includes the related information of each surrounding vehicle, it is possible to determine which surrounding vehicle each vehicle appearing in the video frame image corresponds to, that is, which tag data it matches. Once the tag data matched with each surrounding vehicle is determined, the corresponding tag data is added to the video frame image and an image set carrying the tag data is generated. As shown in fig. 3, vehicle A, for example, receives within its surrounding preset area the broadcast information transmitted by vehicles B, C and D. Combining this with the images captured by vehicle A, it can be determined which detected vehicle corresponds to the tag data of vehicle B and which to the tag data of vehicle C. Specifically, car B and car C may be framed out on the image processed by car A, with the corresponding tag data displayed beside them.
Step S204: and sending the image set with the tag data to a cloud server.
Through matching, tag information is attached to the matched vehicles in the corresponding images, and the tag data for that moment is thereby established. The tagged image set is then uploaded to the cloud server, generating a large amount of usable tag data. In the embodiment of the application, the image set with tag data may be a set of video frame images at different moments, in which every surrounding vehicle in every video frame image is annotated with its tag data. It may instead be an image set of single vehicles each annotated with its tag data, obtained by secondary processing of the video frame images in which every vehicle is cut out of its frame and made into a separate image.
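The two forms of image set can be illustrated with a short sketch, assuming OpenCV is available on the vehicle end and that some detector supplies a pixel bounding box for each vehicle appearing in the frame (both assumptions, not requirements of the disclosure):

```python
import cv2  # OpenCV, assumed available on the vehicle end

def annotate_frame(frame, matches):
    """Form 1: keep the whole video frame, framing each matched vehicle and
    displaying its tag data beside the box.

    `matches` is a list of ((x, y, w, h), tag) pairs: a pixel bounding box and
    the broadcast record matched to that detection."""
    annotated = frame.copy()
    for (x, y, w, h), tag in matches:
        cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = f"{tag['color']} {tag['model']} {tag['plate']} {tag['speed']:.1f} m/s"
        cv2.putText(annotated, text, (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return annotated

def crop_vehicles(frame, matches):
    """Form 2: cut each matched vehicle out of the frame as a separate labelled image."""
    return [(frame[y:y + h, x:x + w].copy(), tag) for (x, y, w, h), tag in matches]
```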
In this embodiment, the tag data acquisition method receives the broadcast information sent by the other vehicles within the preset area around the current vehicle, the broadcast information containing vehicle information describing those surrounding vehicles; acquires the video frame image captured by the current vehicle; and, from the broadcast information and the video frame image, determines the tag data matched with each surrounding vehicle in the video frame image and generates an image set carrying the tag data. A large amount of image data can thus be labelled in real time, providing plentiful and credible training data for the deep learning system, enlarging the available data set and helping to improve the system's accuracy. Because the tag data are produced at the vehicle end, where the amount of data to be received is small, the tag data are formed quickly, the data can be processed in real time, and the difficulty the cloud would otherwise face in processing data affected by time offsets is reduced.
The matching of tags to the video frame image on the basis of the vehicle information can be implemented in several ways, described in further detail below. One method matches the vehicle information against vehicle speed information derived from the video frame image. As shown in fig. 4, a flowchart of this implementation of matching the tag and the image according to the vehicle information and the video frame image, the process may include:
step S301: analyzing image information in the video frame image to acquire first speed information corresponding to the surrounding vehicle at a first preset moment;
the vehicle acquires video frame images at a previous moment and a next moment of a first preset moment, and calculates first speed information corresponding to the first preset moment according to the position change conditions of the surrounding vehicle.
Step S302: according to the vehicle information, determining second speed information corresponding to the surrounding vehicle at the first preset time;
In this embodiment, the vehicle information should at least include the current time and the speed information. When the time sequence data in the vehicle information does not contain the first preset moment, the second speed information corresponding to the surrounding vehicle at the first preset moment is calculated by interpolation. Since the vehicle information includes speed information, if the image moment does not line up with the time-series data of the vehicle information, the speed at the first preset moment is estimated by interpolating the time-series vehicle information, so that data at the same moment as the video frame image are obtained.
In the interpolation, the missing value is preferentially estimated from vehicle information belonging to the same vehicle; whether two records belong to the same vehicle can be confirmed through the related information.
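A minimal sketch of the interpolation, assuming one surrounding vehicle's broadcasts have been accumulated into a time-ordered list of (time, speed) samples; the helper name and the fallback to the nearest endpoint are illustrative choices, not part of the disclosure:

```python
def interpolated_speed(samples, t_query):
    """Second speed at the first preset moment, linearly interpolated from the
    time-ordered (time, speed) samples carried by a vehicle's broadcasts."""
    if not samples:
        raise ValueError("no broadcast samples for this vehicle")
    if t_query <= samples[0][0]:
        return samples[0][1]
    if t_query >= samples[-1][0]:
        return samples[-1][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t_query <= t1:
            w = (t_query - t0) / (t1 - t0)
            return v0 + w * (v1 - v0)
```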
Step S303: and determining label data matched with each surrounding vehicle in the video frame image by using the first speed information and the second speed information.
After the first speed information and the second speed information are obtained, it can be judged whether the difference between them lies within a preset threshold range; if so, the vehicle information of the actual vehicle having the second speed is taken as the tag data of the vehicle having the first speed in the video frame image.
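The comparison then reduces to a threshold test. In the sketch below, the tolerance value and the `speed_samples` field (the accumulated (time, speed) sequence of one vehicle's broadcasts) are assumptions for illustration:

```python
SPEED_TOLERANCE = 1.5   # m/s; an illustrative preset threshold range

def match_by_speed(first_speed_value, broadcasts, t_query):
    """Return the broadcast record whose interpolated (second) speed is closest to
    the first speed measured from the video frames, provided the difference lies
    within the preset threshold range; otherwise return None."""
    best, best_diff = None, SPEED_TOLERANCE
    for record in broadcasts:
        second = interpolated_speed(record["speed_samples"], t_query)
        diff = abs(first_speed_value - second)
        if diff <= best_diff:
            best, best_diff = record, diff
    return best
```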
In addition, in the present embodiment the color information of the surrounding vehicles may also be used to determine the tag data matched with each surrounding vehicle in the video frame image. As a preferred embodiment, the image and the tag may be matched by comparing the speed and color carried in the tag data with, respectively, the moving speed of each moving object at the first preset moment calculated from the vehicle-end video frame images and the color of each object analysed from the image at the first preset moment.
Further, when matching cannot be accomplished using the speed information and the color information, the tag data matched with each surrounding vehicle in the video frame image may be determined using the vehicle type information and/or the license plate number information. Since the strongly related information comprises the vehicle's color, speed and the current time while the other related information comprises the model, the license plate and so on, judging from the strongly related information is fastest. Preferably, matching with the strongly related information is attempted first, and matching with the other related information continues only if that fails.
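This fallback order can be captured in a few lines. The `detection` dictionary (colour, speed and, when the image resolves them, plate and model) is a placeholder for whatever the vehicle-end image analysis produces, and the helpers reuse the sketches above:

```python
def match_detection(detection, broadcasts, t_query):
    """Match one detected vehicle using strongly related information first
    (speed and colour), falling back to the model and/or plate number."""
    strong = [b for b in broadcasts
              if b["color"] == detection["color"]
              and abs(detection["speed"]
                      - interpolated_speed(b["speed_samples"], t_query)) <= SPEED_TOLERANCE]
    if len(strong) == 1:
        return strong[0]
    # Strong information was ambiguous or empty: fall back to other related information.
    for b in (strong or broadcasts):
        if detection.get("plate") and detection["plate"] == b["plate"]:
            return b
        if detection.get("model") and detection["model"] == b["model"]:
            return b
    return None
```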
Another method matches the vehicle information against position information derived from the video frame image. Referring to fig. 5, a flowchart of this further implementation of matching the tag and the image according to the vehicle information and the video frame image, the method includes:
step S401: analyzing image information in the video frame image to acquire first position information corresponding to the surrounding vehicle at a second preset moment;
The position information corresponding to the surrounding vehicle at the second preset moment is acquired directly from the image information in the video frame image and taken as the first position information.
Step S402: determining second position information corresponding to the surrounding vehicle at the second preset moment according to the vehicle information;
The vehicle information in this embodiment should at least include the current time and the position information. When the time sequence data in the vehicle information does not contain the second preset moment, the second position information corresponding to the surrounding vehicle at the second preset moment is calculated by interpolation. Since the vehicle information includes position information, if the image moment does not line up with the time-series data of the vehicle information, the position at the second preset moment is estimated by interpolating the time-series vehicle information, so that data at the same moment as the video frame image are obtained.
In the interpolation, the missing value is preferentially estimated from vehicle information belonging to the same vehicle; whether two records belong to the same vehicle can be confirmed through the related information.
Step S403: and determining label data matched with the peripheral vehicle in the video frame image by using the first position information and the second position information.
Similarly, in the present embodiment the color information of the surrounding vehicles may be used to determine the tag data matched with each surrounding vehicle in the video frame image. As a preferred embodiment, the image and the tag may be matched by comparing the position and color carried in the tag data with, respectively, the position of each moving object at the second preset moment calculated from the vehicle-end image set and the color of each object analysed from the image at the second preset moment.
Further, when matching cannot be accomplished using the position information and the color information, the tag data matched with each surrounding vehicle in the video frame image may be determined using the vehicle type information and/or the license plate number information.
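The position-based variant has the same shape, with distance taking the place of the speed difference. A greedy sketch follows; `interpolated_position` is an assumed helper analogous to the speed interpolation above, and the gating distance is illustrative:

```python
import math

def match_by_position(detections, broadcasts, t_query, max_dist=3.0):
    """Greedy position-based matching at the second preset moment.

    `detections` maps a detection id to its (x, y) position recovered from the
    video frame image; `max_dist` (metres) is an illustrative gating threshold."""
    assignments, used = {}, set()
    for det_id, (dx, dy) in detections.items():
        best, best_d = None, max_dist
        for b in broadcasts:
            if b["plate"] in used:
                continue
            bx, by = interpolated_position(b["track"], t_query)  # assumed helper
            d = math.hypot(dx - bx, dy - by)
            if d <= best_d:
                best, best_d = b, d
        if best is not None:
            assignments[det_id] = best
            used.add(best["plate"])
    return assignments
```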
In the following, the tag data obtaining apparatus provided by the embodiment of the present invention is introduced, and the tag data obtaining apparatus described below and the tag data obtaining method described above may be referred to correspondingly.
Fig. 6 is a block diagram of a tag data acquiring apparatus according to an embodiment of the present invention, where the tag data acquiring apparatus according to fig. 6 may include:
the image acquisition module 100 is configured to acquire a video frame image acquired by a current vehicle;
the data receiving module 200 is configured to obtain vehicle information of other vehicles around the current vehicle in a preset area around the current vehicle;
a matching module 300, configured to determine, according to the vehicle information and the video frame image, tag data that matches each neighboring vehicle in the video frame image, and generate an image set with the tag data.
The tag data acquiring apparatus of this embodiment is configured to implement the foregoing tag data acquisition method, so its specific implementations may refer to the foregoing method embodiments. For example, the image acquisition module 100, the data receiving module 200 and the matching module 300 are respectively configured to implement steps S101, S102 and S103 of the foregoing method; their specific implementations may therefore refer to the descriptions of the corresponding embodiments and are not repeated here.
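Read as code, the three modules might be wired together roughly as follows. This is an illustrative composition only: `camera`, `network` and `detect_vehicles` are placeholders standing in for the image acquisition module, the data receiving module and whatever detector runs on the vehicle end, and the matching reuses the helpers sketched earlier:

```python
class TagDataAcquirer:
    """Vehicle-end apparatus sketch wiring the three modules together."""

    def __init__(self, camera, network):
        self.camera = camera        # image acquisition module 100 (step S101)
        self.network = network      # data receiving module 200 (step S102)

    def step(self, t_now):
        frame = self.camera.read()              # S101: current video frame image
        broadcasts = self.network.collect()     # S102: surrounding vehicles' information
        detections = detect_vehicles(frame)     # assumed vehicle detector on the frame
        matched = []
        for det in detections:                  # S103: matching module 300
            tag = match_detection(det, broadcasts, t_now)
            if tag is not None:
                matched.append((det["bbox"], tag))
        return annotate_frame(frame, matched), crop_vehicles(frame, matched)
```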
Furthermore, the present application provides a memory having stored thereon a computer program which, when being executed by a processor, carries out the steps of any of the above-mentioned vehicle tag data acquisition methods.
The memory provided by the invention can obtain a large amount of image data labeled in real time, provides a large amount of credible training data for a deep learning system, expands an available data set and is greatly helpful for improving the accuracy of the deep learning system. In addition, the tag data is marked at the vehicle end, and the data receiving amount of the vehicle end is small, so that the tag data forming speed is higher, the data can be processed in real time, and the difficulty of processing the data by the cloud end due to time difference is reduced.
In addition, the present application further provides a vehicle-mounted terminal. As shown in fig. 7, a block diagram of its structure, the vehicle-mounted terminal 1 includes a memory 11 and a processor 12; the memory stores a computer program that implements any of the above-described tag data acquisition methods when executed by the processor.
In addition, the present application further provides a vehicle, comprising a vehicle body and the vehicle-mounted terminal disclosed above, the vehicle-mounted terminal being mounted on the vehicle body.
In addition, the present application further provides an Internet of Vehicles system. As shown in fig. 8, a block diagram of the system, it comprises at least two of the above vehicles 1 and a cloud server 2. Each vehicle 1 determines, from the received vehicle information and the captured video frame images, the tag data matched with each surrounding vehicle in the video frame images, generates an image set carrying the tag data, and sends the image set with the tag data to the cloud server 2.
The vehicles may communicate with one another over a local area network or via LTE-V. When another vehicle enters the preset area around the current vehicle, it automatically joins the local network, receives the broadcast information sent by the current vehicle, or broadcasts its own information.
After receiving the broadcast information of the other vehicles, each vehicle takes that information as tag data, automatically generates an image set carrying the tag data by combining it with the video frame information captured by its own image acquirer, and sends the image set with the tag data to the cloud server.
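A hedged sketch of the upload step, assuming a plain HTTPS endpoint on the cloud server; the URL and payload layout are placeholders, not part of the disclosure:

```python
import json
import cv2
import requests

CLOUD_ENDPOINT = "https://cloud.example.com/tagged-image-sets"   # hypothetical URL

def upload_image_set(frames_with_tags):
    """Send the image set with tag data to the cloud server, one frame per request."""
    for idx, (frame, tags) in enumerate(frames_with_tags):
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        requests.post(
            CLOUD_ENDPOINT,
            files={"image": (f"frame_{idx:06d}.jpg", jpg.tobytes(), "image/jpeg")},
            data={"tags": json.dumps(tags)},
            timeout=5,
        )
```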
After receiving the image sets sent by the vehicles, the cloud server can collect and store the image sets so as to provide credible training data for the deep learning system.
In conclusion, the method and devices described here can obtain a large amount of image data labelled in real time, provide plentiful and credible training data for the deep learning system, enlarge the available data set and substantially help to improve the accuracy of the deep learning system. Moreover, since the tag data are produced at the vehicle end, where the amount of data to be received is small, the tag data are formed quickly, the data can be processed in real time, and the difficulty the cloud would otherwise face in processing data affected by time offsets is reduced.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The tag data acquisition method, the memory, the terminal, the vehicle and the vehicle networking system provided by the invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (9)

1. A tag data acquisition method is applied to a vehicle end and comprises the following steps:
acquiring a video frame image acquired by a current vehicle;
acquiring vehicle information of other vehicles in a preset area around the current vehicle through a local network; when other vehicles enter a preset area around the current vehicle, the other vehicles automatically join the local network;
according to the vehicle information and the video frame image, determining label data matched with each peripheral vehicle in the video frame image, and generating an image set with the label data;
and the determining tag data matched with each surrounding vehicle in the video frame image according to the vehicle information and the video frame image comprises: analyzing image information in the video frame image to acquire first speed information corresponding to the surrounding vehicle at a first preset moment; according to the vehicle information, determining second speed information corresponding to the surrounding vehicle at the first preset time; determining label data matched with each surrounding vehicle in the video frame image by using the first speed information and the second speed information; when the time sequence data in the vehicle information does not have the first preset time, calculating second speed information corresponding to the surrounding vehicle at the first preset time by adopting interpolation operation;
the determining tag data matched with each surrounding vehicle in the video frame image by using the first speed information and the second speed information includes: and judging whether the difference value between the first speed information and the second speed information is within a preset threshold range, if so, determining the vehicle information of the actual vehicle with the second speed information as the tag data of the vehicle with the first speed information on the video frame image.
2. The tag data acquisition method according to claim 1, wherein the determining tag data that matches each nearby vehicle in the video frame image based on the vehicle information and the video frame image further includes:
and determining label data matched with each surrounding vehicle in the video frame image by using the speed information, the video frame image and the vehicle color information in the vehicle information.
3. The tag data acquisition method according to claim 2, wherein the determining tag data that matches each nearby vehicle in the video frame image based on the vehicle information and the video frame image includes:
and when the matching cannot be carried out by using the speed information and the color information, determining label data matched with each peripheral vehicle in the video frame image by using the vehicle type information and/or the license plate number information.
4. The tag data acquisition method according to any one of claims 1 to 3, wherein the generating of the image set with tag data includes:
generating a set of video frame images with tag data at different moments;
or further processing the video frame images carrying the tag data at different moments, cutting each vehicle out of the video frame images and making the resulting single-vehicle images into separate image sets.
5. The tag data acquisition method of claim 4, further comprising, after said generating the image set with tag data:
and sending the image set with the tag data to a cloud server.
6. A memory, characterized in that the memory has stored thereon a computer program which, when being executed by a processor, carries out the steps of the tag data acquisition method according to any one of claims 1 to 5.
7. A vehicle-mounted terminal, comprising a memory and a processor, characterized in that the memory stores a computer program that implements the tag data acquisition method according to any one of claims 1 to 5 when executed by the processor.
8. A vehicle characterized by comprising a vehicle body and the in-vehicle terminal according to claim 7 mounted on the vehicle body.
9. An Internet of Vehicles system, comprising a cloud server and at least two vehicles according to claim 8.
CN201910290166.4A 2019-04-11 2019-04-11 Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system Active CN110012351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290166.4A CN110012351B (en) 2019-04-11 2019-04-11 Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910290166.4A CN110012351B (en) 2019-04-11 2019-04-11 Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system

Publications (2)

Publication Number Publication Date
CN110012351A CN110012351A (en) 2019-07-12
CN110012351B (en) 2021-12-31

Family

ID=67171163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290166.4A Active CN110012351B (en) 2019-04-11 2019-04-11 Label data acquisition method, memory, terminal, vehicle and Internet of vehicles system

Country Status (1)

Country Link
CN (1) CN110012351B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110261856B (en) * 2019-07-31 2021-06-11 北京邮电大学 Radar detection method and device based on multi-radar cooperative detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566831A (en) * 2011-12-16 2012-07-11 Tcl集团股份有限公司 Target locating method and device as well as image display device
CN104766479A (en) * 2015-01-27 2015-07-08 公安部交通管理科学研究所 Automobile identity recognition method and device based on ultrahigh frequency radio frequency and video image dual-recognition matching
CN106650705A (en) * 2017-01-17 2017-05-10 深圳地平线机器人科技有限公司 Region labeling method and device, as well as electronic equipment
CN109300145A (en) * 2018-08-20 2019-02-01 彭楷文 NEW ADAPTIVE intelligence dazzle system
CN208479822U (en) * 2018-06-28 2019-02-05 华域视觉科技(上海)有限公司 A kind of automobile-used panoramic looking-around system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219313A (en) * 2014-09-10 2014-12-17 张晋凯 Networking method for vehicle-mounted terminal
CN104936132B (en) * 2015-05-29 2019-12-06 Oppo广东移动通信有限公司 Machine type communication method, terminal and base station
CN106658418A (en) * 2015-11-02 2017-05-10 中兴通讯股份有限公司 Car networking V2X service data packet transmission method and apparatus thereof
CN105405307A (en) * 2015-12-10 2016-03-16 安徽海聚信息科技有限责任公司 Internet of vehicles service management system
CN111797689B (en) * 2017-04-28 2024-04-16 创新先进技术有限公司 Vehicle damage assessment image acquisition method, device, server and client
US10558864B2 (en) * 2017-05-18 2020-02-11 TuSimple System and method for image localization based on semantic segmentation
CN108364476B (en) * 2018-03-26 2021-01-26 京东方科技集团股份有限公司 Method and device for acquiring Internet of vehicles information
CN108986465B * 2018-07-27 2020-10-23 Shenzhen University A method, system and terminal device for detecting traffic flow

Also Published As

Publication number Publication date
CN110012351A (en) 2019-07-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 233000 building 4, national financial incubation Industrial Park, 17 Yannan Road, high tech Zone, Bengbu City, Anhui Province

Patentee after: Dafu Technology (Anhui) Co.,Ltd.

Address before: 518000 the first, second and third floors of 101 and A4 in the third industrial zone A1, A2 and A3 of Shajing Industrial Company, Ho Xiang Road, Shajing street, Bao'an District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN TATFOOK TECHNOLOGY Co.,Ltd.