CN112597788B - Target measuring method, target measuring device, electronic apparatus, and computer-readable medium - Google Patents


Info

Publication number
CN112597788B
CN112597788B (application CN202010902456.2A)
Authority
CN
China
Prior art keywords: coordinate, detection frame, corner, frame information, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010902456.2A
Other languages
Chinese (zh)
Other versions
CN112597788A (en)
Inventor
兰莎郧
李松泽
戴震
倪凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heduo Technology Guangzhou Co ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202010902456.2A priority Critical patent/CN112597788B/en
Publication of CN112597788A publication Critical patent/CN112597788A/en
Application granted granted Critical
Publication of CN112597788B publication Critical patent/CN112597788B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584: Recognition of vehicle lights or traffic lights
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a target measurement method, a target measurement apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: acquiring an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinate groups; performing coordinate conversion on each corner coordinate in each corner coordinate group to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups; determining the detection frame information corresponding to each image in the image set; performing correction processing on each piece of detection frame information in each detection frame information group to generate corrected detection frame information; determining the number of traffic lights contained in each image of the image set; selecting, from the image set, an image whose number of traffic lights satisfies a predetermined condition as a candidate image; and identifying the color of each traffic light in the candidate image. This embodiment improves target measurement accuracy and target detection efficiency.

Description

Target measuring method, target measuring device, electronic apparatus, and computer-readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a target measurement method, apparatus, electronic device, and computer-readable medium.
Background
Target measurement is a target detection technique based on target geometry and statistical characteristics. Currently, the commonly used target measurement method is to identify the target by manual visual inspection.
However, when the above method is used for target measurement, the following technical problems often arise:
First, manual target measurement depends on human experience, so the measurement result is not accurate enough and target detection efficiency is low.
Second, because the corner coordinates and the target are not in the same coordinate system, target detection cannot be carried out.
Third, the generated detection frames may fail to fully frame the target.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose target measurement methods, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a target measurement method, the method comprising: acquiring an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinate groups, where the corner coordinates are coordinates in a world coordinate system; based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate group in the set to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups, where the transformed corner coordinates are coordinates in an image coordinate system; determining, based on the set of transformed corner coordinate groups, the detection frame information corresponding to each image in the image set, obtaining a set of detection frame information groups; performing correction processing on each piece of detection frame information in each detection frame information group to generate corrected detection frame information, obtaining a set of corrected detection frame information groups; determining, based on the set of corrected detection frame information groups, the number of traffic lights contained in each image in the image set; selecting, from the image set, an image whose number of contained traffic lights satisfies a predetermined condition as a candidate image; and identifying the color of each traffic light in the candidate image to obtain a color information set.
In a second aspect, some embodiments of the present disclosure provide a target measurement apparatus, the apparatus comprising: an acquisition unit configured to acquire an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinate groups, where the corner coordinates are coordinates in a world coordinate system; a coordinate conversion unit configured to perform, based on the camera parameter information, coordinate conversion on each corner coordinate in each corner coordinate group to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups, where the transformed corner coordinates are coordinates in an image coordinate system; a first determining unit configured to determine, based on the set of transformed corner coordinate groups, the detection frame information corresponding to each image in the image set to obtain a set of detection frame information groups; a correcting unit configured to perform correction processing on each piece of detection frame information in each detection frame information group to generate corrected detection frame information, obtaining a set of corrected detection frame information groups; a second determining unit configured to determine, based on the set of corrected detection frame information groups, the number of traffic lights contained in each image in the image set; a selecting unit configured to select, from the image set, an image whose number of contained traffic lights satisfies a predetermined condition as a candidate image; and an identification unit configured to identify the color of each traffic light in the candidate image to obtain a color information set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
The above embodiments of the present disclosure have the following advantages. First, an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinate groups are obtained, where the corner coordinates are coordinates in a world coordinate system. Second, based on the camera parameter information, coordinate conversion is performed on each corner coordinate in each corner coordinate group to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups, where the transformed corner coordinates are coordinates in an image coordinate system; converting the coordinates makes the data convenient to process within a single coordinate system. Then, based on the set of transformed corner coordinate groups, the detection frame information corresponding to each image in the image set is determined, obtaining a set of detection frame information groups; obtaining the detection frame information realizes a preliminary determination of the target. In addition, each piece of detection frame information in each detection frame information group is corrected to generate corrected detection frame information, obtaining a set of corrected detection frame information groups; correcting the detection frame information makes the target frames more accurate. Further, based on the set of corrected detection frame information groups, the number of traffic lights contained in each image in the image set is determined. Then, an image whose number of traffic lights satisfies a predetermined condition is selected from the image set as a candidate image. Finally, the color of each traffic light in the candidate image is identified, obtaining a color information set.
Correcting the detection frames improves the accuracy of target detection. This solves the problem that manual visual inspection relies too heavily on human experience and therefore yields inaccurate measurement results. Meanwhile, the programmatic measurement method improves measurement efficiency to a certain extent.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a target measurement method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a target measurement method according to the present disclosure;
FIG. 3 is a schematic structural diagram of some embodiments of a target measurement apparatus according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments in the present disclosure, and the features of those embodiments, may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a target measurement method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the computing device 101 may first obtain a set of images 102 captured by a vehicle-mounted camera, camera parameter information 104 of the vehicle-mounted camera, and a set of corner coordinate groups 103, where the corner coordinates are coordinates in a world coordinate system. Next, based on the camera parameter information 104, coordinate conversion is performed on each corner coordinate in each corner coordinate group in the set 103 to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups 105, where the transformed corner coordinates are coordinates in an image coordinate system. Further, based on the set of transformed corner coordinate groups 105, the detection frame information corresponding to each image in the image set 102 is determined, obtaining a set of detection frame information groups 106. In addition, each piece of detection frame information in each detection frame information group in the set 106 is corrected to generate corrected detection frame information, resulting in a set of corrected detection frame information groups 107. Further, based on the set 107, the number of traffic lights 108 contained in each image in the image set 102 is determined. Then, an image whose number of traffic lights 108 satisfies a predetermined condition is selected from the image set as a candidate image 109. Finally, the color of each traffic light in the candidate image 109 is identified, resulting in a color information set 110.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple pieces of software and software modules used to provide distributed services, or as a single piece of software or software module. And is not particularly limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a target measurement method according to the present disclosure is shown. The method may be performed by the computing device 101 of fig. 1. The target measurement method comprises the following steps:
Step 201, acquiring an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinate groups.
In some embodiments, the subject performing the target measurement method (e.g., the computing device 101 shown in fig. 1) may obtain the set of images captured by the onboard camera, the camera parameter information of the onboard camera, and the set of corner coordinate groups over a wired or wireless connection. The camera parameter information includes, but is not limited to, at least one of: a first camera parameter, a second camera parameter, and a third camera parameter. The corner coordinates are three-dimensional coordinates in a world coordinate system. The first camera parameter represents the camera's intrinsic parameters. The second camera parameter is a rotation matrix. The third camera parameter is a translation vector.
As an example, the second camera parameter (a rotation matrix) and the third camera parameter (a translation vector) may take the matrix values shown in the figures of the original publication. The set of corner coordinate groups may be [[[12, 14, 16], [12, 18, 16]], [[29, 14, 16], [39, 18, 16]]].
Step 202, based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups.
In some embodiments, the executing entity may, based on the camera parameter information, perform coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate transformed corner coordinates, obtaining a set of transformed corner coordinate groups. The transformed corner coordinates are coordinates in an image coordinate system: a coordinate system whose origin is the upper-left corner of the image, whose horizontal axis is parallel to the horizontal direction of the image, and whose vertical axis is parallel to the vertical direction of the image.
In some optional implementations of some embodiments, the executing entity may, based on the camera parameter information, perform coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate transformed corner coordinates via the following formula, obtaining a set of transformed corner coordinate groups:

[Xc, Yc, Zc, 1]^T = [R, t; 0^T, 1] · [Xw, Yw, Zw, 1]^T,  and  Zc · [u, v, 1]^T = K · [Xc, Yc, Zc]^T

where u and v denote the abscissa and ordinate of the transformed corner coordinates; K denotes the first camera parameter; R denotes the second camera parameter; t denotes the third camera parameter; 0^T denotes the transpose of the zero vector; Xw, Yw, and Zw denote the abscissa, ordinate, and vertical coordinate of the corner in the world coordinate system; and Xc, Yc, and Zc denote the corresponding coordinates of the corner in the camera coordinate system.
As an example, the corner coordinates may be [-1, 0, 1], the transpose of the zero matrix, 0^T, may be [0 0 0], and the first camera parameter may be 1. The second camera parameter, the third camera parameter, the resulting transformed corner coordinates, and the intermediate calculation are shown as matrices in the figures of the original publication.
The above formula is a key point of the disclosed embodiments: it solves the second technical problem mentioned in the background, namely that target detection cannot be performed because the corner coordinates and the target are not in the same coordinate system.
First, the corner coordinates in the world coordinate system are transformed into the camera coordinate system through the second and third camera parameters. Then, based on the first camera parameter, the camera-coordinate points are converted into the image coordinate system. Because the coordinate dimensions differ between coordinate systems, the parameter "1" pads the vectors and matrices (homogeneous coordinates), so that coordinates of different dimensions can be converted across coordinate systems by matrix operations.
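As a minimal sketch of this conversion (the function name and the use of plain NumPy arrays are illustrative assumptions, not from the patent):

```python
import numpy as np

def world_to_image(corner_w, K, R, t):
    """Project one world-coordinate corner (Xw, Yw, Zw) into image
    coordinates (u, v) via Zc * [u, v, 1]^T = K [R | t] [Xw, Yw, Zw, 1]^T."""
    corner_w = np.asarray(corner_w, dtype=float)
    # World -> camera coordinates using the rotation matrix R (second camera
    # parameter) and translation vector t (third camera parameter).
    corner_c = R @ corner_w + t
    # Camera -> image coordinates using the intrinsic matrix K (first camera
    # parameter), then divide by the depth Zc to dehomogenize.
    uvw = K @ corner_c
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With K and R as 3x3 identity matrices and t = [1, 2, 0], for example, the world point [0, 0, 2] projects to (u, v) = (0.5, 1.0).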
Step 203, determining the detection frame information corresponding to each image in the image set based on the set of transformed corner coordinate groups, to obtain a set of detection frame information groups.
In some embodiments, the executing entity may determine, based on the set of transformed corner coordinate groups, the detection frame information corresponding to each image in the image set, obtaining a set of detection frame information groups. The detection frame information is a two-tuple comprising the detection frame's first coordinate and its second coordinate. Each detection frame covers the projection of one transformed corner coordinate group onto the corresponding image.
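A minimal sketch of deriving the two-tuple from the projected corners of one target (the function name and the bounding-box convention, top-left first coordinate and bottom-right second coordinate, are illustrative assumptions):

```python
import numpy as np

def detection_frame(transformed_corners):
    """Take the axis-aligned bounding box of a group of projected (u, v)
    corner coordinates as the detection frame's two-tuple."""
    pts = np.asarray(transformed_corners, dtype=float)
    first = pts.min(axis=0)   # first coordinate: smallest u and v (top-left)
    second = pts.max(axis=0)  # second coordinate: largest u and v (bottom-right)
    return first.tolist(), second.tolist()
```

For instance, corners projected to (3, 8) and (2, 10) give the frame ([2, 8], [3, 10]).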
Step 204, performing correction processing on each piece of detection frame information in each detection frame information group to generate corrected detection frame information, obtaining a set of corrected detection frame information groups.
In some embodiments, the executing body may, by various means, perform correction processing on each piece of detection frame information in each detection frame information group in the set to generate corrected detection frame information, obtaining a set of corrected detection frame information groups. Correction processing refers to scaling or shifting the coordinates in the detection frame information.
In some optional implementation manners of some embodiments, the executing body may perform a correction process on each detection frame information in each detection frame information group in the detection frame information group set by using the following formula to generate corrected detection frame information, so as to obtain a corrected detection frame information group set:
nx1 = max(x1 − lw·w, 0)
ny1 = max(y1 − th·h, 0)
nx2 = min(x2 + rw·w, iw)
ny2 = min(y2 + dh·h, ih)

where x1 and y1 denote the abscissa and ordinate of the detection frame's first coordinate; x2 and y2 denote the abscissa and ordinate of the detection frame's second coordinate; nx1, ny1, nx2, and ny2 denote the corresponding coordinates in the corrected detection frame information; iw and ih denote the horizontal and vertical pixel dimensions of the image captured by the vehicle-mounted camera; w denotes the first threshold, with value x2 − x1; h denotes the second threshold, with value y2 − y1; lw, rw, th, and dh denote the left-end, right-end, upper-end, and lower-end offset coefficients, each ranging over [0, +∞); and max() and min() take the element-wise maximum and minimum, clamping the corrected frame to the image bounds.
As an example, the detection frame's first coordinate may be [2, 8] and its second coordinate [3, 10], so the first threshold is 1 and the second threshold is 2. The left-end offset coefficient may be 2, the right-end offset coefficient 3, the upper-end offset coefficient 1, and the lower-end offset coefficient 4. The horizontal pixel value of the image captured by the onboard camera may be 1920 and the vertical pixel value 1080. The corrected detection frame information generated by the above formula is then [[0, 6], [6, 18]] (the calculation proceeds as follows).
nx1 = max(2 − 2×1, 0) = 0
ny1 = max(8 − 1×2, 0) = 6
nx2 = min(3 + 3×1, 1920) = 6
ny2 = min(10 + 4×2, 1080) = 18
The above formula is a key point of the disclosed embodiments: it solves the third technical problem mentioned in the background, namely that the generated detection frames may fail to fully frame the targets.
Because cameras come in various specifications, with different viewing angles and focal lengths, the captured images are distorted, and the detection frames corresponding to the detection frame information may not completely enclose the targets. The first and second thresholds give the width and height of the detection frame. By introducing the upper-end, lower-end, left-end, and right-end offset coefficients, the detection frame is scaled and shifted, mitigating the inaccurate framing caused by image distortion so that the corrected detection frame can completely enclose the target.
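The correction described above, checked against the patent's worked example, can be sketched in plain Python (the function name is an assumption):

```python
def rectify_frame(x1, y1, x2, y2, iw, ih, lw, rw, th, dh):
    """Scale and shift a detection frame by the left/right/upper/lower
    offset coefficients (lw, rw, th, dh), clamping to the iw x ih image."""
    w = x2 - x1  # first threshold: frame width
    h = y2 - y1  # second threshold: frame height
    nx1 = max(x1 - lw * w, 0)   # push left edge outward, clamp at 0
    ny1 = max(y1 - th * h, 0)   # push top edge outward, clamp at 0
    nx2 = min(x2 + rw * w, iw)  # push right edge outward, clamp at image width
    ny2 = min(y2 + dh * h, ih)  # push bottom edge outward, clamp at image height
    return (nx1, ny1), (nx2, ny2)
```

With the worked example's values (frame [2, 8] to [3, 10], coefficients lw=2, rw=3, th=1, dh=4, a 1920x1080 image), this yields ((0, 6), (6, 18)), matching the corrected detection frame information [[0, 6], [6, 18]] given above.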
Step 205, determining the number of traffic lights contained in each image in the image set based on the set of corrected detection frame information groups.
In some embodiments, the executing body may determine, in various ways, the number of traffic lights contained in each image in the image set based on the set of corrected detection frame information groups. The number of traffic lights contained in an image equals the number of corrected detection frames in the corresponding corrected detection frame information group.
Step 206, selecting, from the image set, an image whose number of contained traffic lights satisfies a predetermined condition as a candidate image.
In some embodiments, the execution subject may select, from the image set, an image whose number of contained traffic lights satisfies a predetermined condition as the candidate image. The predetermined condition may be that the image contains the largest number of traffic lights among the images in the set.
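Steps 205 and 206 can be sketched together, assuming (per the description) one corrected detection frame per traffic light; the function and variable names are illustrative:

```python
def pick_candidate(images, rectified_groups):
    """Count traffic lights per image as the number of corrected detection
    frames in its group, then select the image with the largest count."""
    counts = [len(group) for group in rectified_groups]
    best = max(range(len(images)), key=counts.__getitem__)
    return images[best], counts[best]
```

For example, given two images whose corrected groups hold 1 and 2 frames respectively, the second image is selected with a count of 2.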
Step 207, identifying the color of each traffic light in the candidate image to obtain a color information set.
In some embodiments, the executing entity may identify the color of each traffic light in the candidate image, obtaining a color information set. The color of each traffic light may be identified by taking, within the area determined by the corrected detection frame information, whichever of the red, green, and yellow regions has the largest area as the light's color.
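A minimal threshold-based sketch of the largest-area color rule (the RGB thresholds and function name are assumptions; the patent also allows a trained color-recognition model instead):

```python
import numpy as np

def light_color(region_rgb):
    """Pick the color (red, green, or yellow) whose thresholded region
    covers the largest area inside one corrected detection frame."""
    r = region_rgb[..., 0].astype(int)
    g = region_rgb[..., 1].astype(int)
    b = region_rgb[..., 2].astype(int)
    # Hypothetical channel thresholds for the three signal colors.
    areas = {
        "red":    int(((r > 150) & (g < 100) & (b < 100)).sum()),
        "green":  int(((g > 150) & (r < 100) & (b < 100)).sum()),
        "yellow": int(((r > 150) & (g > 150) & (b < 100)).sum()),
    }
    return max(areas, key=areas.get)
```

A saturated-red patch classifies as "red"; raising the green channel as well flips the largest area to "yellow".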
In some optional implementations of some embodiments, the executing subject may recognize the color of the traffic light through a pre-trained color recognition model. Specifically, the pre-trained color recognition model may include a feature extraction layer, a feature summarization layer, and a classification layer. The feature extraction layer identifies the traffic lights in the images and extracts features; the feature summarization layer aggregates the extracted features; and the classification layer classifies based on the aggregated features.
In some optional implementations of some embodiments, the execution subject may send the color information set to a vehicle with a display function for display.
The above embodiments of the present disclosure have the following advantages: firstly, an image set shot by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera and a corner point coordinate set are obtained, wherein the corner point coordinates are coordinates in a world coordinate system. And secondly, based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate set in the corner coordinate set to generate a conversion corner coordinate, so as to obtain a conversion corner coordinate set, wherein the conversion corner coordinate is a coordinate in an image coordinate system. And the data are processed conveniently in the same coordinate system by converting the coordinates. And determining the detection frame information corresponding to each image in the image set based on the conversion corner point coordinate set to obtain a detection frame information set. By obtaining the information of the detection frame, the primary determination of the target is realized. In addition, each piece of detection frame information in each detection frame information group in the detection frame information group set is subjected to correction processing to generate corrected detection frame information, so that a corrected detection frame information group set is obtained. Through correcting the detection frame information, the target frame is more accurate. Further, based on the set of correction detection frame information sets, the quantity value of traffic lights contained in each image in the image set is determined. Then, an image containing a traffic signal whose number value satisfies a predetermined condition is selected from the above-described image set as a candidate image. And finally, identifying the color of each traffic signal lamp in the candidate image to obtain a color information set. 
Correcting the detection frame improves the accuracy of target detection. This solves the problem that target measurement by manual visual inspection relies too heavily on human experience, which leads to inaccurate target measurement results. At the same time, the programmatic measurement method improves measurement efficiency to a certain extent.
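The downstream half of the pipeline summarized above (building a box from projected corners, counting visible traffic lights, and selecting candidate images) can be sketched as follows. This is a minimal illustration; the function and type names, the visibility test, and the `min_count` condition are assumptions, not the patent's reference implementation:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned detection frame: first coordinate (x1, y1), second coordinate (x2, y2)."""
    x1: float
    y1: float
    x2: float
    y2: float

def boxes_from_corner_groups(corner_groups):
    """Build one detection box per group of projected (u, v) corner coordinates."""
    boxes = []
    for corners in corner_groups:
        us = [u for u, _ in corners]
        vs = [v for _, v in corners]
        boxes.append(Box(min(us), min(vs), max(us), max(vs)))
    return boxes

def count_visible(boxes, iw, ih):
    """Count boxes that overlap the iw x ih image area (a traffic-light count proxy)."""
    return sum(1 for b in boxes if b.x2 > 0 and b.y2 > 0 and b.x1 < iw and b.y1 < ih)

def select_candidates(per_image_boxes, iw, ih, min_count=1):
    """Indices of images whose traffic-light count satisfies the predetermined condition."""
    return [i for i, boxes in enumerate(per_image_boxes)
            if count_visible(boxes, iw, ih) >= min_count]
```

Color identification of each selected light would then run on the candidate images, e.g. via a pre-trained color recognition model as in claim 3.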
With further reference to FIG. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a target measurement apparatus. These apparatus embodiments correspond to the method embodiments described above with reference to FIG. 2, and the apparatus may be applied to various electronic devices. As shown in FIG. 3, the target measurement apparatus 300 of some embodiments includes: an acquisition unit 301 configured to acquire a set of images captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinates, wherein the corner coordinates are coordinates in a world coordinate system; a coordinate conversion unit 302 configured to perform, based on the camera parameter information, coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate conversion corner coordinates, so as to obtain a conversion corner coordinate set, wherein the conversion corner coordinates are coordinates in an image coordinate system; a first determining unit 303 configured to determine, based on the conversion corner coordinate set, detection frame information corresponding to each image in the image set to obtain a detection frame information set; a correction unit 304 configured to perform correction processing on each piece of detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, so as to obtain a corrected detection frame information group set; a second determining unit 305 configured to determine, based on the corrected detection frame information group set, the number of traffic signal lights contained in each image in the image set; a selecting unit 306 configured to select, from the image set, an image in which the number of traffic signal lights satisfies a predetermined condition as a candidate image; and an identifying unit 307 configured to identify the color of each traffic signal light in the candidate image to obtain a color information set.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device 400 (e.g., the computing device 101 of FIG. 1) suitable for implementing some embodiments of the present disclosure is shown. The server shown in FIG. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; storage devices 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 4 illustrates an electronic device 400 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in FIG. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an image set captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a corner coordinate set, wherein the corner coordinates are coordinates in a world coordinate system; perform, based on the camera parameter information, coordinate conversion on each corner coordinate in each corner coordinate group in the corner coordinate set to generate a conversion corner coordinate, so as to obtain a conversion corner coordinate set, wherein the conversion corner coordinates are coordinates in an image coordinate system; determine, based on the conversion corner coordinate set, detection frame information corresponding to each image in the image set to obtain a detection frame information set; perform correction processing on each piece of detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, so as to obtain a corrected detection frame information group set; determine, based on the corrected detection frame information group set, the number of traffic signal lights contained in each image in the image set; select, from the image set, an image in which the number of traffic signal lights satisfies a predetermined condition as a candidate image; and identify the color of each traffic signal light in the candidate image to obtain a color information set.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a coordinate conversion unit, a first determination unit, a correction unit, a second determination unit, a selection unit, and an identification unit. The names of the units do not form a limitation on the units themselves in some cases, and for example, the acquiring unit may be further described as "a unit that acquires a set of images captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner point coordinates".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned features, and also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept as defined above, for example, a technical solution formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A method of target determination comprising:
acquiring an image set shot by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera and a corner point coordinate set, wherein the corner point coordinates are coordinates in a world coordinate system;
based on the camera parameter information, performing coordinate conversion on each corner coordinate in each corner coordinate set in the corner coordinate set to generate a conversion corner coordinate, so as to obtain a conversion corner coordinate set, wherein the conversion corner coordinate is a coordinate in an image coordinate system;
determining detection frame information corresponding to each image in the image set based on the conversion corner coordinate set to obtain a detection frame information set, wherein the detection frame information in the detection frame information set comprises a two-tuple, the two-tuple comprising: a first coordinate of the detection frame and a second coordinate of the detection frame, and wherein the detection frame information is the projection of each conversion corner coordinate group in the conversion corner coordinate set onto each image;
correcting each detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, so as to obtain a corrected detection frame information group set;
determining the quantity value of traffic lights contained in each image in the image set based on the set of correction detection frame information groups;
selecting images, of which the number values of the traffic lights meet a preset condition, from the image set as candidate images;
and identifying the color of each traffic signal lamp in the candidate image to obtain a color information set.
2. The method of claim 1, wherein the method further comprises:
and sending the color information set to a vehicle with a display function for display.
3. The method of claim 2, wherein the identifying a color of each traffic light in the candidate image comprises:
and identifying the color of the traffic signal lamp through a pre-trained color identification model.
4. The method of claim 3, wherein the camera parameter information comprises at least one of: a first camera parameter, a second camera parameter, a third camera parameter; and
the coordinate conversion of each corner coordinate in each corner coordinate set in the corner coordinate set to generate converted corner coordinates includes:
coordinate transforming the corner coordinates by the following formula to generate transformed corner coordinates:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=K\begin{bmatrix}R&t\\0^T&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix},\qquad \begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix}=\begin{bmatrix}R&t\\0^T&1\end{bmatrix}\begin{bmatrix}X_w\\Y_w\\Z_w\\1\end{bmatrix}$$
wherein $u$ represents the abscissa in the conversion corner coordinates, $v$ represents the ordinate in the conversion corner coordinates, $K$ represents the first camera parameter, $R$ represents the second camera parameter, $t$ represents the third camera parameter, $0^T$ represents the transpose of a zero matrix, $X_w$ represents the abscissa in the corner coordinates, $Y_w$ represents the ordinate in the corner coordinates, $Z_w$ represents the vertical coordinate in the corner coordinates, $X_c$ represents the coordinate in the camera coordinate system corresponding to the abscissa in the corner coordinates, $Y_c$ represents the coordinate in the camera coordinate system corresponding to the ordinate in the corner coordinates, and $Z_c$ represents the coordinate in the camera coordinate system corresponding to the vertical coordinate in the corner coordinates.
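The world-to-image conversion described by claim 4's symbol definitions follows the standard pinhole camera model and can be sketched in Python. This is a minimal illustration under assumed shapes ($K$ as a 3×3 intrinsic matrix, $R$ as a 3×3 rotation, $t$ as a length-3 translation), not the patent's reference implementation:

```python
import numpy as np

def world_to_image(corner_w, K, R, t):
    """Project a world-frame corner (Xw, Yw, Zw) to image coordinates (u, v).

    K: 3x3 intrinsic matrix (first camera parameter, assumed shape)
    R: 3x3 rotation (second camera parameter)
    t: length-3 translation (third camera parameter)
    """
    p_c = R @ np.asarray(corner_w, dtype=float) + t  # camera frame: (Xc, Yc, Zc)
    uvw = K @ p_c                                    # homogeneous pixel coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]          # perspective division by Zc
```

Applying this to every corner coordinate in every corner coordinate group yields the conversion corner coordinate set used to build the detection frames.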
5. The method of claim 4, wherein the detection frame information comprises a two-tuple, the two-tuple comprising: a first coordinate of the detection frame and a second coordinate of the detection frame; and
the performing a correction process on each detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information includes:
correcting the first coordinate of the detection frame and the second coordinate of the detection frame in the two-tuple included in the detection frame information by the following formula to generate corrected detection frame information:
$$\begin{cases}nx_1=\max(x_1-lw\cdot w,\;0)\\ny_1=\max(y_1-th\cdot h,\;0)\\nx_2=\min(x_2+rw\cdot w,\;iw)\\ny_2=\min(y_2+dh\cdot h,\;ih)\end{cases}$$
wherein $x_1$ represents the abscissa in the first coordinate of the detection frame, $y_1$ represents the ordinate in the first coordinate of the detection frame, $x_2$ represents the abscissa in the second coordinate of the detection frame, $y_2$ represents the ordinate in the second coordinate of the detection frame, $nx_1$ represents the abscissa in the first detection frame coordinate included in the corrected detection frame information, $ny_1$ represents the ordinate in the first detection frame coordinate included in the corrected detection frame information, $nx_2$ represents the abscissa in the second detection frame coordinate included in the corrected detection frame information, $ny_2$ represents the ordinate in the second detection frame coordinate included in the corrected detection frame information, $iw$ represents the horizontal pixel value of an image captured by the vehicle-mounted camera, $ih$ represents the vertical pixel value of the image captured by the vehicle-mounted camera, $w$ represents a first threshold whose value is $x_2 - x_1$, $h$ represents a second threshold whose value is $y_2 - y_1$, $lw$ represents the left-end offset coefficient with a range of $[0, +\infty)$, $rw$ represents the right-end offset coefficient with a range of $[0, +\infty)$, $th$ represents the upper-end offset coefficient with a range of $[0, +\infty)$, $dh$ represents the lower-end offset coefficient with a range of $[0, +\infty)$, $\max(\cdot)$ represents taking the maximum value in each row of the matrix, and $\min(\cdot)$ represents taking the minimum value in each row of the matrix.
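Reading the symbol definitions in claim 5 directly, the correction step expands each side of a detection box by an offset coefficient and clamps the result to the image bounds. The sketch below is one plausible rendering of that rule; the default coefficient values are illustrative assumptions:

```python
def correct_box(x1, y1, x2, y2, iw, ih, lw=0.1, rw=0.1, th=0.1, dh=0.1):
    """Expand a detection box by per-side offset coefficients and clamp to the image."""
    w = x2 - x1   # first threshold per the claim (box width)
    h = y2 - y1   # second threshold per the claim (box height)
    nx1 = max(x1 - lw * w, 0.0)        # move left edge outward, clamp at 0
    ny1 = max(y1 - th * h, 0.0)        # move top edge outward, clamp at 0
    nx2 = min(x2 + rw * w, float(iw))  # move right edge outward, clamp at image width
    ny2 = min(y2 + dh * h, float(ih))  # move bottom edge outward, clamp at image height
    return nx1, ny1, nx2, ny2
```

The enlarged, clamped frame tolerates small projection errors from the camera parameters, which is why the corrected target frame is more accurate in practice.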
6. A target measurement apparatus, comprising:
an acquisition unit configured to acquire a set of images captured by a vehicle-mounted camera, camera parameter information of the vehicle-mounted camera, and a set of corner coordinates, wherein the corner coordinates are coordinates in a world coordinate system;
a coordinate transformation unit configured to perform coordinate transformation on each corner coordinate in each corner coordinate set in the corner coordinate set based on the camera parameter information to generate transformation corner coordinates, resulting in a transformation corner coordinate set, wherein the transformation corner coordinates are coordinates in an image coordinate system;
a first determining unit configured to determine, based on the conversion corner coordinate set, detection frame information corresponding to each image in the image set to obtain a detection frame information set, wherein the detection frame information in the detection frame information set comprises a two-tuple, the two-tuple comprising: a first coordinate of the detection frame and a second coordinate of the detection frame, and wherein the detection frame information is the projection of each conversion corner coordinate group in the conversion corner coordinate set onto each image;
a correction unit configured to perform correction processing on each detection frame information in each detection frame information group in the detection frame information group set to generate corrected detection frame information, resulting in a corrected detection frame information group set;
a second determination unit configured to determine a quantity value of traffic lights included in each image in the image set based on the set of rectification detection frame information groups;
a selection unit configured to select, as candidate images, images from the image set, which contain traffic signal lights whose number values satisfy a predetermined condition;
and an identification unit configured to identify the color of each traffic signal lamp in the candidate image to obtain a color information set.
7. An electronic device, comprising: one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN202010902456.2A 2020-09-01 2020-09-01 Target measuring method, target measuring device, electronic apparatus, and computer-readable medium Active CN112597788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902456.2A CN112597788B (en) 2020-09-01 2020-09-01 Target measuring method, target measuring device, electronic apparatus, and computer-readable medium

Publications (2)

Publication Number Publication Date
CN112597788A CN112597788A (en) 2021-04-02
CN112597788B true CN112597788B (en) 2021-09-21

Family

ID=75180246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902456.2A Active CN112597788B (en) 2020-09-01 2020-09-01 Target measuring method, target measuring device, electronic apparatus, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN112597788B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342914B (en) * 2021-06-17 2023-04-25 重庆大学 Data set acquisition and automatic labeling method for detecting terrestrial globe area

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488810A (en) * 2016-01-20 2016-04-13 东南大学 Focused light field camera internal and external parameter calibration method
CN106651953A (en) * 2016-12-30 2017-05-10 山东大学 Vehicle position and gesture estimation method based on traffic sign
CN107784672A (en) * 2016-08-26 2018-03-09 百度在线网络技术(北京)有限公司 For the method and apparatus for the external parameter for obtaining in-vehicle camera
CN110717438A (en) * 2019-10-08 2020-01-21 东软睿驰汽车技术(沈阳)有限公司 Traffic signal lamp identification method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271892A (en) * 2018-08-30 2019-01-25 百度在线网络技术(北京)有限公司 A kind of object identification method, device, equipment, vehicle and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Target determination method, device, electronic equipment and computer-readable medium

Effective date of registration: 20230228

Granted publication date: 20210921

Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch

Pledgor: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.

Registration number: Y2023980033668

CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806

Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.

Address before: 100095 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing

Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.