WO2023273158A1 - Method and apparatus for determining operating range of camera in cooperative vehicle infrastructure and roadside device - Google Patents

Info

Publication number
WO2023273158A1
WO2023273158A1 (PCT/CN2021/135146)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
focal length
per unit
intercepted
Prior art date
Application number
PCT/CN2021/135146
Other languages
French (fr)
Chinese (zh)
Inventor
Deng Feng (邓烽)
Shi Yifeng (时一峰)
Yuan Libin (苑立彬)
Original Assignee
Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. (阿波罗智联(北京)科技有限公司)
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. (阿波罗智联(北京)科技有限公司)
Publication of WO2023273158A1 publication Critical patent/WO2023273158A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Definitions

  • the present disclosure relates to the technical field of intelligent transportation, in particular to the technical field of visual processing, and specifically to a method, an apparatus and a roadside device for determining the camera operating distance in vehicle-road coordination.
  • the roadside perception system provides beyond-the-horizon perception information for vehicle-road coordination.
  • the camera is one of the most important sensors of the roadside perception system, and its working distance is an important index to measure the perception system.
  • the related method directly uses the original image provided by the camera, performs de-distortion according to the camera's internal parameters, and then performs two-dimensional plane detection and three-dimensional perception positioning over the entire image; the working distance is thus determined directly from the original image.
  • the present disclosure provides a method, device, electronic equipment, storage medium, computer program product, roadside equipment, and cloud control platform for determining camera action distance in vehicle-road coordination.
  • a method for determining the working distance of a camera in vehicle-road coordination includes: acquiring a captured image and a pixel focal length of the camera; intercepting an intercepted image including a target object from the captured image; and determining the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image, wherein the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image.
  • a device for determining the working distance of a camera in vehicle-road coordination includes: an acquisition module configured to acquire the captured image and the pixel focal length of the camera; an intercepting module configured to intercept, from the captured image, an intercepted image including the target object; and a determination module configured to determine the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image, wherein the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image.
  • an electronic device includes at least one processor and a memory communicatively connected with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above-mentioned method for determining the working distance of a camera in vehicle-road coordination.
  • an embodiment of the present application provides a computer-readable medium, on which computer instructions are stored, and the computer instructions are used to enable a computer to execute the above-mentioned method for determining the working distance of a camera in vehicle-road coordination.
  • an embodiment of the present application provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, implements the above-mentioned method for determining the working distance of a camera in vehicle-road coordination.
  • an embodiment of the present application provides a roadside device, including the above-mentioned electronic device.
  • an embodiment of the present application provides a cloud control platform, including the above-mentioned electronic device.
  • FIG. 1 is a flowchart of an embodiment of a method for determining a camera range in vehicle-road coordination according to the present disclosure
  • FIG. 2 is a schematic diagram of an application scenario of a method for determining a camera action distance in vehicle-road coordination according to the present disclosure
  • Fig. 3 is a flow chart of an embodiment of obtaining the number of pixels per unit distance corresponding to the intercepted image according to the present disclosure
  • FIG. 4 is a flowchart of one embodiment of determining a geographic location of a target object according to the present disclosure
  • Fig. 5 is a schematic structural diagram of an embodiment of a device for determining a camera action distance in vehicle-road coordination according to the present disclosure
  • Fig. 6 is a block diagram of an electronic device used to implement the method for determining the working distance of a camera in vehicle-road coordination according to an embodiment of the present disclosure.
  • FIG. 1 shows a schematic flowchart 100 of an embodiment of a method for determining a camera range in vehicle-road coordination that can be applied in the present disclosure.
  • the method for determining the working distance of the camera in the vehicle-road coordination includes the following steps:
  • Step 110 acquire the captured image and pixel focal length of the camera.
  • the execution subject of the method for determining the working distance of the camera may receive the camera parameters input by the user and, after obtaining them, further calculate the pixel focal length of the camera, that is, the focal length expressed in pixels.
  • the above execution subject may also use the camera to acquire the captured image of the camera, and the captured image may include the target object for distance measurement.
  • Step 120 intercepting an intercepted image including the target object from the collected image.
  • the execution subject may use an image processing method to perform image processing on the acquired image.
  • the image processing method may include image correction, image filtering, image grayscale, image enhancement, and the like.
  • the execution subject can perform image segmentation on the processed collected image. Image segmentation divides the image into several specific regions with unique properties. Image segmentation methods mainly include threshold-based, region-based, and edge-based methods. In the field of deep learning, multi-layer neural network models, such as deep neural networks and convolutional neural networks, can be used for image segmentation.
  • the execution subject can use an image segmentation method to segment the collected image and intercept from it an intercepted image including the target object.
  • the execution subject may also perform target object recognition on the acquired image to determine the position information of the target object in the image. Then the execution subject intercepts the captured image according to the determined position information, and obtains the captured image including the target object.
  • Step 130 based on the pixel focal length and the number of pixels per unit distance of the intercepted image, determine the maximum working distance of the camera.
  • after the execution subject obtains the pixel focal length of the camera, it can obtain the number of pixels per unit distance corresponding to the intercepted image, either by calculation based on the collected image and the intercepted image, or by calculation based on the intercepted image and the detection model.
  • the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image, where the detection model is the model used to detect the target object.
  • the detection model corresponds to the smallest target object that it can detect; according to the actual size of this smallest target object, the number of pixels it occupies per unit distance can be determined, which gives the number of pixels per unit distance of the image.
  • the execution subject may determine the number of pixels per unit distance corresponding to the collected image, determine the ratio between the resolution of the collected image and the resolution of the intercepted image, and use that ratio together with the collected image's number of pixels per unit distance to determine the number of pixels per unit distance corresponding to the intercepted image.
  • after the execution subject obtains the pixel focal length of the camera and the number of pixels per unit distance corresponding to the intercepted image, it can calculate the maximum operating distance of the camera according to the ratio between the pixel focal length and the number of pixels per unit distance.
  • the maximum operating distance can be the farthest distance at which the target object can still be detected.
  • the execution subject can calculate the maximum operating distance of the camera according to the formula: max_distance = focal / min_pixels_per_meter, where:
  • max_distance indicates the maximum operating distance of the camera;
  • focal indicates the pixel focal length of the camera;
  • min_pixels_per_meter indicates the number of pixels per unit distance corresponding to the intercepted image.
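  The formula above can be sketched as follows (the function name and sample values are illustrative assumptions, not from the publication):

```python
def max_operating_distance(focal_px: float, min_pixels_per_meter: float) -> float:
    """Maximum operating distance in meters: the ratio of the pixel
    focal length (pixels) to the number of pixels per unit distance
    (pixels per meter) that the detection model can still recognize."""
    return focal_px / min_pixels_per_meter

# a 2000 px pixel focal length with a model needing at least 10 px per
# meter yields a 200 m maximum operating distance
print(max_operating_distance(2000.0, 10.0))  # → 200.0
```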
  • for example, if the resolution of the collected image is (w1, h1), the number of pixels per unit distance corresponding to the collected image is min1, and the intercepted image, intercepted from the collected image, has a resolution of (w2, h2), then the execution subject can determine the ratio between the two resolutions as w1/w2, and determine the number of pixels per unit distance corresponding to the intercepted image as min1/(w1/w2).
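  The w1/w2 example above can be sketched as follows (names and values are illustrative assumptions; only the width ratio is used, assuming the interception preserves the aspect ratio):

```python
def cropped_pixels_per_meter(min1: float, w1: int, w2: int) -> float:
    """Number of pixels per unit distance of the intercepted image,
    obtained by dividing the collected image's value min1 by the
    resolution ratio w1 / w2."""
    return min1 / (w1 / w2)

# a 3840 px wide collected image at 40 px/m, intercepted to 1920 px
# wide, corresponds to 20 px/m for the intercepted image
print(cropped_pixels_per_meter(40.0, 3840, 1920))  # → 20.0
```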
  • FIG. 2 is a schematic diagram of an application scenario of the method for determining the working distance of a camera in vehicle-road coordination according to this embodiment.
  • the terminal 201 can display the camera's physical parameter input interface to the user through the display screen; the user can input the camera's physical parameters in this interface, and the terminal 201 can then obtain the pixel focal length of the camera according to these parameters, as well as a collected image including the target object.
  • the terminal 201 may perform image processing on the collected image, and intercept the intercepted image including the target object from the acquired collected image, and then the terminal 201 calculates the maximum working distance of the camera according to the pixel focal length of the camera and the number of pixels per unit distance of the intercepted image.
  • the method for determining the working distance of the camera in vehicle-road coordination obtains the captured image and the pixel focal length of the camera, then intercepts from the captured image an intercepted image including the target object, and finally determines the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image.
  • the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image.
  • automatic adjustment of the camera's operating distance can be realized by intercepting the collected image.
  • since the intercepted image is intercepted from the collected image, there is a fixed proportional relationship between the resolution of the intercepted image and that of the collected image, and the same proportional relationship holds between the number of pixels per unit distance corresponding to the intercepted image and that corresponding to the collected image.
  • image interception can therefore change the number of pixels per unit distance, and thus the camera's working distance: the working distance can be increased without changing the camera's focal length, which reduces the cost of a zoom camera and improves the flexibility of adjusting the camera's range.
  • the above step 110, obtaining the pixel focal length of the camera, may include the following steps: obtain the physical focal length and imaging sensor parameters of the camera; determine the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters and the resolution of the collected image.
  • the execution subject can read the camera parameters, or receive those input by the user through the network, to obtain the physical parameters of the camera.
  • the physical parameters of the camera are the basic parameters with which the camera shoots, and can include the imaging sensor parameters, the physical focal length, the shutter speed and other parameters used to measure camera performance.
  • the execution subject can provide the user with an input interface for the camera's physical parameters through a display device such as a display screen, and the user can input the physical parameters in this interface, so that the execution subject obtains the physical focal length and imaging sensor parameters of the camera from user input. Alternatively, the execution subject can read the camera's physical parameters stored in the network and obtain the physical focal length and imaging sensor parameters from there.
  • the physical parameters of the camera acquired by the execution subject include the physical focal length of the camera and the parameters of the imaging sensor, which can further determine the resolution of the captured image of the camera.
  • the above execution subject can determine the pixel focal length of the camera by using the pixel focal length calculation formula according to the physical focal length of the camera, the imaging sensor parameters and the resolution of the captured image.
  • the pixel focal length calculation formula can be: focal = lens × √(img_width² + img_height²) / sensor_size, where:
  • focal represents the pixel focal length of the camera;
  • lens represents the physical focal length of the camera;
  • img_width and img_height represent the resolution of the captured image;
  • sensor_size represents the imaging sensor parameter (the sensor diagonal) of the camera.
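  The calculation can be sketched as follows, under the assumption that sensor_size is the sensor diagonal expressed in the same units as the physical focal length (names and values are illustrative assumptions):

```python
import math

def pixel_focal_length(lens: float, sensor_size: float,
                       img_width: int, img_height: int) -> float:
    """Pixel focal length: the physical focal length scaled by the
    ratio of the image diagonal (in pixels) to the sensor diagonal
    (assumed to be in the same units as lens)."""
    img_diagonal = math.hypot(img_width, img_height)
    return lens * img_diagonal / sensor_size

# with a 3-4-5 triangle the image diagonal is exactly 5 px, so a 1 mm
# lens over a 5 mm sensor diagonal gives a pixel focal length of 1.0
print(pixel_focal_length(1.0, 5.0, 3, 4))  # → 1.0
```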
  • the pixel focal length of the camera is determined through the physical focal length, imaging sensor parameters and the resolution of the captured image, and the pixel focal length can be determined according to the calculation relationship among the physical focal length, imaging sensor parameters and the resolution of the captured image , improving the efficiency and accuracy of determining the focal length of a pixel.
  • the above step 110, obtaining the pixel focal length of the camera may also include the following steps: determining the first internal reference matrix of the camera based on the collected image and the camera calibration algorithm; obtaining the pixel focal length of the camera from the first internal reference matrix .
  • the execution subject can obtain the captured image, calibrate it using a camera calibration algorithm, and calculate the first internal reference matrix of the camera.
  • the first internal reference matrix can include the pixel focal length of the camera and the center of the photosensitive plate of the camera.
  • the above execution subject can calibrate the captured image by using the Zhang Zhengyou checkerboard calibration algorithm, and obtain the first internal parameter matrix and external parameter matrix of the camera. The above execution subject may determine the pixel focal length of the camera from the calculated first internal reference matrix.
  • the pixel focal length of the camera is determined through the camera calibration algorithm, and the pixel focal length can be determined quickly and accurately according to the collected images, which improves the efficiency and accuracy of determining the pixel focal length.
  • Fig. 3 shows the method steps for obtaining the number of pixels per unit distance of the intercepted image, which may include the following steps:
  • Step 310 acquiring the number of pixels per unit distance of the sample image in the detection model.
  • the execution subject may read the detection model, and obtain the number of pixels per unit distance of the sample image in the detection model.
  • the detection model is trained based on the sample image, and is used to detect the target object in the collected image.
  • the detection model corresponds to the smallest target object that it can detect; according to the actual size of this smallest target object, the number of pixels it occupies per unit distance can be determined, so that the number of pixels per unit distance of the sample image can be obtained. That is, the number of pixels per unit distance of the sample image is the number of pixels included in the smallest unit that can be recognized by the detection model in the sample image.
  • Step 320 acquire the resolution of the sample image, and determine the ratio between the resolution of the sample image and the resolution of the intercepted image.
  • the detection model can detect images with a preset resolution, and the model can be obtained through training with sample images of the same resolution.
  • the above execution subject can obtain the resolution of the sample image in the detection model, and then determine the resolution of the intercepted image.
  • the execution subject may calculate a ratio between the resolution of the sample image and the resolution of the intercepted image according to the resolution of the sample image and the resolution of the intercepted image.
  • Step 330 based on the number of pixels per unit distance of the sample image and the ratio, obtain the number of pixels per unit distance of the intercepted image.
  • the number of pixels per unit distance corresponding to the intercepted image can be calculated from the number of pixels per unit distance of the sample image and the ratio, which gives the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image.
  • calculating the number of pixels per unit distance of the intercepted image through the proportional relationship between the detection model's sample image and the intercepted image thus determines the number of pixels included in the smallest unit that the detection model can recognize in the intercepted image.
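  Steps 310 to 330 can be combined with the pixel focal length into a single sketch (names and values are illustrative assumptions; widths stand in for full resolutions, assuming the aspect ratio is preserved):

```python
def crop_max_distance(focal_px: float, sample_ppm: float,
                      sample_width: int, crop_width: int) -> float:
    # Step 320: ratio between the sample and intercepted resolutions.
    ratio = sample_width / crop_width
    # Step 330: pixels per unit distance of the intercepted image.
    crop_ppm = sample_ppm / ratio
    # Maximum operating distance from the pixel focal length and crop_ppm.
    return focal_px / crop_ppm

# sample image 1920 px wide at 40 px/m, crop 960 px wide, focal 2000 px
print(crop_max_distance(2000.0, 40.0, 1920, 960))  # → 100.0
```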
  • FIG. 4 shows the method steps for determining the geographic location of the target object, which may include the following steps:
  • Step 410 based on the resolution of the intercepted image and the first internal reference matrix, determine a second internal reference matrix for the intercepted image.
  • the execution subject can use the camera calibration algorithm to determine the first internal parameter matrix and external parameter matrix of the camera.
  • the first internal parameter matrix includes the pixel focal length of the camera and the coordinate value of the center of the photosensitive plate of the camera in the pixel coordinate system.
  • the execution subject can determine the resolution of the intercepted image and, after obtaining the first internal reference matrix, determine the coordinate values of the center of the camera's photosensitive plate in the pixel coordinate system at the resolution of the intercepted image; that is, the coordinate values can be half of the intercepted image's resolution.
  • the execution subject may determine the second internal reference matrix of the intercepted image according to the determined pixel focal length and the coordinate values corresponding to the intercepted image.
  • the execution subject performs the Zhang Zhengyou checkerboard calibration algorithm on the collected images to determine the first internal reference matrix, which is: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], where:
  • fx and fy are the pixel focal lengths of the camera;
  • cx and cy are the coordinate values of the center of the photosensitive plate of the camera in the pixel coordinate system at the resolution of the captured image.
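  The construction of the second internal reference matrix can be sketched as follows, following the text's assumption that the principal point of the intercepted image is half its resolution (names and values are illustrative assumptions):

```python
def second_intrinsic_matrix(first_k, crop_width, crop_height):
    """Keep fx and fy from the first internal reference matrix and set
    the principal point (cx, cy) to half the intercepted image's
    resolution, per the assumption in the text."""
    fx, fy = first_k[0][0], first_k[1][1]
    return [[fx, 0.0, crop_width / 2.0],
            [0.0, fy, crop_height / 2.0],
            [0.0, 0.0, 1.0]]

first_k = [[2000.0, 0.0, 960.0],   # fx, 0, cx for a 1920x1080 image
           [0.0, 2000.0, 540.0],   # 0, fy, cy
           [0.0, 0.0, 1.0]]
second_k = second_intrinsic_matrix(first_k, 960, 540)
print(second_k[0][2], second_k[1][2])  # → 480.0 270.0
```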
  • Step 420 Determine the geographic location of the target object based on the second internal reference matrix and external reference matrix of the intercepted image.
  • the extrinsic parameter matrix acquired by the execution subject remains unchanged; after obtaining the second internal reference matrix of the intercepted image, the second internal reference matrix and the extrinsic parameter matrix can be used to perform perceptual positioning on the target object in the intercepted image, so as to determine the geographic location of the target object.
  • the geographic location may be the actual location of the target object and may be represented by coordinates.
  • the second internal reference matrix corresponding to the intercepted image is determined from the pixel focal length of the camera and the resolution of the intercepted image, and the geographic location of the target object is determined from the second internal reference matrix and the external reference matrix; this determines the geographic location of the target object more accurately and improves the accuracy of location detection.
  • the present disclosure provides an embodiment of a device for determining the working distance of a camera in vehicle-road coordination.
  • This device embodiment is similar to the method embodiment shown in FIG. 1
  • the device can be specifically applied to various electronic devices.
  • the device 500 for determining the camera working distance in the vehicle-road coordination in this embodiment includes: an acquisition module 510 , an interception module 520 and a determination module 530 .
  • the obtaining module 510 is configured to obtain the captured image and pixel focal length of the camera
  • the intercepting module 520 is configured to intercept the intercepted image including the target object from the collected image
  • the determination module 530 is configured to determine the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image, wherein the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be identified by the detection model in the intercepted image.
  • the obtaining module 510 is further configured to: obtain the physical focal length and imaging sensor parameters of the camera; determine the pixel focal length of the camera based on the physical focal length, imaging sensor parameters and the resolution of the captured image .
  • the obtaining module 510 is further configured to: determine a first internal reference matrix of the camera based on the collected images and the camera calibration algorithm; obtain the pixel focal length of the camera from the first internal reference matrix.
  • the number of pixels per unit distance of the intercepted image is obtained based on the following steps: obtain the number of pixels per unit distance of the sample image in the detection model, wherein the detection model is used to detect the target object, and the number of pixels per unit distance of the sample image is the number of pixels included in the smallest unit that can be recognized by the detection model in the sample image; obtain the resolution of the sample image, and determine the ratio between the resolution of the sample image and the resolution of the intercepted image; based on the number of pixels per unit distance of the sample image and the ratio, obtain the number of pixels per unit distance of the intercepted image.
  • the determination module 530 is further configured to: determine a second internal reference matrix of the intercepted image based on the resolution of the intercepted image and the first internal reference matrix; and determine the geographic location of the target object based on the second internal reference matrix and the extrinsic matrix.
  • the device for determining the working distance of the camera in vehicle-road coordination obtains the captured image and the pixel focal length of the camera, then intercepts from the captured image an intercepted image including the target object, and finally determines the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image.
  • the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit that can be recognized by the detection model in the intercepted image.
  • automatic adjustment of the camera's operating distance can be realized by intercepting the collected image.
  • since the intercepted image is intercepted from the collected image, there is a fixed proportional relationship between the resolution of the intercepted image and that of the collected image, and the same proportional relationship holds between the number of pixels per unit distance corresponding to the intercepted image and that corresponding to the collected image.
  • image interception can therefore change the number of pixels per unit distance, and thus the camera's working distance: the working distance can be increased without changing the camera's focal length, which reduces the cost of a zoom camera and improves the flexibility of adjusting the camera's range.
  • the acquisition, storage and application of the user's personal information involved are in compliance with relevant laws and regulations, and do not violate public order and good customs.
  • the present disclosure also provides an electronic device, a readable storage medium, a computer program product, a roadside device, and a cloud control platform.
  • FIG. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data necessary for the operation of the device 600.
  • the computing unit 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to bus 604 .
  • the I/O interface 605 includes: an input unit 606, such as a keyboard, a mouse, etc.; an output unit 607, such as various types of displays, speakers, etc.; a storage unit 608, such as a magnetic disk, an optical disk etc.; and a communication unit 609, such as a network card, a modem, a wireless communication transceiver, and the like.
  • the communication unit 609 allows the device 600 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • the computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, etc.
  • the computing unit 601 executes various methods and processes described above, for example, a method for determining the working distance of a camera in vehicle-road coordination.
  • the method for determining the camera range in vehicle-road coordination can be implemented as a computer software program, which is tangibly contained in a machine-readable medium, such as the storage unit 608 .
  • part or all of the computer program may be loaded and/or installed on the device 600 via the ROM 602 and/or the communication unit 609.
  • the computer program When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for determining the working distance of the camera in the vehicle-road coordination described above can be executed.
  • the computing unit 601 may be configured in any other appropriate way (for example, by means of firmware) to execute the method for determining the working distance of the camera in vehicle-road coordination.
  • Various implementations of the systems and techniques described above can be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • the programmable processor can be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program codes for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing devices, so that, when executed by the processor or controller, the program codes cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the systems and techniques described herein can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • the systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or web browser through which a user can interact with embodiments of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN) and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • the roadside device may also include communication components, etc.; the electronic device and the communication components may be integrated together or configured separately.
  • Electronic devices can obtain data from sensing devices (such as roadside cameras), such as pictures and videos, for image and video processing and data calculation.
  • the electronic device itself may also have a sensing data acquisition function and a communication function, such as an AI camera, and the electronic device may directly perform image and video processing and data calculation based on the acquired sensing data.
  • the cloud control platform performs processing in the cloud
  • the electronic devices included in the cloud control platform can obtain data, such as pictures and videos, from sensing devices (such as roadside cameras) for image and video processing and data calculation; the cloud control platform may also be called a vehicle-road collaborative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server, etc.
  • steps may be reordered, added or deleted using the various forms of flow shown above.
  • each step described in the present disclosure may be executed in parallel, sequentially, or in a different order; as long as the desired result of the technical solution disclosed in the present disclosure can be achieved, no limitation is imposed herein.

Abstract

The present disclosure relates to the technical field of intelligent transportation, and specifically to the technical field of visual processing. Disclosed are a method and apparatus for determining the operating range of a camera in cooperative vehicle infrastructure, and a roadside device. The specific implementation solution comprises: first acquiring a captured image and a pixel focal length of the camera; then cropping, from the captured image, a cropped image comprising a target object; and finally determining the maximum operating range of the camera on the basis of the pixel focal length and the number of pixels per unit distance of the cropped image, the number of pixels per unit distance of the cropped image being the number of pixels comprised in the smallest unit of the cropped image that can be recognized by a detection model. The operating range of the camera can be automatically adjusted by cropping the captured image, which improves the flexibility of adjusting the operating range of the camera.

Description

Method, Apparatus and Roadside Device for Determining the Operating Range of a Camera in Vehicle-Road Coordination
This patent application claims priority to Chinese Patent Application No. 202110724108.5, filed on June 29, 2021 and entitled "Method, Apparatus and Roadside Device for Determining the Operating Range of a Camera in Vehicle-Road Coordination", which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the technical field of intelligent transportation, in particular to the technical field of visual processing, and especially to a method, an apparatus and a roadside device for determining the operating range of a camera in vehicle-road coordination.
Background
In the construction of V2X vehicle-road coordination infrastructure, the roadside perception system provides beyond-line-of-sight perception information for vehicle-road coordination. As one of the most important sensors of the roadside perception system, the camera has an operating range that is a key metric of the perception system.
A related method directly uses the original image provided by the camera, removes distortion according to the camera's intrinsic parameters, and then performs two-dimensional plane detection and three-dimensional perception and localization on the entire image; the operating range is determined directly from the original image.
Summary
The present disclosure provides a method, an apparatus, an electronic device, a storage medium, a computer program product, a roadside device and a cloud control platform for determining the operating range of a camera in vehicle-road coordination.
According to one aspect of the present disclosure, a method for determining the operating range of a camera in vehicle-road coordination is provided. The method includes: acquiring a captured image and a pixel focal length of the camera; cropping, from the captured image, a cropped image that includes a target object; and determining the maximum operating range of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels contained in the smallest unit of the cropped image that can be recognized by a detection model.
According to another aspect of the present disclosure, an apparatus for determining the operating range of a camera in vehicle-road coordination is provided. The apparatus includes: an acquisition module configured to acquire a captured image and a pixel focal length of the camera; a cropping module configured to crop, from the captured image, a cropped image that includes a target object; and a determination module configured to determine the maximum operating range of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels contained in the smallest unit of the cropped image that can be recognized by a detection model.
According to another aspect of the present disclosure, an electronic device is provided. The electronic device includes at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the above method for determining the operating range of a camera in vehicle-road coordination.
According to another aspect of the present disclosure, an embodiment of the present application provides a computer-readable medium storing computer instructions, where the computer instructions are used to enable a computer to execute the above method for determining the operating range of a camera in vehicle-road coordination.
According to another aspect of the present disclosure, an embodiment of the present application provides a computer program product including a computer program, where the computer program, when executed by a processor, implements the above method for determining the operating range of a camera in vehicle-road coordination.
According to another aspect of the present disclosure, an embodiment of the present application provides a roadside device including the above electronic device.
According to another aspect of the present disclosure, an embodiment of the present application provides a cloud control platform including the above electronic device.
It should be understood that what is described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be readily understood from the following description.
Brief Description of the Drawings
The accompanying drawings are provided for a better understanding of the solution and do not constitute a limitation of the present disclosure. In the drawings:
FIG. 1 is a flowchart of an embodiment of a method for determining the operating range of a camera in vehicle-road coordination according to the present disclosure;
FIG. 2 is a schematic diagram of an application scenario of the method for determining the operating range of a camera in vehicle-road coordination according to the present disclosure;
FIG. 3 is a flowchart of an embodiment of obtaining the number of pixels per unit distance corresponding to a cropped image according to the present disclosure;
FIG. 4 is a flowchart of an embodiment of determining the geographic location of a target object according to the present disclosure;
FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for determining the operating range of a camera in vehicle-road coordination according to the present disclosure;
FIG. 6 is a block diagram of an electronic device used to implement the method for determining the operating range of a camera in vehicle-road coordination according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding; they should be regarded as exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
Referring to FIG. 1, FIG. 1 shows a schematic flow 100 of an embodiment of a method for determining the operating range of a camera in vehicle-road coordination that can be applied to the present disclosure. The method includes the following steps:
Step 110: acquire a captured image and a pixel focal length of the camera.
In this embodiment, the executing body of the method for determining the operating range of the camera (for example, a terminal device or a server) may receive camera parameters input by a user and, after obtaining the camera parameters, perform further calculation to obtain the pixel focal length of the camera, where the pixel focal length is the focal length expressed in pixels. As an example, the executing body may obtain the physical focal length of the camera and the physical size of a pixel, and obtain the pixel focal length of the camera according to the formula: pixel focal length = physical focal length / physical size of a pixel.
The executing body may also use the camera to obtain a captured image of the camera, and the captured image may include a target object for distance measurement.
Step 120: crop, from the captured image, a cropped image that includes the target object.
In this embodiment, the executing body may apply image processing methods to the acquired captured image, where the image processing methods may include image correction, image filtering, image grayscale conversion, image enhancement, and the like. The executing body may then perform image segmentation on the processed captured image. Image segmentation divides an image into several specific regions with distinct properties; segmentation methods mainly include threshold-based methods, region-based methods, edge-based methods, and methods based on specific theories. In the field of deep learning, multi-layer neural network models, such as deep neural networks and convolutional neural networks, can be used for image segmentation. The executing body may segment the captured image using an image segmentation method and crop, from the captured image, a cropped image that includes the target object.
The executing body may also perform target object recognition on the acquired captured image to determine the position information of the target object in the captured image. The executing body then crops the captured image according to the determined position information to obtain a cropped image that includes the target object.
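The cropping step above can be sketched as a simple axis-aligned region extraction. This is a minimal illustration, not the patent's implementation; the image representation (a list of rows) and the box coordinates are hypothetical.

```python
def crop_region(image, x1, y1, x2, y2):
    """Crop the axis-aligned region [x1, x2) x [y1, y2) from an image
    stored as a list of rows (each row a list of pixel values)."""
    return [row[x1:x2] for row in image[y1:y2]]

# A toy 4x4 "image" whose pixel values encode their (row, col) position.
img = [[(r, c) for c in range(4)] for r in range(4)]

# Suppose detection placed the target in the box (x1, y1) = (1, 0), (x2, y2) = (3, 2).
crop = crop_region(img, 1, 0, 3, 2)
print(crop)  # → [[(0, 1), (0, 2)], [(1, 1), (1, 2)]]
```

In practice the box would come from the detection model's output rather than being fixed by hand.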
Step 130: determine the maximum operating range of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image.
In this embodiment, after obtaining the pixel focal length of the camera, the executing body may obtain the number of pixels per unit distance corresponding to the cropped image, either by calculation based on the captured image and the cropped image or by calculation based on the cropped image and the detection model. The number of pixels per unit distance corresponding to the cropped image may be the number of pixels contained in the smallest unit of the cropped image that can be recognized by the detection model, where the detection model is a model used to detect the target object. The detection model corresponds to a smallest detectable target object; according to the actual size of this smallest target object, the number of pixels it occupies per unit distance can be determined, so the executing body can determine the number of pixels per unit distance of the cropped image from the cropped image and the detection model.
Alternatively, the executing body may determine the number of pixels per unit distance corresponding to the captured image, determine the ratio between the resolution of the captured image and the resolution of the cropped image, and determine the number of pixels per unit distance corresponding to the cropped image from this ratio and the number of pixels per unit distance corresponding to the captured image.
After obtaining the pixel focal length of the camera and the number of pixels per unit distance corresponding to the cropped image, the executing body may calculate the maximum operating range of the camera according to the ratio between the pixel focal length and the number of pixels per unit distance. The maximum operating range may be the farthest distance at which the camera can detect the target object. The executing body may calculate the maximum operating range of the camera according to the following formula:
max_distance = focal / min_pixels_per_meter
where max_distance is the maximum operating range of the camera, focal is the pixel focal length of the camera, and min_pixels_per_meter is the number of pixels per unit distance corresponding to the cropped image.
As an example, suppose the resolution of the captured image is (w1, h1), the number of pixels per unit distance corresponding to the captured image is min1, and the cropped image, obtained by cropping the captured image, has a resolution of (w2, h2). The executing body can determine that the ratio between the resolution of the captured image and that of the cropped image is w1/w2, so the number of pixels per unit distance corresponding to the cropped image is min1/(w1/w2). The executing body can then determine that, for the captured image, the maximum operating range of the camera is max_distance1 = focal/min1, while for the cropped image the maximum operating range is max_distance = focal/(min1/(w1/w2)).
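The two formulas above can be put together in a short numerical sketch. The specific values (pixel focal length, per-meter pixel requirement, image widths) are hypothetical and chosen only to make the arithmetic visible.

```python
def max_operating_range(focal_px, pixels_per_meter):
    """max_distance = focal / min_pixels_per_meter, as stated in the text."""
    return focal_px / pixels_per_meter

def cropped_pixels_per_meter(min1, w1, w2):
    """Scale the captured image's per-meter pixel count by the resolution
    ratio w1/w2, as in the worked example."""
    return min1 / (w1 / w2)

focal = 2000.0      # pixel focal length (hypothetical)
min1 = 20.0         # pixels per meter required at full resolution (hypothetical)
w1, w2 = 1920, 960  # captured vs. cropped image widths (hypothetical)

d_full = max_operating_range(focal, min1)
d_crop = max_operating_range(focal, cropped_pixels_per_meter(min1, w1, w2))
print(d_full, d_crop)  # → 100.0 200.0
```

Halving the width of the region fed to the detector halves the per-meter pixel requirement in the original image, doubling the operating range without any change to the lens.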
Continuing to refer to FIG. 2, FIG. 2 is a schematic diagram of an application scenario of the method for determining the operating range of a camera in vehicle-road coordination according to this embodiment. In the application scenario of FIG. 2, the terminal 201 can display a physical-parameter input interface of the camera to the user through a display screen, and the user can input the physical parameters of the camera in this interface. The terminal 201 can obtain the pixel focal length of the camera and a captured image including the target object according to these physical parameters. The terminal 201 can perform image processing on the captured image, crop a cropped image including the target object from the captured image, and then calculate the maximum operating range of the camera according to the pixel focal length of the camera and the number of pixels per unit distance of the cropped image.
The method for determining the operating range of a camera in vehicle-road coordination provided by the embodiments of the present disclosure acquires a captured image and a pixel focal length of the camera, crops a cropped image including a target object from the captured image, and finally determines the maximum operating range of the camera based on the pixel focal length and the number of pixels per unit distance of the cropped image, where the number of pixels per unit distance of the cropped image is the number of pixels contained in the smallest unit of the cropped image that can be recognized by the detection model. The operating range of the camera can thus be adjusted automatically by cropping the captured image. Since the cropped image is obtained by cropping the captured image, its resolution bears a fixed proportional relationship to the resolution of the captured image, and the same proportional relationship holds between their numbers of pixels per unit distance. Image cropping can therefore change the number of pixels per unit distance, and hence the operating range of the camera, without changing the camera's focal length. This increases the operating range of the camera, reduces the cost of zoom cameras, and improves the flexibility of adjusting the camera's operating range.
As an optional implementation, step 110 of acquiring the pixel focal length of the camera may include the following steps: acquiring the physical focal length and imaging sensor parameters of the camera; and determining the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters, and the resolution of the captured image.
Specifically, the executing body may read the camera parameters over the network or receive them from user input to obtain the physical parameters of the camera. The physical parameters of the camera may be the basic shooting parameters of the camera and may include imaging sensor parameters, physical focal length, shutter speed, and other parameters used to measure camera performance.
The executing body may provide the user with an input interface for the camera's physical parameters through a display device such as a display screen, and the user may input the physical parameters of the camera in this interface, so that the executing body obtains the physical focal length and imaging sensor parameters of the camera from the user input. Alternatively, the executing body may read the physical parameters of the camera over the network, according to camera parameters stored in the network, to obtain the physical focal length and imaging sensor parameters of the camera.
The physical parameters of the camera obtained by the executing body include the physical focal length of the camera and the imaging sensor parameters, from which the resolution of the captured image can be further determined. The executing body may determine the pixel focal length of the camera using a pixel focal length calculation formula, based on the physical focal length of the camera, the imaging sensor parameters, and the resolution of the captured image. The pixel focal length calculation formula may be:
(pixel focal length formula, rendered in the original as image PCTCN2021135146-appb-000001)
where focal is the pixel focal length of the camera, lens is the physical focal length of the camera, img_width and img_height give the resolution of the captured image, and sensor_size is the imaging sensor parameter of the camera.
In this implementation, the pixel focal length of the camera is determined from the physical focal length, the imaging sensor parameters, and the resolution of the captured image; since the pixel focal length follows from the computational relationship among these three quantities, the efficiency and accuracy of determining the pixel focal length are improved.
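The exact formula in the original is an embedded image and is not reproduced here. One plausible reading, assuming sensor_size is the sensor's diagonal length in the same units as the physical focal length, scales the physical focal length by the ratio of the image diagonal (in pixels) to the sensor diagonal. This is a sketch under that assumption, not the patent's definitive formula, and the numbers are hypothetical.

```python
import math

def pixel_focal_length(lens_mm, sensor_diag_mm, img_width, img_height):
    """Pixel focal length sketch: physical focal length times the ratio of
    the image diagonal (pixels) to the sensor diagonal (mm). The patent's
    exact formula is an image and may differ from this assumed form."""
    img_diag_px = math.hypot(img_width, img_height)
    return lens_mm * img_diag_px / sensor_diag_mm

# Hypothetical numbers: 4 mm lens, 8 mm sensor diagonal, 1600x1200 image.
print(pixel_focal_length(4.0, 8.0, 1600, 1200))  # → 1000.0
```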
As an optional implementation, step 110 of acquiring the pixel focal length of the camera may also include the following steps: determining a first intrinsic parameter matrix of the camera based on the captured image and a camera calibration algorithm; and obtaining the pixel focal length of the camera from the first intrinsic parameter matrix.
Specifically, the executing body may obtain the captured image, calibrate it using a camera calibration algorithm, and calculate the first intrinsic parameter matrix of the camera, which may include the pixel focal length of the camera and the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system. For example, the executing body may calibrate the captured image using Zhang Zhengyou's checkerboard calibration algorithm to obtain the first intrinsic parameter matrix and the extrinsic parameter matrix of the camera. The executing body may then determine the pixel focal length of the camera from the calculated first intrinsic parameter matrix.
In this implementation, determining the pixel focal length of the camera through a camera calibration algorithm allows the pixel focal length to be determined quickly and accurately from the captured image, improving the efficiency and accuracy of determining the pixel focal length.
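Reading the pixel focal length out of an intrinsic matrix is straightforward: calibration (e.g. OpenCV's `cv2.calibrateCamera`, which implements Zhang's method) produces a 3x3 matrix with the focal lengths on the diagonal and the principal point in the last column. The matrix values below are hypothetical calibration output, not results from the patent.

```python
def focal_from_intrinsics(K):
    """Given a 3x3 intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
    return the pixel focal lengths fx, fy and the principal point (cx, cy)."""
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    return fx, fy, (cx, cy)

# Hypothetical calibration result.
K = [[1000.0, 0.0, 960.0],
     [0.0, 1000.0, 540.0],
     [0.0, 0.0, 1.0]]
fx, fy, center = focal_from_intrinsics(K)
print(fx, fy, center)  # → 1000.0 1000.0 (960.0, 540.0)
```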
Referring to FIG. 3, FIG. 3 shows the steps of a method for obtaining the number of pixels per unit distance of the cropped image, which may include the following steps:
Step 310: obtain the number of pixels per unit distance of the sample images of the detection model.
在本实施例中,上述执行主体可以对检测模型进行读取,获取到检测模型中样本图像的单位距离像素数。其中,检测模型是基于样本图像进行训练,用于对采集图像中的目标对象进行检测的模型,该检测模型对应有能够检测的最小目标对象,根据最小目标对象的实际尺寸,可以确定最小目标对象对应的像素数在单位距离内包括的像素数,从而能够获取到检测模型中样本图像的单位距离像素数,即样本图像的单位距离像素数为样本图像中能够被检测模型识别的最小单位内包括的像素数。In this embodiment, the execution subject may read the detection model, and obtain the number of pixels per unit distance of the sample image in the detection model. Among them, the detection model is trained based on the sample image, and is used to detect the target object in the collected image. The detection model corresponds to the smallest target object that can be detected. According to the actual size of the smallest target object, the smallest target object can be determined. The corresponding number of pixels includes the number of pixels per unit distance, so that the number of pixels per unit distance of the sample image in the detection model can be obtained, that is, the number of pixels per unit distance of the sample image is included in the smallest unit that can be recognized by the detection model in the sample image of pixels.
Step 320, acquiring the resolution of the sample image, and determining the ratio between the resolution of the sample image and the resolution of the intercepted image.
In this embodiment, the detection model is able to detect images of a preset resolution and may be a model obtained by training on sample images of that same resolution. The execution body described above may obtain the resolution of the sample images in the detection model and then determine the resolution of the intercepted image. From these two resolutions, the execution body may calculate the ratio between the resolution of the sample image and the resolution of the intercepted image.
Step 330, acquiring the number of pixels per unit distance of the intercepted image based on the number of pixels per unit distance of the sample image and the ratio.
In this embodiment, after determining the ratio between the resolution of the sample image and the resolution of the intercepted image, the execution body described above may calculate the number of pixels per unit distance of the intercepted image from the number of pixels per unit distance of the sample image and the ratio, thereby obtaining the number of pixels included in the smallest unit of the intercepted image that the detection model is able to recognize.
In this implementation, the number of pixels per unit distance of the intercepted image is calculated through the proportional relationship between the sample images of the detection model and the intercepted image, thereby determining the number of pixels included in the smallest unit of the intercepted image that the detection model is able to recognize.
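One consistent reading of steps 310–330 can be sketched as follows. The interpretation that the crop's required pixel count scales by the inverse of the sample-to-crop resolution ratio, and all the numbers used, are illustrative assumptions rather than the patent's stated formula:

```python
def crop_ppud(sample_ppud, sample_width, crop_width):
    """Number of pixels per unit distance for the intercepted image.

    The detection model was trained at sample_width; before detection the
    crop is rescaled to that width, stretching each crop pixel by
    ratio = sample_width / crop_width, so the crop only needs
    sample_ppud / ratio native pixels per unit distance."""
    ratio = sample_width / crop_width   # step 320: sample : crop resolution ratio
    return sample_ppud / ratio          # step 330

# Hypothetical numbers: a model trained on 1920-px-wide samples needing
# 40 px per metre; the intercepted image is 640 px wide.
print(crop_ppud(40.0, 1920, 640))  # ≈ 13.33 px per metre
```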
Referring to Fig. 4, Fig. 4 shows the steps of a method for determining the geographic location of the target object, which may include the following steps:
Step 410, determining a second intrinsic matrix of the intercepted image based on the resolution of the intercepted image and the first intrinsic matrix.
In this embodiment, the execution body described above may use a camera calibration algorithm to determine the first intrinsic matrix and the extrinsic matrix of the camera, where the first intrinsic matrix includes the pixel focal length of the camera and the coordinates, in the pixel coordinate system, of the center of the camera's photosensitive plate.
The execution body may determine the resolution of the intercepted image and, after obtaining the first intrinsic matrix, determine the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system at the resolution of the intercepted image; that is, these coordinates may be half of the resolution of the intercepted image. The execution body may then determine the second intrinsic matrix of the intercepted image from the determined pixel focal length and the coordinates corresponding to the intercepted image.
As an example, the execution body described above applies Zhang Zhengyou's checkerboard calibration algorithm to the captured image and determines the first intrinsic matrix as:

    | fx  0   cx |
    | 0   fy  cy |
    | 0   0   1  |
Here fx and fy are the pixel focal lengths of the camera, and cx and cy are the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system at the resolution of the captured image. After obtaining the resolution (W, H) of the intercepted image, the execution body may take half of that resolution as the coordinates of the center of the camera's photosensitive plate in the pixel coordinate system at the resolution of the intercepted image, i.e. (W/2, H/2), while keeping the pixel focal lengths unchanged. The second intrinsic matrix corresponding to the intercepted image can then be determined as:

    | fx  0   W/2 |
    | 0   fy  H/2 |
    | 0   0   1   |
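The construction of the second intrinsic matrix in this example can be sketched directly (the pixel focal lengths are kept and the principal point is moved to half the crop resolution; the numeric values are hypothetical):

```python
def crop_intrinsics(k1, w, h):
    """Second intrinsic matrix for an intercepted image of resolution (w, h):
    keep fx, fy from the first intrinsic matrix, and set the principal
    point to (w/2, h/2)."""
    fx, fy = k1[0][0], k1[1][1]
    return [[fx, 0.0, w / 2.0],
            [0.0, fy, h / 2.0],
            [0.0, 0.0, 1.0]]

K1 = [[2100.0, 0.0, 960.0],        # hypothetical first intrinsic matrix
      [0.0, 2100.0, 540.0],
      [0.0, 0.0, 1.0]]
K2 = crop_intrinsics(K1, 800, 600)  # 800x600 intercepted image
print(K2)  # [[2100.0, 0.0, 400.0], [0.0, 2100.0, 300.0], [0.0, 0.0, 1.0]]
```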
Step 420, determining the geographic location of the target object based on the second intrinsic matrix of the intercepted image and the extrinsic matrix.
In this embodiment, the extrinsic matrix obtained by the execution body described above remains unchanged. After obtaining the second intrinsic matrix of the intercepted image, the execution body may use the second intrinsic matrix and the extrinsic matrix to perceive and locate the target object in the intercepted image and determine its geographic location. The geographic location may be the actual location of the target object and may be expressed in coordinates.
In this implementation, the second intrinsic matrix corresponding to the intercepted image is determined from the pixel focal length of the camera and the resolution of the intercepted image, and the geographic location of the target object is determined from the second intrinsic matrix and the extrinsic matrix. This allows the geographic location of the target object to be determined more accurately, improving the accuracy of location detection.
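The localization step can be illustrated with a flat-ground back-projection. This is a sketch under stated assumptions, not the patent's implementation: the extrinsics are taken in the convention camera_point = R · world_point + t, the target is assumed to sit on the world plane z = 0, and all numeric values are hypothetical.

```python
def _mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def _inv_k(k):
    # Closed-form inverse of an intrinsic matrix [[fx,0,cx],[0,fy,cy],[0,0,1]].
    fx, fy, cx, cy = k[0][0], k[1][1], k[0][2], k[1][2]
    return [[1.0 / fx, 0.0, -cx / fx],
            [0.0, 1.0 / fy, -cy / fy],
            [0.0, 0.0, 1.0]]

def pixel_to_ground(u, v, k2, r, t):
    """Back-project pixel (u, v) of the intercepted image onto the world
    ground plane z = 0, using the second intrinsic matrix k2 and the
    extrinsics (r, t) with camera_point = r @ world_point + t."""
    m = _mat_vec(_inv_k(k2), [u, v, 1.0])   # viewing ray in the camera frame
    r3 = [r[0][2], r[1][2], r[2][2]]        # third column of r
    # Depth s chosen so that the world z-coordinate of R^T (s*m - t) is zero.
    s = sum(r3[i] * t[i] for i in range(3)) / sum(r3[i] * m[i] for i in range(3))
    # World point R^T (s*m - t); row i of R^T is column i of R.
    return [sum(r[k][i] * (s * m[k] - t[k]) for k in range(3)) for i in range(3)]

# Hypothetical setup: camera 10 m above the world origin, looking straight
# down; K2 for an 800x600 crop with fx = fy = 1000.
K2 = [[1000.0, 0.0, 400.0], [0.0, 1000.0, 300.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, -1.0]]
t = [0.0, 0.0, 10.0]
print(pixel_to_ground(500.0, 200.0, K2, R, t))  # ≈ [1.0, 1.0, 0.0]
```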
With further reference to Fig. 5, as an implementation of the methods shown in the figures above, the present disclosure provides an embodiment of an apparatus for determining the operating range of a camera in cooperative vehicle infrastructure. This apparatus embodiment corresponds to the method embodiment shown in Fig. 1, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for determining the operating range of a camera in cooperative vehicle infrastructure of this embodiment includes: an acquisition module 510, an interception module 520 and a determination module 530.
The acquisition module 510 is configured to acquire a captured image and a pixel focal length of a camera;
the interception module 520 is configured to intercept, from the captured image, an intercepted image including a target object; and
the determination module 530 is configured to determine the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image, where the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit of the intercepted image that a detection model is able to recognize.
In some optional implementations of this embodiment, the acquisition module 510 is further configured to: acquire a physical focal length and imaging sensor parameters of the camera; and determine the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters and the resolution of the captured image.
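This option can be sketched with the common pinhole relation between physical and pixel focal lengths. The formula and all numbers are assumptions for illustration, not the patent's stated equation:

```python
def pixel_focal_length_from_physical(f_mm, sensor_w_mm, sensor_h_mm, img_w, img_h):
    """Pixel focal lengths (fx, fy) from the physical focal length (mm), the
    imaging sensor's width/height (mm) and the captured image's resolution
    (pixels): fx = f * img_w / sensor_w, fy = f * img_h / sensor_h."""
    return f_mm * img_w / sensor_w_mm, f_mm * img_h / sensor_h_mm

# Hypothetical camera: 8 mm lens, 8.0 x 4.5 mm sensor, 1920x1080 image.
print(pixel_focal_length_from_physical(8.0, 8.0, 4.5, 1920, 1080))  # (1920.0, 1920.0)
```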
In some optional implementations of this embodiment, the acquisition module 510 is further configured to: determine a first intrinsic matrix of the camera based on the captured image and a camera calibration algorithm; and acquire the pixel focal length of the camera from the first intrinsic matrix.
In some optional implementations of this embodiment, the number of pixels per unit distance of the intercepted image is acquired by the following steps: acquiring the number of pixels per unit distance of a sample image in the detection model, where the detection model is used to detect the target object and the number of pixels per unit distance of the sample image is the number of pixels included in the smallest unit of the sample image that the detection model is able to recognize; acquiring the resolution of the sample image and determining the ratio between the resolution of the sample image and the resolution of the intercepted image; and acquiring the number of pixels per unit distance of the intercepted image based on the number of pixels per unit distance of the sample image and the ratio.
In some optional implementations of this embodiment, the determination module 530 is further configured to: determine a second intrinsic matrix of the intercepted image based on the resolution of the intercepted image and the first intrinsic matrix; and determine the geographic location of the target object based on the second intrinsic matrix and the extrinsic matrix of the intercepted image.
The apparatus for determining the operating range of a camera in cooperative vehicle infrastructure provided by the embodiments of the present disclosure acquires a captured image and a pixel focal length of the camera, intercepts from the captured image an intercepted image including the target object, and finally determines the maximum operating distance of the camera based on the pixel focal length and the number of pixels per unit distance of the intercepted image, where the number of pixels per unit distance of the intercepted image is the number of pixels included in the smallest unit of the intercepted image that the detection model is able to recognize. The operating distance of the camera can thus be adjusted automatically by intercepting part of the captured image. Because the intercepted image is cut from the captured image, its resolution is in a fixed proportion to the resolution of the captured image, and the same proportion holds between the numbers of pixels per unit distance of the two images. Image interception can therefore change the number of pixels per unit distance and, with it, the operating distance of the camera. The operating distance can be increased without changing the focal length of the camera, which reduces the cost of zoom cameras and makes adjustment of the camera's operating distance more flexible.
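Under the pinhole model, the final determination admits a one-line sketch: an object at distance D spans roughly fx / D pixels per unit of real-world length, so the largest distance at which the detection model's minimum pixel density is still met is fx divided by the intercepted image's pixels per unit distance. The formula and numbers below are illustrative assumptions, not the patent's stated equation:

```python
def max_operating_distance(fx, crop_ppud):
    """Maximum distance at which one unit of real-world length still covers
    at least crop_ppud pixels: fx / D >= crop_ppud  =>  D_max = fx / crop_ppud."""
    return fx / crop_ppud

# Hypothetical: pixel focal length 2400 px, crop needs 12 px per metre.
print(max_operating_distance(2400.0, 12.0))  # 200.0 (metres)
```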
In the technical solutions of the present disclosure, the acquisition, storage and application of any user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, a computer program product, a roadside device and a cloud control platform.
Fig. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. Electronic devices may also represent various forms of mobile apparatus, such as personal digital assistants, cellular telephones, smart phones, wearable devices and other similar computing apparatus. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in Fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 may also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602 and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A plurality of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays or speakers; a storage unit 608, such as a magnetic disk or an optical disc; and a communication unit 609, such as a network card, a modem or a wireless communication transceiver. The communication unit 609 allows the device 600 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 601 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller and so on. The computing unit 601 performs the various methods and processes described above, for example the method for determining the operating range of a camera in cooperative vehicle infrastructure. For example, in some embodiments, this method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for determining the operating range of a camera in cooperative vehicle infrastructure described above may be performed.
Alternatively, in other embodiments, the computing unit 601 may be configured in any other appropriate way (for example, by means of firmware) to perform the method for determining the operating range of a camera in cooperative vehicle infrastructure.
Various implementations of the systems and techniques described herein above may be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), computer hardware, firmware, software and/or combinations thereof. These various implementations may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input apparatus and at least one output apparatus.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus, such that, when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, apparatus or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display apparatus (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of apparatus may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback or haptic feedback), and input from the user may be received in any form (including acoustic input, speech input or haptic input).
The systems and techniques described herein may be implemented in a computing system that includes back-end components (for example, as a data server), or a computing system that includes middleware components (for example, an application server), or a computing system that includes front-end components (for example, a user computer with a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware or front-end components. The components of the system may be interconnected by digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN) and the Internet.
A computer system may include a client and a server. The client and the server are generally remote from each other and typically interact through a communication network. The client-server relationship arises from computer programs that run on the respective computers and have a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
Optionally, in addition to the electronic device, the roadside device may further include communication components and the like; the electronic device may be integrated with the communication components or provided separately from them. The electronic device may obtain data from sensing devices (such as roadside cameras), for example pictures and videos, so as to perform image and video processing and data computation. Optionally, the electronic device itself may also have sensing-data acquisition and communication functions, for example an AI camera, in which case the electronic device may perform image and video processing and data computation directly on the acquired sensing data.
Optionally, the cloud control platform performs processing in the cloud. The electronic devices included in the cloud control platform may obtain data from sensing devices (such as roadside cameras), for example pictures and videos, so as to perform image and video processing and data computation. The cloud control platform may also be called a cooperative vehicle-infrastructure management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server and so on.
It should be understood that steps may be reordered, added or deleted using the various forms of flow shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The specific implementations described above do not limit the protection scope of the present disclosure. It should be apparent to those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made depending on design requirements and other factors. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (15)

  1. A method for determining the operating range of a camera in cooperative vehicle infrastructure, comprising:
    acquiring a captured image and a pixel focal length of a camera;
    intercepting, from the captured image, an intercepted image including a target object; and
    determining a maximum operating distance of the camera based on the pixel focal length and a number of pixels per unit distance of the intercepted image, wherein the number of pixels per unit distance of the intercepted image is the number of pixels included in a smallest unit of the intercepted image that is recognizable by a detection model.
  2. The method according to claim 1, wherein said acquiring the pixel focal length of the camera comprises:
    acquiring a physical focal length and imaging sensor parameters of the camera; and
    determining the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters and a resolution of the captured image.
  3. The method according to claim 1, wherein said acquiring the pixel focal length of the camera comprises:
    determining a first intrinsic matrix of the camera based on the captured image and a camera calibration algorithm; and
    acquiring the pixel focal length of the camera from the first intrinsic matrix.
  4. The method according to claim 1, wherein the number of pixels per unit distance of the intercepted image is acquired by the following steps:
    acquiring a number of pixels per unit distance of a sample image in the detection model, wherein the detection model is used to detect the target object, and the number of pixels per unit distance of the sample image is the number of pixels included in a smallest unit of the sample image that is recognizable by the detection model;
    acquiring a resolution of the sample image, and determining a ratio between the resolution of the sample image and a resolution of the intercepted image; and
    acquiring the number of pixels per unit distance of the intercepted image based on the number of pixels per unit distance of the sample image and the ratio.
  5. The method according to claim 3, wherein the method further comprises:
    determining a second intrinsic matrix corresponding to the intercepted image based on a resolution of the intercepted image and the first intrinsic matrix; and
    determining a geographic location of the target object based on the second intrinsic matrix and an extrinsic matrix corresponding to the intercepted image.
  6. An apparatus for determining the operating range of a camera in cooperative vehicle infrastructure, comprising:
    an acquisition module configured to acquire a captured image and a pixel focal length of a camera;
    an interception module configured to intercept, from the captured image, an intercepted image including a target object; and
    a determination module configured to determine a maximum operating distance of the camera based on the pixel focal length and a number of pixels per unit distance of the intercepted image, wherein the number of pixels per unit distance of the intercepted image is the number of pixels included in a smallest unit of the intercepted image that is recognizable by a detection model.
  7. The apparatus according to claim 6, wherein the acquisition module is further configured to:
    acquire a physical focal length and imaging sensor parameters of the camera; and
    determine the pixel focal length of the camera based on the physical focal length, the imaging sensor parameters and a resolution of the captured image.
  8. The apparatus according to claim 6, wherein the acquisition module is further configured to:
    determine a first intrinsic parameter matrix of the camera based on the captured image and a camera calibration algorithm; and
    acquire the pixel focal length of the camera from the first intrinsic parameter matrix.
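Claim 8's second step is a direct lookup: a calibration routine (e.g. Zhang's checkerboard method) produces a 3x3 intrinsic matrix whose diagonal holds the pixel focal lengths. A sketch, assuming the conventional matrix layout:

```python
def focal_lengths_from_intrinsics(K):
    # Conventional layout: K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]],
    # so the pixel focal lengths along x and y sit on the diagonal.
    return K[0][0], K[1][1]
```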
  9. The apparatus according to claim 6, wherein the number of pixels per unit distance of the intercepted image is acquired based on the following steps:
    acquiring the number of pixels per unit distance of a sample image of the detection model, wherein the detection model is used to detect the target object, and the number of pixels per unit distance of the sample image is the number of pixels included in the smallest unit of the sample image that can be recognized by the detection model;
    acquiring the resolution of the sample image, and determining a ratio between the resolution of the sample image and the resolution of the intercepted image; and
    acquiring the number of pixels per unit distance of the intercepted image based on the number of pixels per unit distance of the sample image and the ratio.
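One plausible reading of claim 9, assuming the intercepted image is resized to the detection model's input (sample) resolution before detection; the claim does not specify the scaling direction, so the formula below is an assumption, with widths standing in for "resolution":

```python
def pixels_per_unit_for_crop(sample_pixels_per_unit: float,
                             sample_width_px: int,
                             crop_width_px: int) -> float:
    # Resizing the crop to the model's input width multiplies all pixel
    # counts by sample_width_px / crop_width_px, so the crop itself must
    # supply the sample minimum divided by that ratio.
    ratio = sample_width_px / crop_width_px
    return sample_pixels_per_unit / ratio
```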
  10. The apparatus according to claim 8, wherein the determination module is further configured to:
    determine a second intrinsic parameter matrix corresponding to the intercepted image based on the resolution of the intercepted image and the first intrinsic parameter matrix; and
    determine the geographic location of the target object based on the second intrinsic parameter matrix and an extrinsic parameter matrix corresponding to the intercepted image.
  11. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-5.
  12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1-5.
  13. A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
  14. A roadside device, comprising the electronic device according to claim 11.
  15. A cloud control platform, comprising the electronic device according to claim 11.
PCT/CN2021/135146 2021-06-29 2021-12-02 Method and apparatus for determining operating range of camera in cooperative vehicle infrastructure and roadside device WO2023273158A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110724108.5 2021-06-29
CN202110724108.5A CN113470103B (en) 2021-06-29 2021-06-29 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Publications (1)

Publication Number Publication Date
WO2023273158A1 true WO2023273158A1 (en) 2023-01-05

Family

ID=77873630

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135146 WO2023273158A1 (en) 2021-06-29 2021-12-02 Method and apparatus for determining operating range of camera in cooperative vehicle infrastructure and roadside device

Country Status (2)

Country Link
CN (1) CN113470103B (en)
WO (1) WO2023273158A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470103B (en) * 2021-06-29 2023-11-24 阿波罗智联(北京)科技有限公司 Method and device for determining camera acting distance in vehicle-road cooperation and road side equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757830A (en) * 1996-02-07 1998-05-26 Massachusetts Institute Of Technology Compact micro-optical edge-emitting semiconductor laser assembly
CN105163024A (en) * 2015-08-27 2015-12-16 华为技术有限公司 Method for obtaining target image and target tracking device
CN111241887A (en) * 2018-11-29 2020-06-05 北京市商汤科技开发有限公司 Target object key point identification method and device, electronic equipment and storage medium
CN113344906A (en) * 2021-06-29 2021-09-03 阿波罗智联(北京)科技有限公司 Vehicle-road cooperative camera evaluation method and device, road side equipment and cloud control platform
CN113470103A (en) * 2021-06-29 2021-10-01 阿波罗智联(北京)科技有限公司 Method and device for determining camera action distance in vehicle-road cooperation and road side equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570904B (en) * 2016-10-25 2019-04-09 大连理工大学 A kind of multiple target relative pose recognition methods based on Xtion camera
JP7163025B2 (en) * 2017-09-28 2022-10-31 キヤノン株式会社 Image measuring device, image measuring method, imaging device, program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI YINGCHUN, TANG LIMING, SUN HUAYAN: "Performance Analysis of Laser Active Imaging System Used for Spatial Object", JOURNAL OF THE ACADEMY OF EQUIPMENT COMMAND & TECHNOLOGY, vol. 19, no. 1, 29 February 2008 (2008-02-29), pages 65 - 69, XP093018715, ISSN: 2095-3828 *

Also Published As

Publication number Publication date
CN113470103B (en) 2023-11-24
CN113470103A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
KR102320649B1 (en) Methods and devices for determining facial image quality, electronic devices and computer storage media
US9418319B2 (en) Object detection using cascaded convolutional neural networks
CN111862224B (en) Method and device for determining external parameters between camera and laser radar
JP5075757B2 (en) Image processing apparatus, image processing program, image processing method, and electronic apparatus
KR102566998B1 (en) Apparatus and method for determining image sharpness
JP2017520050A (en) Local adaptive histogram flattening
CN110136198B (en) Image processing method, apparatus, device and storage medium thereof
CN112272292B (en) Projection correction method, apparatus and storage medium
US10122912B2 (en) Device and method for detecting regions in an image
CN111307039A (en) Object length identification method and device, terminal equipment and storage medium
US11295426B2 (en) Image processing system, server apparatus, image processing method, and image processing program
CN110996082A (en) Projection adjusting method and device, projector and readable storage medium
US11694331B2 (en) Capture and storage of magnified images
WO2021042638A1 (en) Method and apparatus for extracting test target image of projector xpr-tilt glass, and electronic device
WO2023273158A1 (en) Method and apparatus for determining operating range of camera in cooperative vehicle infrastructure and roadside device
CN111191619B (en) Method, device and equipment for detecting virtual line segment of lane line and readable storage medium
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
US20230146924A1 (en) Neural network analysis of lfa test strips
WO2022246605A1 (en) Key point calibration method and apparatus
WO2019200785A1 (en) Fast hand tracking method, device, terminal, and storage medium
WO2024055531A1 (en) Illuminometer value identification method, electronic device, and storage medium
WO2019223763A1 (en) Method, device, and equipment for image detection, medium, patterning control system and method
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN113108919B (en) Human body temperature detection method, device and storage medium
CN115375774A (en) Method, apparatus, device and storage medium for determining external parameters of a camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21948084

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE