WO2022166308A1 - Control instruction generation method and device, and control method and device for visual sensor - Google Patents


Info

Publication number
WO2022166308A1
WO2022166308A1 (PCT/CN2021/131695)
Authority
WO
WIPO (PCT)
Prior art keywords
range
image
image acquisition
vision sensor
temporary
Prior art date
Application number
PCT/CN2021/131695
Other languages
French (fr)
Chinese (zh)
Inventor
李文斌 (Li Wenbin)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022166308A1 publication Critical patent/WO2022166308A1/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • the present application relates to the field of vehicle automatic driving, and in particular, to a control instruction generation method and device for a visual sensor, and a control method and device.
  • in the field of automatic driving, sensors play an extremely important role.
  • a number of different sensors can be installed on the vehicle, for example: vision sensors that provide reversing images, front-view images, rear-view images, top-view images, and panoramic parking images of the outside of the vehicle; sensors that monitor fatigue driving and sensors that monitor the status of the instrument panel inside the vehicle; and, for Advanced Driving Assistance Systems (ADAS), sensors that provide forward collision warnings, sensors that provide lane departure warnings, sensors that automatically control high beams, sensors that recognize traffic signals, sensors that detect pedestrians, sensors that perform adaptive cruise control, sensors that perform blind-spot detection, and sensors with night vision capabilities.
  • ADAS Advanced Driving Assistance Systems
  • the autonomous driving control platform can be connected with a variety of sensors. After the data received by the sensors is transmitted to the automatic driving control platform, the platform performs a series of processing on the data and finally outputs control instructions for the vehicle. Under this premise, low end-to-end latency, from the sensor receiving data to the vehicle executing the autonomous driving control instruction, has become a goal that the industry constantly pursues.
  • the purpose of the present application is to provide a control instruction generation method and device for a visual sensor, and a control method and device.
  • the time from the visual sensor receiving data to the vehicle executing the automatic driving control instruction can be reduced, and the occurrence of traffic accidents caused by untimely acquisition of the data of an object determined to be about to collide with the vehicle can be reduced.
  • a first aspect of the embodiments of the present application provides a method for generating a control instruction for a visual sensor, where the visual sensor collects image data by scanning, and the method includes: acquiring image data; determining, according to the image data, an object that will collide with the vehicle; generating an image acquisition range adjustment instruction according to the object; and sending the image acquisition range adjustment instruction to the vision sensor, where the image acquisition range adjustment instruction is used to instruct the vision sensor that originally collected images according to a preset range to collect images according to a temporary range, the temporary range being smaller than the preset range and including the area where the object is located.
  • the vision sensor can immediately scan the area where the object identified as about to collide with the vehicle is located in an emergency, without waiting for the vision sensor to finish scanning the preset range in the current cycle and then rescan the preset range. Reducing the size of the collection range reduces the time needed to acquire the image data of the object that will collide with the vehicle, thereby reducing the risk of traffic accidents caused, in an emergency, by untimely collection of that image data.
  • the temporary range is a rectangle.
  • the image acquisition range can be narrowed to the rectangular area where the object identified as about to collide with the vehicle is located, thereby reducing the image acquisition time and preventing the situation in which danger is avoided too late because the image acquisition time is excessively long in an emergency.
  • the visual sensor includes a camera.
  • a second aspect of the embodiments of the present application provides a control method for a visual sensor, where the visual sensor acquires image data by scanning, and the method includes: acquiring an image acquisition range adjustment instruction; and controlling, according to the image acquisition range adjustment instruction, the visual sensor to adjust the image acquisition range, where the image acquisition range adjustment instruction is used to instruct the vision sensor that originally acquired images according to the preset range to acquire images according to the temporary range, the temporary range being smaller than the preset range and including the area where the object identified as about to collide with the vehicle is located.
  • the vision sensor can immediately obtain the image data of the object recognized as about to collide with the vehicle, without waiting for the vision sensor to finish scanning the preset range in the current cycle and then rescan the whole preset range. Reducing the size of the image acquisition range reduces the time needed to acquire the image data of the object identified as about to collide with the vehicle, and thus reduces the risk of traffic accidents caused, in an emergency, by untimely acquisition of that image data.
  • the temporary range is a rectangle.
  • the image acquisition range can be narrowed to the rectangular area where the object identified as about to collide with the vehicle is located, thereby reducing the image acquisition time and preventing the situation in which danger is avoided too late because the image acquisition time is excessively long in an emergency.
  • the temporary range of the current image acquisition cycle is adjusted, and when the acquired image range partially includes the temporary range, image acquisition is performed on the union of the acquired image range and the temporary range.
  • the visual sensor includes a camera.
  • a third aspect of the embodiments of the present application provides a control instruction generation device for a visual sensor, the visual sensor collecting image data by scanning, and the device includes: an image data acquisition module, configured to acquire image data; a module configured to determine, according to the image data, the object that will collide with the vehicle; a control instruction generation module, configured to generate an image acquisition range adjustment instruction according to the object; and a control instruction sending module, configured to send the image acquisition range adjustment instruction to the vision sensor, where the image acquisition range adjustment instruction is used to instruct the vision sensor that originally collected images according to the preset range to collect images according to the temporary range, the temporary range being smaller than the preset range and containing the area where the object is located.
  • the temporary range is a rectangle.
  • the visual sensor includes a camera.
  • a fourth aspect of the embodiments of the present application provides a control device for a visual sensor, the visual sensor collecting image data by scanning, and the device includes: a control instruction receiving module, configured to acquire an image acquisition range adjustment instruction; and a control module, configured to control the visual sensor to adjust the image acquisition range according to the image acquisition range adjustment instruction, where the image acquisition range adjustment instruction is used to instruct the visual sensor that originally acquired images according to the preset range to acquire images according to the temporary range, the temporary range being smaller than the preset range and including the area where the object identified as about to collide with the vehicle is located.
  • the temporary range is a rectangle.
  • the method further includes adjusting the temporary range of the current image acquisition cycle and, when the acquired image range partially includes the temporary range, performing image acquisition on the union of the acquired image range and the temporary range.
  • the visual sensor includes a camera.
  • a fifth aspect of the embodiments of the present application provides a driving risk prediction method, including:
  • acquiring image data by scanning with a vision sensor;
  • acquiring an image acquisition range adjustment instruction and controlling the visual sensor to adjust the image acquisition range according to the instruction, where the image acquisition range adjustment instruction is used to instruct the visual sensor that originally acquired images according to the preset range to acquire images according to the temporary range, the temporary range being smaller than the preset range and including the area where the object is located.
  • the temporary range is a rectangle.
  • the temporary range of the current image acquisition cycle is adjusted, and when the acquired image range partially includes the temporary range, image acquisition is performed on the union of the acquired image range and the temporary range.
  • the visual sensor includes a camera.
  • a sixth aspect of the present application provides a computer-readable storage medium on which program instructions are stored; when executed by a computer, the program instructions cause the computer to execute any of the methods provided in the first, second, and fifth aspects and their possible implementations.
  • a seventh aspect of the present application provides a computer program; by running the program, a computer can execute any one of the methods provided in the first, second, and fifth aspects and their possible implementations, or implement any of the devices provided by the above aspects and their possible implementations.
  • FIG. 1a is a schematic structural diagram of a driving risk prediction system provided by an embodiment of the present application.
  • FIG. 1b is a schematic structural diagram of a control device for a visual sensor provided by an embodiment of the present application.
  • FIG. 2a is a flowchart of a visual sensor control method provided by an embodiment of the present application.
  • FIG. 2b is a sub-flowchart of the visual sensor control method provided by an embodiment of the present application.
  • FIGS. 3a-3f are exemplary images of a traffic scene captured by a visual sensor provided in an embodiment of the present application.
  • FIGS. 4a-4d are schematic diagrams of the vision sensor scanning a scene according to different image acquisition range adjustment instructions provided by an embodiment of the present application.
  • FIGS. 5a-5d are images of the scene captured by the vision sensor provided by an embodiment of the present application at times t1-t4.
  • Vision sensor 10; vision sensor control device 11; ADAS calculation and control device 20; perception module 21; control module 22; hazardous area identification module 23; control instruction generation module 24; ISP 30; ADAS platform 100; vision sensor scanning control module 110; comparison module 111; driving risk prediction system 200; ECU 210; image 300; intersection 301; road 302; traffic sign 303.
  • ADAS Advanced Driving Assistance System
  • ADAS includes multiple sensors and a data processing platform. Its working principle is to collect data about the moving body and its surrounding environment through a variety of sensors installed on the moving body. After the data is processed and analyzed by the data processing platform, the traveling path of the moving body is planned, and control commands are sent to the control module to perform the relevant operations.
  • ISP Image Signal Processor
  • an electronic control unit (Electronic Control Unit, ECU for short) is used to compute on various input data and process various input instructions according to a pre-designed program, and to further control each actuator to perform various predetermined control functions.
  • the vision sensor builds each frame by scanning line by line. After each line scan, the vision sensor obtains one line of image information from the real world and feeds it to the ISP. After the ISP receives a complete frame composed of multiple lines of images, it performs image processing on that frame. The processed images are transmitted to the data processing module in the ADAS data processing platform for algorithm processing, which finally generates control instructions.
  • the exposure interval between two adjacent frames of images is about 33 ms (for example, at 30 frames per second).
  • during this interval, the ADAS data processing platform cannot obtain new image information from the real world. If the data processing platform recognizes a danger signal and needs to immediately obtain an image of a specified area in the current lens view, it may need to wait up to 33 ms. In scenarios such as emergency risk avoidance and emergency decision-making, this is a large delay.
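The 33 ms worst-case wait above follows directly from the frame rate; a minimal sketch, assuming a conventional 30 fps sensor (the rate is an illustrative assumption, the patent only states the interval):

```python
# Worst-case wait for fresh image data in a conventional
# frame-by-frame pipeline, derived from the frame rate.
def frame_interval_ms(fps: float) -> float:
    """Exposure interval between two adjacent frames, in milliseconds."""
    return 1000.0 / fps

# At 30 fps, a danger signal detected just after a frame begins may
# have to wait almost a full interval (~33.3 ms) before the region of
# interest is scanned again.
worst_case_wait = frame_interval_ms(30.0)
```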
  • the embodiments of the present application provide a method and device for generating a control instruction for a visual sensor, a control method and device, and a driving risk prediction method and system. They can obtain the image of a specified position in the current lens view in emergency avoidance and emergency decision-making scenarios, and suppress traffic accidents caused by delayed image acquisition.
  • the first embodiment provides a driving risk prediction system.
  • FIG. 1 a shows the module structure of a driving risk prediction system 200 with a vision sensor control device 11 and an ADAS calculation and control device 20 .
  • the driving risk prediction system 200 of the present embodiment is located on the vehicle.
  • the vehicle is, for example, a car.
  • the driving risk prediction system 200 may include a plurality of sensors, a plurality of sensor control devices, an ADAS platform 100 and a vehicle ECU 210 .
  • the visual sensor control device 11 is connected to the visual sensor 10 for controlling the acquisition of image data by the visual sensor 10 .
  • the ADAS platform 100 can be connected with the vision sensor 10 and the vision sensor control device 11 through lines, for sending instructions to the vision sensor control device 11 and receiving and processing data collected by the vision sensor and other sensors.
  • the vehicle ECU 210 can receive control instructions from the ADAS platform 100, and further control the vehicle to perform corresponding driving operations.
  • the sensor may be a visual sensor 10, such as a camera, or another sensor that can acquire data by scanning, such as a lidar sensor, a millimeter-wave sensor, or the like.
  • the sensor control device may be the visual sensor control device 11 .
  • the ADAS platform 100 may include a driving hazard prediction device configured as the ADAS calculation and control device 20, and an ISP 30.
  • the ADAS calculation and control device 20 may include: a perception module 21 , a control module 22 , a dangerous area identification module 23 , and a control instruction generation module 24 .
  • the vehicle ECU 210 can be connected to the ADAS computing and control device 20 in a wired or wireless manner to further control the vehicle to perform corresponding operations according to the instructions generated by the ADAS computing and control device 20 .
  • the vision sensor 10 is used to acquire image data of a traffic scene, and it can be installed in different positions of the vehicle.
  • the visual sensor 10 has a pixel array composed of a plurality of unit pixels arranged two-dimensionally.
  • the vision sensor can scan the preset range line by line in chronological order; scanning one line area acquires the image data of that line area, which is output line by line to the ISP 30.
  • the vision sensor adds an end mark to the scanned image data of the last line, indicating that one cycle of image acquisition is completed.
  • the ISP 30 starts to process the received image data of the preset range according to the end marker.
  • the vision sensor 10 may also scan, according to the image capture range adjustment instruction, the temporary range on which it is instructed to perform image capture. After the vision sensor 10 has finished scanning the last line of images in the temporary range, it adds an end mark to that last line of image data, indicating that the image acquisition for the current cycle is completed. The ISP 30 starts processing the received temporary image according to the end mark.
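The line-by-line output with an end mark on the final line can be sketched as follows; the names (`LineData`, `scan_range`) and the toy 10-row frame are illustrative assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class LineData:
    row: int                    # which sensor row this line came from
    pixels: list                # raw pixel values for the row
    end_of_cycle: bool = False  # end mark: last line of the commanded range

def scan_range(frame, first_row: int, last_row: int):
    """Yield rows first_row..last_row in order, marking the final line
    so the downstream ISP knows the acquisition cycle is complete."""
    for row in range(first_row, last_row + 1):
        yield LineData(row, frame[row], end_of_cycle=(row == last_row))

frame = [[0] * 4 for _ in range(10)]   # toy 10-row "sensor"
lines = list(scan_range(frame, 3, 6))  # temporary range: rows 3-6
```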
  • the ISP 30 is a processor that performs image processing on image data output from the vision sensor 10 .
  • the ISP 30 receives the line image data output from the vision sensor 10 in chronological order, and after receiving the line image data carrying the end mark, performs processing such as gamma correction, color interpolation, and automatic white balance.
  • the ISP 30 may be a chip integrated in the ADAS platform 100 or a chip integrated with the visual sensor 10 .
  • the ISP 30 is integrated in the ADAS platform 100, and is provided with an image interface to receive image data sent by the visual sensor 10 through a data line.
  • the ADAS calculation and control device 20 is used for processing image data obtained from a plurality of sensors, and generating control instructions to control the vehicle ECU to perform corresponding operations.
  • the ADAS calculation and control device 20 has a perception module 21 , a control module 22 , a dangerous area identification module 23 and a control instruction generation module 24 .
  • the perception module 21 is a device capable of performing algorithmic processing on the image data from the ISP 30 . It is used to perform image detection on the acquired image, so as to identify the object in the image and obtain the information of the object.
  • the objects may be traffic participants in the surroundings of the vehicle.
  • the traffic participants may include pedestrians, surrounding vehicles, traffic signs, obstacles, and the like.
  • the information of the object may include: the position of the object in the world coordinate system, and the size of the object.
  • the perception module can perform image detection using neural network models, object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision techniques.
  • SFM Structure from Motion
  • the perception module 21 determines the location and size of the traffic participant according to the identified pixel coordinates of the traffic participant and the calibration parameters of the visual sensor 10 .
  • the calibration parameters may be internal parameters, external parameters, and position information of the vision sensor lens.
  • the perception module 21 obtains, according to the pixel coordinates of the same traffic participant in a first image and a second image obtained at the same moment by visual sensors located at different positions of the vehicle, together with the internal and external parameters corresponding to the visual sensors, the position information in the world coordinate system corresponding to any pixel coordinate point of the traffic participant, and thereby determines the position of the traffic participant; according to the image area composed of multiple pixel coordinates and the zoom factor of the visual sensor, it determines the size of the traffic participant in the world coordinate system.
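A textbook simplification of this two-image position recovery is rectified stereo triangulation, where depth follows from the disparity between the two pixel coordinates. The focal length, baseline, and pixel values below are made-up example numbers, and the patent's actual computation may differ:

```python
# Simplified stereo depth: for a rectified camera pair with focal
# length f (in pixels) and baseline B (in metres), a point seen at
# horizontal pixel u_left in one image and u_right in the other lies
# at depth Z = f * B / (u_left - u_right).
def stereo_depth(f_px: float, baseline_m: float,
                 u_left: float, u_right: float) -> float:
    disparity = u_left - u_right          # pixels; assumes u_left > u_right
    return f_px * baseline_m / disparity  # metres

depth = stereo_depth(f_px=1000.0, baseline_m=0.5,
                     u_left=640.0, u_right=600.0)
# disparity = 40 px  ->  depth = 1000 * 0.5 / 40 = 12.5 m
```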
  • the perception module 21 can obtain the positioning information of the vehicle in real time through an inertial navigation device or lidar, or using satellite positioning technology (e.g., GPS technology), and can also use any other existing positioning technology to acquire the positioning information of the vehicle in real time, which is not limited in this embodiment of the present application.
  • the positioning information of the vehicle may include longitude and latitude, altitude, and attitude information of the vehicle (such as the heading of the vehicle).
  • the latitude, longitude and altitude in the positioning information of the vehicle are data in a world coordinate system (also referred to as a geographic coordinate system).
  • the distance of the traffic participant relative to the vehicle is determined according to the positioning information of the vehicle and the location of the traffic participant.
  • the perception module 21 may also receive the current planning of the vehicle's travel trajectory by the control module 22 .
  • the sensing module 21 includes: an input interface, an output interface, a program memory, a working memory and a microcontroller.
  • the input interface is used to receive image data output from the ISP; the output interface is used to output the information of traffic participants to the control module 22 and the danger area identification module 23; the microcontroller can read commands from the program memory and execute each process in sequence.
  • the microcontroller temporarily expands the program stored in advance in the program memory into the working memory, and performs various actions according to the command group. The algorithms used to obtain the information of traffic participants from the scene data are implemented through the combination of the microcontroller and software.
  • the software may be a module constituting a computer program for executing specific processing corresponding to each functional block. Such computer programs may be stored in program memory.
  • the dangerous area identification module 23 is an algorithm processing device different from the perception module 21 , and can determine whether the traffic participant is about to collide with the vehicle according to the information of the traffic participant obtained from the perception module 21 .
  • the conditions for determining whether the traffic participant is about to collide with the vehicle may include: at the current moment, the position of the same traffic participant intersects the current driving trajectory of the vehicle, the distance between the same traffic participant and the vehicle is less than a first distance, and the size of the same traffic participant exceeds a preset value.
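The three-part test above can be sketched as a single predicate; the threshold values and parameter names are illustrative assumptions, since the patent names the conditions but not their values:

```python
# Collision test per the three stated conditions: trajectory
# intersection, distance below a first distance, size above a preset
# value. The 30 m and 0.2 m^2 defaults are made-up examples.
def about_to_collide(intersects_trajectory: bool,
                     distance_m: float,
                     size_m2: float,
                     first_distance_m: float = 30.0,
                     min_size_m2: float = 0.2) -> bool:
    return (intersects_trajectory
            and distance_m < first_distance_m
            and size_m2 > min_size_m2)
```

All three conditions must hold at once; failing any one of them (no trajectory intersection, too far away, or too small) means no emergency rescan is triggered.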
  • the control instruction generation module 24 is used to generate an image acquisition range adjustment instruction when a traffic participant is judged to be about to collide with the vehicle, and controls the vision sensor that originally acquired image data according to the preset range to perform image acquisition according to the temporary range, which is smaller than the preset range and contains the traffic participant identified as about to collide with the vehicle.
  • the image acquisition range adjustment instruction is used to instruct the vision sensor to perform image acquisition on the area where the traffic participant identified as about to collide with the vehicle is located.
  • the temporary range can be the row area or column area where the traffic participant is located, and the temporary range can also be a rectangular, triangular, or circular area where the traffic participant is located.
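One plausible way to derive a rectangular temporary range is the axis-aligned bounding box of the detected object's pixel coordinates; this sketch is purely illustrative and not the patent's stated method:

```python
# Axis-aligned bounding rectangle of an object's pixel coordinates,
# usable as a rectangular temporary range inside the preset full frame.
def temporary_rect(pixel_coords):
    """pixel_coords: iterable of (row, col) pairs belonging to the
    object. Returns (top_row, left_col, bottom_row, right_col)."""
    rows = [r for r, _ in pixel_coords]
    cols = [c for _, c in pixel_coords]
    return (min(rows), min(cols), max(rows), max(cols))

rect = temporary_rect([(120, 40), (150, 90), (130, 60)])
# -> (120, 40, 150, 90)
```

A row-area temporary range would then simply be the `top_row..bottom_row` span, ignoring the column bounds.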
  • the control instruction sending module is configured to send the image acquisition range adjustment instruction to the visual sensor 10 .
  • the control command sending module can be a data line, one end of which is connected to the control command generating module, and the other end is connected to the vision sensor control device through the ISP 30.
  • the image acquisition range adjustment instruction can also be sent to the vision sensor control device 11 in the form of signal transmission.
  • the control module 22 is a computing device different from the perception module 21 and the dangerous area identification module 23, and is used to plan the driving path of the vehicle and generate vehicle driving control instructions according to the information of the traffic participants acquired by the perception module 21.
  • the vehicle ECU 210 includes a microprocessor (CPU), a memory (ROM, RAM), an input/output interface (I/O), an analog-to-digital converter (A/D), and an integrated circuit.
  • the vehicle ECU is connected to the control module 22 through its input interface via a connecting line, and is used to receive the vehicle travel control commands generated by the control module and to further control each actuator to perform the corresponding travel operation according to the vehicle travel control command.
  • the vision sensor control device 11 can control the vision sensor 10 to scan images according to the temporary range based on the image acquisition range adjustment instruction output from the control instruction generation module 24.
  • the temporary range is obtained by reducing the preset range, and the temporary range includes the traffic participants identified by the danger area identification module 23 as being about to collide with the vehicle.
  • FIG. 1 b shows a schematic block diagram of the vision sensor control device 11 .
  • the visual sensor control device 11 may include a visual sensor scanning control module 110 and a comparison module 111 .
  • the vision sensor scanning control module 110 may be a control circuit that controls the vision sensor 10, and is integrated on the vision sensor 10 to control the scanning range of the vision sensor according to the image acquisition range adjustment instruction. For example, when the vision sensor scanning control module 110 receives an image acquisition range adjustment instruction for data acquisition of the N-th row to the M-th row, it controls the vision sensor 10 to complete the line area currently being scanned, and then to scan from the N-th row until the M-th row is scanned.
  • the comparison module 111 may adjust the temporary range of the current image capture cycle according to the image capture range adjustment instruction of the control command generation module 24 .
  • the comparison module 111 compares the temporary range with both the uncollected range, in which image capture has not yet been performed, and the captured range, in which image capture has already been carried out.
  • when the acquired range partially includes the temporary range, the comparison module 111 generates a final adjustment instruction for performing image acquisition on the union of the acquired range and the temporary range.
  • when the acquired range does not include the temporary range, that is, when the temporary range lies within the uncollected range, the comparison module 111 generates a final adjustment instruction for image acquisition of the temporary range, and sends the final adjustment instruction to the vision sensor scanning control module 110.
  • the vision sensor completes the scanning of the current line area, adds an end mark to the image data corresponding to the current line area, and then starts image acquisition for the temporary range.
  • for example, while the vision sensor is capturing the A-th row, the comparison module 111 receives an image acquisition range adjustment instruction for image acquisition of the N-th row to the M-th row (M > N > A).
  • the vision sensor scan control module 110 controls the vision sensor 10 to stop the current scan, and starts to scan from the Nth row until the Mth row is scanned.
  • when the acquired range fully includes the temporary range, there is no need to perform image acquisition on the temporary range again; the image data of the acquired range is simply sent to the ISP.
  • the comparison module 111 generates a final adjustment instruction for the image acquisition at the end of the current acquired range, and sends the final adjustment instruction to the vision sensor scanning control module 110 .
  • the vision sensor scanning control module 110 then controls the vision sensor to perform image acquisition again according to the preset range. For example, when the visual sensor is capturing the image of the A-th row and the comparison module 111 receives an image acquisition range adjustment instruction for image acquisition of the N-th row to the M-th row (A > M > N), the captured range already includes the temporary range, and the currently captured image range simply needs to be sent to the ISP for processing. Therefore, the visual sensor scanning control module 110 controls the visual sensor 10 to complete the scanning of the A-th row and adds an end mark to the A-th row to end the image acquisition of the currently acquired range.
  • when the acquired range includes part of the temporary range, it is not necessary to perform image acquisition on the temporary range from the beginning; it is only necessary to continue image acquisition on the basis of the currently acquired image range.
  • the comparison module 111 generates a final adjustment instruction for continuing image acquisition, and sends the final adjustment instruction to the vision sensor scanning control module 110 .
  • the visual sensor scanning control module 110 receives the image acquisition range adjustment instruction, and controls the visual sensor 10 to continue image acquisition. For example, when the visual sensor has scanned from the 1st row to the Pth row, the comparison module 111 receives the image acquisition range adjustment instruction (N ⁇ P ⁇ M) for image acquisition for the Nth row to the Mth row (N ⁇ P ⁇ M).
  • the range already partially contains the temporary range, just let the vision sensor continue to scan until the temporary range is scanned. Therefore, the vision sensor scanning control module 110 controls the vision sensor 10 to continue to perform the current scanning sequence until the scanning of the M-th row area is completed.
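The cases above, together with the no-overlap case handled elsewhere in the text, amount to a simple row-interval comparison. The following sketch is illustrative only; the function name and return values are hypothetical and not part of the patent:

```python
# Hypothetical sketch of the comparison module's decision. Rows are
# scanned progressively from row 1; ranges are illustrative only.

def compare_ranges(last_scanned_row: int, n: int, m: int) -> str:
    """Compare the already-acquired rows 1..last_scanned_row with the
    temporary range [n, m] requested by the adjustment instruction."""
    if last_scanned_row >= m:
        # Acquired range fully includes the temporary range: add an end
        # mark to the current row and send the acquired data to the ISP.
        return "end_current_acquisition"
    if last_scanned_row >= n:
        # Acquired range partially includes the temporary range: keep
        # scanning until row m is reached.
        return "continue_to_row_m"
    # Acquired range does not yet reach the temporary range: jump to it
    # (the case handled separately in the text).
    return "acquire_temporary_range"

# Examples matching the text: A rows scanned, instruction for rows N..M
print(compare_ranges(1000, 200, 800))  # end_current_acquisition
print(compare_ranges(500, 200, 800))   # continue_to_row_m
```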
  • The scanning method of the vision sensor is described below.
  • Figures 3a-3f show exemplary images 300 of a traffic scene captured by a vision sensor with a frame resolution of 1920x1080.
  • Image 300 is a color image presented in black-and-white line form in order to comply with the provisions of the Implementing Regulations of the Patent Law.
  • Image 300 includes road 302 with intersection 301 , traffic signs 303 , traffic lights 304 , other vehicles 305 , 306 , 307 , 308 , and pedestrians 309 .
  • The vision sensor 10 scans the scene shown in FIG. 3a according to the preset range; that is, scanning is performed line by line from the L1-th line of the image range shown in FIG. 3a until the L1080-th line is reached, completing the scanning of one frame of image.
  • the L1th row, the L2th row and the L1080th row are schematically drawn with dotted lines.
  • When the visual sensor control device 11 receives the image acquisition range adjustment instruction from the control instruction generation module 24, it scans the scene according to the image acquisition range adjustment instruction.
  • Different image acquisition range adjustment instructions lead to different scanning methods of the vision sensor. The following describes the scanning method of the vision sensor under three different image acquisition range adjustment instructions.
  • In the first scanning mode, the image acquisition range adjustment instruction instructs scanning of a rectangular area.
  • The comparison module 111 compares the L33-L44 line region indicated by the image acquisition range adjustment instruction with the scanned L1-L22 line region and the unscanned L23-L1080 line region.
  • Since the scanned image range does not include the region indicated by the image acquisition range adjustment instruction, the comparison module 111 generates a final adjustment instruction for scanning lines L33-L44 and sends it to the vision sensor scanning control module 110 .
  • The vision sensor scanning control module 110 controls the vision sensor to stop scanning at line L22 and to start scanning line by line from line L33 until line L44 is reached. When the scanning of the L44-th line is completed, scanning starts again from the L1-th line.
  • The comparison module 111 compares the scanned L1-L100 line region with the L14-L44 line region indicated by the image acquisition range adjustment instruction.
  • Since the scanned image range fully includes the range to be scanned indicated by the image acquisition range adjustment instruction, the comparison module generates a final adjustment instruction for ending the current image acquisition.
  • the vision sensor scanning control module 110 controls the vision sensor 10 to add an end mark to the image data of the L100th line, and then starts scanning from the L1th line.
  • The comparison module 111 compares the scanned L1-L200 line region with the L14-L500 line region indicated by the image acquisition range adjustment instruction.
  • Since the scanned image range partially includes the region indicated by the image acquisition range adjustment instruction, the comparison module 111 generates a final adjustment instruction for continuing to scan until the entire indicated region is scanned, and sends it to the vision sensor scanning control module 110 .
  • The vision sensor scanning control module 110 controls the vision sensor 10 to continue scanning until the L500 line is reached. When the scanning of the L500-th line is completed, scanning starts again from the L1-th line.
  • The vision sensor 10 is not limited to starting scanning again from the L1-th line; it can also continue to scan the area specified by the image acquisition range adjustment instruction line by line, or start scanning from any line as required.
  • In the second scanning mode, the image acquisition range adjustment instruction specifies scanning of the elliptical area where a traffic participant is located.
  • the image acquisition range adjustment instruction instructs to scan the elliptical area X in the image range of the vision sensor.
  • For content that is the same as in the first embodiment, the earlier description is cited, and only a brief description is given here.
  • When the vision sensor scanning control module 110 has not received an image acquisition range adjustment instruction from the control instruction generation module 24, the vision sensor 10 scans the scene according to the preset range, in the same manner as in the first scanning mode; details are not repeated here.
  • When the vision sensor scanning control module 110 receives the instruction to scan area X while the vision sensor 10 is scanning the L22-th row, it controls the vision sensor to complete the scanning of the L22-th row and then begin scanning area X; after area X is fully scanned, scanning starts again from line L1.
  • In the third scanning mode, the image acquisition range adjustment instruction specifies scanning of the rectangular area where a traffic participant is located.
  • the image acquisition range adjustment instruction is to scan the area Y.
  • For content that is the same as in the first embodiment, the earlier description is cited, and only a brief description is given here.
  • When the vision sensor scanning control module has not received an image acquisition range adjustment instruction from the control instruction generation module, the vision sensor scans the scene according to the preset range, in the same manner as in the first scanning mode; details are not repeated here.
  • When the vision sensor scanning control module 110 receives the image acquisition range adjustment instruction from the control instruction generation module 24 while the vision sensor 10 is scanning the L22 line, it controls the vision sensor to adjust the scanned area.
  • Since the image acquisition range adjustment instruction specifies scanning of area Y, the vision sensor scanning control module controls the vision sensor 10 to scan area Y after completing the scanning of the L22 row; after the scanning of area Y is completed, scanning starts again from the L1 row.
  • The image acquisition range adjustment instruction may also specify scanning of a circular, square or other shaped area where a traffic participant is located, or scanning of the pixel region constituting the traffic participant, as long as the area fully contains the area where the traffic participant identified as about to collide with the vehicle is located.
  • When the perception module 21 receives the first image and the second image obtained at the same time by vision sensors located at different positions on the vehicle, it detects the images using computer vision techniques such as neural network models, object recognition algorithms, structure-from-motion algorithms, and video tracking; it extracts features from the images and matches them with preset features to identify the traffic participants in the images, such as the traffic signs 303, traffic lights 304, pedestrian 309 and surrounding vehicles 305, 306, 307, 308; it determines the position and size of each traffic participant from its pixel coordinates and the calibration parameters of the vision sensor 10; and it determines the distance of each traffic participant relative to the vehicle according to the obtained vehicle positioning information.
  • the dangerous area identification module 23 determines whether the traffic participant is about to collide with the vehicle according to the traffic participant information.
  • The criteria for determining whether a traffic participant is about to collide with the vehicle may include: at the current moment, the position of the traffic participant intersects the current driving trajectory of the vehicle, the distance of the traffic participant relative to the vehicle is less than a first distance, and the size of the traffic participant exceeds a preset value.
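These three criteria can be combined into a single predicate, as sketched below. This is a minimal illustration; the threshold values (`first_distance`, `size_threshold`) and the trajectory-intersection test are assumptions, since the text does not fix concrete values:

```python
from dataclasses import dataclass

@dataclass
class TrafficParticipant:
    distance_m: float            # distance relative to the host vehicle
    size: float                  # apparent size (e.g. pixel-area ratio)
    intersects_trajectory: bool  # result of a separate trajectory check

def about_to_collide(p: TrafficParticipant,
                     first_distance: float = 30.0,    # assumed threshold
                     size_threshold: float = 0.05) -> bool:
    """All three criteria from the text must hold simultaneously."""
    return (p.intersects_trajectory
            and p.distance_m < first_distance
            and p.size > size_threshold)

pedestrian = TrafficParticipant(12.5, 0.08, True)
print(about_to_collide(pedestrian))  # True
```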
  • According to the traffic participant identified by the danger area identification module 23 as about to collide with the vehicle, the control instruction generation module 24 generates an image acquisition range adjustment instruction for scanning the area where the one or more traffic participants are located, and sends the image acquisition range adjustment instruction to the sensor scanning control module 11 through the ISP 30.
  • Scenario 1: the vehicle ahead approaches the host vehicle.
  • Figures 4a-4d show 4 frames of images captured by the vision sensor at times t 1 -t 4 , respectively.
  • the change of the position of the preceding vehicle 401 in the image range can be seen from Figs. 4a-4d.
  • The proportion of the connected pixel region of the preceding vehicle 401 within the entire image range of the vision sensor gradually increases, and the distance between the preceding vehicle and the host vehicle decreases.
  • The dangerous area identification module 23 finds that the position of the preceding vehicle 401 intersects the current driving trajectory of the vehicle, that the distance of the preceding vehicle relative to the vehicle is less than the first distance, and that the size of the preceding vehicle exceeds a preset value; it therefore determines that the preceding vehicle 401 is a traffic participant about to collide with the host vehicle.
  • The control instruction generation module 24 generates, according to the identification result of the dangerous area identification module 23, an image acquisition range adjustment instruction for image acquisition of the area where the preceding vehicle 401 is located. As shown in Fig. 4d, the area where the preceding vehicle 401 is located corresponds to the L500-th to L900-th lines in the image range of the vision sensor; the control instruction generation module therefore generates an image acquisition range adjustment instruction for image acquisition of the L500-L900 line area, and sends the image acquisition range adjustment instruction to the sensor scanning control module through the ISP.
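The mapping from the detected area of the preceding vehicle to a scan-line instruction such as "acquire rows L500-L900" can be sketched as follows. This is illustrative only; the instruction format and the optional margin are assumptions:

```python
def make_range_adjustment_instruction(bbox_top_row: int, bbox_bottom_row: int,
                                      frame_rows: int = 1080, margin: int = 0):
    """Clamp the object's row span to the sensor's frame (1920x1080 in the
    examples) and emit a (start_row, end_row) temporary range, e.g.
    rows L500-L900 for the preceding vehicle 401."""
    start = max(1, bbox_top_row - margin)
    end = min(frame_rows, bbox_bottom_row + margin)
    return {"type": "image_acquisition_range_adjustment",
            "start_row": start, "end_row": end}

print(make_range_adjustment_instruction(500, 900))
# {'type': 'image_acquisition_range_adjustment', 'start_row': 500, 'end_row': 900}
```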
  • Scenario 2: a pedestrian in front approaches the host vehicle.
  • Figures 5a-5d show 4 frames of images captured by the vision sensor at times t 5 -t 8 , respectively.
  • the position of the pedestrian 501 in the image range of the vision sensor varies.
  • the pedestrian 501 in front moves from the right side of the image range to the center of the image range.
  • The dangerous area identification module 23 determines that, at time t 8 , the position of the pedestrian 501 in front intersects the current driving trajectory of the vehicle, that the distance between the pedestrian 501 and the vehicle is less than the first distance, and that the size of the pedestrian exceeds a preset value; it therefore determines that the pedestrian 501 is a traffic participant about to collide with the vehicle.
  • The control instruction generation module 24 generates an image acquisition range adjustment instruction for imaging the area where the pedestrian 501 is located. As shown in FIG. 5d, the area where the pedestrian 501 is located corresponds to the L460-th to L980-th lines in the image range of the vision sensor. The control instruction generation module 24 therefore generates an image acquisition range adjustment instruction for image acquisition of the L460-L980 line area, and sends the image acquisition range adjustment instruction to the sensor scanning control module through the ISP.
  • the traffic participant information of these traffic participants is also sent to the control module 24 .
  • The control module plans the driving path of the vehicle and generates control instructions according to the traffic participant information obtained by the perception module; the vehicle ECU further controls the various components of the vehicle to perform corresponding operations according to the control instructions.
  • Figures 2a-2b show a flow chart of a driving risk prediction method.
  • the driving hazard prediction method includes the following steps:
  • Step S1 the vision sensor scans the scene.
  • the vision sensor scans the preset image range of the vision sensor line by line in chronological order, and after scanning a line area, the image data of the line area is transmitted to the ISP.
  • Step S2 The ISP processes the received image.
  • the ISP is configured to process the previously received images of all the line areas after receiving the image of the line area with the end mark.
  • The ISP's processing of the image may include adjustment of parameters such as image white balance, as well as optimization of image noise.
  • Step S3 using the perception module to perform algorithmic processing on the image data processed by the ISP, to obtain the information of the traffic participants in the image.
  • Step S41 The danger area identification module determines an object that will collide with the vehicle.
  • The dangerous area identification module determines, according to the acquired traffic participant information, whether a traffic participant is about to collide with the vehicle. When the following conditions are met, it is determined that the traffic participant may collide with the vehicle and is the object that will collide with the vehicle: at the current moment, the position of the traffic participant intersects the current driving trajectory of the vehicle, the distance of the traffic participant relative to the host vehicle is less than the first distance, and the size of the traffic participant exceeds a preset value.
  • Step S42 when it is determined that a traffic participant will collide with the vehicle, the control instruction generation module generates an image acquisition range adjustment instruction instructing image acquisition of a temporary range including the traffic participant.
  • The temporary range may be a rectangular area including the traffic participant.
  • Step S43 The visual sensor control device receives and executes the image acquisition range adjustment instruction generated in step S42.
  • step S43 may also include the following sub-steps:
  • Step S431 The comparison module receives the image acquisition range adjustment instruction generated by the control instruction generation module, and compares the temporary range with the scanned image range and the unscanned image range respectively.
  • The comparison module adjusts the temporary range of the current image acquisition cycle according to the comparison result; when the acquired image range includes the temporary range, image acquisition is performed on the union of the acquired image range and the temporary range.
  • Step S4321 when the acquired image range fully includes the temporary range, the comparison module generates a final adjustment instruction for ending the current image acquisition.
  • Step S4331 The visual sensor scanning control module 110 controls the visual sensor to add an end mark to the current row area after completing the image acquisition of the current row area to complete the image acquisition of the currently acquired image range.
  • Step S4322 When the captured image range includes part of the temporary range, the comparison module generates a final adjustment instruction for continuing image capture.
  • Step S4332 The vision sensor scanning control module 110 controls the vision sensor to complete the currently collected line area, and continues to perform image collection on the remaining uncollected part of the temporary range.
  • Step S4323 when the captured image range does not include the temporary range, the comparison module generates a final adjustment instruction for image capturing for the temporary range.
  • Step S4333 The vision sensor scanning control module 110 controls the vision sensor to complete the currently collected line area, and starts image collection for the temporary area.
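Sub-steps S431 to S4333 can be summarized as the following control flow. This is an illustrative sketch with hypothetical names, not the patent's actual module interface:

```python
def visual_sensor_control_step(scanned_rows: set, temp_range: range):
    """Compare the scanned rows with the temporary range (S431) and pick
    the final adjustment instruction plus the scanning module's action."""
    temp_rows = set(temp_range)
    if temp_rows <= scanned_rows:                       # S4321 / S4331
        return ("end_current_image_capture",
                "add_end_mark_after_current_row")
    if temp_rows & scanned_rows:                        # S4322 / S4332
        return ("continue_image_capture",
                "finish_current_row_then_collect_remaining_temporary_rows")
    return ("capture_temporary_range",                  # S4323 / S4333
            "finish_current_row_then_start_temporary_range")

# Rows L1-L100 already scanned, instruction for rows L14-L44 (fully included):
print(visual_sensor_control_step(set(range(1, 101)), range(14, 45)))
# ('end_current_image_capture', 'add_end_mark_after_current_row')
```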
  • step S43 For the description of step S43 and its sub-steps performed by the visual sensor control apparatus, reference may be made to the description of the visual sensor control apparatus in the first embodiment of the present application, and for the sake of brevity, only a brief description is given here.
  • the generated final adjustment instruction or image acquisition range adjustment instruction can be transmitted to the vision sensor scanning control module through the ISP.
  • the vision sensor scanning control module controls the vision sensor to scan according to the control instruction.
  • step S5 is also executed: the control module plans the travel path of the vehicle according to the acquired traffic participant information and generates a travel control instruction.
  • Step S6 The vehicle ECU controls the vehicle to perform corresponding operations according to the driving control instruction.
  • steps S2, S3, S5 and S6 may be executed in sequence, so as to avoid collision between the vehicle and the traffic participants.
  • Step S100 the visual sensor scans the image range shown in Fig. 4a-Fig. 4d in time sequence.
  • the vision sensor transmits the image data to the ISP after scanning a line of area.
  • the vision sensor adds an end mark to the last line of area.
  • Step S200 After the ISP receives the line area with the end mark, it processes the previously received images of all the line areas and sends the processed images to the perception module.
  • Step S300 The perception module performs algorithm processing on the image processed by the ISP, and obtains the information of the vehicle 401 in the image range shown in FIG. 4a-FIG. 4d.
  • Step S410 the dangerous area identification module determines whether the vehicle 401 is about to collide with the own vehicle according to the acquired information of the vehicle 401 .
  • Step S420 Generate an image acquisition range adjustment instruction for scanning the vehicle 401.
  • the image acquisition range adjustment instruction is to scan the L508-L806 line area where the vehicle 401 is located.
  • Step S430 The visual sensor control device receives and executes the image acquisition range adjustment instruction.
  • Step S110 the vision sensor scans the L508-L806 row area where the vehicle 401 is located.
  • Step S210 The ISP processes the received images in the L508-L806 line area, and sends the processed images to the perception module.
  • Step S310 The perception module performs algorithmic processing on the image of the L508-L806 line area and obtains information of the vehicle 401 in that image; the information may include the position of the vehicle 401, the distance of the vehicle 401 relative to the host vehicle, and the size of the vehicle 401 .
  • Step S500 The control module plans the driving path of the vehicle according to the information of the vehicle 401 and generates a control instruction.
  • Step S600 The vehicle ECU controls the vehicle to perform corresponding operations according to the control instruction.
  • Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processing device, the program implements the control instruction generation method and the sensor control method, including at least one of the solutions described in the foregoing embodiments.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • A computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium, other than a computer-readable storage medium, that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • The fifth embodiment of the present application provides a computer program; by running the program, a computer can execute the control method provided by the embodiments of the present application, or function as the above-mentioned control device.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • The division of the units is only a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The shown or discussed mutual coupling, direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.

Abstract

A control instruction generation method for a visual sensor (10), wherein the visual sensor (10) acquires image data by scanning, and the method comprises: acquiring image data; determining, according to the image data, an object about to collide with a vehicle; generating an image acquisition range adjustment instruction according to the object; sending the image acquisition range adjustment instruction to the visual sensor (10), the image acquisition range adjustment instruction being used for instructing the visual sensor (10) that originally acquires images according to a preset range to acquire images according to a temporary range, and the temporary range being smaller than the preset range and comprising an area where the object is located. Therefore, in an emergency, the vehicle can acquire, in time, image data of the object identified as being about to collide with the vehicle, thereby reducing the occurrence of traffic accidents.

Description

A control instruction generation method and device for a vision sensor, and a control method and device

Technical Field

The present application relates to the field of automatic driving of vehicles, and in particular to a control instruction generation method and device for a vision sensor, and a control method and device.

Background Art

Sensors play an extremely important role in autonomous driving platforms. A number of different sensors can be installed on a vehicle, for example: vision sensors outside the vehicle that can provide reversing images, front-view images, rear-view images, top-view images, and panoramic parking images; sensors inside the cabin that can monitor whether the driver is driving while fatigued and sensors that can monitor the status of the instrument panel; and sensors for advanced driving assistance systems (Advanced Driving Assistance System, ADAS) that can provide forward collision warnings, provide lane departure warnings, automatically control high beams, recognize traffic signals, detect pedestrians, perform adaptive cruise control, perform blind-spot detection, or provide night vision.

The autonomous driving control platform can be connected with a variety of sensors. After the data received by the sensors is transmitted to the autonomous driving control platform, the platform performs a series of processing on the data and finally outputs control instructions for the vehicle. Under this premise, end-to-end low latency, from the sensor receiving data to the vehicle executing the autonomous driving control instruction, has become a goal that the industry constantly pursues.
发明内容SUMMARY OF THE INVENTION
鉴于现有技术的以上问题,本申请的目的在于提供一种用于视觉传感器的控制指令生成方法及装置、控制方法及装置。能够减少视觉传感器接收数据到车辆执行自动驾驶控制指令的时间,减少因对被确定为将要与车辆发生碰撞的对象的数据获取不及时造成的交通事故的发生。In view of the above problems in the prior art, the purpose of the present application is to provide a control instruction generation method and device for a visual sensor, and a control method and device. The time from the visual sensor receiving the data to the vehicle executing the automatic driving control instruction can be reduced, and the occurrence of traffic accidents caused by the untimely acquisition of the data of the object determined to collide with the vehicle can be reduced.
本申请实施例的第一方面提供了一种用于视觉传感器的控制指令生成方法,所述视觉传感器通过扫描而采集图像数据,所述方法包括:获取图像数据;根据所述图像数据确定将要与车辆发生碰撞的对象;根据所述对象生成图像采集范围调节指令;向所述视觉传感器发送所述图像采集范围调节指令,所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含所述对象所在的区域。A first aspect of the embodiments of the present application provides a method for generating a control instruction for a visual sensor, where the visual sensor collects image data by scanning, and the method includes: acquiring image data; The object with which the vehicle collides; an image acquisition range adjustment instruction is generated according to the object; the image acquisition range adjustment instruction is sent to the vision sensor, and the image acquisition range adjustment instruction is used to instruct the image acquisition range adjustment instruction to be originally collected according to the preset range. The visual sensor collects images according to a temporary range, and the temporary range is smaller than the preset range and includes the area where the object is located.
通过上述设置,能够使视觉传感器在紧急情况下立即扫描被识别为将要与车辆发生碰撞的对象所在区域,无需等待视觉传感器扫描完当前周期内的预设范围后再重新扫描预设范围,缩小图像采集范围的大小,减少获取将要与车辆发生碰撞的对象的图像数据的时间,进而降低在紧急情况下,因对将要与车辆发生碰撞的对象的图像数据采集不及时而发生的交通事故的风险。Through the above settings, the vision sensor can immediately scan the area where the object identified as about to collide with the vehicle is located in an emergency, without waiting for the vision sensor to scan the preset range in the current cycle and then rescan the preset range to reduce the image. The size of the collection range reduces the time to acquire the image data of the object that will collide with the vehicle, thereby reducing the risk of traffic accidents due to the untimely collection of the image data of the object that will collide with the vehicle in an emergency.
In a possible implementation manner, the temporary range is a rectangle.

Through the above arrangement, the image acquisition range can be narrowed down to the rectangular area where the object identified as about to collide with the vehicle is located, thereby reducing the image acquisition time and preventing, in an emergency, situations in which hazard avoidance comes too late because image acquisition takes too long.

In a possible implementation manner, the vision sensor includes a camera.
本申请实施例的第二方面提供了一种用于视觉传感器的控制方法,所述视觉传感器通过扫描而采集图像数据,所述方法包括:获取图像采集范围调节指令;根据所述图像采集范围调节指令控制视觉传感器调整图像采集范围,所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含被识别为将要与所述车辆发生碰撞的对象所在的区域。A second aspect of the embodiments of the present application provides a control method for a visual sensor, where the visual sensor acquires image data by scanning, the method includes: acquiring an image acquisition range adjustment instruction; adjusting according to the image acquisition range The instruction controls the visual sensor to adjust the image acquisition range, and the image acquisition range adjustment instruction is used to instruct the vision sensor that originally acquired the image according to the preset range to acquire the image according to the temporary range, the temporary range is smaller than the preset range, and includes The area where the object identified as about to collide with the vehicle is located.
通过上述设置,使得视觉传感器能够立即获取被识别为将要与所述车辆发生碰撞的对象的图像数据,无需等待视觉传感器扫描完当前周期内的预设范围后再重新对全部预设范围进行扫描,缩小图像采集范围的大小,减少获取被识别为将要与所述车辆发生碰撞的对象的图像数据的时间,进而降低在紧急情况下,因对被识别为将要与所述车辆发生碰撞的对象的图像数据采集不及时而发生的交通事故的风险。With the above arrangement, the vision sensor can immediately acquire image data of the object identified as being about to collide with the vehicle, instead of waiting until it has finished scanning the preset range in the current cycle and then rescanning the entire preset range. This reduces the size of the image acquisition range and the time needed to acquire image data of that object, thereby lowering the risk of a traffic accident caused by untimely acquisition of that image data in an emergency.
在一种可能的实现方式中,所述临时范围为矩形。In a possible implementation manner, the temporary range is rectangular.
通过上述设置,能够将图像采集范围缩小到被识别为将要与所述车辆发生碰撞的对象所在的矩形区域,进而减少图像采集的时间,防止在紧急情况下因采集图像时间过长造成避险不及时的情况发生。With the above arrangement, the image acquisition range can be narrowed to the rectangular area containing the object identified as being about to collide with the vehicle, which shortens image acquisition time and prevents situations in which hazard avoidance comes too late because image acquisition took too long in an emergency.
在一种可能的实现方式中,调整当前图像采集周期的所述临时范围,当已采集的图像范围包含所述临时范围时,对已采集的图像范围和所述临时范围的并集进行图像采集。In a possible implementation manner, the temporary range of the current image acquisition cycle is adjusted: when the already-acquired image range includes the temporary range, image acquisition is performed over the union of the already-acquired range and the temporary range.
通过上述设置,避免因已采集的图像范围和临时范围全部重复或部分重复导致的对临时范围的重复图像采集,进一步减少图像采集花费的时间。Through the above setting, repeated image acquisition for the temporary range caused by the complete or partial duplication of the acquired image range and the temporary range is avoided, and the time spent on image acquisition is further reduced.
在一种可能的实现方式中,所述视觉传感器包括摄像头。In a possible implementation, the visual sensor includes a camera.
本申请实施例的第三方面提供了一种用于视觉传感器的控制指令生成装置,所述视觉传感器通过扫描而采集图像数据,所述装置包括:图像数据获取模块,用于获取图像数据;识别模块,其用于根据所述图像数据确定将要与车辆发生碰撞的对象;控制指令生成模块,其用于根据所述对象生成图像采集范围调节指令;控制指令发送模块,其用于向所述视觉传感器发送所述图像采集范围调节指令,所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含所述对象所在的区域。A third aspect of the embodiments of the present application provides a control instruction generation apparatus for a vision sensor that acquires image data by scanning. The apparatus includes: an image data acquisition module for acquiring image data; a recognition module for determining, from the image data, an object that will collide with the vehicle; a control instruction generation module for generating an image acquisition range adjustment instruction according to the object; and a control instruction sending module for sending the image acquisition range adjustment instruction to the vision sensor. The instruction instructs the vision sensor, which originally acquired images over a preset range, to acquire images over a temporary range that is smaller than the preset range and contains the area where the object is located.
在一种可能的实现方式中,所述临时范围为矩形。In a possible implementation manner, the temporary range is rectangular.
在一种可能的实现方式中,所述视觉传感器包括摄像头。In a possible implementation, the visual sensor includes a camera.
本申请实施例的第四方面提供了一种用于视觉传感器的控制装置,所述视觉传感器通过扫描而采集图像数据,所述装置包括:控制指令接收模块,其用于获取图像采集范围调节指令;控制模块,其用于根据所述图像采集范围调节指令控制视觉传感器调整图像采集范围,所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含被识别为将要与所述车辆发生碰撞的对象所在的区域。A fourth aspect of the embodiments of the present application provides a control apparatus for a vision sensor that acquires image data by scanning. The apparatus includes: a control instruction receiving module for acquiring an image acquisition range adjustment instruction; and a control module for controlling the vision sensor to adjust its image acquisition range according to the instruction. The instruction instructs the vision sensor, which originally acquired images over a preset range, to acquire images over a temporary range that is smaller than the preset range and contains the area where the object identified as being about to collide with the vehicle is located.
在一种可能的实现方式中,所述临时范围为矩形。In a possible implementation manner, the temporary range is rectangular.
在一种可能的实现方式中,还包括调整当前图像采集周期的所述临时范围,当已采集的图像范围包含所述临时范围时,对已采集的图像范围和所述临时范围的并集进行图像采集。In a possible implementation manner, the method further includes adjusting the temporary range of the current image acquisition cycle: when the already-acquired image range includes the temporary range, image acquisition is performed over the union of the already-acquired range and the temporary range.
在一种可能的实现方式中,所述视觉传感器包括摄像头。In a possible implementation, the visual sensor includes a camera.
本申请实施例的第五方面提供一种驾驶危险预测方法,包括:A fifth aspect of the embodiments of the present application provides a driving risk prediction method, including:
获取图像数据,所述图像数据通过视觉传感器扫描而获得;acquiring image data, the image data is obtained by scanning with a vision sensor;
根据所述图像数据,确定将要与车辆发生碰撞的对象;determining an object that will collide with the vehicle according to the image data;
根据所述对象生成图像采集范围调节指令;generating an image acquisition range adjustment instruction according to the object;
向所述视觉传感器发送所述图像采集范围调节指令,sending the image acquisition range adjustment instruction to the vision sensor,
获取图像采集范围调节指令,根据所述图像采集范围调节指令控制视觉传感器调整图像采集范围,所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含所述对象所在的区域。acquiring an image acquisition range adjustment instruction, and controlling the vision sensor to adjust its image acquisition range according to the instruction, where the instruction instructs the vision sensor, which originally acquired images over a preset range, to acquire images over a temporary range that is smaller than the preset range and contains the area where the object is located.
一种可能的实现方式中,所述临时范围为矩形。In a possible implementation manner, the temporary range is a rectangle.
在一种可能的实现方式中,在接收到所述图像采集范围调节指令时,调整当前图像采集周期的所述临时范围,当已采集的图像范围包含所述临时范围时,对已采集的图像范围和所述临时范围的并集进行图像采集。In a possible implementation manner, upon receiving the image acquisition range adjustment instruction, the temporary range of the current image acquisition cycle is adjusted: when the already-acquired image range includes the temporary range, image acquisition is performed over the union of the already-acquired range and the temporary range.
在一种可能的实现方式中,所述视觉传感器包括摄像头。In a possible implementation, the visual sensor includes a camera.
本申请第六方面提供一种计算机可读存储介质,其上存储有程序指令,所述程序指令当被计算机执行时使得所述计算机执行上述第一、二和五方面及其可能的实现方式所提供的方法中的任一方法。A sixth aspect of the present application provides a computer-readable storage medium storing program instructions that, when executed by a computer, cause the computer to execute any one of the methods provided in the first, second and fifth aspects above and their possible implementations.
本申请第七方面提供一种计算机程序,计算机通过运行该程序能够执行上述第一、二和五方面及其可能的实现方式所提供的方法中的任一方法,或者作为第二方面及其可能的实现方式所提供的装置中的任一装置发挥作用。A seventh aspect of the present application provides a computer program; by running the program, a computer can execute any one of the methods provided in the first, second and fifth aspects and their possible implementations, or function as any one of the apparatuses provided in the second aspect and its possible implementations.
本申请的这些和其它方面在以下(多个)实施例的描述中会更加简明易懂。These and other aspects of the present application will be more clearly understood in the following description of the embodiment(s).
附图说明Description of drawings
以下参照附图来进一步说明本申请的各个特征和各个特征之间的联系。附图均为示例性的,一些特征并不以实际比例示出,并且一些附图中可能省略了本申请所涉及领域的惯常的且对于本申请非必要的特征,或是额外示出了对于本申请非必要的特征,附图所示的各个特征的组合并不用以限制本申请。另外,在本说明书全文中,相同的附图标记所指代的内容也是相同的。具体的附图说明如下:The various features of the present application and the connections between them are further explained below with reference to the accompanying drawings. The drawings are exemplary; some features are not shown to scale, and some drawings may omit features that are customary in the field of the present application and not essential to it, or may additionally show features that are non-essential to the present application. The combinations of features shown in the drawings are not intended to limit the present application. In addition, throughout this specification, the same reference numerals refer to the same content. The drawings are described as follows:
图1a是本申请实施例提供的驾驶危险预测系统的结构示意图;1a is a schematic structural diagram of a driving risk prediction system provided by an embodiment of the present application;
图1b是本申请实施例提供的用于视觉传感器的控制装置的结构示意图;FIG. 1b is a schematic structural diagram of a control device for a visual sensor provided by an embodiment of the present application;
图2a是本申请实施例提供的视觉传感器控制方法的流程图;2a is a flowchart of a visual sensor control method provided by an embodiment of the present application;
图2b是本申请实施例提供的视觉传感器控制方法的子流程图;Fig. 2b is a sub-flow chart of the visual sensor control method provided by the embodiment of the present application;
图3a-图3f是本申请实施例提供的视觉传感器捕捉到的交通场景的示例性图像;3a-3f are exemplary images of a traffic scene captured by a visual sensor provided in an embodiment of the present application;
图4a-图4d是本申请实施例提供的视觉传感器根据不同的图像采集范围调节指令对场景进行扫描的示意图;4a-4d are schematic diagrams of the vision sensor scanning a scene according to different image acquisition range adjustment instructions provided by an embodiment of the present application;
图5a-图5d是本申请实施例提供的视觉传感器在t1-t4时刻捕捉到的场景的图像。Figures 5a-5d are images of the scene captured by the vision sensor provided by an embodiment of the present application at times t1-t4.
附图标记说明Description of reference numerals
视觉传感器10;视觉传感器控制装置11;ADAS计算与控制装置20;感知模块21;归控模块22;危险区域识别模块23;控制指令生成模块24;ISP30;ADAS平台100;视觉传感器扫描控制模块110;比对模块111;驾驶危险预测系统200;ECU210;图像300;路口301;道路302;交通标志303;交通信号灯304;车辆305、306、307、308、401;行人309、501。Vision sensor 10; vision sensor control device 11; ADAS calculation and control device 20; perception module 21; planning and control module 22; dangerous area identification module 23; control instruction generation module 24; ISP 30; ADAS platform 100; vision sensor scanning control module 110; comparison module 111; driving risk prediction system 200; ECU 210; image 300; intersection 301; road 302; traffic sign 303; traffic signal light 304; vehicles 305, 306, 307, 308, 401; pedestrians 309, 501.
具体实施方式Detailed ways
下面结合实施方式中的附图,对本申请的具体实施方式所涉及的技术方案进行描述。在对技术方案的具体内容进行描述前,先简单说明一下本申请中所使用的术语。The technical solutions involved in the specific embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments. Before describing the specific content of the technical solution, the terms used in this application are briefly explained.
说明书和权利要求书中的词语"第一、第二、第三等"或模块A、模块B、模块C等类似用语,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。The words "first, second, third, etc." in the description and claims, or similar terms such as module A, module B, module C, are only used to distinguish similar objects and do not denote any specific ordering of the objects; it is to be understood that, where permitted, the specific order or sequence may be interchanged so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein.
说明书和权利要求书中使用的术语“包括”不应解释为限制于其后列出的内容;它不排除其它的元件或步骤。因此,其应当诠释为指定所提到的所述特征、整体、步骤或部件的存在,但并不排除存在或添加一个或更多其它特征、整体、步骤或部件及其组群。因此,表述“包括装置A和B的设备”不应局限为仅由部件A和B组成的设备。The term "comprising" used in the description and claims should not be interpreted as being limited to what is listed thereafter; it does not exclude other elements or steps. Accordingly, it should be interpreted as specifying the presence of said features, integers, steps or components mentioned, but not excluding the presence or addition of one or more other features, integers, steps or components and groups thereof. Therefore, the expression "apparatus comprising means A and B" should not be limited to apparatuses consisting of parts A and B only.
本说明书中提到的“一个实施例”或“实施例”意味着与该实施例结合描述的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在本说明书各处出现的用语“在一个实施例中”或“在实施例中”并不一定都指同一实施例,但可以指同一实施例。此外,在一个或多个实施例中,能够以任何适当的方式组合各特定特征、结构或特性,如从本公开对本领域的普通技术人员显而易见的那样。Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the terms "in one embodiment" or "in an embodiment" in various places in this specification are not necessarily all referring to the same embodiment, but can refer to the same embodiment. Furthermore, the particular features, structures or characteristics can be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。如有不一致,以本说明书中所说明的含义或者根据本说明书中记载的内容得出的含义为准。另外,本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the technical field to which this application belongs. If there is any inconsistency, the meaning described in this specification or the meaning derived from the content described in this specification shall prevail. In addition, the terms used herein are only for the purpose of describing the embodiments of the present application, and are not intended to limit the present application.
高级驾驶辅助系统(Advanced Driving Assistance System,简称ADAS)包括多个传感器以及数据处理平台。其工作原理是通过安装在移动体上的多种传感器来采集移动体及其周围环境的数据,所述数据经过数据处理平台的处理和分析后,规划移动体的行驶路径,并发送控制命令给控制模块执行相关操作。An Advanced Driving Assistance System (ADAS) includes multiple sensors and a data processing platform. Its working principle is to collect data about a moving body and its surroundings through various sensors installed on the moving body; after the data are processed and analyzed by the data processing platform, the platform plans the travel path of the moving body and sends control commands to the control module to perform the relevant operations.
图像信号处理器(Image Signal Processor,简称ISP)用于对前端图像传感器输出的图像进行处理的装置。Image Signal Processor (ISP) is a device used to process the image output by the front-end image sensor.
电子控制单元(Electronic Control Unit,简称ECU),用于按照预先设计的程序计算输入的各种数据和处理输入的各种指令,进一步控制各个执行机构来执行各种预 定的控制功能。Electronic control unit (Electronic Control Unit, ECU for short) is used to calculate various input data and process various input instructions according to a pre-designed program, and further control each actuator to perform various predetermined control functions.
首先,说明发明人发现的现有技术中存在的不足:First, the deficiencies in the prior art discovered by the inventors are explained:
在当前ADAS系统中,视觉传感器按帧对图像进行多次扫描,每次扫描后,视觉传感器都能够从现实世界中获取一行图像的信息,然后将其输送至ISP。待ISP接收到由多行图像构成的一帧完整的图像后,再对该帧图像进行图像处理。处理后的图像被传输至ADAS数据处理平台中的数据处理模块进行算法处理,并最终生成控制指令。In current ADAS systems, the vision sensor performs multiple scans per frame; after each scan it obtains one line of image information from the real world and sends it to the ISP. Once the ISP has received a complete frame composed of multiple image lines, it processes that frame. The processed image is transmitted to the data processing module in the ADAS data processing platform for algorithmic processing, which ultimately generates control instructions.
在视觉传感器的帧率为30fps(每秒曝光30帧图像)的情况下,相邻两帧图像的曝光间隔约为33ms。这样导致在33ms的时间间隔内,ADAS数据处理平台无法获取到现实世界中的图像信息。如果数据处理平台识别到危险信号,需要立刻获取到当前镜头画面中指定区域的图像时,最多需要等待33ms的时间。在紧急避险、紧急决策等场景的情况下,存在延时较大的问题。When the frame rate of the vision sensor is 30 fps (30 frames exposed per second), the exposure interval between two adjacent frames is about 33 ms. As a result, within that 33 ms interval the ADAS data processing platform cannot obtain image information from the real world. If the platform recognizes a danger signal and needs to obtain the image of a specified area of the current camera view immediately, it may have to wait up to 33 ms. In scenarios such as emergency hazard avoidance and emergency decision-making, this delay is significant.
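The 33 ms figure above is simply the reciprocal of the frame rate; as a quick illustrative check (not part of the patent text):

```python
# At 30 fps, adjacent frame exposures are 1/30 s apart, so a request for a
# specific region issued right after a frame begins can wait nearly a full
# interval (about 33 ms) before fresh pixel data for that region exists.
frame_rate_fps = 30
interval_ms = 1000.0 / frame_rate_fps
print(round(interval_ms, 1))  # worst-case wait of roughly 33.3 ms
```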
鉴于现有技术中存在这样的问题,本申请实施例提供了用于视觉传感器的控制指令生成方法和装置、控制方法和装置以及一种驾驶危险预测方法及系统。能够在紧急避险、紧急决策场景下获取到当前镜头画面中指定位置的图像,抑制因图像的延时获取造成的交通事故。In view of these problems in the prior art, embodiments of the present application provide a control instruction generation method and device for a vision sensor, a control method and device, and a driving risk prediction method and system. These make it possible to obtain an image of a specified position in the current camera view in emergency avoidance and emergency decision-making scenarios, suppressing traffic accidents caused by delayed image acquisition.
本申请的一个方式的概要如以下实施方式所述。An outline of one embodiment of the present application is described in the following embodiments.
第一实施方式:驾驶危险预测系统。The first embodiment: a driving risk prediction system.
图1a示出了具备视觉传感器控制装置11和ADAS计算与控制装置20的驾驶危险预测系统200的模块结构。FIG. 1 a shows the module structure of a driving risk prediction system 200 with a vision sensor control device 11 and an ADAS calculation and control device 20 .
本实施方式的驾驶危险预测系统200位于车辆上。车辆例如是汽车。所述驾驶危险预测系统200可以包括多个传感器、多个传感器控制装置,ADAS平台100以及车辆ECU210。视觉传感器控制装置11与视觉传感器10连接,用于控制视觉传感器10对图像数据的采集。The driving risk prediction system 200 of the present embodiment is located on the vehicle. The vehicle is, for example, a car. The driving risk prediction system 200 may include a plurality of sensors, a plurality of sensor control devices, an ADAS platform 100 and a vehicle ECU 210 . The visual sensor control device 11 is connected to the visual sensor 10 for controlling the acquisition of image data by the visual sensor 10 .
ADAS平台100能够通过线路与视觉传感器10和视觉传感器控制装置11连接,用于向视觉传感器控制装置11发送指令以及接收并处理视觉传感器和其他传感器采集的数据。车辆ECU210能够接收来自ADAS平台100的控制指令,进一步控制车辆执行相应的行驶操作。The ADAS platform 100 can be connected with the vision sensor 10 and the vision sensor control device 11 through lines, for sending instructions to the vision sensor control device 11 and receiving and processing data collected by the vision sensor and other sensors. The vehicle ECU 210 can receive control instructions from the ADAS platform 100, and further control the vehicle to perform corresponding driving operations.
在本申请的一些实施例中,传感器可以为视觉传感器10,如摄像头,也可以是激光雷达传感器、毫米波传感器等能够通过扫描获取数据的传感器。相应的,传感器控制装置可以为视觉传感器控制装置11。ADAS平台100可以包括:配置为ADAS计算与控制装置20的驾驶危险预测装置、ISP30。ADAS计算与控制装置20可以包括:感知模块21、归控模块22、危险区域识别模块23、以及控制指令生成模块24。车辆ECU210能够通过有线或无线的方式与ADAS计算与控制装置20连接,用于根据ADAS计算与控制装置20生成的指令进一步控制车辆执行相应的操作。In some embodiments of the present application, the sensor may be a vision sensor 10 such as a camera, or another sensor capable of acquiring data by scanning, such as a lidar sensor or a millimeter-wave sensor. Correspondingly, the sensor control device may be the vision sensor control device 11. The ADAS platform 100 may include a driving risk prediction device configured as the ADAS calculation and control device 20, and the ISP 30. The ADAS calculation and control device 20 may include a perception module 21, a planning and control module 22, a dangerous area identification module 23 and a control instruction generation module 24. The vehicle ECU 210 can be connected to the ADAS calculation and control device 20 in a wired or wireless manner, for further controlling the vehicle to perform corresponding operations according to instructions generated by the ADAS calculation and control device 20.
视觉传感器10用于获取交通场景的图像数据,其能够安装于车辆的不同位置。视觉传感器10具有由二维排列的多个单位像素构成的像素阵列。在获取场景的图像数据时,视觉传感器可以按照时间顺序对预设范围逐行进行扫描,每扫描一行区域的图像,即能获取一行区域的图像数据,并将来自该行图像的数据通过线路输出给ISP30。当最后一行图像被扫描完毕后,视觉传感器在最后一行扫描的图像数据中添加结束标记,表示完成一个周期的图像采集。ISP30根据该结束标记开始对收到的预设范围的图像数据进行处理。视觉传感器10还可以根据图像采集范围调节指令对其指示进行图像采集的临时范围进行扫描。当视觉传感器10对临时范围的最后一行图像扫描完毕后,视觉传感器10在最后一行图像数据中加入结束标记,表示完成对当前周期的图像采集。ISP30根据该结束标记开始对收到的临时图像进行处理。The vision sensor 10 is used to acquire image data of a traffic scene and can be installed at different positions on the vehicle. The vision sensor 10 has a pixel array composed of a plurality of unit pixels arranged two-dimensionally. When acquiring image data of a scene, the vision sensor scans a preset range row by row in chronological order; each scan of a row region yields the image data of that row, which is output to the ISP 30 over a line. When the last row has been scanned, the vision sensor adds an end marker to the image data of that last row, indicating that one cycle of image acquisition is complete. Based on this end marker, the ISP 30 begins processing the received image data of the preset range. The vision sensor 10 can also scan a temporary range that an image acquisition range adjustment instruction designates for image acquisition. When the vision sensor 10 has finished scanning the last row of the temporary range, it adds an end marker to the last row of image data, indicating that image acquisition for the current cycle is complete. Based on this end marker, the ISP 30 begins processing the received temporary image.
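The row-scan cycle and end marker just described can be sketched as follows; this is a minimal illustrative model, and the function name and the 480-row preset range are assumptions, not part of the patent:

```python
def scan_rows(first_row, last_row):
    """Yield (row_index, end_marker) for one acquisition cycle over the given rows."""
    for row in range(first_row, last_row + 1):
        # The end marker is attached only to the last scanned row; it is what
        # tells the ISP that one cycle of image acquisition is complete.
        yield row, row == last_row

# Full preset-range cycle over a hypothetical 480-row sensor:
cycle = list(scan_rows(0, 479))
# A temporary-range cycle works identically, just over fewer rows:
temp_cycle = list(scan_rows(200, 260))
```

In both cases the ISP only needs to watch for the end marker, so switching between preset and temporary ranges requires no change on the ISP side.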
ISP30是对从视觉传感器10输出的图像数据进行图像处理的处理器。ISP30按时间顺序接收来自视觉传感器10输出的行图像数据,待收到带有结束标记的行图像数据后,对来自视觉传感器10输出的所有行图像数据进行例如伽马校正、颜色插补处理以及自动白平衡的处理等。ISP30可以为集成在ADAS平台100内的芯片,也可以是与所述视觉传感器10集成在一起的芯片。在本实施例中,ISP30集成在ADAS平台100内,并设置有图像接口,通过数据线接收视觉传感器10发送的图像数据。The ISP 30 is a processor that performs image processing on the image data output from the vision sensor 10. The ISP 30 receives the row image data output from the vision sensor 10 in chronological order, and after receiving the row image data carrying the end marker, performs processing such as gamma correction, color interpolation and automatic white balance on all the row image data output from the vision sensor 10. The ISP 30 may be a chip integrated in the ADAS platform 100, or a chip integrated with the vision sensor 10. In this embodiment, the ISP 30 is integrated in the ADAS platform 100 and is provided with an image interface that receives the image data sent by the vision sensor 10 through a data line.
ADAS计算与控制装置20,用于处理从多个传感器获取的图像数据,并生成控制指令控制车辆ECU执行相应的操作。ADAS计算与控制装置20具有感知模块21、归控模块22、危险区域识别模块23以及控制指令生成模块24。The ADAS calculation and control device 20 is used to process image data obtained from the plurality of sensors and to generate control instructions that control the vehicle ECU to perform corresponding operations. The ADAS calculation and control device 20 has a perception module 21, a planning and control module 22, a dangerous area identification module 23 and a control instruction generation module 24.
感知模块21是能够对来自ISP30的图像数据进行算法处理的装置。其用于对获取到的图像进行图像检测,以识别图像中的对象并获取对象的信息。所述对象可以是车辆周边环境中的交通参与者。所述交通参与者可包括行人、周围车辆、交通标记以及障碍物等。所述对象的信息可以包括:对象在世界坐标系下的位置、对象的大小。感知模块可使用神经网络模型、物体识别算法、运动中恢复结构(Structure from Motion,SFM)算法、视频跟踪和其他计算机视觉技术进行图像检测。The perception module 21 is a device capable of performing algorithmic processing on the image data from the ISP 30. It performs image detection on the acquired images to identify objects in the images and obtain information about them. The objects may be traffic participants in the vehicle's surroundings, including pedestrians, surrounding vehicles, traffic signs, obstacles and the like. The object information may include the object's position in the world coordinate system and the object's size. The perception module may use neural network models, object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking and other computer vision techniques for image detection.
感知模块21根据识别出的交通参与者的像素坐标和视觉传感器10的标定参数,确定交通参与者的位置和大小。标定参数可以为视觉传感器镜头的内参、外参、位置信息等。感知模块21根据在同一时刻下,位于车辆不同位置的视觉传感器获得的第一图像和第二图像中同一交通参与者的像素坐标、视觉传感器对应的内参、外参获得该交通参与者的任一像素坐标点对应的在世界坐标系下的位置信息,进而确定该交通参与者的位置;根据多个像素坐标构成的图像面积和视觉传感器的缩放系数,确定该交通参与者在世界坐标系中的大小。在车辆运行过程中,感知模块21可以通过惯导设备/激光雷达实时获取车辆的定位信息,也可以采用卫星定位技术(例如:GPS技术)实时获取车辆的定位信息,还可以采用其他现有的任意一种定位技术实时获取车辆的定位信息,本申请实施例对此不作限定。车辆的定位信息可以包括经纬度、海拔高度以及车辆的姿态信息(如车头朝向)等。上述车辆的定位信息中的经纬度以及海拔高度均是在世界坐标系(也可称为地理坐标系)中的数据。根据车辆的定位信息和交通参与者的位置确定交通参与者相对于车辆的距离。感知模块21还可以接收归控模块22当前规划的车辆的行驶轨迹。所述感知模块21包括:输入接口、输出接口、程序存储器、工作存储器以及微控制器。输入接口用于接收从ISP输出的图像数据;输出接口用于向归控模块22和危险区域识别模块23输出交通参与者的信息;微控制器能够从程序存储器读出命令并依次执行各处理。微控制器将预先存储在程序存储器中的程序暂时展开到工作存储器中,按照其命令组进行各种动作。用于获取场景数据中交通参与者信息的算法通过微控制器和软件的组合来实现。软件可以是构成用于执行与各功能块对应的特定处理的计算机程序的模块。这样的计算机程序可以存储在程序存储器中。The perception module 21 determines the position and size of a traffic participant according to the identified pixel coordinates of the traffic participant and the calibration parameters of the vision sensor 10. The calibration parameters may include the intrinsic parameters, extrinsic parameters and position information of the vision sensor lens. From the pixel coordinates of the same traffic participant in a first image and a second image obtained at the same moment by vision sensors located at different positions on the vehicle, together with the corresponding intrinsic and extrinsic parameters, the perception module 21 obtains the position in the world coordinate system corresponding to any pixel coordinate of that traffic participant, and thereby determines the traffic participant's position; from the image area formed by multiple pixel coordinates and the zoom factor of the vision sensor, it determines the traffic participant's size in the world coordinate system. While the vehicle is running, the perception module 21 may obtain the vehicle's positioning information in real time through an inertial navigation device or lidar, through satellite positioning technology (e.g., GPS), or through any other existing positioning technology, which is not limited in the embodiments of the present application. The positioning information may include longitude and latitude, altitude, and the vehicle's attitude (such as the heading of the vehicle); the longitude, latitude and altitude are data in the world coordinate system (also called the geographic coordinate system). The distance of a traffic participant relative to the vehicle is determined from the vehicle's positioning information and the traffic participant's position. The perception module 21 may also receive the vehicle's driving trajectory currently planned by the planning and control module 22. The perception module 21 includes an input interface, an output interface, a program memory, a working memory and a microcontroller. The input interface receives image data output from the ISP; the output interface outputs traffic participant information to the planning and control module 22 and the dangerous area identification module 23; the microcontroller reads commands from the program memory and executes each process in sequence. The microcontroller temporarily loads the program pre-stored in the program memory into the working memory and performs various operations according to its command set. The algorithm for obtaining traffic participant information from scene data is implemented by a combination of the microcontroller and software. The software may be modules constituting a computer program that executes the specific processing corresponding to each functional block; such a computer program may be stored in the program memory.
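As a hedged illustration of how two calibrated sensors at different positions can yield a world-space distance, the classic disparity relation Z = f·B/d can stand in for the full intrinsic/extrinsic computation described above. The function name and all numeric values below are assumptions, not from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Distance Z = f * B / d for a point seen by two horizontally offset sensors
    (f: focal length in pixels, B: baseline in meters, d: pixel disparity)."""
    disparity = x_left_px - x_right_px  # shift of the same point between the two images
    if disparity <= 0:
        raise ValueError("point must be in front of both sensors (positive disparity)")
    return focal_px * baseline_m / disparity

# e.g. 1000 px focal length, 0.5 m baseline, 25 px disparity:
distance_m = depth_from_disparity(1000.0, 0.5, 640.0, 615.0)
```

A production pipeline would additionally rectify the images and apply the full extrinsic transform; this sketch only shows why two viewpoints plus calibration suffice to recover distance.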
危险区域识别模块23是不同于感知模块21的算法处理装置,其能够根据来自感知模块21获取的交通参与者的信息来判断交通参与者是否将要与车辆发生碰撞。对交通参与者是否将要与车辆发生碰撞的判定条件可以包括:在当前时刻下,同一交通参与者的位置与本车当前的行驶轨迹相交、同一交通参与者相对于本车的距离小于第一距离以及同一交通参与者的大小超过预设值。当交通参与者满足上述判定标准时,表明所述交通参与者将要与车辆发生碰撞,需要立即获取所述交通参与者的图像,进而进一步规划车辆行驶轨迹,不能等待视觉传感器按时间顺序扫描完当前交通场景的预设范围后再重新按照预设范围进行扫描。The dangerous area identification module 23 is an algorithm processing device distinct from the perception module 21; it can determine, from the traffic participant information obtained from the perception module 21, whether a traffic participant is about to collide with the vehicle. The conditions for this determination may include: at the current moment, the position of the traffic participant intersects the vehicle's current driving trajectory, the distance of that traffic participant from the vehicle is less than a first distance, and the size of that traffic participant exceeds a preset value. When a traffic participant meets these criteria, it is about to collide with the vehicle, and its image must be acquired immediately so that the vehicle's driving trajectory can be further planned; it is not possible to wait for the vision sensor to finish scanning the preset range of the current traffic scene in chronological order and then scan the preset range again.
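The three screening conditions listed above can be sketched as a single predicate; the parameter names and threshold values are illustrative assumptions, not values from the patent:

```python
def will_collide(intersects_trajectory, distance_m, size_m2,
                 first_distance_m=30.0, size_threshold_m2=0.2):
    """All three conditions from the text must hold: the participant's position
    intersects the planned trajectory, its distance is below the first distance,
    and its size exceeds the preset value."""
    return (intersects_trajectory
            and distance_m < first_distance_m
            and size_m2 > size_threshold_m2)
```

Only when the predicate holds does the system escalate to an immediate temporary-range scan instead of waiting for the current preset-range cycle.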
控制指令生成模块24用于在所述交通参与者被判断为将要与车辆发生碰撞时生成图像采集范围调节指令,控制原本按照预设范围采集图像数据的视觉传感器按照临时范围进行图像采集,临时范围小于预设范围并包含被识别为将要与车辆发生碰撞的交通参与者所在的区域。图像采集范围调节指令用于指示视觉传感器对被识别为将要与本车发生碰撞的交通参与者所在的区域进行图像采集。根据视觉传感器扫描方式的不同,临时范围可以为交通参与者所在的行区域或列区域进行图像采集,临时范围还可以为交通参与者所在的矩形、三角形、圆形区域。例如,当交通参与者是行人时,临时范围为包含行人所在区域的矩形区域。控制指令发送模块用于向所述视觉传感器10发送所述图像采集范围调节指令。控制指令发送模块可以为数据线,其一端与控制指令生成模块连接,其另一端通过ISP30与视觉传感器控制装置连接。还可以以信号传输的方式将图像采集范围调节指令发送至视觉传感器控制装置11。The control instruction generation module 24 is configured to generate an image acquisition range adjustment instruction when a traffic participant is determined to be about to collide with the vehicle, controlling the vision sensor, which originally acquired image data over the preset range, to perform image acquisition over a temporary range instead; the temporary range is smaller than the preset range and contains the area where that traffic participant is located. The image acquisition range adjustment instruction instructs the vision sensor to acquire images of the area containing the traffic participant identified as being about to collide with the vehicle. Depending on the scanning mode of the vision sensor, the temporary range may be the row region or column region containing the traffic participant, or a rectangular, triangular or circular region containing it. For example, when the traffic participant is a pedestrian, the temporary range is a rectangular region containing the area where the pedestrian is located. The control instruction sending module is configured to send the image acquisition range adjustment instruction to the vision sensor 10. The control instruction sending module may be a data line, one end of which is connected to the control instruction generation module and the other end of which is connected to the vision sensor control device via the ISP 30. The image acquisition range adjustment instruction may also be sent to the vision sensor control device 11 by signal transmission.
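One plausible shape for the rectangular temporary range in the pedestrian example above is the object's bounding box, padded by a margin and clipped to the preset range. The margin, the coordinate convention and the function name are assumptions for illustration:

```python
def rectangular_temporary_range(obj_box, preset_w, preset_h, margin_px=8):
    """obj_box = (x0, y0, x1, y1) in pixels inside a preset_w x preset_h frame;
    returns the padded rectangle, clipped so it stays within the preset range."""
    x0, y0, x1, y1 = obj_box
    return (max(0, x0 - margin_px),
            max(0, y0 - margin_px),
            min(preset_w - 1, x1 + margin_px),
            min(preset_h - 1, y1 + margin_px))

# Pedestrian bounding box inside a hypothetical 1920x1080 preset range:
temp = rectangular_temporary_range((100, 50, 200, 150), 1920, 1080)
```

The clipping guarantees the temporary range never exceeds the preset range, matching the requirement that it be a strict subset of it.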
归控模块22是不同于感知模块21和危险区域识别模块23的运算装置,其用于根据感知模块21获取的交通参与者的信息规划车辆的行驶路径并生成车辆行驶控制指令。The planning and control module 22 is a computing device distinct from the perception module 21 and the dangerous area identification module 23; it is used to plan the vehicle's driving path and generate vehicle driving control instructions according to the traffic participant information acquired by the perception module 21.
车辆ECU210,包括微处理器(CPU)、存储器(ROM、RAM)、输入/输出接口(I/O)、模数转换器(A/D)以及集成电路。车辆ECU通过输入接口通过连接线与归控模块22连接,用于接收归控模块生成的车辆行驶控制指令,并根据该车辆行驶控制指令进一步控制各个执行机构来执行相应的行驶操作。The vehicle ECU 210 includes a microprocessor (CPU), memory (ROM, RAM), input/output interfaces (I/O), an analog-to-digital converter (A/D) and integrated circuits. The vehicle ECU is connected to the planning and control module 22 through its input interface via a connecting line, and is used to receive the vehicle driving control instructions generated by the planning and control module and, according to those instructions, further control each actuator to perform the corresponding driving operations.
The vision sensor control device 11 can, based on the image acquisition range adjustment instruction output by the control instruction generation module 24, control the vision sensor 10 to scan images over the temporary range. The temporary range is obtained by narrowing the preset range, and it contains the traffic participant identified by the danger area identification module 23 as about to collide with the host vehicle.
FIG. 1b shows a schematic block diagram of the vision sensor control device 11. The vision sensor control device 11 may include a vision sensor scanning control module 110 and a comparison module 111.
The vision sensor scanning control module 110 may be a control circuit that controls the vision sensor 10 and is integrated on the vision sensor 10; it controls the scanning range of the vision sensor according to the image acquisition range adjustment instruction. For example, when the vision sensor scanning control module 110 receives an image acquisition range adjustment instruction for data acquisition over rows N to M, it controls the vision sensor 10 to finish the row region currently being scanned and then start scanning from row N until row M has been scanned.
The comparison module 111 may adjust the temporary range of the current image acquisition cycle according to the image acquisition range adjustment instruction from the control instruction generation module 24. The comparison module 111 compares the temporary range against both the unacquired range, over which image acquisition has not yet been performed, and the acquired range, over which image acquisition has already been performed. When the acquired range includes the temporary range (in whole or in part), the comparison module 111 generates a final adjustment instruction for performing image acquisition over the union of the acquired range and the temporary range.
When the acquired range does not include the temporary range, that is, when the temporary range lies entirely within the unacquired range, the comparison module 111 generates a final adjustment instruction for performing image acquisition over the temporary range and sends it to the vision sensor scanning control module 110. The vision sensor finishes scanning the current row region, appends an end marker to the image data of that row region, and then starts image acquisition over the temporary range. For example, when the vision sensor is scanning row A and the comparison module 111 receives an image acquisition range adjustment instruction for rows N to M (M > N > A), the vision sensor scanning control module 110 controls the vision sensor 10 to stop the current scan and start scanning from row N until row M has been scanned.
When the acquired range fully includes the temporary range, there is no need to acquire the temporary range again; it suffices to send the image data of the acquired range to the ISP. The comparison module 111 generates a final adjustment instruction to end image acquisition over the currently acquired range and sends it to the vision sensor scanning control module 110. The vision sensor scanning control module 110 then controls the vision sensor to resume image acquisition over the preset range. For example, when the vision sensor is acquiring row A and the comparison module 111 receives an image acquisition range adjustment instruction for rows N to M (A > M > N), the acquired image range already includes the temporary range, and the currently acquired image range only needs to be sent to the ISP for processing. Therefore, the vision sensor scanning control module 110 controls the vision sensor 10 to finish scanning row A, appends an end marker to row A, and ends image acquisition over the currently acquired range.
When the acquired range includes part of the temporary range, there is no need to re-acquire the temporary range from scratch; image acquisition simply continues from the currently acquired image range. The comparison module 111 generates a final adjustment instruction to continue image acquisition and sends it to the vision sensor scanning control module 110. The vision sensor scanning control module 110 receives the instruction and controls the vision sensor 10 to continue image acquisition. For example, when the vision sensor has scanned from row 1 to row P and the comparison module 111 receives an image acquisition range adjustment instruction for rows N to M (N < P < M), the acquired range already partially includes the temporary range, and the vision sensor only needs to keep scanning until the temporary range has been fully scanned. Therefore, the vision sensor scanning control module 110 controls the vision sensor 10 to continue the current scanning sequence until the row-M region has been scanned.
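As a concrete illustration, the three cases handled by the comparison module 111 can be sketched as follows. This is a hypothetical Python model, not part of the embodiment: the acquired and temporary ranges are reduced to row intervals, and all names are illustrative.

```python
def resolve_adjustment(acquired_end: int, temp_start: int, temp_end: int):
    """Decide the final adjustment instruction for a row-scanned sensor.

    Rows 1..acquired_end have already been scanned in the current frame;
    the instruction asks for rows temp_start..temp_end.  Returns
    (action, row) where action is one of:
      'jump'     - temporary range untouched: interrupt and jump to temp_start
      'finish'   - acquired range already covers it: add end marker, stop
      'continue' - partial overlap: keep scanning until temp_end
    """
    if temp_start > acquired_end:          # temp range lies in unscanned rows
        return ('jump', temp_start)        # M > N > A: jump to row N
    if temp_end <= acquired_end:           # temp range fully inside scanned rows
        return ('finish', None)            # A > M > N: send acquired rows to ISP
    return ('continue', acquired_end + 1)  # N < P < M: scan on to row M


# The three numeric examples from the text:
print(resolve_adjustment(22, 33, 44))    # A=22, N=33, M=44 -> ('jump', 33)
print(resolve_adjustment(100, 14, 44))   # A=100, N=14, M=44 -> ('finish', None)
print(resolve_adjustment(200, 14, 500))  # P=200, N=14, M=500 -> ('continue', 201)
```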
The operation of each component of the driving risk prediction system is described below with reference to the accompanying drawings.
Scanning methods of the vision sensor.
FIGS. 3a-3f show an exemplary image 300 of a traffic scene captured by a vision sensor with a frame resolution of 1920×1080. The image 300 is a color image rendered in black-and-white line art to comply with the provisions of the Implementing Regulations of the Patent Law. The image 300 includes a road 302 with an intersection 301, a traffic sign 303, a traffic light 304, other vehicles 305, 306, 307, 308, and a pedestrian 309.
When the vision sensor control device 11 has not received an image acquisition range adjustment instruction from the control instruction generation module 24, the vision sensor 10 scans the scene shown in FIG. 3a over the preset range, that is, row by row starting from row L1 of the image range shown in FIG. 3a until row L1080 is reached and one frame has been scanned. To show the vehicles and pedestrians in the figure clearly, only rows L1, L2, and L1080 are schematically drawn with dotted lines.
When the vision sensor control device 11 receives an image acquisition range adjustment instruction from the control instruction generation module 24, it scans the scene according to that instruction. Different image acquisition range adjustment instructions result in different scanning methods of the vision sensor. Three scanning methods, corresponding to three different image acquisition range adjustment instructions, are described below.
First scanning method: the image acquisition range adjustment instruction instructs scanning of a rectangular region.
As shown in FIG. 3b, when the vision sensor control device 11 receives an image acquisition range adjustment instruction for scanning rows L33-L44 while the vision sensor is scanning row L22, the comparison module 111 compares the row region L33-L44 indicated by the instruction against the already scanned region L1-L22 and the unscanned region L23-L1080. Since the scanned image range does not include the region indicated by the instruction, the comparison module 111 generates a final adjustment instruction for scanning rows L33-L44 and sends it to the vision sensor scanning control module 110. The vision sensor scanning control module 110 controls the vision sensor to stop scanning at row L22 and to scan row by row from row L33 until row L44. After row L44 has been scanned, scanning restarts from row L1.
As shown in FIG. 3c, when the vision sensor control device receives an image acquisition range adjustment instruction for scanning rows L14-L44 while the vision sensor is scanning row L100, the comparison module 111 compares the already scanned region L1-L100 against the region L14-L44 indicated by the instruction. Since the scanned image range fully includes the range indicated by the instruction, the comparison module 111 generates a final adjustment instruction to end scanning and sends it to the vision sensor scanning control module 110. The vision sensor scanning control module 110 controls the vision sensor 10 to append an end marker to the image data of row L100 and then start scanning from row L1.
As shown in FIG. 3d, when the vision sensor scanning control module receives an image acquisition range adjustment instruction for scanning rows L14-L500 while the vision sensor is scanning row L200, the comparison module 111 compares the already scanned region L1-L200 against the region L14-L500 indicated by the instruction. Since the scanned image range partially includes the region indicated by the instruction, the comparison module 111 generates a final adjustment instruction to continue scanning until the entire region indicated by the instruction has been scanned, and sends it to the vision sensor scanning control module 110. The vision sensor scanning control module 110 controls the vision sensor 10 to continue scanning until row L500. After row L500 has been scanned, scanning restarts from row L1.
After the region specified by the image acquisition range adjustment instruction has been scanned, the vision sensor 10 is not limited to restarting from row L1; it may also continue scanning row by row after the specified region, or start scanning from any row as required.
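The jump case of the first scanning method (FIG. 3b) can be modeled as a toy row generator. This Python sketch is purely illustrative: it covers only the case where the requested range lies beyond the current row, with scanning restarting from row L1 after the range, and every name is an assumption.

```python
from itertools import islice

def scan_rows(total_rows, instructions):
    """Yield row numbers in the order the sensor scans them.

    `instructions` maps a row number to an (n, m) range received while
    that row is being scanned; only the jump case (n greater than the
    current row) is modeled here.
    """
    row = 1
    while True:
        yield row
        if row in instructions:
            n, m = instructions[row]
            if n > row:              # range lies in unscanned rows: jump to it
                for r in range(n, m + 1):
                    yield r
                row = 1              # restart from row L1 after the range
                continue
        row = row + 1 if row < total_rows else 1

# Instruction for rows L33-L44 arriving while row L22 is scanned:
order = list(islice(scan_rows(1080, {22: (33, 44)}), 40))
print(order)  # rows 1..22, then 33..44, then restarting at 1
```

With the instruction of FIG. 3b the generator yields rows 1-22, then jumps to 33-44, then restarts at row 1, matching the described behavior.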
Second scanning method: the image acquisition range adjustment instruction specifies scanning of an elliptical region where a traffic participant is located.
As shown in FIG. 3e, the image acquisition range adjustment instruction instructs scanning of an elliptical region X within the image range of the vision sensor. In the following description, processing in the second scanning method that is the same as in the first scanning method is described by reference to the first embodiment, or only briefly.
When the vision sensor scanning control module 110 has not received an image acquisition range adjustment instruction from the control instruction generation module 24, the vision sensor 10 scans the scene over the preset range. This is the same as in the first scanning method and is not repeated here.
As shown in FIG. 3e, when the vision sensor scanning control module 110 receives an instruction to scan region X while the vision sensor 10 is scanning row L22, it controls the vision sensor to finish scanning row L22 and then scan region X; after region X has been scanned, scanning restarts from row L1.
Third scanning method: the image acquisition range adjustment instruction specifies scanning of a rectangular region where a traffic participant is located.
As shown in FIG. 3f, the image acquisition range adjustment instruction specifies scanning of region Y. In the following description, processing in the third scanning method that is the same as in the first scanning method is described by reference to the first embodiment, or only briefly.
When the vision sensor scanning control module has not received an image acquisition range adjustment instruction from the control instruction generation module, the vision sensor scans the scene over the preset range. This is the same as in the first scanning method and is not repeated here.
When the vision sensor scanning control module 110 receives an image acquisition range adjustment instruction from the sensor control instruction generation module 24 while the vision sensor 10 is scanning row L22, it controls the vision sensor to adjust the scanned region. When the instruction specifies scanning of region Y, it controls the vision sensor 10 to finish scanning row L22 and then scan region Y; after region Y has been scanned, scanning restarts from row L1.
Alternatively, the image acquisition range adjustment instruction may specify scanning of a circular, square, or otherwise shaped region where the traffic participant is located. It may also specify scanning of the pixel region constituting the traffic participant, as long as the region fully contains the area occupied by the traffic participant identified as about to collide with the host vehicle.
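For a row-scanned sensor, any of these region shapes ultimately reduces to the set of rows it occupies. The following Python sketch computes the bounding row interval for a few shapes; the shape encoding and field names are illustrative assumptions, not the embodiment's interface.

```python
import math

def region_rows(region):
    """Return the (top, bottom) rows a row-scanned sensor must read to
    cover a region; shapes are modeled by their vertical extent only."""
    kind = region['kind']
    if kind == 'rect':
        return region['top'], region['bottom']
    if kind == 'ellipse':                      # cy: center row, b: vertical semi-axis
        return (math.ceil(region['cy'] - region['b']),
                math.floor(region['cy'] + region['b']))
    if kind == 'circle':                       # cy: center row, r: radius
        return (math.ceil(region['cy'] - region['r']),
                math.floor(region['cy'] + region['r']))
    raise ValueError(f"unknown region kind: {kind}")

print(region_rows({'kind': 'ellipse', 'cy': 100, 'b': 30}))  # rows 70..130
```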
The operation of the perception module 21, the danger area identification module 23, the control instruction generation module 24, and the planning and control module 22 is described below with reference to the scene shown in FIG. 3a.
Referring again to the image of the scene shown in FIG. 3a: after receiving the first image and the second image obtained at the same moment by vision sensors located at different positions on the vehicle, the perception module 21 uses neural network models, object recognition algorithms, structure-from-motion algorithms, video tracking, and other computer vision techniques to detect the images, extract features from them, and match those features against preset features, thereby identifying the traffic participants in the images, such as the traffic sign 303, the traffic light 304, the pedestrian 309, and the surrounding vehicles 305, 306, 307, 308. From the pixel coordinates of each identified traffic participant and the calibration parameters of the vision sensor 10, it determines each traffic participant's position and size; from the acquired positioning information of the vehicle, it determines each traffic participant's distance relative to the vehicle.
After the perception module 21 has obtained the positions and sizes of multiple traffic participants, their distances relative to the vehicle, and the driving trajectory currently planned by the planning and control module 22, it sends this traffic participant information to the danger area identification module 23. The danger area identification module 23 uses this information to judge whether a traffic participant is about to collide with the vehicle. The criteria for judging whether a traffic participant is about to collide with the host vehicle may include: at the current moment, the position of the same traffic participant intersects the current driving trajectory of the host vehicle, the distance of that traffic participant relative to the host vehicle is less than a first distance, and the size of that traffic participant exceeds a preset value. When these conditions are satisfied, the image of that traffic participant must be acquired immediately; the system cannot wait for the vision sensor to finish scanning the preset range in time order.
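The three criteria can be combined into a single predicate. The sketch below is a hypothetical Python rendering: the trajectory is modeled as a set of occupied grid cells, and every name and threshold is an assumption for illustration only.

```python
def will_collide(participant, trajectory, first_distance, size_threshold):
    """Danger-area check sketched from the three criteria in the text.

    A participant is flagged only when, at the current moment, all three
    hold: its position intersects the planned trajectory, its distance to
    the host vehicle is below the first distance, and its apparent size
    exceeds the preset value.
    """
    return (participant['position'] in trajectory
            and participant['distance'] < first_distance
            and participant['size'] > size_threshold)

# A participant on the trajectory, close and large, is flagged:
p = {'position': (3, 4), 'distance': 8.0, 'size': 1500}
print(will_collide(p, {(3, 4), (3, 5)}, first_distance=10.0, size_threshold=1000))
```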
The control instruction generation module 24 generates, according to the traffic participants identified by the danger area identification module 23 as about to collide with the vehicle, an image acquisition range adjustment instruction for scanning the region where the one or more traffic participants are located, and sends the instruction to the vision sensor control device 11 through the ISP 30.
The operation of the danger area identification module 23 and the control instruction generation module 24 is described below with reference to the accompanying drawings in different scenarios:
Scenario 1: a preceding vehicle approaches the host vehicle
FIGS. 4a-4d show four frames captured by the vision sensor at times t1-t4, respectively. The change in position of the preceding vehicle 401 within the image range can be seen from FIGS. 4a-4d: the proportion of the vision sensor's image range occupied by the connected pixel region of the preceding vehicle 401 gradually increases, and the distance of the preceding vehicle relative to the host vehicle decreases. At time t4, the position of the preceding vehicle 401 intersects the current driving trajectory of the host vehicle, the distance of the preceding vehicle relative to the host vehicle is less than the first distance, and the size of the preceding vehicle exceeds the preset value; the danger area identification module 23 therefore judges the preceding vehicle 401 to be a traffic participant about to collide with the host vehicle. The control instruction generation module 24 generates, according to the identification result of the danger area identification module 23, an image acquisition range adjustment instruction for acquiring images of the region where the preceding vehicle 401 is located. As shown in FIG. 4d, the region where the preceding vehicle 401 is located corresponds to rows L500 to L900 of the vision sensor's image range, so the control instruction generation module generates an image acquisition range adjustment instruction for acquiring rows L500 to L900 and sends it through the ISP to the vision sensor control device.
Scenario 2: a pedestrian ahead approaches the host vehicle
FIGS. 5a-5d show four frames captured by the vision sensor at times t5-t8, respectively. FIGS. 5a-5d show the change in position of the pedestrian 501 within the vision sensor's image range: the pedestrian 501 ahead moves from the right side of the image range to its center. Because at time t8 the position of the pedestrian 501 intersects the current driving trajectory of the host vehicle, the distance of the pedestrian 501 relative to the host vehicle is less than the first distance, and the size of the pedestrian exceeds the preset value, the danger area identification module 23 judges the pedestrian 501 to be a traffic participant about to collide with the host vehicle. The control instruction generation module 24 generates an image acquisition range adjustment instruction for imaging the region where the pedestrian 501 is located. As shown in FIG. 5d, that region corresponds to rows L460 to L980 of the vision sensor's image range; the control instruction generation module 24 therefore generates an image acquisition range adjustment instruction for acquiring rows L460 to L980 and sends it through the ISP to the vision sensor control device.
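In both scenarios the instruction is derived from the vertical extent of the detected object's region. A minimal Python sketch of that mapping follows; the function name, dictionary layout, and optional margin are illustrative assumptions, not the patented interface.

```python
def make_adjustment_instruction(bbox, frame_height=1080, margin=0):
    """Turn a detected bounding box (x_min, y_min, x_max, y_max) in pixel
    coordinates into a row-range image acquisition range adjustment
    instruction, optionally padded by `margin` rows and clamped to the
    frame height."""
    _, y_min, _, y_max = bbox
    return {'type': 'row_range',
            'start_row': max(1, y_min - margin),
            'end_row': min(frame_height, y_max + margin)}

# Preceding vehicle of FIG. 4d occupies rows L500-L900:
print(make_adjustment_instruction((200, 500, 1700, 900)))
```

For the pedestrian of FIG. 5d the same mapping would yield rows L460-L980.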
Referring again to FIG. 3a, after the perception module 21 has obtained, from multiple frames of scene images, the traffic participant information about the traffic sign 303, the traffic light 304, the pedestrian 309, and the surrounding vehicles 305, 306, 307, 308, it also sends this information to the planning and control module 22. The planning and control module plans the driving path of the vehicle according to the traffic participant information obtained by the perception module and generates control instructions, and the vehicle ECU further controls the various components of the vehicle to perform the corresponding operations according to those instructions.
Second embodiment: driving risk prediction method
FIGS. 2a-2b show a flowchart of the driving risk prediction method.
The driving risk prediction method includes the following steps:
Step S1: the vision sensor scans the scene.
The vision sensor scans its preset image range row by row in time order; each time a row region has been scanned, the image data of that row region is transmitted to the ISP.
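Steps S1 and S2 together form a simple producer/consumer pattern: row regions stream to the ISP as they are scanned, and the last row of a frame carries an end marker. A hypothetical Python sketch of the sensor side (all names are illustrative):

```python
def scan_frame(row_data, send_to_isp):
    """Step S1 sketch: scan a frame row by row, forwarding each row region
    to the ISP; the final row is tagged with an end marker so the ISP
    knows the frame is complete and can begin processing (step S2)."""
    rows = list(row_data)
    for i, data in enumerate(rows, start=1):
        send_to_isp({'row': i, 'data': data, 'end': i == len(rows)})

# Three-row toy frame; only the last message carries the end marker:
sent = []
scan_frame(['r1', 'r2', 'r3'], sent.append)
print([m['end'] for m in sent])
```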
Step S2: the ISP processes the received images.
The ISP is configured to process all previously received row-region images only after receiving a row-region image carrying an end marker.
The ISP's processing of the images may include adjusting image parameters such as white balance and scanning, as well as optimizing image noise.
Step S3: the perception module performs algorithmic processing on the image data processed by the ISP to obtain information about the traffic participants in the images.
For a description of the algorithmic processing, refer to the description of the perception module in the first embodiment of this application.
Step S41: the danger area identification module determines an object that is about to collide with the vehicle.
The danger area identification module judges, according to the acquired traffic participant information, whether a traffic participant is about to collide with the host vehicle. When the following conditions are satisfied, the traffic participant is judged as possibly colliding with the host vehicle, that is, as an object about to collide with the host vehicle: at the current moment, the position of the same traffic participant intersects the current driving trajectory of the host vehicle, the distance of that traffic participant relative to the host vehicle is less than the first distance, and the size of that traffic participant exceeds the preset value.
Step S42: when the traffic participant is judged to be about to collide with the host vehicle, the control instruction generation module generates an image acquisition range adjustment instruction to perform image acquisition over a temporary range containing the traffic participant.
The temporary range may be a rectangular region containing the traffic participant.
Step S43: the vision sensor control device receives and executes the image acquisition range adjustment instruction generated in step S42.
Step S43 may further include the following sub-steps:
Step S431: the comparison module receives the image acquisition range adjustment instruction generated by the control instruction generation module and compares the temporary range against the scanned image range and the unscanned image range, respectively.
The comparison module adjusts the temporary range of the current image acquisition cycle according to the comparison result; when the acquired image range includes the temporary range, image acquisition is performed over the union of the acquired image range and the temporary range.
Step S4321: when the acquired image range includes the entire temporary range, the comparison module generates a final adjustment instruction to end the current image acquisition.
Step S4331: the vision sensor scanning control module 110 controls the vision sensor to append an end marker to the current row region after completing its image acquisition, finishing image acquisition over the currently acquired image range.
Step S4322: when the acquired image range includes part of the temporary range, the comparison module generates a final adjustment instruction to continue image acquisition.
Step S4332: the vision sensor scanning control module 110 controls the vision sensor to finish the row region currently being acquired and to continue image acquisition over the remaining, not-yet-acquired part of the temporary range.
Step S4323: when the acquired image range does not include the temporary range, the comparison module generates a final adjustment instruction to perform image acquisition over the temporary range.
Step S4333: the vision sensor scanning control module 110 controls the vision sensor to finish the row region currently being acquired and to start image acquisition over the temporary range.
For a description of step S43 and its sub-steps as executed by the vision sensor control device, refer to the description of the vision sensor control device in the first embodiment of this application; for brevity, only a brief description is given here.
The generated final adjustment instruction or image acquisition range adjustment instruction may be transmitted to the vision sensor scanning control module through the ISP. The vision sensor scanning control module controls the vision sensor to scan according to the control instruction.
While steps S41-S43 are being executed, step S5 is also executed: the planning and control module plans the driving path of the vehicle according to the acquired traffic participant information and generates a driving control instruction.
Step S6: the vehicle ECU controls the vehicle to perform the corresponding operations according to the driving control instruction.
After step S43 has been executed, steps S2, S3, S5, and S6 may be executed in sequence, so as to avoid a collision between the vehicle and the traffic participant.
下面结合图4a-图4d所示的场景对驾驶危险预测方法的具体实施例进行说明。A specific embodiment of the driving risk prediction method will be described below with reference to the scenarios shown in FIGS. 4a-4d.
步骤S100:视觉传感器按时间顺序对图4a-图4d所示的图像范围进行扫描。Step S100: the visual sensor scans the image range shown in Fig. 4a-Fig. 4d in time sequence.
视觉传感器每扫描完一行区域，即将该行的图像数据传输至ISP；当扫描完最后一行区域时，视觉传感器在该最后一行区域加入结束标记。The vision sensor transmits the image data of each row area to the ISP as soon as that row has been scanned; when the last row area has been scanned, the vision sensor adds an end mark to that last row area.
步骤S200：ISP收到带有结束标记的行区域后，对之前收到的全部行区域的图像进行处理，并将处理后的图像发送至感知模块。Step S200: After receiving the row area with the end mark, the ISP processes the images of all previously received row areas and sends the processed images to the perception module.
步骤S300：感知模块对ISP处理后的图像进行算法处理，获取图4a-图4d所示的图像范围中车辆401的信息。Step S300: The perception module performs algorithmic processing on the image processed by the ISP, and obtains the information of the vehicle 401 in the image range shown in FIGS. 4a-4d.
步骤S410:危险区域识别模块根据获取的车辆401的信息判断车辆401是否将要与本车发生碰撞。Step S410 : the dangerous area identification module determines whether the vehicle 401 is about to collide with the own vehicle according to the acquired information of the vehicle 401 .
从图4a-图4d能够看出，在t1-t4这段时间内，车辆401在整个视觉传感器的图像范围中的占比逐渐上升。在t4时刻，车辆401的位置与本车的当前行驶轨迹相交、车辆401相对于本车的距离小于第一距离且车辆401的大小超过预设值，因此判断车辆401将要与本车发生碰撞，需要立即获取车辆401的位置信息。It can be seen from FIGS. 4a-4d that during the period t1-t4, the proportion of the vehicle 401 in the image range of the entire vision sensor gradually increases. At time t4, the position of the vehicle 401 intersects the current travel trajectory of the own vehicle, the distance of the vehicle 401 relative to the own vehicle is less than the first distance, and the size of the vehicle 401 exceeds a preset value; it is therefore determined that the vehicle 401 is about to collide with the own vehicle, and the position information of the vehicle 401 needs to be obtained immediately.
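The three-part judgment described above (trajectory intersection, distance below the first distance, size above the preset value) can be illustrated with a short sketch. The threshold values, field names, and the image-area-ratio interpretation of "size" are hypothetical, chosen only for illustration:

```python
# Illustrative sketch (not from the patent text) of the three collision
# criteria combined as the danger-area identification step. FIRST_DISTANCE
# and SIZE_THRESHOLD are hypothetical values.
from dataclasses import dataclass

FIRST_DISTANCE = 30.0   # metres; hypothetical value of the "first distance"
SIZE_THRESHOLD = 0.15   # fraction of image area; hypothetical "preset value"

@dataclass
class DetectedObject:
    intersects_ego_trajectory: bool  # object's position crosses own vehicle's current path
    distance: float                  # distance relative to the own vehicle
    image_area_ratio: float          # object's share of the full image range

def will_collide(obj: DetectedObject) -> bool:
    """All three criteria must hold for an imminent-collision judgment."""
    return (obj.intersects_ego_trajectory
            and obj.distance < FIRST_DISTANCE
            and obj.image_area_ratio > SIZE_THRESHOLD)

vehicle_401 = DetectedObject(True, 12.5, 0.4)   # situation at time t4
print(will_collide(vehicle_401))  # True -> acquire position info immediately
```

Requiring all three conditions at once avoids triggering the range adjustment for distant or non-intersecting objects whose image share merely grows.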
步骤S420:生成对车辆401进行扫描的图像采集范围调节指令。Step S420: Generate an image acquisition range adjustment instruction for scanning the vehicle 401.
所述图像采集范围调节指令为对车辆401所在的L508-L806行区域进行扫描。The image acquisition range adjustment instruction is to scan the L508-L806 line area where the vehicle 401 is located.
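How such an instruction might be derived from the object's bounding box can be sketched as follows; the sensor height, the optional margin, and the clamping behavior are assumptions, not taken from the patent text:

```python
# Hypothetical sketch: the temporary range covers only the rows the object
# occupies (e.g. L508-L806), which is smaller than the full preset range.
# SENSOR_ROWS and the margin parameter are assumptions.
SENSOR_ROWS = 1080  # full preset scan range of the vision sensor (assumed)

def make_range_instruction(bbox_top_row: int, bbox_bottom_row: int,
                           margin: int = 0) -> tuple[int, int]:
    """Return (first_row, last_row) of the temporary scan range,
    clamped to the sensor's preset range."""
    first = max(0, bbox_top_row - margin)
    last = min(SENSOR_ROWS - 1, bbox_bottom_row + margin)
    return first, last

# Vehicle 401 occupies rows 508..806 of the image:
print(make_range_instruction(508, 806))  # (508, 806)
```

A small margin around the bounding box could tolerate object motion between the instruction being issued and the next scan, while the clamp keeps the temporary range inside the preset range, as the claims require.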
步骤S430:视觉传感器控制装置接收并执行所述图像采集范围调节指令。Step S430: The visual sensor control device receives and executes the image acquisition range adjustment instruction.
步骤S110:视觉传感器对车辆401所在的L508-L806行区域进行扫描。Step S110 : the vision sensor scans the L508-L806 row area where the vehicle 401 is located.
步骤S210:ISP对收到的L508-L806行区域的图像进行处理,并将处理后的图像发送至感知模块。Step S210: The ISP processes the received images in the L508-L806 line area, and sends the processed images to the perception module.
步骤S310：感知模块对L508-L806行区域的图像进行算法处理，获取该图像中车辆401的信息，所述信息可以包括车辆401的位置、车辆401相对于本车的距离以及车辆401的大小。Step S310: The perception module performs algorithmic processing on the image of the L508-L806 row area to obtain the information of the vehicle 401 in that image; the information may include the position of the vehicle 401, the distance of the vehicle 401 relative to the own vehicle, and the size of the vehicle 401.
步骤S500：规控模块根据车辆401的信息规划本车的行驶路径并生成控制指令。Step S500: The planning and control module plans the driving path of the own vehicle according to the information of the vehicle 401 and generates a control instruction.
步骤S600:车辆ECU根据所述控制指令控制车辆执行相应的操作。Step S600: The vehicle ECU controls the vehicle to perform corresponding operations according to the control instruction.
第三实施方式:计算机可读存储介质Third Embodiment: Computer-readable storage medium
本申请实施例还提供了一种计算机可读存储介质，其上存储有计算机程序，该程序被处理装置执行时实现上述控制指令生成方法和视觉传感器控制方法，该方法包括上述各个实施例所描述的方案中的至少之一。Embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when executed by a processing device, the program implements the above control instruction generation method and vision sensor control method, the method including at least one of the solutions described in the foregoing embodiments.
本申请实施例的计算机存储介质，可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是，但不限于，电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子（非穷举的列表）包括：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（EPROM或闪存）、光纤、便携式紧凑磁盘只读存储器（CD-ROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或器件使用或者与其结合使用。The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或器件使用或者与其结合使用的程序。A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括、但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码，所述程序设计语言包括面向对象的程序设计语言——诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言——诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络，包括局域网（LAN）或广域网（WAN），连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
第四实施方式:计算机程序Fourth Embodiment: Computer Program
本申请第四实施方式提供一种计算机程序，计算机通过运行该程序能够执行本申请实施例所提供的控制方法，或者作为上述的控制装置发挥作用。The fourth embodiment of the present application provides a computer program; by running the program, a computer can execute the control method provided by the embodiments of the present application, or function as the above-mentioned control apparatus.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art can realize that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working process of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性、机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative: the division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的，作为单元显示的部件可以是或者也可以不是物理单元，即可以位于一个地方，或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上、或者说对现有技术做出贡献的部分、或者该技术方案的部分，可以以软件产品的形式体现出来。该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机、服务器或者网络设备等）执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。The functions, if implemented in the form of software functional units and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, the part that contributes to the prior art, or a part of the technical solution may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
注意，上述仅为本申请的较佳实施例及所运用的技术原理。本领域技术人员会理解，本申请不限于这里所述的特定实施例，对本领域技术人员来说能够进行各种明显的变化、重新调整和替代而不会脱离本申请的保护范围。因此，虽然通过以上实施例对本申请进行了较为详细的说明，但是本申请不仅仅限于以上实施例，在不脱离本申请的构思的情况下，还可以包括更多其他等效实施例，均属于本申请的保护范畴。Note that the above are only the preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described herein, and various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present application. Therefore, although the present application has been described in detail through the above embodiments, it is not limited to them; without departing from the concept of the present application, more equivalent embodiments may be included, all of which fall within the protection scope of this application.

Claims (14)

  1. 一种用于视觉传感器的控制指令生成方法,其特征在于,所述视觉传感器通过扫描而采集图像数据,所述方法包括:A control instruction generation method for a vision sensor, characterized in that the vision sensor collects image data by scanning, and the method includes:
    获取图像数据;get image data;
    根据所述图像数据确定将要与车辆发生碰撞的对象;determining an object about to collide with the vehicle according to the image data;
    根据所述对象生成图像采集范围调节指令;generating an image acquisition range adjustment instruction according to the object;
    向所述视觉传感器发送所述图像采集范围调节指令,sending the image acquisition range adjustment instruction to the vision sensor,
    所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含所述对象所在的区域。The image acquisition range adjustment instruction is used to instruct the vision sensor that originally acquired the image according to the preset range to acquire the image according to the temporary range, and the temporary range is smaller than the preset range and includes the area where the object is located.
  2. 根据权利要求1所述的方法,其特征在于,所述临时范围为矩形。The method of claim 1, wherein the temporary area is a rectangle.
  3. 根据权利要求1或2所述的方法,其特征在于,所述视觉传感器包括摄像头。The method of claim 1 or 2, wherein the visual sensor comprises a camera.
  4. 一种用于视觉传感器的控制方法,其特征在于,所述视觉传感器通过扫描而采集图像数据,所述方法包括:A control method for a vision sensor, characterized in that the vision sensor collects image data by scanning, the method comprising:
    获取图像采集范围调节指令;Obtain the image acquisition range adjustment instruction;
    根据所述图像采集范围调节指令控制视觉传感器调整图像采集范围,Control the vision sensor to adjust the image acquisition range according to the image acquisition range adjustment instruction,
    所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像，所述临时范围小于所述预设范围，且包含被识别为将要与车辆发生碰撞的对象所在的区域。The image acquisition range adjustment instruction is used to instruct the vision sensor, which originally acquires images according to a preset range, to acquire images according to a temporary range, the temporary range being smaller than the preset range and including the area where the object identified as about to collide with the vehicle is located.
  5. 根据权利要求4所述的方法,其特征在于,所述临时范围为矩形。The method of claim 4, wherein the temporary area is a rectangle.
  6. 根据权利要求4所述方法，其特征在于，还包括：调整当前图像采集周期的所述临时范围，当已采集的图像范围包含所述临时范围时，对已采集的图像范围和所述临时范围的并集进行图像采集。The method according to claim 4, further comprising: adjusting the temporary range of the current image acquisition cycle, and when the acquired image range includes the temporary range, performing image acquisition on the union of the acquired image range and the temporary range.
  7. 根据权利要求4-6中任一项所述的用于视觉传感器的控制方法,其特征在于,所述视觉传感器包括摄像头。The control method for a visual sensor according to any one of claims 4-6, wherein the visual sensor comprises a camera.
  8. 一种用于视觉传感器的控制指令生成装置,其特征在于,所述视觉传感器通过扫描而采集图像数据,所述装置包括:A control instruction generation device for a vision sensor, characterized in that the vision sensor collects image data by scanning, and the device includes:
    图像数据获取模块,用于获取图像数据;an image data acquisition module for acquiring image data;
    识别模块,其用于根据所述图像数据确定将要与车辆发生碰撞的对象;an identification module for determining an object that will collide with the vehicle according to the image data;
    控制指令生成模块,其用于根据所述对象生成图像采集范围调节指令;a control instruction generation module, which is used for generating an image acquisition range adjustment instruction according to the object;
    控制指令发送模块,其用于向所述视觉传感器发送所述图像采集范围调节指令,a control instruction sending module, which is used for sending the image acquisition range adjustment instruction to the visual sensor,
    所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像,所述临时范围小于所述预设范围,且包含所述对象所在的区域。The image acquisition range adjustment instruction is used to instruct the vision sensor that originally acquired the image according to the preset range to acquire the image according to the temporary range, and the temporary range is smaller than the preset range and includes the area where the object is located.
  9. 根据权利要求8所述的装置,其特征在于,所述临时范围为矩形。The apparatus of claim 8, wherein the temporary area is rectangular.
  10. 根据权利要求8或9所述的装置,其特征在于,所述视觉传感器包括摄像头。The apparatus of claim 8 or 9, wherein the visual sensor comprises a camera.
  11. 一种用于视觉传感器的控制装置,其特征在于,所述视觉传感器通过扫描而采集图像数据,所述装置包括:A control device for a vision sensor, characterized in that the vision sensor collects image data by scanning, and the device comprises:
    控制指令接收模块,其用于获取图像采集范围调节指令;a control instruction receiving module, which is used to obtain an image acquisition range adjustment instruction;
    控制模块,其用于根据所述图像采集范围调节指令控制视觉传感器调整图像采集范围,a control module, which is used to control the vision sensor to adjust the image acquisition range according to the image acquisition range adjustment instruction,
    所述图像采集范围调节指令用于指示原本按照预设范围采集图像的所述视觉传感器按照临时范围采集图像，所述临时范围小于所述预设范围，且包含被识别为将要与车辆发生碰撞的对象所在的区域。The image acquisition range adjustment instruction is used to instruct the vision sensor, which originally acquires images according to a preset range, to acquire images according to a temporary range, the temporary range being smaller than the preset range and including the area where the object identified as about to collide with the vehicle is located.
  12. 根据权利要求11所述的装置,其特征在于,所述临时范围为矩形。The apparatus of claim 11, wherein the temporary area is rectangular.
  13. 根据权利要求11所述装置，其特征在于，还包括：调整当前图像采集周期的所述临时范围，当已采集的图像范围包含所述临时范围时，对已采集的图像范围和所述临时范围的并集进行图像采集。The apparatus according to claim 11, further comprising: adjusting the temporary range of the current image acquisition cycle, and when the acquired image range includes the temporary range, performing image acquisition on the union of the acquired image range and the temporary range.
  14. 根据权利要求11-13中任一项所述的装置,其特征在于,所述视觉传感器包括摄像头。The apparatus of any one of claims 11-13, wherein the vision sensor comprises a camera.
PCT/CN2021/131695 2021-02-07 2021-11-19 Control instruction generation method and device, and control method and device for visual sensor WO2022166308A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110169629.9 2021-02-07
CN202110169629.9A CN114911219A (en) 2021-02-07 2021-02-07 Control instruction generation method and device for visual sensor, and control method and device

Publications (1)

Publication Number Publication Date
WO2022166308A1 true WO2022166308A1 (en) 2022-08-11

Family

ID=82741833

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131695 WO2022166308A1 (en) 2021-02-07 2021-11-19 Control instruction generation method and device, and control method and device for visual sensor

Country Status (2)

Country Link
CN (1) CN114911219A (en)
WO (1) WO2022166308A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040022416A1 (en) * 1993-08-11 2004-02-05 Lemelson Jerome H. Motor vehicle warning and control system and method
CN102096803A (en) * 2010-11-29 2011-06-15 吉林大学 Safe state recognition system for people on basis of machine vision
US8108147B1 (en) * 2009-02-06 2012-01-31 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for automatic omni-directional visual motion-based collision avoidance
CN203246465U (en) * 2013-05-07 2013-10-23 创研光电股份有限公司 Driving recorder with lane departure warning and front space warning functions
CN110502971A (en) * 2019-07-05 2019-11-26 江苏大学 Road vehicle recognition methods and system based on monocular vision
CN110855895A (en) * 2019-12-06 2020-02-28 深圳市大富科技股份有限公司 Camera shooting control method and terminal

Also Published As

Publication number Publication date
CN114911219A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
JP7380180B2 (en) Solid-state imaging device, imaging device, imaging method, and imaging program
JP7342197B2 (en) Imaging device and method of controlling the imaging device
JP6409680B2 (en) Driving support device and driving support method
US20200117926A1 (en) Apparatus, method, and system for controlling parking of vehicle
JP2008250503A (en) Operation support device
WO2010070920A1 (en) Device for generating image of surroundings of vehicle
JP2007172540A (en) Moving object discrimination system, moving object discrimination method, and computer program
US20220319192A1 (en) Driving assistance device, driving assistance method, and non-transitory computer-readable medium
JP5434277B2 (en) Driving support device and driving support method
WO2022166308A1 (en) Control instruction generation method and device, and control method and device for visual sensor
US20230245468A1 (en) Image processing device, mobile object control device, image processing method, and storage medium
WO2022153896A1 (en) Imaging device, image processing method, and image processing program
JP7467402B2 (en) IMAGE PROCESSING SYSTEM, MOBILE DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM
WO2022102425A1 (en) Signal processing device, and signal processing method
CN114119576A (en) Image processing method and device, electronic equipment and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21924304

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21924304

Country of ref document: EP

Kind code of ref document: A1