WO2021161614A1 - Image transmission system - Google Patents

Image transmission system

Info

Publication number
WO2021161614A1
WO2021161614A1 (PCT/JP2020/043733)
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
information
vehicle
control unit
Prior art date
Application number
PCT/JP2020/043733
Other languages
French (fr)
Japanese (ja)
Inventor
Naohiro Hiraiwa (直浩 平岩)
Original Assignee
Aisin AW Co., Ltd. (アイシン・エィ・ダブリュ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aisin AW Co., Ltd.
Publication of WO2021161614A1 publication Critical patent/WO2021161614A1/en

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled

Definitions

  • The present invention relates to an image transmission system.
  • Patent Document 1 discloses a technique for compressing a designated portion of captured image data (for example, the license plate of a traveling vehicle, a traffic light, or a wall) at a compression rate lower than that of other portions.
  • In that technique, when a license plate is designated, the license plate portion is compressed at a low rate.
  • However, uniformly applying low compression to specific objects often fails to reduce the amount of information for objects that are not important.
  • The present invention has been made in view of the above problems, and an object of the present invention is to provide a technique for reducing the possibility that the amount of information of an unimportant object becomes excessive.
  • To achieve this, the image transmission system comprises: an image capturing unit that is mounted on a vehicle and captures an image of the road on which the vehicle is traveling; an information reduction processing unit that, for an important object that is in a specific state among the objects present in the image, performs information reduction processing on the image so that the amount of information reduction per unit area of the important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object; and an image transmission unit that transmits the image after the information reduction processing to an external device.
  • FIG. 3A is a diagram showing an example of a captured image
  • FIG. 3B is a diagram showing a plane including a vehicle width axis and a vehicle length axis.
  • FIG. 4A shows the scene determination process for a specific lane congestion scene
  • FIGS. 4B to 4D are diagrams for explaining a specific lane congestion scene.
  • FIG. 5A shows the scene determination process for an obstacle existence scene
  • FIG. 5B is a diagram for explaining an obstacle existence scene.
  • FIG. 6A shows the scene determination process for a map non-recording scene
  • FIG. 6B is a diagram for explaining a map non-recording scene.
  • (1) Image transmission system configuration: (2) Image transmission processing: (2-1) Scene determination process (specific lane congestion scene): (2-2) Scene determination process (obstacle existence scene): (2-3) Scene determination process (map non-recording scene): (3) Other embodiments
  • FIG. 1 is a block diagram showing a configuration of a navigation system 10 including an image transmission system according to an embodiment of the present invention.
  • the navigation system 10 is provided in the vehicle, and includes a control unit 20 including a CPU, RAM, ROM, and the like, and a recording medium 30.
  • the control unit 20 can execute a program stored in the recording medium 30 or the ROM. Map information 30a is recorded in advance on the recording medium 30.
  • The map information 30a is information used for specifying the positions of intersections, for route guidance, and the like, and includes node data indicating the positions of nodes set on the roads on which the vehicle travels, shape interpolation point data indicating the positions of shape interpolation points that specify the shape of the road between nodes, link data indicating the connections between nodes, and feature data indicating the positions and shapes of features existing on the road and its surroundings.
  • the nodes indicate intersections.
  • the link data is associated with information indicating the number of lanes existing in the road section indicated by the link data and the width of the lane.
  • The position indicated by a node or a shape interpolation point indicates the position of the center line of the road section, and the position of each lane and the range in which each lane exists can be specified from that position, the number of lanes, and the lane width.
  • Feature data is data indicating the existence of various features.
  • the feature includes at least a traffic sign. That is, the feature data indicating the traffic sign includes the position of the traffic sign and the identification information of the traffic sign.
  • the position of the traffic sign may be indicated by coordinates (latitude, longitude, etc.), or may be indicated as a position on a road section indicated by a link, for example, by a distance from a node or the like.
  • The vehicle in this embodiment includes a camera 40, a GNSS receiving unit 41, a vehicle speed sensor 42, a gyro sensor 43, a user I/F unit 44, and a communication unit 45.
  • The GNSS receiving unit 41 is a device that receives radio waves from Global Navigation Satellite System satellites and outputs, via an interface (not shown), a signal for calculating the current position of the vehicle.
  • the control unit 20 acquires this signal and acquires the current position (latitude, longitude, etc.) of the vehicle in the coordinate system of the map.
  • the vehicle speed sensor 42 outputs a signal corresponding to the rotational speed of the wheels provided in the vehicle.
  • the control unit 20 acquires this signal via an interface (not shown) to acquire the vehicle speed.
  • the gyro sensor 43 detects the angular acceleration for turning in the horizontal plane of the vehicle and outputs a signal corresponding to the direction of the vehicle.
  • the control unit 20 acquires this signal and acquires the traveling direction of the vehicle.
  • The vehicle speed sensor 42, the gyro sensor 43, and the like are used to identify the traveling locus of the vehicle. In the present embodiment, the current position is specified based on the departure point and the traveling locus of the vehicle, and the current position so specified is corrected based on the output signal of the GNSS receiving unit 41.
  • the camera 40 is a device that acquires an image in the field of view directed to the front of the vehicle.
  • the optical axis of the camera 40 is fixed to the vehicle, and the direction of the optical axis may be known in the navigation system 10.
  • The camera 40 is attached to the vehicle in a posture in which the optical axis is perpendicular to the vehicle width direction and the area ahead of the vehicle in the traveling direction is included in the field of view.
  • the control unit 20 can detect other vehicles (peripheral vehicles) existing in the vicinity of the vehicle by acquiring the image output by the camera 40 and analyzing the image by extracting the feature amount or the like.
  • The user I/F unit 44 is an interface unit for inputting user instructions and providing various information to the user, and includes a touch panel display, a speaker, and the like (not shown). That is, the user I/F unit 44 includes an output unit for images and sound and an input unit for user instructions.
  • the communication unit 45 is a device for communicating with an external device. In the present embodiment, the control unit 20 communicates with the server 50 via the communication unit 45.
  • The control unit 20 receives input of a destination from the user via the input unit of the user I/F unit 44 by the function of a navigation program (not shown), and searches for a planned travel route from the current position of the vehicle to the destination based on the map information 30a. Further, the control unit 20 controls the user I/F unit 44 by the function of the navigation program and provides guidance for traveling along the planned travel route. In the present embodiment, the control unit 20 can execute a function of transmitting images taken by the camera 40 to the server 50 as an additional function of the navigation program.
  • the navigation program includes an image transmission program 21 for transmitting the image.
  • the image transmission program 21 includes an image capturing unit 21a, a scene determination unit 21b, an information reduction processing unit 21c, and an image transmission unit 21d in order to realize an image transmission function.
  • the image capturing unit 21a is a program module mounted on the vehicle and causing the control unit 20 to execute a function of capturing an image of the road on which the vehicle is traveling.
  • the control unit 20 controls the camera 40 in the process of traveling the vehicle, and photographs the landscape including the road in front of the vehicle.
  • the image output from the camera 40 by shooting is recorded on the recording medium 30 as image information 30b.
  • In the present embodiment, the control unit 20 acquires the image information 30b at regular intervals while the vehicle is traveling; of course, images may instead be taken under various other conditions, for example only on a specific road type or road section.
  • the scene determination unit 21b is a program module that causes the control unit 20 to execute a function of determining which of the predetermined scenes the image captured by the function of the image capture unit 21a is.
  • In the present embodiment, the server 50 acquires images and performs various analyses based on those images. For this reason, images having predetermined characteristics are targeted for analysis, and such characteristic images are defined in advance as scenes to be analyzed.
  • Any scene may be an analysis target and the scenes are not limited, but in the present embodiment the explanation mainly assumes that three types of scenes are analysis targets.
  • One of the three types of scenes is a scene in which a specific lane on the road is congested and there is no congestion in a lane other than the specific lane (referred to as a specific lane congestion scene).
  • The remaining two scenes are a scene in which an obstacle exists on the road on which the vehicle travels (referred to as an obstacle existence scene) and a scene in which a traffic sign included in the image is not recorded in the map information (referred to as a map non-recording scene).
  • The control unit 20 recognizes peripheral vehicles based on the image information 30b in order to identify whether or not the scene is a specific lane congestion scene. Further, the control unit 20 specifies the distance and direction of each peripheral vehicle as seen from the vehicle based on the image information 30b, and thereby specifies the relative position of the peripheral vehicle as seen from the vehicle. Further, the control unit 20 recognizes the white lines on the road based on the image information 30b and identifies the lane in which each peripheral vehicle exists. Then, the control unit 20 determines that there is a traffic jam in a specific lane when the peripheral vehicles existing in that lane form a convoy and their speed is equal to or less than a predetermined speed (for example, 20 km/h). When there is another lane without a traffic jam on the road on which the vehicle travels, the control unit 20 determines that the image indicated by the image information 30b is a specific lane congestion scene.
  • In order to identify whether or not the scene is an obstacle existence scene, the control unit 20 recognizes the area occupied by the image of the road on which the vehicle travels based on the image information 30b. Further, the control unit 20 detects areas surrounded by the image of the road and, if there is a stationary area of a predetermined size or more, determines that an obstacle on the road is photographed in that area. In this case, the control unit 20 determines that the image indicated by the image information 30b is an obstacle existence scene.
  • the control unit 20 recognizes a traffic sign based on the image information 30b in order to identify whether or not the scene is a map non-recording scene. Further, the control unit 20 identifies the current location of the vehicle when the image information 30b is captured, and searches for a traffic sign existing within a predetermined range from the current location with reference to the map information 30a. Then, when the recognized traffic sign is not included in the map information 30a, the control unit 20 determines that the image indicated by the image information 30b is a map non-recording scene.
  • the control unit 20 associates the scene identification information with the image information 30b.
  • The identification information may be any information indicating a scene, for example an ID. The same image information 30b may be determined to correspond to two or more scenes.
  • The information reduction processing unit 21c is a program module that causes the control unit 20 to execute a function of performing information reduction processing on an image so that the amount of information reduction per unit area of an important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object. That is, by the function of the information reduction processing unit 21c, the control unit 20 performs image processing that reduces the amount of information in order to reduce, as much as possible, the amount of communication when transmitting the image information 30b to the server 50.
  • As the amount of information in the image information 30b obtained by the camera 40 is reduced, objects become harder to identify and their detailed structure becomes harder to understand, so the image becomes harder to analyze. Therefore, in the present embodiment, the amount of information reduction is made relatively small for important parts of the image and relatively large for non-important parts.
  • The control unit 20 extracts, for each scene of the image indicated by the image information 30b, the important object in which a specific object is in the specific state. If the scene of the image is a specific lane congestion scene, the important objects are the vehicles existing in the specific lane. Therefore, when the scene of the image is a specific lane congestion scene, the control unit 20 identifies the images of vehicles existing in the specific lane among the peripheral vehicles as important objects and identifies the positions of those important objects in the image. If a vehicle existing in the specific lane is regarded as an important object in this way, the image transmission destination can perform an analysis focusing on the vehicles in the congested specific lane.
  • If the scene of the image is an obstacle existence scene, the control unit 20 specifies the image of the portion determined to be an obstacle when the scene was specified as an important object. In addition, the control unit 20 specifies the position of the important object in the image. If an obstacle on the road on which the vehicle travels is regarded as an important object in this way, the image transmission destination can perform an analysis focusing on the obstacle.
  • If the scene of the image is a map non-recording scene, the important object is a traffic sign that is not recorded in the map information 30a. Therefore, in this case, the control unit 20 identifies the image of the traffic sign recognized when specifying the scene as an important object. In addition, the control unit 20 specifies the position of the important object in the image. If a traffic sign not recorded in the map information 30a is regarded as an important object in this way, the image transmission destination can analyze the traffic sign and generate information indicating the traffic sign to be added to the map information 30a.
  • When the position of the important object is specified, the control unit 20 performs information reduction processing so that the important object receives a smaller amount of information reduction than the non-important objects.
  • Information reduction may be carried out by various methods, but in the present embodiment, it is carried out by compression processing.
  • The compression process may use various methods; in the present embodiment, compression is performed in the JPEG format. That is, the control unit 20 adjusts the compression ratio so that a rectangular region including an important object has a lower compression ratio than rectangular regions not including an important object, and JPEG-compresses the image information 30b.
  • The rectangles may be specified by various methods. For example, rectangles of a plurality of sizes may be defined in advance and the smallest rectangle that includes each important object selected.
  • Alternatively, rectangles of a predetermined size may be defined in advance and the minimum number of rectangles that include the important objects selected.
  • In the present embodiment, the image inside the bounding box, which will be described later, has a low compression rate, and the image not included in the bounding box has a high compression rate.
  • As a result, the amount of information reduction is suppressed for the important parts of the image, while the amount of information is reduced further for the non-important parts. Therefore, when analysis or the like is performed based on the image information 30b after information reduction, the possibility that the amount of information of the important object is insufficient is reduced.
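  • As a concrete illustration of this kind of non-uniform compression (a minimal sketch using OpenCV, not code from the patent; the box coordinates are assumed inputs), the whole frame can be heavily compressed first and the rectangles containing important objects pasted back from lightly compressed copies, so the information loss per unit area is smaller inside those rectangles:

      import cv2

      def nonuniform_jpeg(image, boxes, q_low=30, q_high=90):
          """Compress `image` so that regions listed in `boxes` lose less information.

          boxes: (x0, y0, x1, y1) rectangles around important objects."""
          # First pass: heavy compression of the entire frame.
          ok, low = cv2.imencode(".jpg", image, [cv2.IMWRITE_JPEG_QUALITY, q_low])
          frame = cv2.imdecode(low, cv2.IMREAD_COLOR)
          # Paste each important rectangle back from a lightly compressed copy,
          # so those pixels keep detail that the heavy pass would discard.
          for (x0, y0, x1, y1) in boxes:
              ok, hi = cv2.imencode(".jpg", image[y0:y1, x0:x1],
                                    [cv2.IMWRITE_JPEG_QUALITY, q_high])
              frame[y0:y1, x0:x1] = cv2.imdecode(hi, cv2.IMREAD_COLOR)
          # The final encode at high quality mostly re-packages the frame; the
          # non-important areas already lost information in the first pass.
          ok, payload = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, q_high])
          return payload.tobytes()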
  • the image transmission unit 21d is a program module that causes the control unit 20 to execute a function of transmitting an image after information reduction processing to an external device. That is, the control unit 20 controls the communication unit 45 by the function of the image transmission unit 21d, and transmits the compressed image information 30b to the server 50.
  • In the present embodiment, the image information 30b is associated with the scene identification information, so the server 50 can identify from that identification information which scene the image information 30b represents.
  • Since a scene is determined based on an important object being in a specific state, the scene identification information can be said to be information about the important object.
  • information indicating the position of the important object in the image is associated with the image.
  • Information indicating the position of the important object in the image can also be said to be information about the important object.
  • Various other forms of information indicating the position of the important object in the image are also conceivable.
  • The image after information reduction is thus associated with the scene identification information and the information about the important object and transmitted to the server 50. According to the above configuration, it is possible to transmit an image in which the amount of information of important objects is likely to be sufficient and the amount of information of non-important objects is unlikely to be excessive.
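  • As a sketch of how this association might look in practice (the wire format below is a hypothetical example, not one specified by the patent), the metadata and the compressed image can be packed into a single message:

      import json

      def build_payload(jpeg_bytes, scene_id, boxes):
          """Pack the reduced image together with scene identification
          information and important-object positions."""
          meta = {
              "scene": scene_id,                    # e.g. "LANE_CONGESTION"
              "important_objects": [{"bbox": list(b)} for b in boxes],
          }
          header = json.dumps(meta).encode("utf-8")
          # Length-prefixed JSON header followed by the JPEG body.
          return len(header).to_bytes(4, "big") + header + jpeg_bytes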
  • the server 50 includes a control unit including a CPU, ROM, RAM, etc. (not shown) and a recording medium.
  • the control unit of the server 50 can execute various programs recorded on the recording medium to execute various processes.
  • the control unit functions as an image receiving unit 50a and an image recording unit 50b by a program (not shown).
  • the image receiving unit 50a causes the control unit to execute a function of receiving an image. That is, when an image is transmitted from the navigation system 10, the control unit receives the image via a communication unit (not shown) by the function of the image receiving unit 50a.
  • the image recording unit 50b causes the control unit to execute a function of associating an image with information about an important object in the image and recording the information on a recording medium. That is, when the image is received, the control unit saves the image in the recording medium by the function of the image recording unit 50b.
  • the image is associated with scene identification information and information about the important object indicating the position of the important object in the image. Therefore, the server 50 records the information about the important object included in the image on the recording medium in association with the image.
  • As a result, based on the information about the important object, the server 50 can identify the important object, the type of scene, and the like in the image, and can perform image analysis and the like related to the important object.
  • In step S100, the control unit 20 acquires the image information 30b and performs correction of distortion caused by the lens and the like.
  • the control unit 20 executes image recognition processing for surrounding vehicles and traffic signs by using YOLO (You Only Look Once), pattern matching, and the like. As a result, the control unit 20 detects the image of the surrounding vehicle and the image of the traffic sign included in the image information 30b.
  • FIG. 3A is a diagram showing an example of an image I taken by the camera 40 after distortion correction has been performed. This example is one in which a peripheral vehicle is recognized by the object recognition process.
  • the bounding box B is a rectangular area surrounding the peripheral vehicles detected from the image I.
  • the size and position of the bounding box B are represented by, for example, the coordinates of the upper left vertex and the coordinates of the lower right vertex of the bounding box B.
  • the control unit 20 acquires the height h (number of pixels) of the bounding box B and the representative coordinates Bo (x, y) of the bounding box B from the coordinates of the two diagonal vertices of the bounding box B.
  • the representative coordinates Bo are, for example, the center coordinates of the bounding box B (midpoints in the width direction and the height direction) and the like.
  • the control unit 20 specifies the relative orientation of the surrounding vehicles as seen from the vehicle based on the position of the representative coordinate Bo of the bounding box B. Further, the control unit 20 specifies the distance from the vehicle to the peripheral vehicle based on the height h of the bounding box B and the type of the peripheral vehicle.
  • In the present embodiment, each coordinate in the image I is associated with the relative orientation, with respect to the vehicle, of the object appearing at that coordinate, and information indicating this correspondence is stored in the recording medium 30. Based on this correspondence, the control unit 20 acquires the relative orientation of the peripheral vehicle appearing at the representative coordinates Bo.
  • the control unit 20 defines a vehicle coordinate system based on the vehicle.
  • the vehicle coordinate system is a coordinate system defined by a vehicle width axis (X axis shown in FIG. 3B) and a vehicle length axis (Y axis shown in FIG. 3B) that are orthogonal to each other.
  • FIG. 3B shows a plane including the vehicle width axis and the vehicle length axis.
  • the point O is the origin of the vehicle coordinate system in the vehicle.
  • the vehicle length axis is parallel to the link indicating the road section on which the vehicle is traveling.
  • The relative orientation is represented, for example, by the angle θ formed between the vehicle length axis and the straight line SL connecting the origin O of the vehicle coordinate system to the point corresponding to the representative coordinates Bo (for example, a negative θ indicates that the object is on the left side of the vehicle length axis when facing forward in the traveling direction, and a positive θ indicates the right side).
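  • For illustration (a minimal sketch, not code from the patent), a pair (θ, D) of relative orientation and linear distance converts to the vehicle coordinate system of FIG. 3B as follows:

      import math

      def to_vehicle_coords(theta_deg, distance_m):
          """Convert relative orientation θ and linear distance D to the
          vehicle coordinate system (X: width axis, Y: length axis).
          Negative θ lies left of the length axis, positive θ right."""
          theta = math.radians(theta_deg)
          x = distance_m * math.sin(theta)  # lateral offset (width axis)
          y = distance_m * math.cos(theta)  # forward offset (length axis)
          return x, y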
  • Further, the control unit 20 identifies the type of the peripheral vehicle in the bounding box B by the object recognition process.
  • the type of the peripheral vehicle may be any type indicating the size of the vehicle body, and may be classified into, for example, a freight vehicle, a passenger car, a two-wheeled vehicle, and the like. Further, in the present embodiment, a typical vehicle height (for example, 1.5 [m] in the case of a passenger car) is specified for each type of peripheral vehicle. Further, the linear distance between the vehicle and the peripheral vehicle and the height h of the bounding box B when the peripheral vehicle is photographed by the camera 40 are measured in advance. Information indicating the correspondence between the height h of the bounding box B and the linear distance with respect to the origin of the vehicle coordinate system is stored in the recording medium 30 for each type of vehicle.
  • For example, in the case of a passenger car, one value of the height h of the bounding box B is associated with a linear distance of D1 [m], and another value is associated with a linear distance of D2 [m].
  • Information indicating the correspondence for other types, such as freight vehicles and two-wheeled vehicles, is likewise stored in the recording medium 30.
  • The control unit 20 calculates the linear distance D (see FIG. 3B) corresponding to the height h of the bounding box B based on this correspondence.
  • As described above, the control unit 20 acquires, based on the image taken by the camera 40, the relative orientation θ of each peripheral vehicle included in the image and its linear distance D from the vehicle.
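  • The height-to-distance correspondence can be held as a per-vehicle-type lookup table and interpolated, as in the following sketch (the table values are hypothetical placeholders, not measurements from the patent):

      import bisect

      # Hypothetical pre-measured table for passenger cars:
      # bounding-box height [px] -> straight-line distance [m].
      PASSENGER_CAR_TABLE = [(30, 40.0), (60, 20.0), (120, 10.0), (240, 5.0)]

      def distance_from_bbox_height(h, table=PASSENGER_CAR_TABLE):
          """Interpolate the linear distance D for a bounding-box height h."""
          heights = [row[0] for row in table]
          i = bisect.bisect_left(heights, h)
          if i == 0:
              return table[0][1]
          if i == len(table):
              return table[-1][1]
          (h0, d0), (h1, d1) = table[i - 1], table[i]
          # Linear interpolation between the two nearest measured points.
          return d0 + (d1 - d0) * (h - h0) / (h1 - h0)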
  • In the present embodiment, the control unit 20 gives the same identification information to the same peripheral vehicle while that vehicle continues to be photographed. Therefore, the control unit 20 identifies the characteristics (for example, colors and patterns within the bounding box B) of the image of each peripheral vehicle for which the relative orientation θ and the linear distance D have been specified, associates identification information corresponding to those characteristics (for example, a number) with the information indicating the relative orientation θ, the linear distance D, and the type of the peripheral vehicle, and records them on the recording medium 30.
  • In order to give the same identification information to the same peripheral vehicle each time an image is taken, the control unit 20 refers to the recording medium 30 and determines whether or not the peripheral vehicle recognized in the latest image matches the characteristics of the image associated with a peripheral vehicle recognized in the immediately preceding image. If they match, the control unit 20 gives the peripheral vehicle recognized in the latest image the identification information that was given to that peripheral vehicle in the immediately preceding image. As a result, the same identification information is given to peripheral vehicles that are continuously photographed by the camera 40.
  • In step S100, in addition to the recognition of peripheral vehicles as described above, recognition of traffic signs is also performed.
  • When the control unit 20 recognizes a traffic sign based on the characteristics of each traffic sign, it specifies a bounding box indicating the traffic sign.
  • FIG. 6B shows an example of image information 30b including a traffic sign.
  • the control unit 20 recognizes the traffic sign and identifies a bounding box B indicating the presence of the traffic sign and its position in the image. Then, the control unit 20 associates the identification information of the traffic sign with the information indicating the position of the bounding box B and records it on the recording medium 30.
  • The white line recognition process may be carried out by various methods. For example, the control unit 20 may execute a straight line detection process using a Hough transform or the like and recognize a white line when the color of the area sandwiched between the detected straight lines is white and the width of the white area is within a predetermined distance. In the present embodiment, a white line existing at the widthwise end of a lane and extending in the vehicle traveling direction (a solid or broken white line indicating a lane boundary) is assumed to be recognized, so conditions such as the line extending toward a vanishing point may be added.
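  • A minimal sketch of such a white line detector (using OpenCV; the thresholds are illustrative assumptions, not values from the patent) could look like this:

      import cv2
      import numpy as np

      def detect_lane_lines(bgr):
          """Rough lane-boundary detection: Hough lines over an edge image,
          keeping lines that run through bright (white) pixels and are not
          near-horizontal, i.e. that slant toward a vanishing point."""
          gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 80, 160)
          lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                  minLineLength=40, maxLineGap=10)
          lane_lines = []
          for x1, y1, x2, y2 in ([] if lines is None else lines[:, 0]):
              if abs(y2 - y1) < abs(x2 - x1) * 0.3:
                  continue  # near-horizontal: unlikely to be a lane boundary
              # Check that the line midpoint lies on bright (white) pixels.
              mid = gray[(y1 + y2) // 2, (x1 + x2) // 2]
              if mid > 180:
                  lane_lines.append((x1, y1, x2, y2))
          return lane_lines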
  • the control unit 20 executes the area recognition process by the function of the scene determination unit 21b (step S110).
  • the area recognition process is a process of recognizing a continuous area in the image indicated by the image information 30b. That is, among the objects in the image, there is a high possibility that the road surface, the sky, etc. form a substantially uniform continuous region.
  • In the present embodiment, the control unit 20 specifies regions in which pixels whose color change (for example, the color difference specified by brightness or saturation) is within a predetermined range are continuous. Of course, various other conditions are conceivable; for example, a region in which pixels within a specific color range (for example, a range preset as a road surface color) are continuous may be specified.
  • The control unit 20 specifies a continuous region in the lower part of the image that includes the lower edge of the image as the image region of the road surface on which the vehicle travels. Then, information indicating the region of the road surface image is recorded on the recording medium 30.
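  • One simple way to grow such a continuous region (a rough stand-in for this area recognition, with an assumed color tolerance) is a flood fill seeded at the bottom edge of the image:

      import cv2
      import numpy as np

      def road_surface_region(bgr, tol=12):
          """Grow a region of similar color upward from the bottom edge
          of the image and treat it as the road surface."""
          h, w = bgr.shape[:2]
          mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a border
          seed = (w // 2, h - 1)                     # bottom center of image
          cv2.floodFill(bgr.copy(), mask, seed, (255, 255, 255),
                        loDiff=(tol, tol, tol), upDiff=(tol, tol, tol),
                        flags=4 | cv2.FLOODFILL_MASK_ONLY)
          return mask[1:-1, 1:-1].astype(bool)       # True where road surface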
  • the control unit 20 determines whether or not any of the object, the white line, and the area is recognized by the function of the scene determination unit 21b (step S120). That is, if the image taken by the camera 40 does not include surrounding vehicles and traffic signs, the control unit 20 does not determine that the object has been recognized. Further, if the image taken by the camera 40 does not include a white line existing at the end in the width direction of the lane and extending in the vehicle traveling direction, the control unit 20 does not determine that the white line has been recognized. Further, the control unit 20 does not determine that the area is recognized when the image taken by the camera 40 does not include an image of the road surface on which the vehicle travels.
  • If none of them is recognized, the control unit 20 ends the image transmission process. That is, if none of a peripheral vehicle, a traffic sign, a white line forming a lane boundary, or the road surface on which the vehicle travels is recognized in the image taken by the camera 40, the image is not targeted for transmission.
  • If any of them is recognized, the control unit 20 executes the scene determination process by the function of the scene determination unit 21b (step S125).
  • the scene determination process is a process for determining whether the image information 30b corresponds to any of a specific lane congestion scene, an obstacle existence scene, and a map non-recording scene, or does not correspond to any of these. The details of the scene determination process will be described later.
  • When a scene is determined, the image information 30b is associated with identification information indicating the scene.
  • the control unit 20 determines whether or not the image information 30b includes an important object by the function of the information reduction processing unit 21c (step S130).
  • In the present embodiment, a scene is determined based on a specific object being in a specific state, and the object in that specific state is an important object. Therefore, when the image information 30b has identification information indicating any of a specific lane congestion scene, an obstacle existence scene, or a map non-recording scene associated with it, the control unit 20 determines that an important object is included. Of course, a process of detecting an important object directly from the image information 30b may be performed instead.
  • If it is not determined that an important object is included, the control unit 20 performs uniform compression by the function of the information reduction processing unit 21c (step S135). In uniform compression, the image is compressed so that the compression ratio is common over the entire area of the image information 30b.
  • When it is determined in step S130 that an important object is included, the control unit 20 performs non-uniform compression by the function of the information reduction processing unit 21c (step S140). That is, the control unit 20 compresses the image inside the bounding box B, or the image inside a rectangle of a predetermined size that includes the bounding box B, at a low compression rate, and compresses the rest of the image at a higher compression rate.
  • the compression rate may be specified by various methods, or may be selected according to the content of the image and the communication situation.
  • the control unit 20 transmits the image information 30b to the server 50 by the function of the image transmission unit 21d (step S145). That is, the control unit 20 controls the communication unit 45 and transmits the compressed image information 30b to the server 50 in a state in which the scene identification information is associated with the image information 30b.
  • In the present embodiment, in addition to the scene identification information, various other information is transmitted in association with the image information 30b: the specific lane in which the traffic jam is occurring, the identification information corresponding to the characteristics of each peripheral vehicle, the relative orientation θ and linear distance D of each peripheral vehicle, information indicating the types of the peripheral vehicles, information indicating traffic signs, information indicating the positions of the bounding boxes, information indicating the positions of white lines and regions, the current location of the vehicle at the time of image capture, and the like.
  • FIG. 4A shows the scene determination process for determining whether or not the scene of the image information 30b is a specific lane congestion scene.
  • the scene determination process is executed when a peripheral vehicle is recognized in the object recognition process in step S100.
  • The control unit 20 acquires, for a fixed period before the present, the relative orientations of the peripheral vehicles, their linear distances, and the identification information corresponding to their features (step S200). That is, the control unit 20 acquires the relative orientation θ, the linear distance D, and the identification information corresponding to the features of each peripheral vehicle recognized in the object recognition process of step S100.
  • Since this processing is performed every time the image information 30b is acquired, the control unit 20 acquires, in chronological order over the fixed period, the relative orientation θ, the linear distance D, and the identification information corresponding to the features, based on the results of the object recognition processing performed on the image information 30b captured within that fixed period before the present.
  • Next, the control unit 20 acquires the current location of the vehicle (step S205). That is, the control unit 20 acquires the current location of the vehicle (the current location when the image information 30b was taken by the camera 40) based on the output signals of the GNSS receiving unit 41, the vehicle speed sensor 42, and the gyro sensor 43.
  • Next, the control unit 20 acquires the speeds of the peripheral vehicles (step S210). That is, since the relative orientation θ and the linear distance D from the vehicle are obtained for each peripheral vehicle by the process of step S100, the control unit 20 specifies the relative moving speed of each peripheral vehicle as seen from the vehicle (dd shown in FIG. 3B) based on the peripheral vehicle data for several frames before the present. Then, the control unit 20 identifies the current speed of the vehicle (for example, the value detected by the vehicle speed sensor 42) and acquires the speed of each peripheral vehicle (its speed with respect to the road) based on the current speed and the relative moving speed of each peripheral vehicle.
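  • A simplified sketch of this speed estimation (assuming straight-line travel and a fixed frame interval; the function and its inputs are illustrative, not from the patent) is shown below:

      import math

      def peripheral_speed(track, own_speed_kmh, dt):
          """Estimate a peripheral vehicle's road speed from tracked
          (θ [deg], D [m]) observations taken dt seconds apart.

          track: list of (theta_deg, distance_m), oldest first."""
          def pos(theta_deg, d):
              t = math.radians(theta_deg)
              return d * math.sin(t), d * math.cos(t)  # vehicle coordinates
          (x0, y0), (x1, y1) = pos(*track[0]), pos(*track[-1])
          frames = len(track) - 1
          # Relative velocity of the peripheral vehicle as seen from our vehicle.
          vx = (x1 - x0) / (frames * dt)
          vy = (y1 - y0) / (frames * dt)
          own_ms = own_speed_kmh / 3.6
          # Add our own forward speed to the longitudinal component.
          speed_ms = math.hypot(vx, vy + own_ms)
          return speed_ms * 3.6  # km/h, speed with respect to the road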
  • Next, the control unit 20 identifies the lane in which each peripheral vehicle is traveling (step S215). That is, the control unit 20 refers to the map information 30a and specifies the lane configuration of the road section containing the current location of the vehicle acquired in step S205. Further, the control unit 20 identifies which of the lanes shown in the map information 30a the vehicle is traveling in, based on the white lines recognized in step S105. When no lane is recognized in step S105, the control unit 20 performs processing such as assuming that the peripheral vehicles and the vehicle are traveling in the same lane.
  • FIGS. 4B to 4D are diagrams schematically showing the vehicle C and the peripheral vehicles 1 to 5 existing around the vehicle C.
  • In this example, the number of lanes in the road section in which the vehicle is traveling is three. In the leftmost lane, the left boundary line is a solid white line and the right boundary line is a broken white line. In the center lane, the left and right boundary lines are broken white lines. In the rightmost lane, the right boundary line is a solid white line and the left boundary line is a broken white line.
  • In this case, the control unit 20 identifies that the vehicle is traveling in the center lane based on the fact that the nearest white lines existing to the left and right of the vehicle are broken lines. Then, the control unit 20 specifies the relative position of each peripheral vehicle as seen from the vehicle based on its relative orientation θ and linear distance D. Further, the control unit 20 specifies the range of each lane existing around the vehicle based on the number of lanes and the lane width indicated by the map information 30a, and specifies the lane in which each peripheral vehicle is traveling by determining which lane each relative position falls within. Of course, the lane in which a peripheral vehicle is traveling may instead be identified based on the image.
  • Next, the control unit 20 determines whether or not a convoy exists in a specific lane (step S220). That is, when a predetermined number or more (for example, three or more) of peripheral vehicles exist in the same lane, the control unit 20 considers that those peripheral vehicles form a convoy. When there is a lane in which a convoy is formed, the control unit 20 treats that lane as the specific lane and determines that a convoy exists in the specific lane. If it is not determined in step S220 that a convoy exists in a specific lane, the control unit 20 skips steps S225 and S230.
  • Next, the control unit 20 determines whether or not the vehicle speed is equal to or less than a threshold value (step S225). That is, the control unit 20 takes the speeds of the peripheral vehicles acquired in step S210 and determines whether or not the speed of the peripheral vehicles forming the convoy (for example, their average speed) is equal to or less than a predetermined threshold value.
  • The threshold value is a low speed (for example, 20 km/h) for determining whether or not there is a traffic jam.
  • If it is not determined in step S225 that the vehicle speed is equal to or less than the threshold value, the control unit 20 skips step S230.
  • If it is determined in step S225 that the vehicle speed is equal to or less than the threshold value, the control unit 20 determines that the image information 30b is a specific lane congestion scene (step S230). That is, the control unit 20 associates the image information 30b with information indicating that the scene is a specific lane congestion scene.
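  • Putting steps S215 to S230 together, a compact sketch of the decision (with assumed input dictionaries; the test for "another lane without a jam" is simplified here) might be:

      def is_specific_lane_congestion(lane_of, speed_of, own_lane,
                                      min_convoy=3, jam_kmh=20.0):
          """Decide the specific lane congestion scene of FIG. 4A.

          lane_of:  dict mapping vehicle id -> lane index
          speed_of: dict mapping vehicle id -> road speed [km/h]
          Returns the congested lane index, or None."""
          lanes = {}
          for vid, lane in lane_of.items():
              lanes.setdefault(lane, []).append(vid)

          def jammed(lane):
              vids = lanes.get(lane, [])
              if len(vids) < min_convoy:
                  return False                 # no convoy in this lane (S220)
              avg = sum(speed_of[v] for v in vids) / len(vids)
              return avg <= jam_kmh            # slow convoy (S225)

          all_lanes = set(lanes) | {own_lane}
          for lane in lanes:
              if jammed(lane) and any(not jammed(o) for o in all_lanes - {lane}):
                  return lane                  # S230: specific lane congestion
          return None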
  • FIG. 5A shows the scene determination process for determining whether or not the scene of the image information 30b is an obstacle existence scene.
  • the control unit 20 acquires an image area of the road surface (step S300). That is, the control unit 20 acquires an image region of the road surface from the regions recognized as continuous by the region recognition process in step S110.
  • FIG. 5B shows an example in which the image information 30b is an image I in which an obstacle Ob exists on the road surface.
  • When step S300 is executed in this example, the portion of the road surface shown in gray is acquired as the region Zr of the road surface image.
  • Next, the control unit 20 identifies non-road areas within the region of the road surface image (step S305). That is, the control unit 20 extracts non-road-surface portions surrounded by the road surface image. At this time, the control unit 20 excludes the white lines recognized by the white line recognition process and regards the remaining portions as non-road areas. In the example shown in FIG. 5B, the obstacle Ob remains and is regarded as a non-road area.
  • the control unit 20 determines whether or not the area of the non-road area is equal to or greater than the threshold value (step S310). That is, when the non-road area on the road surface is larger than a certain size, the control unit 20 considers that the non-road area is likely to be an image of an obstacle.
  • the threshold value may be defined in advance as a value for determining whether or not it is an obstacle.
  • If it is not determined in step S310 that the area of the non-road region is equal to or greater than the threshold value, the control unit 20 skips step S315.
  • If it is determined in step S310 that the area of the non-road region is equal to or greater than the threshold value, the control unit 20 determines that the image information 30b is an obstacle existence scene (step S315). That is, the control unit 20 associates the image information 30b with information indicating that the scene is an obstacle existence scene. Since the example shown in FIG. 5B is one in which an obstacle Ob exists on the road surface, in this example the area of the non-road region is determined to be equal to or greater than the threshold value, and the image I is determined to be an obstacle existence scene.
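  • Steps S300 to S315 can be sketched with binary masks as follows (using SciPy; the area threshold is an assumed placeholder, and both masks are assumed boolean arrays):

      import numpy as np
      from scipy import ndimage

      def is_obstacle_scene(road_mask, white_line_mask, min_area_px=400):
          """Look for a sufficiently large non-road region enclosed by the
          road surface, excluding recognized white lines (S305 to S315)."""
          # Pixels enclosed by the road surface but not part of it.
          holes = ndimage.binary_fill_holes(road_mask) & ~road_mask
          holes &= ~white_line_mask        # white lines are not obstacles
          labels, n = ndimage.label(holes)
          sizes = ndimage.sum(holes, labels, range(1, n + 1))
          return bool((np.asarray(sizes) >= min_area_px).any())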
  • The determination as to whether or not an obstacle exists may be performed by various other methods; for example, a configuration may be adopted in which obstacles are recognized by object recognition and, when an obstacle is recognized, the scene is determined to be an obstacle existence scene.
  • FIG. 6A shows the scene determination process for determining whether or not the scene of the image information 30b is a map non-recording scene.
  • the control unit 20 acquires the current location of the vehicle (step S400). That is, the control unit 20 acquires the current location of the vehicle based on the output signals of the GNSS receiving unit 41, the vehicle speed sensor 42, and the gyro sensor 43.
  • Next, the control unit 20 acquires the traffic signs around the current location based on the map information (step S405). That is, the control unit 20 refers to the map information 30a and acquires the positions and identification information of the traffic signs existing within a predetermined distance from the current location acquired in step S400.
  • Next, the control unit 20 extracts the difference from the recognition result (step S410). That is, the control unit 20 refers to the recording medium 30 and acquires, as the recognition result, the identification information of the traffic signs detected from the image information 30b by the object recognition process of step S100.
  • FIG. 6B shows an example of image information 30b including a traffic sign prohibiting turning.
  • In this example, the identification information of the traffic sign prohibiting turning is recorded on the recording medium 30 as a recognition result, so the control unit 20 acquires that identification information as the recognition result. Further, the control unit 20 compares the traffic sign identification information obtained as the recognition result with the traffic sign identification information acquired in step S405. When there is a traffic sign that exists in the recognition result but does not exist in the map information 30a, the control unit 20 extracts the identification information of that traffic sign as a difference.
  • Next, the control unit 20 determines whether or not there is a difference (step S415). That is, when a difference has been extracted by the process of step S410, the control unit 20 determines that there is a difference. If it is not determined in step S415 that there is a difference, the control unit 20 skips step S420.
  • If it is determined in step S415 that there is a difference, the control unit 20 determines that the image information 30b is a map non-recording scene (step S420). That is, the control unit 20 associates the image information 30b with information indicating that the scene is a map non-recording scene. As in the example shown in FIG. 6B, when the traffic sign prohibiting turning is recognized but is not recorded in the map information 30a, it is determined in step S415 that there is a difference, and the image I is therefore determined to be a map non-recording scene.
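  • Step S410 amounts to a set difference, as in this minimal sketch (the sign IDs are assumed to be comparable identifiers; the example values are hypothetical):

      def map_non_recording_diff(recognized_signs, map_signs):
          """Signs recognized in the image whose identification information
          is absent from the map information (step S410)."""
          return set(recognized_signs) - set(map_signs)

      # A non-empty difference means the image is a map non-recording scene.
      diff = map_non_recording_diff({"no_u_turn", "stop"}, {"stop"})
      is_map_non_recording = bool(diff)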
  • Furthermore, the image transmission system may be a device mounted on a vehicle or the like, a device realized by a portable terminal, or a system realized by a plurality of devices (for example, the control unit in the navigation system and a control unit in the camera 40).
  • At least a part of the image capturing unit 21a, the scene determination unit 21b, the information reduction processing unit 21c, and the image transmission unit 21d constituting the image transmission system may be divided into a plurality of devices and exist.
  • some configurations of the above-described embodiments may be omitted, and the order of processing may be changed or omitted.
  • the number of scenes to be determined by the scene determination unit 21b may be larger or smaller.
  • the judgment order of the scenes may be changed.
  • The image capturing unit only needs to be mounted on the vehicle and able to capture an image of the road on which the vehicle is traveling; that is, it suffices if the surroundings of the vehicle can be imaged while the vehicle is traveling on the road.
  • Various devices can be assumed as the devices for acquiring images. For example, a camera mounted on a vehicle, a camera mounted on a terminal used in a vehicle interior, or the like can be assumed.
  • the image may be an image of the road on which the vehicle is located, and may include an image of the surroundings thereof.
  • The scenery in front of the vehicle may be photographed, or the scenery to the rear or side may be photographed.
  • The information reduction processing unit only needs to be able to perform information reduction processing on the image so that the amount of information reduction per unit area of the important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object. That is, it is sufficient that the information reduction processing unit can adjust the amount of information for each area of the image and can perform the information reduction processing so that the amount of information reduction of the important object is relatively smaller than that of other objects.
  • The amount of information per unit area is represented by, for example, the number of bytes of data required to represent the image in that unit area, and the amount of information can be adjusted by performing image processing on the image.
  • The smaller the amount of information reduction from the captured image, the closer the image remains to what the captured image originally represented, and the higher the possibility that image processing such as image recognition can be performed accurately.
  • the information reduction process for reducing the amount of information per unit area is typically a compression process, but it may be a trimming process, or a trimming process and a compression process may be used in combination.
  • information reduction processing may be performed to leave important objects and trim other objects.
  • Further, a portion not used for analysis, such as the sky, may be trimmed from the captured image.
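  • As a small sketch of trimming-based reduction (assuming a boolean road-surface mask as input, not a method prescribed by the patent), rows above the highest road pixel can simply be cut away:

      import numpy as np

      def trim_sky(image, road_mask):
          """Cut away image rows above the highest road-surface pixel,
          discarding sky and other areas not used for analysis."""
          rows = np.nonzero(road_mask.any(axis=1))[0]
          top = int(rows.min()) if rows.size else 0
          return image[top:, :]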
  • The area in which the amount of information is adjusted may be the area enclosed by the outline of the important object, or an area of a predetermined shape that includes the important object (for example, a rectangular area).
  • The important object may be any object that can be of interest in the image, and may be determined in advance.
  • the important object is not limited to the object specified to be in a specific state according to the judgment of the scene. Therefore, the scene may not be determined, and the presence or absence of important objects may be determined by applying pattern matching, feature extraction, YOLO, or the like from each image.
  • An important object is an object for which excessive reduction of the amount of information should be suppressed. An object is recognized as an important object when it is a specific object that can appear in the image and that specific object is in a predetermined specific state.
  • The specific state of the important object need only be determined in advance, and various objects in various states other than the above examples can be important objects.
  • An important object is an object whose importance changes depending on its position in the image, the situation of surrounding objects, its relationship to the map information, and so on, even among objects of the same type. That is, even if an object is a vehicle, whether or not it is an important object may change depending on the traffic situation; for example, a vehicle in a lane that is not congested is regarded as a non-important object. Similarly, even if an object is an obstacle, an obstacle existing on the road surface on which the vehicle travels is an important object while one existing on the sidewalk is a non-important object; whether an object is important may thus change depending on the situation.
  • Further, even if the object is a traffic sign, when the traffic sign is recorded in the map information it is regarded as a non-important object; whether or not it is an important object thus changes in relation to the map information.
  • The non-important objects may be any objects other than the important object, and any subject outside the outline of the important object can be a non-important object. Therefore, various objects recognized as individual entities, such as pedestrians and vehicles, can be non-important objects, and objects that can extend continuously through an image, such as the sky and sidewalks, can also be non-important objects.
  • The amount of information reduction only needs to differ between the important object and the non-important objects; as described above, it may be adjusted for each rectangular area, or it may change between the inside and outside of the outline of the important object.
  • The image transmission unit only needs to be able to transmit the image after the information reduction processing to an external device. When an image is transmitted and used by the destination device, a large amount of information lowers the communication speed and raises the communication cost; therefore, when an image is to be transmitted, a small amount of information is preferable.
  • Whether or not an object is an important object can change depending on whether or not it is an object of interest in the image. By adjusting the amount of information reduction according to importance, the amount of information to be communicated can therefore be reduced while still enabling analysis suited to the purpose of the image; information can thus be reduced efficiently.
  • the compression format is not limited to the JPEG format as in the above-described embodiment, and may be various formats.
  • various compression formats such as JPEG, PNG, and GIF can be adopted.
  • the image is not limited to a still image and may be a moving image.
  • various video compression formats can be adopted.
  • a plurality of compression formats may be used in combination, and different compression formats may be adopted for each region.
  • The scene determination unit only needs to be able to determine which of the predetermined scenes the image corresponds to. That is, the process for acquiring the information used to determine whether or not an image corresponds to a predetermined scene, and the determination criteria, are defined in advance, and it suffices if the scene determination unit can determine, based on at least the image, whether or not the image corresponds to a predetermined scene.
  • The scenes may be at least one of the three types included in the above-described embodiment, or may include other scenes. Other scenes include, for example, a traffic jam not limited to a specific lane, accident occurrence scenes, scenes with relatively many blind spots, and scenes with relatively many road changes (curve sections, etc.).
  • the specific lane may be any lane when viewed from the vehicle. Therefore, it is not limited to the scene in which the leftmost lane of the left-hand traffic is congested and the other lanes are not congested as in the above-described embodiment.
  • an arbitrary lane on a road section traveling in the same traveling direction as the traveling direction of the vehicle may be a specific lane.
  • any lane on the road section in which the vehicle travels in the direction opposite to the traveling direction of the vehicle may be a specific lane.
  • Further, for example, the amount of information reduction per unit area of the vehicle at the end of the convoy may be made smaller than the amount of information reduction per unit area of vehicles other than the vehicle at the end.
  • The obstacle may be any feature that hinders the travel of the vehicle by being stationary on the road, so the type of obstacle is not limited. Further, the process for detecting an obstacle is not limited to the above-mentioned process; for example, a feature having specific characteristics may be identified as an obstacle, or an obstacle may be detected by a machine learning model trained on images of various obstacles.
  • The feature determined against the map information is not limited to traffic signs; any feature whose existence can be recorded in the map information can be targeted. For example, it may be determined whether various facilities included in the image are absent from the map information, or whether the structure of the road (a new road, etc.) differs from the structure shown in the map information.
  • In the above-described embodiment, the compression ratio of important objects is the same even when a plurality of important objects exist, but the compression ratio may be changed according to importance. Further, the compression ratio may change depending on the position of the important object. That is, the control unit 20 may adjust the compression rate so that the higher the importance of an important object, the smaller the amount of information reduction per unit area.
  • The importance may be determined by various factors; for example, important objects in different scenes may have compression ratios that differ from each other. Further, the analysis target within an important object, for example the central portion of a traffic sign, may have a lower compression ratio than the other portions.
  • The method of processing an image so that the amount of information reduction of important objects is smaller than that of non-important objects can also be implemented as a method or as a program executed by a computer.
  • The above system, program, and method may be realized as a single device, may be realized using parts shared with other components provided in the vehicle, and include various other aspects.
  • Some parts may be implemented in software and some in hardware, and this division can be changed as appropriate.
  • The invention also holds as a recording medium for a program that controls the system.
  • The recording medium of the program may be a magnetic recording medium or a semiconductor memory, and any recording medium developed in the future can be considered in exactly the same way.
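As a rough illustration of the importance-dependent compression described in the bullets above, the following sketch maps a hypothetical importance score to a JPEG quality setting. The function name, the score scale, and the quality bounds are assumptions made for illustration and are not part of the disclosure.

```python
# Sketch: map a hypothetical importance score (0.0 to 1.0) to a JPEG quality
# value, so that more important objects lose less information.
def jpeg_quality_for(importance: float,
                     min_quality: int = 30,
                     max_quality: int = 95) -> int:
    """Higher importance gives higher JPEG quality (less information reduction)."""
    importance = max(0.0, min(1.0, importance))
    return round(min_quality + (max_quality - min_quality) * importance)

# Example: the central portion of a traffic sign (assumed importance 0.9)
# is compressed more gently than a background vehicle (assumed 0.2).
assert jpeg_quality_for(0.9) > jpeg_quality_for(0.2)
```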

Abstract

[Problem] To provide a technology for reducing the possibility that the amount of information of an unimportant object becomes excessive. [Solution] An image transmission system is configured to comprise: an image capturing unit which is mounted on a vehicle and captures an image of the road on which the vehicle is traveling; an information reduction processing unit which, for important objects in a specific state among objects present in the image, performs an information reduction process on the image such that the amount of information reduction per unit area of the important objects is smaller than the amount of information reduction per unit area of unimportant objects other than the important objects; and an image transmission unit which transmits the image after the information reduction process to an external device.

Description

Image transmission system
The present invention relates to an image transmission system.
Conventionally, a technique for compressing an image taken in a vehicle is known. For example, Patent Document 1 discloses a technique for compressing a designated portion of captured image data, for example a license plate of a traveling vehicle, a traffic light, or a wall, at a compression rate lower than that of other portions.
Japanese Unexamined Patent Publication No. 2009-175848
In the conventional technique, when the license plate is designated, the license plate portion is compressed at a low rate. However, when specific objects are uniformly compressed at a low rate, the amount of information is often not reduced for objects that are not important. For example, when an image of a congestion scene is transmitted in order to analyze the congested convoy, uniformly applying low compression to vehicle images makes the amount of information of the images of vehicles unrelated to the congestion excessive.
The present invention has been made in view of the above problems, and an object of the present invention is to provide a technique for reducing the possibility that the amount of information of an unimportant object becomes excessive.
To achieve the above object, the image transmission system includes: an image capturing unit that is mounted on a vehicle and captures an image of the road on which the vehicle is traveling; an information reduction processing unit that, for an important object in a specific state among the objects present in the image, performs information reduction processing on the image so that the amount of information reduction per unit area of the important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object; and an image transmission unit that transmits the image after the information reduction processing to an external device.
That is, in the image transmission system, important objects are determined in advance. When a captured image contains an important object, information reduction processing is performed so that the amount of information reduction of the important object is smaller than that of the other, non-important objects. As a result, the possibility that the amount of information of an important object is insufficient is reduced, and the possibility that the amount of information of a non-important object becomes excessive is also reduced. It is therefore possible to transmit an image in which the amount of information of the important object is likely to be sufficient, and the destination of the image can use the image of the important object with a sufficient amount of information.
FIG. 1 is a block diagram showing the configuration of an image transmission system. FIG. 2 is a flowchart of image transmission processing. FIG. 3A is a diagram showing an example of a captured image, and FIG. 3B is a diagram showing a plane including the vehicle width axis and the vehicle length axis. FIG. 4A shows the scene determination process for a specific lane congestion scene, and FIGS. 4B to 4D are diagrams explaining the specific lane congestion scene. FIG. 5A shows the scene determination process for an obstacle existence scene, and FIG. 5B is a diagram explaining the obstacle existence scene. FIG. 6A shows the scene determination process for a map non-recording scene, and FIG. 6B is a diagram explaining the map non-recording scene.
Here, embodiments of the present invention will be described in the following order.
(1) Image transmission system configuration:
(2) Image transmission processing:
(2-1) Scene judgment processing (specific lane congestion scene):
(2-2) Scene judgment processing (obstacle existence scene):
(2-3) Scene judgment processing (map non-recording scene):
(3) Other embodiments:
(1) Image transmission system configuration:
FIG. 1 is a block diagram showing a configuration of a navigation system 10 including an image transmission system according to an embodiment of the present invention. The navigation system 10 is provided in the vehicle, and includes a control unit 20 including a CPU, RAM, ROM, and the like, and a recording medium 30. In the navigation system 10, the control unit 20 can execute a program stored in the recording medium 30 or the ROM. Map information 30a is recorded in advance on the recording medium 30.
The map information 30a is information used for specifying the positions of intersections, for route guidance, and the like, and includes node data indicating the positions of nodes set on the roads on which the vehicle travels, shape interpolation point data indicating the positions of shape interpolation points for specifying the shape of the road between nodes, link data indicating connections between nodes, and feature data indicating the positions and shapes of features existing on and around the roads. In this embodiment, the nodes indicate intersections.
In addition, each item of link data is associated with information indicating the number of lanes existing in the road section indicated by that link data and the width of the lanes. In the present embodiment, the position indicated by a node or a shape interpolation point indicates the position of the center line of the road section, and the position of each lane and the range in which each lane exists can be specified from that position, the number of lanes, and the lane width. Feature data is data indicating the existence of various features. In this embodiment, the features include at least traffic signs. That is, the feature data indicating a traffic sign includes the position of the traffic sign and the identification information of the traffic sign. The position of the traffic sign may be indicated by coordinates (latitude, longitude, etc.), or may be indicated as a position on the road section indicated by a link, for example by a distance from a node.
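The structure of the map information 30a described above can be pictured roughly as follows. The class and field names are illustrative assumptions; the actual storage format of the map information is not specified here.

```python
from dataclasses import dataclass, field

@dataclass
class Node:                 # in this embodiment, a node is an intersection
    node_id: str
    position: tuple         # (latitude, longitude)

@dataclass
class Link:                 # a road section between two nodes
    start_node: str
    end_node: str
    lane_count: int         # number of lanes in this road section
    lane_width: float       # lane width [m]
    shape_points: list = field(default_factory=list)  # shape interpolation points

@dataclass
class Feature:              # e.g. a traffic sign
    feature_id: str         # identification information of the sign
    position: tuple         # coordinates, or a position along a link
```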
The vehicle in this embodiment includes a camera 40, a GNSS receiving unit 41, a vehicle speed sensor 42, a gyro sensor 43, a user I/F unit 44, and a communication unit 45. The GNSS receiving unit 41 is a device that receives Global Navigation Satellite System signals: it receives radio waves from navigation satellites and outputs, via an interface (not shown), a signal for calculating the current position of the vehicle. The control unit 20 acquires this signal and obtains the current position (latitude, longitude, etc.) of the vehicle in the coordinate system of the map. The vehicle speed sensor 42 outputs a signal corresponding to the rotational speed of the wheels of the vehicle; the control unit 20 acquires this signal via an interface (not shown) and obtains the vehicle speed. The gyro sensor 43 detects the angular acceleration of turning in the horizontal plane of the vehicle and outputs a signal corresponding to the direction of the vehicle; the control unit 20 acquires this signal and obtains the traveling direction of the vehicle. The vehicle speed sensor 42, the gyro sensor 43, and the like are used to identify the traveling locus of the vehicle. In the present embodiment, the current position is specified based on the departure point and the traveling locus of the vehicle, and the current position so specified is corrected based on the output signal of the GNSS receiving unit 41.
The camera 40 is a device that acquires an image of the field of view directed toward the front of the vehicle. The optical axis of the camera 40 is fixed with respect to the vehicle, and it suffices that the direction of the optical axis is known in the navigation system 10. In the present embodiment, the camera 40 is attached to the vehicle in a posture in which the center of the optical axis is perpendicular to the vehicle width direction and the area ahead of the vehicle in the traveling direction is included in the field of view. The control unit 20 can detect other vehicles existing in the vicinity of the vehicle (peripheral vehicles) by acquiring the image output by the camera 40 and analyzing it, for example by extracting feature amounts.
The user I/F unit 44 is an interface unit for inputting user instructions and providing various information to the user, and includes a touch panel display, a speaker, and the like (not shown). That is, the user I/F unit 44 includes an image and sound output unit and an input unit for instructions by the user. The communication unit 45 is a device for communicating with an external device. In the present embodiment, the control unit 20 communicates with the server 50 via the communication unit 45.
Through the function of a navigation program (not shown), the control unit 20 receives the input of a destination from the user via the input unit of the user I/F unit 44 and searches for a planned travel route from the current position of the vehicle to the destination based on the map information 30a. Further, the control unit 20 controls the user I/F unit 44 through the function of the navigation program and provides guidance for traveling along the planned travel route. In the present embodiment, as an additional function of the navigation program, the control unit 20 can execute a function of transmitting an image taken by the camera 40 to the server 50.
To transmit the image, the navigation program includes an image transmission program 21. To realize the image transmission function, the image transmission program 21 includes an image capturing unit 21a, a scene determination unit 21b, an information reduction processing unit 21c, and an image transmission unit 21d.
The image capturing unit 21a is a program module that causes the control unit 20 to execute a function of capturing an image of the road on which the vehicle is traveling, using the camera mounted on the vehicle. In the present embodiment, while the vehicle is traveling, the control unit 20 controls the camera 40 and photographs the scenery including the road ahead of the vehicle. The image output from the camera 40 is recorded on the recording medium 30 as image information 30b. In the present embodiment, the control unit 20 acquires the image information 30b at regular intervals while the vehicle is running, although, of course, images may instead be captured under various conditions, for example on specific road types or road sections.
The scene determination unit 21b is a program module that causes the control unit 20 to execute a function of determining which of the predetermined scenes an image captured by the function of the image capturing unit 21a corresponds to. In the present embodiment, images are acquired by the server 50, and the server 50 performs various analyses based on those images. For this reason, images with predetermined characteristics are the analysis targets, and such characteristic images are defined in advance as the scenes to be analyzed.
The scenes may be anything subject to analysis and are not limited, but the present embodiment is described mainly on the assumption that three types of scene are analyzed. The first is a scene in which a specific lane on the road is congested and no congestion exists in the lanes other than the specific lane (referred to as a specific lane congestion scene). The remaining two are a scene in which an obstacle exists on the road on which the vehicle travels (referred to as an obstacle existence scene) and a scene in which the existence of a traffic sign included in the image is not shown in the map information (referred to as a map non-recording scene).
To identify whether an image is a specific lane congestion scene, the control unit 20 recognizes peripheral vehicles based on the image information 30b. The control unit 20 also specifies, based on the image information 30b, the distance and direction of each peripheral vehicle as seen from the vehicle, and thereby the relative position of each peripheral vehicle. Further, the control unit 20 recognizes the white lines on the road based on the image information 30b and identifies the lane in which each peripheral vehicle exists. The control unit 20 then determines that congestion exists in a specific lane when the peripheral vehicles existing in that lane form a convoy and their speed is equal to or less than a predetermined speed (for example, 20 km/h). When there is another lane without congestion on the road on which the vehicle travels, the control unit 20 determines that the image indicated by the image information 30b is a specific lane congestion scene.
To identify whether an image is an obstacle existence scene, the control unit 20 recognizes the region occupied by the image of the road on which the vehicle travels, based on the image information 30b. The control unit 20 then detects regions surrounded by the road image; if a stationary region of at least a predetermined size exists, it determines that an obstacle on the road has been photographed in that region. In this case, the control unit 20 determines that the image indicated by the image information 30b is an obstacle existence scene.
To identify whether an image is a map non-recording scene, the control unit 20 recognizes traffic signs based on the image information 30b. The control unit 20 also identifies the current location of the vehicle at the time the image information 30b was captured, and searches the map information 30a for traffic signs existing within a predetermined range of that location. If a recognized traffic sign is not included in the map information 30a, the control unit 20 determines that the image indicated by the image information 30b is a map non-recording scene.
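A minimal sketch of the map non-recording check, assuming recognized signs and map features are given as dictionaries with `id` and `position` (latitude, longitude) entries; the helper names, the search radius, and the flat-earth distance approximation are assumptions for illustration.

```python
import math

def within_range(p, q, radius_m=50.0):
    # Crude flat-earth distance, adequate for a short search radius.
    dx = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    dy = (p[0] - q[0]) * 110_540
    return math.hypot(dx, dy) <= radius_m

def is_map_non_recording_scene(recognized_signs, map_signs, current_position):
    nearby = [s for s in map_signs if within_range(current_position, s["position"])]
    for sign in recognized_signs:
        if not any(sign["id"] == m["id"] for m in nearby):
            return True   # a sign exists in the image but not in the map
    return False
```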
When the scene is specified based on the image information 30b, the control unit 20 associates the identification information of the scene with the image information 30b. The identification information may be any information indicating the scene, for example an ID. The same image information 30b may be determined to correspond to two or more scenes.
The information reduction processing unit 21c is a program module that causes the control unit 20 to execute a function of performing information reduction processing on the image so that, when an object existing in the image corresponds to an important object in a specific state, the amount of information reduction per unit area of the important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object. That is, through the function of the information reduction processing unit 21c, the control unit 20 performs image processing to reduce the amount of information so as to reduce, as much as possible, the amount of communication when transmitting the image information 30b to the server 50.
As the amount of information in the image information 30b obtained by the camera 40 is reduced, objects become harder to identify and their detailed structure becomes harder to discern, making analysis more difficult. Therefore, in the present embodiment, the amount of information reduction in the important parts of the image is made relatively small, while the amount of information reduction in the unimportant parts is made relatively large.
In the present embodiment, an important object is defined for each scene, so the control unit 20 extracts, for each scene of the image indicated by the image information 30b, the important objects, that is, the specific objects that are in the specific state. When the scene of the image is a specific lane congestion scene, the important objects are the vehicles existing in the specific lane. In that case, the control unit 20 identifies, among the peripheral vehicles, the images of the vehicles existing in the specific lane as important objects, and specifies the positions of those important objects within the image. By treating the vehicles in the specific lane as important objects in this way, analysis focusing on the vehicles in the congested specific lane can be performed at the destination of the image.
When the scene of the image is an obstacle existence scene, the important object is the obstacle on the road on which the vehicle travels. In that case, the control unit 20 identifies as an important object the image of the portion determined to be an obstacle when the scene was identified, and specifies the position of that important object within the image. By treating the obstacle on the road as an important object in this way, analysis focusing on the obstacle can be performed at the destination of the image.
When the scene of the image is a map non-recording scene, the important object is the traffic sign that is not recorded in the map information 30a. In that case, the control unit 20 identifies as an important object the image of the traffic sign recognized when the scene was identified, and specifies the position of that important object within the image. By treating a traffic sign not recorded in the map information 30a as an important object in this way, the destination of the image can analyze the sign and generate information indicating a traffic sign that should be added to the map information 30a.
When the position of an important object has been specified, the control unit 20 performs information reduction processing so that the amount of information reduction of the important object is smaller than that of the non-important objects. The information reduction may be carried out by various methods; in the present embodiment it is carried out by compression, and in particular JPEG compression. That is, the control unit 20 adjusts the compression rate so that a rectangular region containing an important object has a lower compression rate than rectangular regions not containing an important object, and JPEG-compresses the image information 30b. The rectangles may be determined in various ways: for example, rectangles of several sizes may be defined in advance and the smallest rectangle containing each important object selected; rectangles of a fixed size may be defined in advance and the minimum number of rectangles containing the important objects selected; or the image inside the bounding boxes described later may be given the low compression rate and the image outside them the high compression rate.
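A minimal sketch of the non-uniform compression, assuming the Pillow library and bounding boxes given as (left, top, right, bottom) tuples; the quality values are illustrative, and a real implementation could instead adjust the quantization per region inside a single JPEG stream.

```python
import io
from PIL import Image

def compress_non_uniform(path, important_boxes,
                         low_quality=30, high_quality=90):
    """Re-encode important rectangles gently and everything else aggressively.

    Returns a composite: a heavily compressed background with the important
    regions pasted back after a much milder re-encoding.
    """
    original = Image.open(path).convert("RGB")

    def recompress(img, quality):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    result = recompress(original, low_quality)       # high compression rate
    for box in important_boxes:                      # (left, top, right, bottom)
        region = recompress(original.crop(box), high_quality)
        result.paste(region, box[:2])                # keep detail here
    return result
```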
According to the above processing, the amount of information removed from the important parts of the image is kept small, while the amount of information in the unimportant parts is reduced further. Therefore, when analysis or the like is performed based on the image information 30b after information reduction, the possibility that the amount of information of an important object is insufficient is reduced.
The image transmission unit 21d is a program module that causes the control unit 20 to execute a function of transmitting the image after information reduction processing to an external device. That is, through the function of the image transmission unit 21d, the control unit 20 controls the communication unit 45 and transmits the compressed image information 30b to the server 50. The identification information of the scene is associated with the image information 30b, so the server 50 can identify which scene the image information 30b represents based on that identification information.
In the present embodiment, since an important object is defined for each scene, the scene identification information can also be regarded as indicating the important objects contained in the image; it is therefore information about the important objects. In addition, as will be described later, information indicating the positions of the important objects within the image is associated with the image; this too is information about the important objects. Of course, various other kinds of information indicating the positions of important objects within the image are conceivable. In any case, the scene identification information and the information about the important objects are associated with the image after information reduction and transmitted to the server 50. With the above configuration, it is possible to transmit an image in which the amount of information of the important objects is likely to be sufficient and the amount of information of the non-important objects is unlikely to be excessive.
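The information associated with a transmitted image can be pictured as a payload such as the following; all key names and values are illustrative assumptions, not the actual transmission format.

```python
payload = {
    "scene_id": "SPECIFIC_LANE_CONGESTION",      # scene identification information
    "captured_at": (35.1815, 136.9066),          # vehicle location when captured
    "congested_lane": 1,                         # the specific lane, if any
    "important_objects": [
        {"bounding_box": (412, 230, 508, 301),   # position within the image
         "object_type": "vehicle",
         "relative_direction_deg": -4.2,         # theta
         "linear_distance_m": 18.5},             # D
    ],
}
```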
The server 50 includes a control unit including a CPU, ROM, RAM, and the like (not shown), and a recording medium. The control unit of the server 50 can execute various programs recorded on the recording medium to carry out various processes. In the present embodiment, the control unit functions as an image receiving unit 50a and an image recording unit 50b through a program (not shown).
The image receiving unit 50a causes the control unit to execute a function of receiving images. That is, when an image is transmitted from the navigation system 10, the control unit receives the image via a communication unit (not shown) through the function of the image receiving unit 50a. The image recording unit 50b causes the control unit to execute a function of recording the image on the recording medium in association with information about the important objects in the image. That is, when an image is received, the control unit saves it to the recording medium through the function of the image recording unit 50b.
The image is associated with the scene identification information and with information about the important objects, such as their positions within the image. The server 50 therefore records the information about the important objects contained in the image on the recording medium in association with the image. With images recorded in this way, the server 50 can identify the important objects in an image, the type of scene, and so on based on the associated information, and can perform image analysis and the like concerning the important objects.
(2) Image transmission processing:
Next, the image transmission process executed by the control unit 20 will be described with reference to FIG. 2. When the control unit 20 captures an image with the camera 40 through the function of the image capturing unit 21a, it executes the image transmission process shown in FIG. 2. In the image transmission process, the control unit 20 executes object recognition processing through the function of the scene determination unit 21b (step S100). Specifically, the control unit 20 acquires the image information 30b and applies lens distortion correction and the like. Further, the control unit 20 executes image recognition processing for peripheral vehicles and traffic signs using YOLO (You Only Look Once), pattern matching, and the like. As a result, the control unit 20 detects the images of peripheral vehicles and traffic signs contained in the image information 30b.
In the present embodiment, when the control unit 20 recognizes a peripheral vehicle or a traffic sign, it specifies a bounding box surrounding the object within the image. The size and position of the bounding box indicate the size of the object's image and the position of the object within the image. FIG. 3A shows an example of an image I taken by the camera 40 after distortion correction; in this example, a peripheral vehicle has been recognized by the object recognition processing. As shown in FIG. 3A, in the present embodiment the bounding box B is a rectangular region surrounding the peripheral vehicle detected in the image I.
The size and position of the bounding box B are represented, for example, by the coordinates of its upper-left and lower-right vertices. From the coordinates of these two diagonal vertices, the control unit 20 obtains the height h of the bounding box B (in pixels) and its representative coordinates Bo(x, y). The representative coordinates Bo are, for example, the center coordinates of the bounding box B (the midpoints in the width and height directions). The control unit 20 specifies the relative direction of the peripheral vehicle as seen from the vehicle based on the position of the representative coordinates Bo, and specifies the distance from the vehicle to the peripheral vehicle based on the height h of the bounding box B and the type of the peripheral vehicle.
Specifically, each coordinate in the image I is associated with the relative direction, with respect to the vehicle, of an object appearing at that coordinate, and information indicating this correspondence is stored in the recording medium 30. Based on this correspondence, the control unit 20 obtains the relative direction of the peripheral vehicle appearing at the representative coordinates Bo. In the present embodiment, a vehicle coordinate system based on the vehicle is defined: a coordinate system defined by the mutually orthogonal vehicle width axis (the X axis shown in FIG. 3B) and vehicle length axis (the Y axis shown in FIG. 3B).
FIG. 3B shows the plane including the vehicle width axis and the vehicle length axis. In the figure, the point O is the origin of the vehicle coordinate system. In the example of FIG. 3B, the vehicle length axis is parallel to the link indicating the road section on which the vehicle is traveling. The relative direction is expressed, for example, as the angle (θ) between the vehicle length axis and the straight line SL connecting the origin O of the vehicle coordinate system with the point corresponding to the representative coordinates Bo (a negative θ indicates the left side of the vehicle length axis as seen facing forward in the traveling direction, and a positive θ the right side).
Further, the control unit 20 identifies the type of the peripheral vehicle in the bounding box B through the object recognition processing. The type of peripheral vehicle need only indicate the size of the vehicle body, and may be classified, for example, into freight vehicles, passenger cars, two-wheeled vehicles, and so on. In the present embodiment, a representative vehicle height (for example, 1.5 m for a passenger car) is defined for each type of peripheral vehicle. Furthermore, the linear distance between the vehicle and a peripheral vehicle and the height h of the bounding box B when that peripheral vehicle is photographed by the camera 40 are measured in advance, and information indicating the correspondence between the height h of the bounding box B and the linear distance from the origin of the vehicle coordinate system is stored in the recording medium 30 for each vehicle type.
For example, for a passenger car with a representative actual height of 1.5 m, a bounding box height of h1 pixels is associated with a linear distance of D1 m, and a height of h2 pixels with a linear distance of D2 m. Information indicating the corresponding relationships is also stored in the recording medium 30 for other types such as freight vehicles and two-wheeled vehicles. Based on this correspondence, the control unit 20 calculates the linear distance D (see FIG. 3B) corresponding to the height h of the bounding box B. In this way, based on the image taken by the camera 40, the control unit 20 obtains the relative direction θ of each peripheral vehicle contained in the image and its linear distance D from the vehicle.
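A minimal sketch of the height-to-distance lookup, with linear interpolation between calibration points; the calibration pairs below are placeholder values, since the real correspondence is measured in advance per vehicle type.

```python
# Placeholder calibration: bounding-box height [px] -> linear distance [m].
CALIBRATION = {
    "passenger_car": [(200, 5.0), (100, 10.0), (50, 20.0), (25, 40.0)],
}

def linear_distance(vehicle_type: str, box_height_px: float) -> float:
    table = sorted(CALIBRATION[vehicle_type])        # ascending by height
    if box_height_px <= table[0][0]:
        return table[0][1]                           # smaller box: farthest entry
    if box_height_px >= table[-1][0]:
        return table[-1][1]                          # larger box: nearest entry
    for (h0, d0), (h1, d1) in zip(table, table[1:]):
        if h0 <= box_height_px <= h1:                # interpolate between points
            t = (box_height_px - h0) / (h1 - h0)
            return d0 + t * (d1 - d0)
```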
In the present embodiment, an image is taken at every shooting cycle of the camera 40, and the processing from step S100 onward is performed for each image. The same peripheral vehicle can therefore be recognized over several frames. Accordingly, in the present embodiment, the control unit 20 assigns the same identification information to a peripheral vehicle while that vehicle continues to be photographed. To this end, the control unit 20 identifies the image features (for example, color, the pattern within the bounding box B, etc.) of each peripheral vehicle for which the relative direction θ and the linear distance D have been specified, associates identification information (for example, a number) corresponding to those features with the information indicating the relative direction θ, the linear distance D, and the type of the peripheral vehicle, and records them on the recording medium 30.
To assign the same identification information to the same peripheral vehicle each time an image is taken, the control unit 20 refers to the recording medium 30 and determines whether the features of a peripheral vehicle recognized in the latest image match the image features associated with a peripheral vehicle recognized in the immediately preceding image. If they match, the control unit 20 assigns the identification information given to the peripheral vehicle in the immediately preceding image to the peripheral vehicle recognized in the latest image. As a result, a peripheral vehicle that continues to be photographed by the camera 40 keeps the same identification information.
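A minimal sketch of carrying identification information across frames; the similarity function is left abstract (in the embodiment it compares image features such as color and pattern), and all names are assumptions.

```python
import itertools

_next_id = itertools.count(1)

def assign_ids(previous, current, similarity, threshold=0.8):
    """Give each vehicle in `current` the ID of its best match in `previous`,
    or a fresh ID when nothing matches well enough."""
    for vehicle in current:
        best = max(previous, key=lambda p: similarity(p, vehicle), default=None)
        if best is not None and similarity(best, vehicle) >= threshold:
            vehicle["id"] = best["id"]        # same vehicle keeps the same ID
        else:
            vehicle["id"] = next(_next_id)    # newly seen vehicle
    return current
```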
In step S100, in addition to the recognition of peripheral vehicles described above, traffic signs are also recognized. In the present embodiment, when the control unit 20 recognizes a traffic sign based on the characteristics of each sign, a bounding box indicating the traffic sign is specified. FIG. 6B shows an example of image information 30b including a traffic sign. When the image information 30b contains a traffic sign, the control unit 20 recognizes the sign and specifies a bounding box B indicating the existence of the sign and its position within the image. The control unit 20 then associates the identification information of the traffic sign with the information indicating the position of the bounding box B and records them on the recording medium 30.
Next, the control unit 20 executes white line recognition processing through the function of the scene determination unit 21b (step S105). The white line recognition processing may be carried out by various methods. For example, the control unit 20 may execute straight line detection using the Hough transform or the like and recognize a white line when the area sandwiched between detected straight lines is white and the width of that white area is within a predetermined distance. In the present embodiment, it is assumed that white lines existing at the widthwise edges of lanes and extending in the vehicle traveling direction (solid or broken white lines indicating lane boundaries) are recognized, so conditions such as the white line extending toward a vanishing point may be added.
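A minimal sketch of the Hough-transform-based white line detection, assuming OpenCV; the color thresholds and Hough parameters are illustrative assumptions, and the vanishing point condition mentioned above is omitted.

```python
import cv2
import numpy as np

def detect_lane_lines(bgr_image):
    """Rough white-line candidates: white mask -> edges -> Hough transform."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Low saturation and high value roughly match white paint (assumed thresholds).
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    edges = cv2.Canny(white, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)
```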
Next, the control unit 20 executes region recognition processing through the function of the scene determination unit 21b (step S110). In the present embodiment, the region recognition processing is processing for recognizing continuous regions within the image indicated by the image information 30b. That is, among the objects in the image, the road surface, the sky, and the like are highly likely to form substantially uniform continuous regions.
Accordingly, based on the image information 30b, the control unit 20 identifies regions of consecutive pixels whose color change (for example, the color difference specified by brightness and saturation) is within a predetermined range. Of course, various other conditions for region recognition are conceivable; for example, a region of consecutive pixels within a specific color range (for example, a range preset as road surface colors) may be identified. When a continuous region has been identified, the control unit 20 identifies the continuous region that includes the lower edge of the image and lies in the lower part of the image as the region of the image of the road surface on which the vehicle travels, and records information indicating that region on the recording medium 30.
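A minimal sketch of the region recognition, growing a region upward from the bottom edge of the image while the color change between vertically adjacent pixels stays within a tolerance; the tolerance value and the simplified column-wise growth are assumptions.

```python
import numpy as np

def road_region_mask(rgb, tolerance=18):
    """Boolean mask of the road-surface region grown from the bottom row."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[h - 1, :] = True                       # bottom edge belongs to the road
    img = rgb.astype(np.int16)
    for row in range(h - 2, -1, -1):
        diff = np.abs(img[row] - img[row + 1]).max(axis=1)
        mask[row] = mask[row + 1] & (diff <= tolerance)
        if not mask[row].any():                 # region ended; stop early
            break
    return mask
```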
Next, the control unit 20 determines, through the function of the scene determination unit 21b, whether an object, a white line, or a region has been recognized (step S120). That is, if the image taken by the camera 40 contains no peripheral vehicles or traffic signs, the control unit 20 does not determine that an object has been recognized. If the image contains no white line existing at the widthwise edge of a lane and extending in the vehicle traveling direction, the control unit 20 does not determine that a white line has been recognized. Further, if the image contains no image of the road surface on which the vehicle travels, the control unit 20 does not determine that a region has been recognized.
If no object, white line, or region has been recognized, the control unit 20 ends the image transmission process. That is, if none of a peripheral vehicle, a traffic sign, a white lane boundary line, or the road surface on which the vehicle travels has been recognized in the image taken by the camera 40, that image is not a transmission target.
On the other hand, if an object, a white line, or a region has been recognized in step S120, the control unit 20 executes scene determination processing through the function of the scene determination unit 21b (step S125). The scene determination processing determines whether the image information 30b corresponds to a specific lane congestion scene, an obstacle existence scene, a map non-recording scene, or none of these. The details of the scene determination processing will be described later. When the scene determination processing has been executed, identification information indicating the scene is associated with the image information 30b.
Next, the control unit 20 determines, through the function of the information reduction processing unit 21c, whether the image information 30b contains an important object (step S130). In the present embodiment, a scene is determined in the scene determination processing based on a specific object being in a specific state, and the object in that specific state is the important object. Therefore, when identification information indicating a specific lane congestion scene, an obstacle existence scene, or a map non-recording scene is associated with the image information 30b, the control unit 20 determines that an important object is contained. Of course, processing for detecting an important object from the image information 30b may also be performed.
If it is not determined in step S130 that an important object is contained, the control unit 20 performs uniform compression through the function of the information reduction processing unit 21c (step S140). In uniform compression, the image is compressed at a compression rate common to the entire area of the image information 30b.
If it is determined in step S130 that an important object is contained, the control unit 20 performs non-uniform compression through the function of the information reduction processing unit 21c (step S135). That is, the control unit 20 compresses the image inside the bounding box B, or inside a rectangle of a predetermined size containing the bounding box B, at a low compression rate, and compresses the rest of the image at a higher compression rate. The compression rates may be specified by various methods, and may be selected according to the content of the image or the state of communication.
When the compression processing of step S135 or step S140 has been performed, the control unit 20 transmits the image information 30b to the server 50 through the function of the image transmission unit 21d (step S145). That is, the control unit 20 controls the communication unit 45 and transmits the compressed image information 30b to the server 50 with the scene identification information associated with it. In the present embodiment, various other information is also associated with the image information 30b and transmitted, for example: the specific lane in which congestion is occurring, the identification information corresponding to the features of the peripheral vehicles, the relative directions θ and linear distances D of the peripheral vehicles, information indicating the types of the peripheral vehicles, information indicating the traffic signs, information indicating the positions of the bounding boxes, information indicating the positions of the white lines and regions, and the current location of the vehicle at the time the image was captured.
(2-1) Scene judgment processing (specific lane congestion scene):
FIG. 4A shows the scene determination processing for determining whether the scene of the image information 30b is a specific lane congestion scene. This scene determination processing is executed when a peripheral vehicle has been recognized in the object recognition processing of step S100. When the scene determination processing starts, the control unit 20 acquires, for a fixed period up to the present, the relative directions, linear distances, and feature-based identification information of the peripheral vehicles (step S200). That is, the control unit 20 acquires the relative direction θ, the linear distance D, and the identification information corresponding to the features of each peripheral vehicle recognized in the object recognition processing of step S100. Since this processing is performed every time image information 30b is acquired, the control unit 20 obtains, in time series, the relative directions θ, linear distances D, and feature-based identification information produced by the object recognition processing on the image information 30b captured within the fixed period up to the present.
Next, the control unit 20 acquires the current location of the vehicle (step S205). That is, the control unit 20 acquires the current location of the vehicle (the location at the time the image information 30b was captured by the camera 40) based on the output signals of the GNSS receiving unit 41, the vehicle speed sensor 42, and the gyro sensor 43.
Next, the control unit 20 acquires the speeds of the peripheral vehicles (step S210). That is, since the relative direction θ and the linear distance D from the vehicle have been obtained for each peripheral vehicle by the processing of step S100, the control unit 20 specifies the relative movement speed of each peripheral vehicle as seen from the vehicle (dd shown in FIG. 3B) based on the data of the peripheral vehicles over the several most recent frames. The control unit 20 then specifies the current speed of the vehicle (for example, the value detected by the vehicle speed sensor 42) and obtains the speed of each peripheral vehicle (its speed with respect to the road) based on the current speed and the relative movement speed of each peripheral vehicle.
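A minimal sketch of the speed computation of step S210, assuming positions in the vehicle coordinate system over recent frames and considering only longitudinal motion along the vehicle length axis; the names and the simplification are assumptions.

```python
def peripheral_vehicle_speed(own_speed_mps, positions, dt):
    """Ground speed of a peripheral vehicle from its relative positions.

    `positions` holds (x, y) points in the vehicle coordinate system over
    recent frames and `dt` is the frame interval [s].
    """
    (_, y_first), (_, y_last) = positions[0], positions[-1]
    frames = len(positions) - 1
    relative_vy = (y_last - y_first) / (frames * dt)   # + means pulling away
    return own_speed_mps + relative_vy
```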
Next, the control unit 20 identifies the lane in which each peripheral vehicle is traveling (step S215). That is, the control unit 20 refers to the map information 30a and identifies the lane configuration of the road section containing the current location of the vehicle acquired in step S205. Based on the lanes recognized in step S105, the control unit 20 also identifies which of the lanes shown in the map information 30a the vehicle is traveling in. If no lane is recognized in step S105, the control unit 20 performs processing such as regarding the peripheral vehicles and the vehicle as traveling in the same lane.
FIGS. 4B to 4D schematically show the vehicle C and the peripheral vehicles 1 to 5 existing around it. In this example, time is assumed to advance in the order of FIG. 4B, FIG. 4C, and FIG. 4D. In these examples, the road section on which the vehicle is traveling has three lanes. Accordingly, in the leftmost lane, the left boundary is a solid white line and the right boundary is a broken white line. In the center lane, both boundaries are broken white lines. In the rightmost lane, the right boundary is a solid white line and the left boundary is a broken white line.
In the example of FIGS. 4B to 4D, the control unit 20 determines that the vehicle is traveling in the center lane based on the fact that the nearest white lines on its left and right are broken lines. The control unit 20 then identifies the relative position of each peripheral vehicle as seen from the vehicle based on its relative azimuth θ and linear distance D. Further, based on the number and width of the lanes indicated by the map information 30a, the control unit 20 identifies the extent of each lane around the vehicle and, by determining which lane each relative position falls on, identifies the lane in which each peripheral vehicle is traveling; a sketch of such a lane assignment follows below. Of course, the lane in which a peripheral vehicle is traveling may instead be identified based on the image.
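As one illustration of the lane assignment of step S215, the following minimal Python sketch maps a lateral offset to a lane index; the signature and the simple rounding heuristic are assumptions for illustration only.

```python
def lane_index_of_peripheral(rel_x_m, own_lane_index, lane_width_m):
    """Assign a peripheral vehicle to a lane from its lateral offset.

    rel_x_m: offset along the vehicle-width axis, derived from the
             relative azimuth theta and linear distance D (positive to
             the right of the own vehicle).
    own_lane_index: index of the own vehicle's lane per map information 30a.
    lane_width_m: lane width indicated by the map information 30a.
    """
    # Each full lane width of lateral offset shifts the lane index by one.
    return own_lane_index + round(rel_x_m / lane_width_m)
```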
Through the above processing, as shown in FIGS. 4B to 4D, the current location of the vehicle C and the positions of the peripheral vehicles around it are identified, and the lane in which each peripheral vehicle is traveling is also identified.
Next, the control unit 20 determines whether a convoy exists in a specific lane (step S220). That is, when a predetermined number or more of peripheral vehicles (for example, three or more) exist in the same lane, the control unit 20 regards those peripheral vehicles as forming a convoy. When a lane with a convoy exists, the control unit 20 regards that lane as the specific lane and determines that a convoy exists in the specific lane. If it is not determined in step S220 that a convoy exists in a specific lane, the control unit 20 skips steps S225 and S230.
On the other hand, when it is determined in step S220 that a convoy exists in the specific lane, the control unit 20 determines whether the vehicle speed is at or below a threshold (step S225). That is, the control unit 20 takes the peripheral-vehicle speeds acquired in step S210 and determines whether the speed of the convoy-forming vehicles (for example, their average speed) is at or below a predetermined threshold. The threshold is a low speed (for example, 20 km/h) used to determine whether congestion exists.
If it is not determined in step S225 that the vehicle speed is at or below the threshold, the control unit 20 skips step S230. If it is determined in step S225 that the vehicle speed is at or below the threshold, the control unit 20 determines that the image information 30b is a specific lane congestion scene (step S230). That is, the control unit 20 associates the image information 30b with information indicating that the scene is a specific lane congestion scene.
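A minimal Python sketch of the convoy test and congestion decision of steps S220 to S230 follows; the data layout (a list of per-vehicle dictionaries) and the constant names are assumptions for illustration, although the counts and the 20 km/h threshold correspond to the example values above.

```python
CONVOY_MIN_VEHICLES = 3          # predetermined number forming a convoy
CONGESTION_SPEED_MPS = 20 / 3.6  # threshold corresponding to 20 km/h

def is_specific_lane_congestion(peripheral_vehicles):
    """peripheral_vehicles: list of dicts with 'lane' and 'speed' keys
    (hypothetical layout), produced by steps S210 and S215.

    Returns True when some single lane holds a convoy of at least the
    predetermined number of vehicles whose average speed is at or below
    the congestion threshold (steps S220-S230)."""
    speeds_by_lane = {}
    for v in peripheral_vehicles:
        speeds_by_lane.setdefault(v["lane"], []).append(v["speed"])
    for speeds in speeds_by_lane.values():
        if len(speeds) >= CONVOY_MIN_VEHICLES:
            if sum(speeds) / len(speeds) <= CONGESTION_SPEED_MPS:
                return True
    return False
```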
(2-2) Scene judgment processing (obstacle existence scene):
FIG. 5A shows the scene determination process for determining whether the scene of the image information 30b is an obstacle existence scene. When this scene determination process is executed, the control unit 20 acquires the road-surface region of the image (step S300). That is, the control unit 20 acquires the road-surface region from among the regions recognized as continuous by the region recognition process of step S110. FIG. 5B shows an example in which the image information 30b is an image I in which an obstacle Ob exists on the road surface. When step S300 is executed in this example, the road-surface portion shown in gray is acquired as the road-surface region Zr.
Next, the control unit 20 identifies the non-road areas within the road-surface region (step S305). That is, the control unit 20 extracts the portions that are surrounded by the road-surface image but are not road surface. At this time, the control unit 20 excludes the white lines recognized by the white line recognition process, and regards the remaining portions as non-road areas. In the example shown in FIG. 5B, the obstacle Ob remains and is regarded as a non-road area.
Next, the control unit 20 determines whether the area of the non-road region is at or above a threshold (step S310). That is, when a non-road region on the road surface is larger than a certain size, the control unit 20 regards that region as likely to be the image of an obstacle. The threshold may be defined in advance as a value for determining whether something is an obstacle.
If it is not determined in step S310 that the area of the non-road region is at or above the threshold, the control unit 20 skips step S315. If it is determined in step S310 that the area of the non-road region is at or above the threshold, the control unit 20 determines that the image information 30b is an obstacle existence scene (step S315). That is, the control unit 20 associates the image information 30b with information indicating that the scene is an obstacle existence scene. In the example shown in FIG. 5B, an obstacle Ob exists on the road surface, so the area of the non-road region is determined to be at or above the threshold and the image I is determined to be an obstacle existence scene. The determination of whether an obstacle exists may also be implemented by various other methods; for example, a configuration may be adopted in which the presence of an obstacle is recognized by object recognition and the scene is determined to be an obstacle existence scene when an obstacle is recognized.
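By way of illustration, the following Python sketch shows one possible realization of steps S300 to S315 over boolean pixel masks; the mask format is an assumption, and scipy is assumed to be available for hole filling and connected-component labeling.

```python
import numpy as np
from scipy import ndimage  # assumed available for labeling

def is_obstacle_scene(road_mask, white_line_mask, area_threshold_px):
    """road_mask, white_line_mask: boolean pixel masks (hypothetical
    format) from the region recognition of step S110 and the white line
    recognition of step S105.

    A connected region enclosed by the road surface that is neither road
    nor a white line, and whose pixel area is at or above the threshold,
    marks the image as an obstacle existence scene (steps S300-S315)."""
    # Pixels enclosed by the road region but not themselves road surface.
    enclosed = ndimage.binary_fill_holes(road_mask) & ~road_mask
    # Exclude recognized white lines (step S305).
    candidates = enclosed & ~white_line_mask
    labels, n = ndimage.label(candidates)
    for i in range(1, n + 1):
        if np.count_nonzero(labels == i) >= area_threshold_px:
            return True
    return False
```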
(2-3) Scene judgment processing (map non-recording scene):
FIG. 6A shows the scene determination process for determining whether the scene of the image information 30b is a map non-recording scene. When this scene determination process is executed, the control unit 20 acquires the current location of the vehicle (step S400). That is, the control unit 20 acquires the current location of the vehicle based on the output signals of the GNSS receiving unit 41, the vehicle speed sensor 42, and the gyro sensor 43.
Next, the control unit 20 acquires the traffic signs around the current location based on the map information (step S405). That is, the control unit 20 refers to the map information 30a and acquires the positions and identification information of the traffic signs existing within a predetermined distance of the current location acquired in step S400. Next, the control unit 20 extracts the difference from the recognition result (step S410). That is, the control unit 20 refers to the recording medium 30 and acquires, as the recognition result, the identification information of the traffic signs detected from the image information 30b by the object recognition process of step S100.
FIG. 6B shows an example of image information 30b containing a traffic sign prohibiting turning; in this example, the identification information of the sign is recorded on the recording medium 30 as a recognition result. The control unit 20 therefore acquires the identification information indicating the turn-prohibition sign as the recognition result. Further, the control unit 20 compares the traffic-sign identification information obtained as the recognition result with the traffic-sign identification information acquired in step S405. When a traffic sign exists in the recognition result but not in the map information 30a, the control unit 20 extracts the identification information of that sign as a difference.
Next, the control unit 20 determines whether a difference exists (step S415). That is, when a difference has been extracted by the process of step S410, the control unit 20 determines that a difference exists. If it is not determined in step S415 that a difference exists, the control unit 20 skips step S420.
If it is determined in step S415 that a difference exists, the control unit 20 determines that the image information 30b is a map non-recording scene (step S420). That is, the control unit 20 associates the image information 30b with information indicating that the scene is a map non-recording scene. As in the example shown in FIG. 6B, if the traffic sign prohibiting turning is recognized but is not recorded in the map information 30a, a difference is determined to exist in step S415, and the image I is therefore determined to be a map non-recording scene.
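The difference extraction of steps S410 to S420 reduces to a set difference; a minimal Python sketch follows, with the function name and the iterable-of-IDs format being assumptions for illustration.

```python
def is_map_unrecorded_scene(recognized_sign_ids, map_sign_ids):
    """recognized_sign_ids: identification information of traffic signs
    detected in the image (step S100); map_sign_ids: signs within the
    predetermined distance of the current location according to the map
    information 30a (step S405). Both are iterables of hashable IDs.

    A sign present in the recognition result but absent from the map is
    extracted as a difference (step S410); any difference marks the
    image as a map non-recording scene (steps S415-S420)."""
    difference = set(recognized_sign_ids) - set(map_sign_ids)
    return bool(difference)
```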
(3) Other embodiments:
The above embodiment is one example for carrying out the present invention, and various other embodiments can be adopted. For example, the image transmission system may be a device mounted on a vehicle or the like, a device realized by a portable terminal, or a system realized by a plurality of devices (for example, a control unit in a navigation system and a control unit in the camera 40).
At least some of the image capturing unit 21a, the scene determination unit 21b, the information reduction processing unit 21c, and the image transmission unit 21d constituting the image transmission system may be distributed across a plurality of devices. Of course, part of the configuration of the above embodiment may be omitted, and the order of processing may be changed or steps omitted. For example, the number of scenes judged by the scene determination unit 21b may be larger or smaller, and the order in which scenes are judged may differ.
The image capturing unit only needs to be mounted on the vehicle and able to capture an image of the road on which the vehicle is traveling; that is, it suffices that the surroundings of the vehicle can be imaged while the vehicle travels on the road. Various devices can serve to acquire the image, for example a camera mounted on the vehicle or a camera in a terminal used inside the vehicle cabin. The image need only show the road on which the vehicle is located, and may include its surroundings. The scenery ahead of the vehicle may be captured, as may the scenery behind or to the side.
The information reduction processing unit only needs to be able to perform information reduction processing on the image such that, when an object existing in the image corresponds to an important object in a specific state, the amount of information reduction per unit area of the important object is smaller than that of the non-important objects other than the important object. That is, it suffices that the information reduction processing unit can adjust the amount of information for each region of the image and can perform the reduction so that the amount removed from the important object is relatively smaller than for other objects.
The amount of information per unit area is expressed, for example, by the number of bytes of data required to represent the image in that unit area, and it can be adjusted by performing image processing on the image. The smaller the amount of information removed from the captured image, the closer the result is to the object as originally captured, and the more likely it is that image processing such as image recognition can be performed accurately.
The information reduction processing that reduces the amount of information per unit area is typically compression, but it may be trimming, or trimming and compression may be used together. For example, information reduction may be performed that keeps the important objects and trims away other objects, and portions not used for analysis, such as the sky, may be trimmed from the captured image. The region whose information amount is adjusted may be the region enclosed by the outline of the important object, or a region of a predetermined shape containing or enclosing the important object (for example, a rectangular region). One possible realization with rectangular regions is sketched below.
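The following Python sketch illustrates region-wise information reduction via JPEG re-encoding; it assumes rectangular important-object regions, an RGB input image, and the Pillow library, and is one possible realization rather than the implementation of the embodiment.

```python
import io
from PIL import Image  # Pillow, assumed available

def reduce_information(image, important_boxes, out_path,
                       low_reduction_q=90, high_reduction_q=30):
    """Sketch of region-wise information reduction by JPEG re-encoding.

    image: an RGB PIL image (the captured frame).
    important_boxes: list of (left, top, right, bottom) rectangles that
                     contain important objects (hypothetical input).
    The whole frame is re-encoded at a high compression rate, and each
    important-object rectangle is pasted back from a lightly compressed
    encoding, so the information reduction per unit area is smaller
    inside the important regions.
    """
    def recompress(img, quality):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        out = Image.open(buf)
        out.load()
        return out

    base = recompress(image, high_reduction_q)   # heavy reduction everywhere
    fine = recompress(image, low_reduction_q)    # light reduction
    for box in important_boxes:
        base.paste(fine.crop(box), box)
    base.save(out_path, format="JPEG", quality=low_reduction_q)
    return out_path
```

In this sketch the per-region quality difference approximates the adjustment of information amount per region; trimming could be realized analogously by cropping to the important-object rectangles before encoding.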
An important object may be any object that can attract attention in the image, and may be determined in advance. The important object is not limited to an object identified as being in a specific state through scene determination. Accordingly, the presence or absence of important objects may be determined without scene determination, by applying pattern matching, feature extraction, YOLO, or the like to each image.
An important object is an object for which excessive reduction of the information amount is to be suppressed. A specific object that can be captured in the image is recognized as an important object when it is in a predetermined specific state. Besides the examples described above, various objects in various states can become important objects, provided the specific state is determined in advance.
Note that an important object is an object whose importance can change, even for the same type of object, depending on its position in the image, the state of surrounding objects, its relationship to the map information, and so on. That is, even if an object is a vehicle, whether it is an important object can change with the congestion situation; for example, a vehicle in a non-congested lane is regarded as a non-important object. Likewise, even if an object is an obstacle, an object on the road surface on which the vehicle travels is an important object whereas an object on the sidewalk is a non-important object, so the classification can change with the situation.
Furthermore, even if an object is a traffic sign, whether it is an important object can change in relation to the map information; for example, a sign already recorded in the map information is regarded as a non-important object. A non-important object may be any object other than an important object, and any subject outside the outline of an important object can be a non-important object. Accordingly, various objects recognized as individual entities, such as sidewalks and vehicles, can be non-important objects, as can objects that may extend continuously through the image, such as the sky or a sidewalk. The amount of information reduction need only differ between important and non-important objects; as described above, it may be adjusted per rectangular region, or it may change between the inside and outside of the important object's outline.
The image transmission unit only needs to be able to transmit the image after information reduction processing to an external device. When an image is transmitted and used by a destination device, a large amount of information causes reduced communication speed, increased communication cost, and so on; when transmission is assumed, a smaller amount of information is therefore preferable. In an image transmission system, on the other hand, whether an object is important can change depending on whether it is a target of interest in the image. By adjusting the amount of information reduction according to importance, the amount of information communicated can be reduced while still enabling analysis suited to the intended use of the image, so information can be reduced efficiently.
When the information reduction processing is compression, the compression format is not limited to the JPEG format of the above embodiment; various formats, such as JPEG, PNG, or GIF, can be adopted. The image is not limited to a still image and may be a moving image, in which case various video compression formats can be adopted. Furthermore, a plurality of compression formats may be used together, and a different format may be adopted for each region.
The scene determination unit only needs to be able to determine which of the predetermined scenes the image corresponds to. That is, the processing for obtaining the information used to decide whether the image corresponds to a predefined scene, and the criteria for that decision, are determined in advance, and the scene determination unit need only be able to decide, based at least on the image, whether the image corresponds to the predefined scene. The scenes may include at least one of the three types of the above embodiment, and other scenes may be included, for example congestion not limited to a specific lane, an accident scene, a scene with relatively many blind spots, or a scene with relatively many road changes (such as a curved section).
When it is determined whether a scene is one in which a specific lane on the road is congested and no congestion exists in lanes other than the specific lane, the specific lane may be any lane as seen from the vehicle. The scene is therefore not limited to one in which, with left-hand traffic, the leftmost lane is congested and the other lanes are not, as in the above embodiment. For example, any lane of a road section carrying traffic in the same direction as the vehicle may be the specific lane, as may any lane carrying traffic in the opposite direction. When the specific lane is congested and the vehicles in it are important objects, a configuration may be adopted in which, among those important objects, the information reduction amount per unit area of the vehicle at the tail end (the rearmost vehicle in the direction of travel) is smaller than that of the other vehicles. With this configuration, when congestion occurs in a specific lane, analysis of the vehicle at the tail of the congestion becomes easier.
An obstacle may be any feature that hinders the travel of the vehicle by being stationary on the road; the type of obstacle is therefore not limited. The processing for detecting an obstacle is also not limited to that described above. For example, a feature having specific characteristics may be identified as an obstacle, or obstacles may be detected by a machine learning model trained on images of various obstacles.
When it is determined whether a scene is one in which the presence of a feature included in the image is not shown in the map information, the feature is not limited to a traffic sign; any feature whose existence can be recorded in the map information can be a target. For example, it may be determined whether various facilities included in the image are absent from the map information, or whether the structure of a road (such as a newly built road) differs from the structure shown in the map information.
Furthermore, although the above embodiment assumed a common compression rate for important objects even when several exist, the rate may vary with importance, and it may also vary with the position of each important object. That is, the control unit 20 may adjust the compression rate so that the higher the importance of an important object, the smaller the amount of information reduction per unit area. The importance may be decided by various factors; for example, important objects of different scenes may have mutually different compression rates, and the analysis target within an important object, for example the central portion of a traffic sign, may be compressed at a lower rate than the other portions.
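Where the compression rate varies with importance, one simple illustrative mapping is sketched below in Python; the importance scale, the linear form, and the quality bounds are assumptions made only for illustration.

```python
def jpeg_quality_for(importance, q_min=20, q_max=95):
    """Map an importance score in [0.0, 1.0] to a JPEG quality value so
    that higher importance yields a smaller information reduction per
    unit area (illustrative linear mapping; scale and bounds assumed)."""
    importance = max(0.0, min(1.0, importance))
    return int(round(q_min + (q_max - q_min) * importance))
```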
Furthermore, the technique of the present invention, in which processing is performed so that the information reduction amount of important objects is smaller than that of non-important objects, is also applicable as a method or as a program executed by a computer. Such a system, program, or method may be realized as a standalone device or by using components shared with other units provided in the vehicle, and encompasses various aspects. It can also be changed as appropriate, for example with part implemented in software and part in hardware. The invention is also established as a recording medium for the program that controls the system. Of course, that recording medium may be a magnetic recording medium or a semiconductor memory, and any recording medium developed in the future can be considered in exactly the same way.
10 ... navigation system, 20 ... control unit, 21 ... image transmission program, 21a ... image capturing unit, 21b ... scene determination unit, 21c ... information reduction processing unit, 21d ... image transmission unit, 30 ... recording medium, 30a ... map information, 30b ... image information, 40 ... camera, 41 ... GNSS receiving unit, 42 ... vehicle speed sensor, 43 ... gyro sensor, 44 ... user I/F unit, 45 ... communication unit, 50 ... server

Claims (5)

1.  An image transmission system comprising:
    an image capturing unit that is mounted on a vehicle and captures an image of the road on which the vehicle is traveling;
    an information reduction processing unit that performs information reduction processing on the image such that, for an important object in a specific state among the objects existing in the image, the amount of information reduction per unit area of the important object is smaller than the amount of information reduction per unit area of non-important objects other than the important object; and
    an image transmission unit that transmits the image after information reduction processing to an external device.
2.  The image transmission system according to claim 1, further comprising:
    a scene determination unit that determines, based on the image, which of predetermined scenes the image corresponds to,
    wherein the information reduction processing unit identifies the important object according to the scene.
3.  The image transmission system according to claim 2, wherein
    the scene determination unit determines that the scene is a specific lane congestion scene when a specific lane on the road is congested and no congestion exists in lanes other than the specific lane, and
    the information reduction processing unit identifies, when the scene of the image is the specific lane congestion scene, the images of the vehicles existing in the specific lane of the image as the important objects.
4.  The image transmission system according to claim 2 or 3, wherein
    the scene determination unit determines that the scene is an obstacle existence scene when a stationary region of a predetermined size or larger surrounded by the road exists, and
    the information reduction processing unit identifies, when the scene of the image is the obstacle existence scene, the image of the region as the important object.
5.  The image transmission system according to any one of claims 2 to 4, wherein
    the scene determination unit determines that the scene is a map non-recording scene when a feature included in the image does not exist in the map information, and
    the information reduction processing unit identifies, when the scene of the image is the map non-recording scene, the image of the feature as the important object.
PCT/JP2020/043733 2020-02-13 2020-11-25 Image transmission system WO2021161614A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-022626 2020-02-13
JP2020022626A JP7392506B2 (en) 2020-02-13 2020-02-13 Image transmission system, image processing system and image transmission program

Publications (1)

Publication Number Publication Date
WO2021161614A1 (en)

Family

ID=77292856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/043733 WO2021161614A1 (en) 2020-02-13 2020-11-25 Image transmission system

Country Status (2)

Country Link
JP (1) JP7392506B2 (en)
WO (1) WO2021161614A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002099903A (en) * 2000-09-22 2002-04-05 Nec Corp System and method for detecting dropping object and storage medium
JP2009188792A (en) * 2008-02-07 2009-08-20 Sony Corp Image transmitter, image receiver, image transmitting/receiving system, image transmitting program, and image receiving program
JP2016085548A (en) * 2014-10-24 2016-05-19 株式会社ジオクリエイツ Simulation device, simulation method, and simulation program
WO2017022475A1 (en) * 2015-07-31 2017-02-09 日立オートモティブシステムズ株式会社 Vehicle periphery information management device
JP2019087969A (en) * 2017-11-10 2019-06-06 株式会社トヨタマップマスター Travel field investigation support device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030858A1 (en) * 2021-08-31 2023-03-09 Volkswagen Aktiengesellschaft Method and assistance device for supporting vehicle functions in a parking lot, and motor vehicle

Also Published As

Publication number Publication date
JP7392506B2 (en) 2023-12-06
JP2021128532A (en) 2021-09-02

Similar Documents

Publication Publication Date Title
US20220107651A1 (en) Predicting three-dimensional features for autonomous driving
US11748620B2 (en) Generating ground truth for machine learning from time series elements
WO2021259344A1 (en) Vehicle detection method and device, vehicle, and storage medium
CN113692587A (en) Estimating object properties using visual images
JP4321821B2 (en) Image recognition apparatus and image recognition method
US7957559B2 (en) Apparatus and system for recognizing environment surrounding vehicle
US9740942B2 (en) Moving object location/attitude angle estimation device and moving object location/attitude angle estimation method
JP4871909B2 (en) Object recognition apparatus and object recognition method
JP6626410B2 (en) Vehicle position specifying device and vehicle position specifying method
WO2020154990A1 (en) Target object motion state detection method and device, and storage medium
JP4577655B2 (en) Feature recognition device
JP2021099793A (en) Intelligent traffic control system and control method for the same
JP3849505B2 (en) Obstacle monitoring device and program
WO2021161614A1 (en) Image transmission system
CN115447600B (en) Vehicle anti-congestion method based on deep learning, controller and storage medium
KR20210136518A (en) Device for determining lane type and method thereof
JPH07302325A (en) On-vehicle image recognizing device
JP4492592B2 (en) Vehicle detection device, navigation device, vehicle detection program, and vehicle detection method
JPH0979847A (en) On board distance measuring device
JP2000003438A (en) Sign recognizing device
JP2011214961A (en) Reference pattern information generating device, method, program and general vehicle position specifying device
JP7449497B2 (en) Obstacle information acquisition system
JP6582891B2 (en) Empty vehicle frame identification system, method and program
JP3841323B2 (en) Vehicle rear side monitoring method and vehicle rear side monitoring device
JP5434745B2 (en) Reference pattern information generating device, method, program, and general vehicle position specifying device

Legal Events

121 — Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20919056; Country of ref document: EP; Kind code of ref document: A1)
NENP — Non-entry into the national phase (Ref country code: DE)
122 — Ep: pct application non-entry in european phase (Ref document number: 20919056; Country of ref document: EP; Kind code of ref document: A1)
Kind code of ref document: A1