WO2022199311A1 - Interaction method and interaction apparatus - Google Patents

Interaction method and interaction apparatus

Info

Publication number
WO2022199311A1
WO2022199311A1 PCT/CN2022/077520 CN2022077520W
Authority
WO
WIPO (PCT)
Prior art keywords
road
target
road condition
passable
image
Prior art date
Application number
PCT/CN2022/077520
Other languages
English (en)
French (fr)
Inventor
方君
Original Assignee
北京嘀嘀无限科技发展有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京嘀嘀无限科技发展有限公司 filed Critical 北京嘀嘀无限科技发展有限公司
Priority to BR112023019025A priority Critical patent/BR112023019025A2/pt
Publication of WO2022199311A1 publication Critical patent/WO2022199311A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/26 Navigation specially adapted for navigation in a road network
    • G01C21/28 Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3667 Display of a road map
    • G01C21/3691 Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions

Definitions

  • the present invention relates to the field of computer technology, and in particular, to an interaction method and an interaction device.
  • the purpose of the embodiments of the present invention is to provide an interaction method and an interaction device, which determine the road condition information of each road section according to the position of each target object, in the road condition collection sequence collected on that road section, relative to the lane line of the corresponding lane, and which reflect the road condition information of each road section in a timely and accurate manner by displaying road condition collection images, so that the user can avoid congested road sections in time.
  • an interaction method comprising:
  • the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section, and the position of the target object is used to represent the position of the target object relative to the lane line of the corresponding lane;
  • a target image corresponding to a target road section is determined and sent, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition condition.
  • an interaction method comprising:
  • the target image is determined based on pre-uploaded route navigation information
  • the road condition display control is used to display the target image
  • the target road segment is a road segment in which the road condition information in the route navigation information satisfies a predetermined road condition condition
  • the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane line of the corresponding lane.
  • an interaction apparatus comprising:
  • a navigation information acquisition unit for acquiring route navigation information
  • a road condition information determination unit configured to determine the road condition information of each road section in the route navigation information, the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section, and the position of the target object is used to represent the The position of the target object relative to the lane line of the corresponding lane;
  • An image sending unit configured to determine and send a target image corresponding to a target road section, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition condition.
  • an interaction apparatus comprising:
  • a control display unit used for rendering and displaying the road condition display control on the navigation page in response to receiving the target image corresponding to the target road segment;
  • the target image is determined based on pre-uploaded route navigation information
  • the road condition display control is used to display the target image
  • the target road segment is a road segment in which the road condition information in the route navigation information satisfies a predetermined road condition condition
  • the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road section in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane line of the corresponding lane.
  • a computer-readable storage medium on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the method described in any one of the first aspect or the second aspect.
  • an electronic device including a memory and a processor, wherein the memory is used to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method of any one of the first aspect or the second aspect.
  • a computer program product comprising a computer program/instruction, wherein the computer program/instruction is executed by a processor to implement the method of any one of the first aspect or the second aspect.
  • the server in the embodiment of the present invention determines the road condition information of each road segment according to the position of the target object relative to the lane line of the corresponding lane in the road condition collection sequence of each road segment in the previously acquired route navigation information, and, after determining the target image corresponding to each road segment whose road condition information meets the predetermined road condition condition, sends the target image to the terminal.
  • the terminal may render and display a road condition display control for displaying the target image on the navigation page.
  • the embodiment of the present invention can accurately determine the position of each target object through image recognition, determine the road condition information of each road section according to the position of each target object, and at the same time display the road condition of a specific road section through a real-scene image, which improves the accuracy and timeliness of determining the road condition, so that users can avoid congested road sections in time.
  • FIG. 1 is a schematic diagram of a hardware system architecture according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of the interaction method according to the first embodiment of the present invention.
  • FIG. 3 is a flowchart of determining road condition information of each road segment in an optional implementation manner of the first embodiment of the present invention
  • FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of determining the congestion state of the first road section in an optional implementation manner of the first embodiment of the present invention
  • FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of the interaction method on the server side according to the first embodiment of the present invention.
  • FIG. 9 is a flowchart of an interaction method according to a second embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention.
  • FIG. 11 is another interface schematic diagram of an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of an interaction system according to a third embodiment of the present invention.
  • FIG. 15 is a schematic diagram of an electronic device according to a fourth embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a hardware system architecture according to an embodiment of the present invention.
  • the hardware system architecture shown in FIG. 1 may include at least one image acquisition device 11 , at least one platform-side server (hereinafter referred to as server) 12 and at least one user terminal 13 .
  • In the following, one user terminal 13 will be described as an example.
  • the image acquisition device 11 is an image acquisition device with a positioning function installed on the driver's side, which can record the road condition acquisition sequence of the road segment traveled during the driving of the vehicle and, after the user's authorization, send the recorded road condition acquisition sequence and the position at which the sequence was acquired to the server 12.
  • the image acquisition device 11 may specifically be an image acquisition device fixedly installed inside the vehicle (that is, the target device, not shown in the figure), such as a driving recorder, or an additionally provided image collection device whose position relative to the corresponding vehicle is kept fixed, such as a camera, or a mobile terminal with a camera function carried when driving or riding in the vehicle, including a mobile phone, a tablet computer, a notebook computer, etc.
  • the image capturing device 11 can be connected to the server 12 and the user terminal 13 for communication through a network.
  • the image capturing apparatus 11 may also be disposed on other movable or non-movable devices, such as a movable robot and the like.
  • the server 12 can determine the road condition information of each road segment in the route navigation information according to the position of the target object relative to the lane line of the corresponding lane in the road condition collection images of the road condition collection sequence uploaded by the image collection device 11, then determine the target image and/or the target image sequence including the target image for each road segment whose road condition information meets the predetermined road condition condition, and send the target image to the user terminal 13.
  • the user terminal 13 may render and display a road condition display control for displaying the target image on the navigation page.
  • the user terminal 13 may also receive the target image sequence sent by the server 12, and in response to the road condition display control being triggered, display a video playback page, and play the target image sequence through the video playback page.
  • FIG. 2 is a flowchart of the interaction method according to the first embodiment of the present invention. As shown in FIG. 2, the method of this embodiment includes the following steps:
  • Step S201 obtaining route navigation information.
  • a user can log in to a predetermined application program with a navigation function through a user terminal (hereinafter referred to as a terminal), and set a departure point and a destination.
  • the terminal can perform route planning according to the departure point and destination set by the user to obtain at least one route planning result, and determine the route planning result selected by the user as route navigation information.
  • the terminal can obtain the path planning result through various existing methods, for example, sending the set departure point and destination to a predetermined path planning interface and obtaining the path planning result from that interface, which is not specifically limited in this embodiment.
  • the terminal can also send the route navigation information to the server, so that the server can store the route navigation information in the database.
  • the server can obtain the route navigation information pre-uploaded by the terminal from the database; if the route planning result selected by the user as route navigation information is not stored in the database, the server can receive the route navigation information sent by the terminal. After acquiring the route navigation information, the server may extract the road segment names of each road segment in the route navigation information.
  • Step S202 determining the road condition information of each road segment in the route navigation information.
  • the road condition information of each road segment is determined by the server according to the position of the target object in the road condition collection sequence.
  • the road condition collection sequence is the image sequence of the road sections that each vehicle has recorded during the driving process.
  • the image acquisition device configured for each vehicle can upload at least one road condition acquisition sequence, and also upload and record the position of the vehicle when the road condition acquisition sequence is collected.
  • the position of the vehicle may be determined by a positioning system (eg, global positioning system, Beidou satellite navigation system, etc.) configured by the corresponding image acquisition device, and may specifically be the coordinates of the vehicle in the world coordinate system.
  • FIG. 3 is a flowchart of determining road condition information of each road segment in an optional implementation manner of the first embodiment of the present invention.
  • the server may determine the road condition information of each road segment through the following steps:
  • Step S301 determining the road section to be determined.
  • the server may determine each road segment within a predetermined geographic range (for example, a predetermined city, a predetermined district/county, etc.) as the road segment to be determined, and may also determine each road segment in the route navigation information as the road segment to be determined. , which is not specifically limited in this embodiment.
  • Step S302 image recognition is performed on each road condition collected image in the image sequence to be identified, and the position of the target object in each road condition collected image is determined.
  • the road condition collection sequence is collected by an image collection device that moves with the vehicle, so in this embodiment, the target object is a vehicle.
  • the target object can also be other objects, such as pedestrians, obstacles set in the road, and the like.
  • the server can perform image recognition on the road condition collection images in each road condition collection sequence through various existing methods, for example, the method described in "Research on Vehicle Distance Detection Algorithm Based on Image Recognition" (Yin Yijie, master's thesis, 2012), to determine the distance of each target object relative to the image acquisition device, and determine the coordinates of each target object in the world coordinate system corresponding to each road condition collection image, according to the position of the vehicle when recording each road condition collection image, as the position of the target object.
  • the position of the target object is used to determine the road condition information of the road section, so the position of the target object can be used to represent the position of the target object relative to the lane line of the corresponding lane (that is, the lane where the target object is located) in the road section to be determined.
  • the lane line here may be the left lane line of the corresponding lane or the right lane line of the corresponding lane, which is not specifically limited in this embodiment.
  • the server can also perform image recognition through various existing methods, such as the method described in "Design and Implementation of Auxiliary Positioning System Based on Image Recognition" (Wu Jiashun, master's thesis, 2018), or a trained SSD (Single Shot MultiBox Detector) model, to determine the position of each lane line, and determine, according to the coordinates of each target object in the world coordinate system and the position of each lane line, the position of each target object in each road condition collection image relative to the lane line of the corresponding lane in the road section to be determined.
  • FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention.
  • the vehicle V1 is a target object in the road condition collection image P1
  • the lane line L1 and the lane line L2 are the left and right lane lines of the corresponding lane of the vehicle V1, respectively.
  • after the server determines the positions of the vehicle V1, the lane line L1 and the lane line L2 by performing image recognition on the road condition collection image P1, the server can take the position of the vehicle V1 relative to the lane line L1, that is, the shortest distance d1 between the vehicle V1 and the lane line L1, and the position of the vehicle V1 relative to the lane line L2, that is, the shortest distance d2 between the vehicle V1 and the lane line L2, as the position of the vehicle V1.
  • Step S303 determining the passable state of the lane corresponding to the target object according to the position of the target object.
  • the congestion state of the road segment to be determined is determined by whether each lane in the road segment to be determined is passable. Therefore, in this step, the passable state of the lane corresponding to each target object can be determined according to the position of that target object.
  • the server may determine the target distance corresponding to the target object according to the position of the target object.
  • the target distance is used to characterize the maximum distance between the target object and the lane lines of the corresponding lane. Taking the position of the target object shown in FIG. 4 as an example, the server can determine the larger of the shortest distance d1 between the vehicle V1 and the lane line L1 and the shortest distance d2 between the vehicle V1 and the lane line L2 (that is, the shortest distance d2) as the target distance corresponding to the vehicle V1.
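The target-distance rule above can be sketched as follows (a minimal illustration; the distance values in the example are hypothetical, not from the specification):

```python
def target_distance(d_left: float, d_right: float) -> float:
    """Return the larger of the two shortest distances from the target
    object to the left and right lane lines of its lane (step S303)."""
    return max(d_left, d_right)

# For the vehicle V1 in FIG. 4: if d1 = 0.4 m and d2 = 2.1 m,
# the target distance is d2 = 2.1 m.
```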
  • the server can also obtain the passable distance corresponding to the target device.
  • the passable distance corresponding to the target device is equivalent to the width of the vehicle (that is, the distance between the planes that are parallel to the longitudinal symmetry plane of the vehicle and abut against the fixed protrusions on both sides of the vehicle).
  • vehicles of the same type usually have almost the same width, so the server can determine the passable distance corresponding to the vehicle according to the type of the vehicle.
  • the width of an ordinary car is usually between 1.4 and 1.8 meters, so the server can use 1.8 meters as the passable distance of an ordinary car.
  • the server may determine whether each lane is passable according to the target distance corresponding to the target object and the passable distance of the target device. For any lane, if the target distance corresponding to the target object is greater than (or greater than or equal to) the passable distance of the target device, the server can determine that the passable state of the lane is passable; if the target distance corresponding to the target object is less than the passable distance of the target device, the server can determine that the passable state of the lane is impassable.
  • after the server determines the target distance corresponding to the vehicle V1 (that is, the shortest distance d2) and the passable distance of the target device (for example, 1.8 meters), if the shortest distance d2 ≥ 1.8 meters, the server can determine that the passable state of the lane corresponding to the vehicle V1 is passable; if the shortest distance d2 < 1.8 meters, the server can determine that the passable state of the lane corresponding to the vehicle V1 is impassable.
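The lane-passability comparison above can be sketched as follows. This is a minimal sketch: the 1.8 m threshold is the example value given in the text for an ordinary car, and the per-type lookup table is an illustrative assumption.

```python
# Illustrative passable distances by vehicle type; the text only gives
# 1.8 m for an ordinary car.
PASSABLE_DISTANCE_BY_TYPE = {
    "ordinary_car": 1.8,
}

def lane_passable(target_dist: float, vehicle_type: str = "ordinary_car") -> bool:
    """A lane is passable when the target distance is at least the
    passable distance of the target device (step S303)."""
    return target_dist >= PASSABLE_DISTANCE_BY_TYPE[vehicle_type]
```

For the FIG. 4 example, a target distance d2 of 2.1 m yields passable, while 1.2 m yields impassable.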
  • Step S304 Determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
  • the congestion state of the first road segment is used to represent the congestion state of the road segment to be determined when the corresponding vehicle is traveling on the road segment to be determined.
  • FIG. 5 is a flowchart of determining the congestion state of the first road segment in an optional implementation manner of the first embodiment of the present invention. As shown in FIG. 5 , in an optional implementation manner of this embodiment, step S304 may include the following steps:
  • Step S501 Determine the corresponding congestion state of the second road section according to the passable state of each lane corresponding to the collected images of each road condition.
  • the server may determine, according to the passable state of each lane, whether the road section to be determined was congested when the image acquisition device of the target device recorded the image to be recognized.
  • when the passable state of each lane is impassable, the server may determine that the congestion state of the second road section corresponding to the image to be recognized is congested; when the passable state of each lane is passable, the server may determine that the congestion state of the second road section corresponding to the image to be recognized is unblocked; when the passable state of at least one lane is impassable and the passable state of at least one lane is passable, the server may determine that the congestion state of the second road section corresponding to the image to be recognized is slow.
  • FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention.
  • the road section to be determined includes a lane 61 , a lane 62 and a lane 63 .
  • when the server determines, by image recognition on the image to be recognized P2, that the passable state of the lane 61 is impassable and the passable states of the lane 62 and the lane 63 are passable, it can be determined that the congestion state of the second road section corresponding to the image P2 to be recognized is slow.
  • Step S502 determining the congestion state of the first road segment according to the congestion state of each second road segment.
  • the server may determine the first road section congestion state corresponding to the road condition collection sequence according to the second road section congestion state corresponding to each road condition collection image in the same road condition collection sequence.
  • the frequency at which the image acquisition device records road condition collection images is usually high. If the number of consecutive images (sorted by recording time) with the same congestion state of the second road section in the road condition collection sequence is less than a certain number, for example, the road condition collection sequence contains 100 road condition collection images but fewer than 30 consecutive images whose congestion state of the second road section is congested, then during the movement of the target device the road condition of the road section to be determined may not actually have reached the level of congestion.
  • therefore, only when a sufficient number of consecutive road condition collection images share the same congestion state of the second road section may the server determine the congestion state of the first road section corresponding to the road condition collection sequence as that state. Specifically, in response to the congestion state of the second road section corresponding to multiple consecutive road condition collection images being congested, the server may determine that the congestion state of the first road section corresponding to the road condition collection sequence is congested; in response to the congestion state of the second road section corresponding to multiple consecutive road condition collection images being slow, determine that the congestion state of the first road section corresponding to the road condition collection sequence is slow; and in response to the congestion state of the second road section corresponding to multiple consecutive road condition collection images being unblocked, determine that the congestion state of the first road section corresponding to the road condition collection sequence is unblocked.
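The consecutive-run rule of step S502 can be sketched as follows. This is a sketch under stated assumptions: the run threshold of 30 is taken from the "fewer than 30 of 100" example in the text, and the precedence of the more severe state when several states qualify is an assumption, since the text does not specify a tie-breaking rule.

```python
def first_section_state(states: list[str], min_run: int = 30) -> str:
    """Determine the first road section congestion state from the
    per-image second road section states.  A state counts only if it
    holds for at least `min_run` consecutive images; when several
    states qualify, the more severe one is assumed to win."""
    qualified = set()
    run_state, run_len = None, 0
    for s in states:
        if s == run_state:
            run_len += 1
        else:
            run_state, run_len = s, 1
        if run_len >= min_run:
            qualified.add(run_state)
    # Assumed severity order; default to unblocked when no run qualifies.
    for state in ("congested", "slow", "unblocked"):
        if state in qualified:
            return state
    return "unblocked"
```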
  • Step S305 determining the road condition information of the road segment to be determined according to the congestion state of the first road segment.
  • a single road condition collection sequence is recorded by the image acquisition device of a single target device, so it reflects the road condition in a relatively one-sided way.
  • for example, the actual road condition of the road section to be determined is congestion, but the lane in which the target vehicle (that is, the target device) is driving is the emergency lane, so the congestion state of the first road section determined by image recognition may be slow, which does not match the actual road condition of the road section to be determined and has low accuracy.
  • therefore, in this embodiment, the road condition information of the road segment to be determined is determined from the congestion states of the first road section corresponding to the road condition collection sequences recorded by multiple vehicles driving on the road segment to be determined in the same time period, to improve the accuracy of determining road condition information.
  • the server may acquire the congestion state of the first road section corresponding to each image sequence to be identified in the image sequence set.
  • the to-be-identified image sequence in the image sequence set is a road condition collection sequence recorded by a plurality of vehicles in the same time period on the road segment to be determined.
  • the road condition information changes from time to time, so the period length of the predetermined period can be determined according to the change rule of the road condition information in the historical data.
  • if the change rule of the road condition information obtained from the historical data is roughly an hourly change (for example, from congestion to slow driving), the period length of the predetermined period may be 1 hour.
  • for example, if the road segment name of the road segment to be determined is "xx street" and the predetermined period is 10:00-11:00 on March 5, 2021, the server can obtain the road condition collection sequences recorded by multiple vehicles driving on xx street during 10:00-11:00 on March 5, 2021 as the multiple road condition collection sequences corresponding to the road section "xx street".
  • the server may determine the number of image sequences to be recognized in the image sequence set whose congestion state of the first road section is unblocked (that is, the first number), the number whose congestion state of the first road section is slow (that is, the second number), and the number whose congestion state of the first road section is congested (that is, the third number), and determine the congestion state of the first road section corresponding to the largest of these numbers as the road condition information of the road section to be determined.
  • that is, when the first number is greater than the second number and the third number, the road condition information of the road segment to be determined is determined to be unblocked; when the second number is greater than the first number and the third number, it is determined to be slow; and when the third number is greater than the first number and the second number, it is determined to be congestion.
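The majority vote over the first/second/third numbers described above can be sketched as follows (a minimal sketch; the text does not specify how ties are broken, and `Counter.most_common` simply returns the first of equally common states in insertion order):

```python
from collections import Counter

def road_condition(first_section_states: list[str]) -> str:
    """Majority vote over the first road section congestion states of
    all image sequences recorded on the road segment to be determined
    in the same period (step S305).  The counts of "unblocked", "slow"
    and "congested" correspond to the first, second and third numbers
    in the text."""
    counts = Counter(first_section_states)
    return counts.most_common(1)[0][0]
```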
  • The process of determining the road condition information of the road segment in steps S301-S305 may occur in the cycle previous to the cycle in which the route navigation information is received; that is, the server determines the road condition information of each road segment in the route navigation information of the current cycle according to the road condition information determined in the previous cycle.
  • For example, if the cycle length of the predetermined cycle is 1 hour and the server receives the route navigation information sent by the terminal at 9:30, the current cycle is 9:00-10:00, and the road condition information of each road segment in the route navigation information was determined within 8:00-9:00.
  • In some embodiments, the server may also determine the number of target objects in each road condition collection image by performing image recognition on it. Then, for the road condition collection images of the same road condition collection sequence, the server determines whether the number of target objects in multiple consecutive road condition collection images meets a predetermined number condition. If it does, the server can determine the congestion state of the road section corresponding to that road condition collection sequence as the congestion state corresponding to this number. For the road section to be determined, the server may determine the road section congestion state corresponding to the largest number of road condition collection sequences as the road condition information of the road section to be determined.
  • The correspondence between the number of target objects and the congestion state can be preset. For example, if the number of target objects is 0 to 3, the congestion state can be unblocked; if it is 4 to 10, the congestion state can be slow travel; if it is 11 or more, the congestion state can be congestion.
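The example thresholds above can be written as a small lookup. This is an illustrative sketch; the function name and return labels are assumptions, and the thresholds are taken directly from the example in the text.

```python
def congestion_state_from_count(num_targets):
    # Thresholds follow the example above:
    # 0-3 target objects -> unblocked, 4-10 -> slow travel, 11+ -> congestion.
    if num_targets <= 3:
        return "unblocked"
    if num_targets <= 10:
        return "slow"
    return "congested"
```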
  • the above two methods can also be combined to determine the road condition information of each road section to be determined, so as to further improve the accuracy of determining the road condition information.
  • the process of determining the road condition information may also occur at the terminal side, that is, steps S301 to S305 may also be performed by the terminal.
  • Step S203 Determine and send a target image corresponding to the target road segment.
  • the server may determine the road segment whose road condition information meets the predetermined road condition condition as the target road segment, and determine the target image from the road condition acquisition images of multiple road condition acquisition sequences.
  • The predetermined road condition condition is used to determine whether the road condition information of each road segment is suitable for passing, so it can be set as the road condition information being congestion, or as the road condition information being slow travel.
  • The target image may be determined according to at least one factor among the clarity of each road condition collection image and the number of target objects. For example, the server may determine the road condition collection image with the highest clarity and the largest number of target objects as the target image.
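One way to read the selection rule above is as ranking candidate images by clarity first and then by target-object count. The sketch below assumes each candidate already carries hypothetical "clarity" and "object_count" fields computed upstream; these names are illustrative, not from the original.

```python
def pick_target_image(candidates):
    # candidates: list of dicts with assumed keys "clarity" and
    # "object_count" computed for each road condition collection image.
    # Rank by clarity first, then by number of target objects.
    return max(candidates, key=lambda c: (c["clarity"], c["object_count"]))
```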
  • In some embodiments, the server can also remove sensitive information in the target image. This can be done through various existing methods, for example, identifying sensitive information such as faces and license plate numbers in the target image through image recognition and applying mosaic processing to it, so as to obtain the target image for subsequent sending to the terminal.
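A common way to realize the mosaic processing mentioned above is block averaging over the detected region. The sketch below is a minimal illustration: it assumes the face or license-plate bounding box has already been produced by an upstream detector (not shown), and it operates on a grayscale image given as a 2-D list of pixel values.

```python
def mosaic_region(image, box, block=8):
    # image: 2-D list of grayscale pixel values.
    # box: (top, left, bottom, right) from an assumed upstream detector.
    top, left, bottom, right = box
    out = [row[:] for row in image]          # leave the input untouched
    for y in range(top, bottom, block):
        for x in range(left, right, block):
            y2, x2 = min(y + block, bottom), min(x + block, right)
            tile = [out[i][j] for i in range(y, y2) for j in range(x, x2)]
            avg = sum(tile) // len(tile)     # one flat value per block
            for i in range(y, y2):
                for j in range(x, x2):
                    out[i][j] = avg
    return out
```

Replacing each block with its average makes faces and plate numbers unreadable while keeping the overall scene recognizable.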
  • the server may send the target image to the terminal according to the terminal identification.
  • Step S204 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
  • the terminal may render and display a road condition display control for displaying the target image on the navigation page.
  • the terminal may render and display the road condition display control at a predetermined position on the navigation page.
  • The predetermined position may be any position on the navigation page, for example, the display position of the target road section on the navigation page, and/or the lower part, the left side, or the right side of the navigation page; this embodiment does not specifically limit this.
  • The terminal can display the target road section in different ways according to the received road section information, for example, by distinguishing it with color, which makes it easier for the user to view the traffic information of different road sections.
  • the target image can also be determined on the terminal side.
  • FIG. 7 is a flowchart of the interaction method on the server side according to the first embodiment of the present invention. As shown in FIG. 7 , the method of this embodiment may include the following steps on the server side:
  • Step S201 obtaining route navigation information.
  • Step S202 determining the road condition information of each road segment in the route navigation information.
  • Step S203 Determine and send a target image corresponding to the target road segment.
  • FIG. 8 is a flowchart of the interaction method on the terminal side according to the first embodiment of the present invention. As shown in FIG. 8 , the method of this embodiment may include the following steps:
  • Step S204 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
  • After acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding road segment, in the previously acquired road condition collection sequence of each road segment in the route navigation information, and, after determining the target image corresponding to the road segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. After receiving the target image, the terminal may render and display a road condition display control for displaying the target image on the navigation page.
  • In this way, the position of each target object can be accurately determined by means of image recognition, the road condition information of each road section can be determined according to those positions, and the road conditions of a specific road section can be displayed through a real-scene image, which improves the accuracy and timeliness of determining road conditions, so that users can avoid congested road sections in time.
  • FIG. 9 is a flowchart of an interaction method according to a second embodiment of the present invention. As shown in Figure 9, the method of this embodiment includes the following steps:
  • Step S901 obtaining route navigation information.
  • step S901 is similar to the implementation manner of step S201, and details are not described herein again.
  • Step S902 determining the road condition information of each road segment in the route navigation information.
  • step S902 is similar to the implementation manner of step S202, and details are not described herein again.
  • Step S903 Determine and send a target image corresponding to the target road segment.
  • step S903 is similar to the implementation manner of step S203, and details are not described herein again.
  • Step S904 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
  • step S904 is similar to the implementation manner of step S204, and details are not described herein again.
  • the process of determining the road condition information may also occur at the terminal side, that is, steps S301 to S305 may also be performed by the terminal.
  • Step S905 determining and sending the link information of the target link.
  • the server may also determine and send the road segment information of the target road segment.
  • the road segment information may include road condition information of the target road segment, that is, congestion, slow travel, smooth flow, etc., and may also include the average driving speed and congestion length of the target road segment.
  • The average driving speed and the congestion length can be determined according to the positions of the target devices on which the image capture devices for uploading the road condition collection images of the target road section are set.
  • For example, if the target road section is xx street and vehicles V1 to V100 are the target devices equipped with the image acquisition devices for uploading the road condition collection sequences of xx street, the server can determine the average moving speed of each of vehicles V1 to V100 according to the road condition collection sequences recorded by their image acquisition devices, and then determine the average driving speed of the target road section according to those average moving speeds. The server can also determine the congestion length of the target road section according to the positions of vehicles V1 to V100 at the same time. For example, if vehicles V1 to V100 are distributed between 800 meters and 100 meters east of xx street, the server can determine that the congestion length on xx street is 700 meters.
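Under the assumption that each reporting vehicle's distance from a common reference point (for example, the east end of the street) is known, the two quantities in this example reduce to simple aggregates; the function names below are illustrative.

```python
def section_average_speed(vehicle_speeds):
    # Mean of the per-vehicle average moving speeds, as described above.
    return sum(vehicle_speeds) / len(vehicle_speeds)

def congestion_length(positions):
    # positions: per-vehicle distances (meters) from a common reference
    # point at the same moment; the congested span is the extent the
    # reporting vehicles cover.
    return max(positions) - min(positions)
```

With vehicles spread between 100 and 800 meters from the reference point, this yields the 700-meter congestion length of the example.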
  • step S903 and step S905 may be performed simultaneously, or may be performed sequentially, which is not limited in this embodiment.
  • Step S906 in response to receiving the road segment information of the target road segment, display the road segment information through the road condition display control.
  • the terminal may also display the road section information of the target road section through the road condition display control.
  • the terminal may display only part of the road section information, or may display all the road section information, which is not specifically limited in this embodiment.
  • FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention.
  • the interface shown in Figure 10 is a terminal interface.
  • the page P1 is a navigation page
  • The terminal can display the route navigation information 101 on the page P1, and display the target road section in the route navigation information 101, that is, the road segment 102, in a different color.
  • The terminal can also render and display a road condition display control, that is, the control 103, at the display position of the road segment 102, and render and display another road condition display control, that is, the control 104, at the bottom of the navigation page.
  • The controls may display the road segment name (i.e., xx street), the road condition information (i.e., congestion), and the congestion length (i.e., congested for xxx meters).
  • The terminal may display only the control 103, or only the control 104, or may display the control 103 and the control 104 at the same time; this embodiment does not specifically limit this.
  • the road section information of the target road section can also be determined on the terminal side.
  • Step S907 Determine and send a target image sequence corresponding to the target road segment.
  • The server may determine the road condition collection sequence including the target image as the target image sequence, or may cut out a sequence segment of a predetermined length (for example, 10 seconds) from the road condition collection sequence including the target image as the target image sequence; this embodiment does not specifically limit this.
  • The server can also remove, through various existing methods, the sensitive information of each road condition collection image in the target image sequence, in the order of the images in the sequence, and then obtain, from the images after the sensitive information is removed, the target image sequence for subsequent sending to the terminal.
  • the server may send the target image sequence to the terminal according to the terminal identification.
  • step S903 and step S907 may be performed simultaneously, or may be performed sequentially, which is not specifically limited in this embodiment.
  • Step S908 receiving a target image sequence corresponding to the target road segment.
  • the terminal may also receive the target image sequence including the target image sent by the server.
  • the target image sequence can also be determined on the terminal side.
  • Step S909 in response to the road condition display control being triggered, the video playback page is displayed.
  • Characterizing the road condition information of the target road segment by a single image may be limiting for the user, so in this embodiment the road condition information of the target road segment is represented more clearly by a target image sequence.
  • the terminal can display a video playback page for playing the target image sequence.
  • Step S910 play the target image sequence through the video playing page.
  • The terminal can automatically play the target image sequence through the video playback page, so as to avoid unnecessarily distracting the user, who would otherwise need to perform multiple operations while driving the vehicle.
  • FIG. 11 is another interface schematic diagram of an embodiment of the present invention.
  • the interface shown in FIG. 10 is taken as an example for description.
  • The terminal can display the video playback page shown in FIG. 11, namely page P2, and play the target image sequence 111 including the target image through the video playback page.
  • FIG. 12 is a flow chart on the server side of the interaction method according to the second embodiment of the present invention. As shown in FIG. 12 , the method of this embodiment may include the following steps on the server side:
  • Step S901 obtaining route navigation information.
  • Step S902 determining the road condition information of each road segment in the route navigation information.
  • Step S903 Determine and send a target image corresponding to the target road segment.
  • Step S905 sending the link information of the target link.
  • Step S907 Determine and send a target image sequence corresponding to the target road segment.
  • FIG. 13 is a flow chart on the terminal side of the interaction method according to the second embodiment of the present invention. As shown in FIG. 13 , the method of this embodiment may include the following steps:
  • Step S904 in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
  • Step S906 in response to receiving the road segment information of the target road segment, display the road segment information through the road condition display control.
  • Step S908 receiving a target image sequence corresponding to the target road segment.
  • Step S909 in response to the road condition display control being triggered, the video playback page is displayed.
  • Step S910 play the target image sequence through the video playing page.
  • After acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding road segment, in the previously acquired road condition collection sequence of each road segment in the route navigation information, and, after determining the target image corresponding to the road segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal.
  • the server may also send the target image sequence including the target image and the road section information of the target road section to the terminal.
  • After receiving the target image, the terminal may render and display a road condition display control for displaying the target image on the navigation page.
  • The terminal may display the road section information through the road condition display control, and, after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered and play the target image sequence through the video playback page.
  • In this way, the position of each target object can be accurately determined by means of image recognition, the road condition information of each road section can be determined according to those positions, and the road conditions of a specific road section can be displayed through real-scene images and real-scene videos, which improves the accuracy and timeliness of determining road conditions, so that users can avoid congested road sections in time.
  • FIG. 14 is a schematic diagram of an interaction system according to a third embodiment of the present invention. As shown in FIG. 14 , the system of this embodiment includes an interaction device 14A and an interaction device 14B.
  • the interaction device 14A is applicable to the server side, and includes a navigation information acquisition unit 1401 , a road condition information determination unit 1402 and an image transmission unit 1403 .
  • the navigation information acquisition unit 1401 is used for acquiring route navigation information.
  • the road condition information determining unit 1402 is configured to determine the road condition information of each road segment in the route navigation information, where the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road segment.
  • the image sending unit 1403 is configured to determine and send a target image corresponding to a target road section, where the target road section is a road section whose road condition information in the route navigation information satisfies a predetermined road condition condition.
  • the road condition information is determined by the road segment determination unit 1404 , the position determination unit 1405 , the traffic state determination unit 1406 , the congestion state determination unit 1407 and the road condition information determination unit 1408 .
  • the road segment determination unit 1404 is used to determine the road segment to be determined.
  • the position determination unit 1405 is configured to perform image recognition on each of the collected images of road conditions in the image sequence to be identified, and determine the position of the target object in each of the collected images of road conditions, and the image sequence to be identified corresponds to the road section to be determined.
  • the passing state determining unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the position of the target object.
  • the congestion state determination unit 1407 is configured to determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
  • the road condition information determining unit 1408 is configured to determine the road condition information of the to-be-determined road segment according to the congestion state of the first road segment.
  • the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
  • The second state determination subunit is configured to determine the congestion state of the corresponding second road section according to the passable states of the lanes corresponding to each of the road condition collection images.
  • the first state determination subunit is configured to determine the congestion state of the first road segment according to the congestion state of each of the second road segments.
  • the traffic state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit, and a traffic state determination subunit.
  • the first distance determination subunit is used to determine the target distance corresponding to the target object according to the position of the target object, and the target distance is used to represent the maximum distance between the target object and the lane line of the corresponding lane.
  • the second distance determination subunit is used for determining the traversable distance corresponding to the target device, where the target device is the device corresponding to the image sequence to be recognized.
  • the passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
  • the traffic state determination subunit includes a first state determination module and a second state determination module.
  • the first state determination module is configured to determine that the passable state of the lane is passable in response to the target distance being not less than the passable distance.
  • the second state determination module is configured to determine that the passable state of the lane is impassable in response to the target distance being less than the passable distance.
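The two modules above reduce to a single comparison between the target distance and the passable distance; the function and parameter names in this sketch are illustrative, not from the original.

```python
def lane_passable_state(target_distance, passable_distance):
    # A lane is passable when the largest gap left by the target object
    # (target_distance) is at least the distance the target device needs
    # in order to pass (passable_distance).
    return "passable" if target_distance >= passable_distance else "impassable"
```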
  • the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
  • The third state determination module is configured to determine that the congestion state of the second road section is congestion in response to the passable states of the corresponding lanes all being impassable.
  • The fourth state determination module is configured to determine that the congestion state of the second road section is unblocked in response to the passable states of the corresponding lanes all being passable.
  • The fifth state determination module is configured to determine that the congestion state of the second road section is slow driving in response to the passable state of at least one corresponding lane being impassable and the passable state of at least one corresponding lane being passable.
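The three modules together amount to an all/any check over the lanes visible in one road condition collection image. A minimal sketch, with illustrative state labels:

```python
def second_section_state(lane_states):
    # lane_states: passable state ("passable" / "impassable") of each
    # lane in one road condition collection image.
    flags = [s == "passable" for s in lane_states]
    if not any(flags):
        return "congested"   # every lane impassable
    if all(flags):
        return "unblocked"   # every lane passable
    return "slow"            # mixed: at least one of each
```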
  • the first state determination subunit includes a sixth state determination module, a seventh state determination module and an eighth state determination module.
  • The sixth state determination module is configured to determine that the congestion state of the first road section is congestion in response to the congestion states of the second road sections corresponding to a plurality of consecutive road condition collection images all being congestion.
  • The seventh state determination module is configured to determine that the congestion state of the first road section is slow driving in response to the congestion states of the second road sections corresponding to a plurality of consecutive road condition collection images all being slow driving.
  • the eighth state determination module is configured to determine that the congestion state of the first road segment is unblocked in response to the number of the congestion states of the second road segment corresponding to the consecutive plurality of road condition collection images being unblocked satisfying a third quantity condition.
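One way to read the three modules above is as a run-length test over the sequence of second-road-section states. The `run_length` threshold below stands in for the quantity conditions, which the text leaves unspecified; it is an assumption for illustration.

```python
def first_section_state(second_states, run_length=3):
    # Returns the first state that holds for `run_length` consecutive
    # road condition collection images, or None if no run is long enough.
    current, run = None, 0
    for state in second_states:
        run = run + 1 if state == current else 1
        current = state
        if run >= run_length:
            return current
    return None
```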
  • the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit, and a third road condition determination subunit.
  • The state acquisition subunit is used to acquire the congestion state of the first road section corresponding to each of the image sequences to be identified in the image sequence set, where the image sequence set includes a plurality of image sequences to be identified that correspond to the road section to be determined within the same time period.
  • The quantity determination subunit is used for determining a first quantity, a second quantity and a third quantity, where the first quantity is used to represent the number of image sequences to be identified in which the congestion state of the first road section in the image sequence set is unblocked, the second quantity is used to represent the number of image sequences to be identified in which the congestion state of the first road section is slow-moving, and the third quantity is used to represent the number of image sequences to be identified in which the congestion state of the first road section is congested.
  • the first road condition determination subunit is configured to determine the road condition information as clear in response to the first quantity being greater than the second quantity and the first quantity being greater than the third quantity.
  • the second road condition determination subunit is configured to determine the road condition information as slow travel in response to the second quantity being greater than the first quantity and the second quantity being greater than the third quantity.
  • a third road condition determination subunit is configured to determine the road condition information as congestion in response to the third quantity being greater than the first quantity and the third quantity being greater than the second quantity.
  • The image sending unit 1403 includes a quantity and definition determination subunit and an image determination subunit.
  • The quantity and definition determination subunit is used to determine the number of target objects in each of the road condition collection images and/or the clarity of each of the road condition collection images.
  • the image determination subunit is configured to determine a target image according to the quantity and/or definition of the target object corresponding to each of the collected images of the road conditions.
  • the apparatus 14A further includes a sequence sending unit 1409 .
  • the sequence sending unit 1409 is configured to determine and send a target image sequence corresponding to the target road section, where the target image sequence includes the target image.
  • the apparatus 14A further includes a link information sending unit 1410 .
  • the road section information sending unit 1410 is configured to determine and send the road section information of the target road section, where the road section information includes road condition information of the target road section.
  • the interaction device 14B is suitable for a terminal, and includes a control display unit 1411 .
  • the control display unit 1411 is configured to render and display the road condition display control on the navigation page in response to receiving the target image corresponding to the target road segment.
  • The target image is determined based on pre-uploaded route navigation information, the road condition display control is used to display the target image, the target road segment is a road segment whose road condition information in the route navigation information satisfies a predetermined road condition condition, and the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each road segment in the route navigation information.
  • the road condition information is determined by the road segment determination unit 1404 , the position determination unit 1405 , the traffic state determination unit 1406 , the congestion state determination unit 1407 and the road condition information determination unit 1408 .
  • the road segment determination unit 1404 is used to determine the road segment to be determined.
  • the position determination unit 1405 is configured to perform image recognition on each of the collected images of road conditions in the image sequence to be identified, and determine the position of the target object in each of the collected images of road conditions, and the image sequence to be identified corresponds to the road section to be determined.
  • the passing state determining unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the position of the target object.
  • the congestion state determination unit 1407 is configured to determine the congestion state of the first road section corresponding to the image sequence to be recognized according to the passable state of each lane.
  • the road condition information determining unit 1408 is configured to determine the road condition information of the to-be-determined road segment according to the congestion state of the first road segment.
  • the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
  • The second state determination subunit is configured to determine the congestion state of the corresponding second road section according to the passable states of the lanes corresponding to each of the road condition collection images.
  • the first state determination subunit is configured to determine the congestion state of the first road segment according to the congestion state of each of the second road segments.
  • the traffic state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit, and a traffic state determination subunit.
  • the first distance determination subunit is used to determine the target distance corresponding to the target object according to the position of the target object, and the target distance is used to represent the maximum distance between the target object and the lane line of the corresponding lane.
  • the second distance determination subunit is used for determining the traversable distance corresponding to the target device, where the target device is the device corresponding to the image sequence to be recognized.
  • the passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
  • the traffic state determination subunit includes a first state determination module and a second state determination module.
  • the first state determination module is configured to determine that the passable state of the lane is passable in response to the target distance being not less than the passable distance.
  • the second state determination module is configured to determine that the passable state of the lane is impassable in response to the target distance being less than the passable distance.
  • the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
  • The third state determination module is configured to determine that the congestion state of the second road section is congestion in response to the passable states of the corresponding lanes all being impassable.
  • The fourth state determination module is configured to determine that the congestion state of the second road section is unblocked in response to the passable states of the corresponding lanes all being passable.
  • The fifth state determination module is configured to determine that the congestion state of the second road section is slow driving in response to the passable state of at least one corresponding lane being impassable and the passable state of at least one corresponding lane being passable.
  • the first state determination subunit includes a sixth state determination module, a seventh state determination module and an eighth state determination module.
  • the sixth state determination module is configured to determine that the congestion state of the first road segment is congestion in response to the congestion states of the second road segment corresponding to a plurality of consecutive road condition collection images all being congestion.
  • the seventh state determination module is configured to determine that the congestion state of the first road segment is slow driving in response to the congestion states of the second road segment corresponding to a plurality of consecutive road condition collection images being slow driving.
  • the eighth state determination module is configured to determine that the congestion state of the first road segment is unblocked in response to the number of the congestion states of the second road segment corresponding to a plurality of consecutive road condition collection images that are unblocked satisfying a third quantity condition.
  • the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit, and a third road condition determination subunit.
  • the state acquisition subunit is used to acquire the congestion state of the first road segment corresponding to each of the image sequences to be identified in an image sequence set, the image sequence set including a plurality of image sequences to be identified that correspond to the road segment to be determined within the same time period.
  • the quantity determination subunit is used to determine a first quantity, a second quantity and a third quantity, where the first quantity represents the number of image sequences to be identified in the set whose first road segment congestion state is unblocked, the second quantity represents the number whose first road segment congestion state is slow driving, and the third quantity represents the number whose first road segment congestion state is congestion.
  • the first road condition determination subunit is configured to determine the road condition information as clear in response to the first quantity being greater than the second quantity and the first quantity being greater than the third quantity.
  • the second road condition determination subunit is configured to determine the road condition information as slow travel in response to the second quantity being greater than the first quantity and the second quantity being greater than the third quantity.
  • a third road condition determination subunit is configured to determine the road condition information as congestion in response to the third quantity being greater than the first quantity and the third quantity being greater than the second quantity.
  • the target image is determined according to the number of target objects in each road condition collection image and/or the clarity of each road condition collection image, the road condition collection images being the images in the road condition collection sequence corresponding to the target road section.
  • the apparatus 14B further includes a sequence receiving unit 1412 , a page displaying unit 1413 and an image sequence playing unit 1414 .
  • the sequence receiving unit 1412 is configured to receive a target image sequence corresponding to the target road section, where the target image sequence includes the target image.
  • the page display unit 1413 is configured to display the video playback page in response to the road condition display control being triggered.
  • the image sequence playing unit 1414 is configured to play the target image sequence through the video playing page.
  • control display unit 1411 is configured to render and display the road condition display control at a predetermined position of the navigation page.
  • the predetermined position is the display position of the target road section in the navigation page, and/or the lower part of the navigation page, and/or the upper part of the navigation page, and/or the left side of the navigation page, and/or the right side of the navigation page.
  • the apparatus 14B further includes a road segment information display unit 1415 .
  • the road segment information display unit 1415 is configured to display the road segment information through the road condition display control in response to receiving the road segment information of the target road segment, where the road segment information includes road condition information of the target road segment.
  • After acquiring the route navigation information of the terminal, the server in this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding road segment, in the road condition collection sequence of each segment in the previously acquired route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal.
  • Optionally, the server may also send the target image sequence including the target image and the road section information of the target road section to the terminal.
  • After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for presenting the target image.
  • Optionally, after receiving the road section information of the target road section, the terminal may display it through the road condition display control, and after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered and play the target image sequence through the video playback page.
  • In this embodiment, the position of each target object can be accurately determined by means of image recognition, the road condition information of each road segment can be determined according to those positions, and the road conditions of a specific segment can be displayed through real-scene images and videos, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested segments in time.
  • FIG. 15 is a schematic diagram of an electronic device according to a fourth embodiment of the present invention.
  • the electronic device shown in FIG. 15 is a general-purpose data processing apparatus, which includes a general-purpose computer hardware structure, which at least includes a processor 1501 and a memory 1502 .
  • the processor 1501 and the memory 1502 are connected by a bus 1503 .
  • Memory 1502 is adapted to store instructions or programs executable by processor 1501 .
  • the processor 1501 may be an independent microprocessor, or may be a set of one or more microprocessors. Thus, the processor 1501 executes the commands stored in the memory 1502 to execute the above-described method flow of the embodiments of the present invention to process data and control other devices.
  • the bus 1503 connects the above-mentioned various components together, while connecting the above-mentioned components to the display controller 1504 and the display device and the input/output (I/O) device 1505 .
  • the input/output (I/O) device 1505 may be a mouse, keyboard, modem, network interface, touch input device, somatosensory input device, printer, and other devices known in the art.
  • input/output (I/O) devices 1505 are connected to the system through input/output (I/O) controllers 1506 .
  • the memory 1502 may store software components, such as operating systems, communication modules, interaction modules, and application programs.
  • Each of the modules and applications described above corresponds to a set of executable program instructions that perform one or more functions and methods described in embodiments of the invention.
  • aspects of the embodiments of the present invention may be implemented as a system, method or computer program product. Accordingly, various aspects of embodiments of the present invention may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, microcode, etc.), or an implementation combining software and hardware aspects that may generally be referred to herein as a "circuit," "module," or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or apparatus, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, device, or apparatus.
  • a computer-readable signal medium may include a propagated data signal having computer-readable program code embodied therein, such as in baseband or as part of a carrier wave. Such propagated signals may take any of a variety of forms including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
  • a computer-readable signal medium can be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, device, or apparatus.
  • Computer program code for carrying out operations directed to aspects of the present invention may be written in any combination of one or more programming languages including: object-oriented programming languages such as Java, Smalltalk, C++, PHP, Python etc.; and conventional procedural programming languages such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer as a stand-alone software package, partly on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).

Abstract

An interaction method and an interaction apparatus. After acquiring route navigation information from a terminal, a server (12) determines the road condition information of each road segment according to the positions of target objects, relative to the corresponding lanes of the corresponding segments, in the previously acquired road condition collection sequences of the segments in the route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies a predetermined road condition condition, sends the target image to the terminal. After receiving the target image, the terminal can render and display, on the navigation page, a road condition display control for presenting the target image. The positions of the target objects can be accurately determined by image recognition, the road condition information of each segment can be determined from those positions, and the road conditions of a specific segment can be shown through real-scene images, which improves the accuracy and timeliness of road condition determination and allows users to avoid congested segments in time.

Description

Interaction Method and Interaction Apparatus
Cross-Reference
This application claims priority to Chinese Patent Application No. 202110309280.4, filed on March 23, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer technology, and in particular to an interaction method and an interaction apparatus.
Background
As family vehicles such as passenger cars become increasingly common in daily life, more and more people travel by them. Taking cars as an example, their popularity means that during the same time period (for example, the commuting rush hour) the number of people choosing to ride in or drive cars keeps growing, so roads become congested more and more frequently. The road condition information that users obtain while travelling is often not timely, so they cannot avoid congested road segments in time, which wastes time unnecessarily.
Summary
In view of this, an object of the embodiments of the present invention is to provide an interaction method and an interaction apparatus for determining the road condition information of each road segment according to the positions of target objects, relative to the lane lines of the corresponding lanes of the corresponding segments, in the road condition collection sequences collected on the segments, and for reflecting that road condition information in a timely and accurate manner by displaying road condition collection images, so that users can avoid congested segments in time.
According to a first aspect of the embodiments of the present invention, an interaction method is provided, the method comprising:
acquiring route navigation information;
determining the road condition information of each road segment in the route navigation information, the road condition information being determined according to the position of a target object in the road condition collection sequence corresponding to each segment, the position of the target object being used to represent the position of the target object relative to the lane line of the corresponding lane;
determining and sending a target image corresponding to a target road segment, the target road segment being a segment in the route navigation information whose road condition information satisfies a predetermined road condition condition.
According to a second aspect of the embodiments of the present invention, an interaction method is provided, the method comprising:
in response to receiving a target image corresponding to a target road segment, rendering and displaying a road condition display control on a navigation page;
wherein the target image is determined based on route navigation information uploaded in advance, the road condition display control is used to present the target image, the target road segment is a segment in the route navigation information whose road condition information satisfies a predetermined road condition condition, the road condition information is determined according to the position of a target object in the road condition collection sequence corresponding to each segment in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane line of the corresponding lane.
According to a third aspect of the embodiments of the present invention, an interaction apparatus is provided, the apparatus comprising:
a navigation information acquisition unit, configured to acquire route navigation information;
a road condition information determination unit, configured to determine the road condition information of each road segment in the route navigation information, the road condition information being determined according to the position of a target object in the road condition collection sequence corresponding to each segment, the position of the target object being used to represent the position of the target object relative to the lane line of the corresponding lane;
an image sending unit, configured to determine and send a target image corresponding to a target road segment, the target road segment being a segment in the route navigation information whose road condition information satisfies a predetermined road condition condition.
According to a fourth aspect of the embodiments of the present invention, an interaction apparatus is provided, the apparatus comprising:
a control display unit, configured to render and display a road condition display control on a navigation page in response to receiving a target image corresponding to a target road segment;
wherein the target image is determined based on route navigation information uploaded in advance, the road condition display control is used to present the target image, the target road segment is a segment in the route navigation information whose road condition information satisfies a predetermined road condition condition, the road condition information is determined according to the position of a target object in the road condition collection sequence corresponding to each segment in the route navigation information, and the position of the target object is used to represent the position of the target object relative to the lane line of the corresponding lane.
According to a fifth aspect of the embodiments of the present invention, a computer-readable storage medium storing computer program instructions is provided, wherein the computer program instructions, when executed by a processor, implement the method according to any one of the first or second aspects.
According to a sixth aspect of the embodiments of the present invention, an electronic device is provided, comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method according to any one of the first or second aspects.
According to a seventh aspect of the embodiments of the present invention, a computer program product is provided, comprising a computer program/instructions, wherein the computer program/instructions are executed by a processor to implement the method according to any one of the first or second aspects.
After acquiring the route navigation information of the terminal, the server of the embodiments of the present invention determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding segment, in the road condition collection sequence of each segment in the previously acquired route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for presenting the target image. The embodiments of the present invention can accurately determine the position of each target object by means of image recognition, determine the road condition information of each segment according to those positions, and display the road conditions of a specific segment through real-scene images, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested segments in time.
Brief Description of the Drawings
The above and other objects, features and advantages of the present invention will become clearer from the following description of its embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the hardware system architecture of an embodiment of the present invention;
FIG. 2 is a flowchart of the interaction method of the first embodiment of the present invention;
FIG. 3 is a flowchart of determining the road condition information of each road segment in an optional implementation of the first embodiment of the present invention;
FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention;
FIG. 5 is a flowchart of determining the first road segment congestion state in an optional implementation of the first embodiment of the present invention;
FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention;
FIG. 7 is a flowchart of the interaction method of the first embodiment of the present invention on the server side;
FIG. 8 is a flowchart of the interaction method of the first embodiment of the present invention on the terminal side;
FIG. 9 is a flowchart of the interaction method of the second embodiment of the present invention;
FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention;
FIG. 11 is another schematic diagram of an interface according to an embodiment of the present invention;
FIG. 12 is a flowchart of the interaction method of the second embodiment of the present invention on the server side;
FIG. 13 is a flowchart of the interaction method of the second embodiment of the present invention on the terminal side;
FIG. 14 is a schematic diagram of the interaction system of the third embodiment of the present invention;
FIG. 15 is a schematic diagram of the electronic device of the fourth embodiment of the present invention.
Detailed Description
The present invention is described below based on embodiments, but it is not limited to these embodiments. Some specific details are described exhaustively in the following detailed description of the invention; those skilled in the art can fully understand the invention without these details. To avoid obscuring the essence of the invention, well-known methods, procedures, flows, elements and circuits are not described in detail.
In addition, those of ordinary skill in the art should understand that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, words such as "comprise" and "include" throughout the specification should be construed in an inclusive sense rather than an exclusive or exhaustive sense; that is, in the sense of "including but not limited to".
In the description of the present invention, it should be understood that the terms "first", "second", and the like are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise stated, "a plurality of" means two or more.
The popularity of cars means that during the same time period (for example, the commuting rush hour) the number of people choosing to ride in or drive cars keeps growing, so roads become congested more and more frequently, while the road condition information that users obtain while travelling is often not timely. Existing applications with navigation functions usually obtain the road condition information of each segment from a traffic management system, or determine it for different time periods from historical data. In daily life, however, road conditions change rapidly, so the obtained information is often outdated; as a result, users cannot avoid congested segments in time and their time is wasted unnecessarily.
FIG. 1 is a schematic diagram of the hardware system architecture of an embodiment of the present invention. The architecture shown in FIG. 1 may include at least one image collection device 11, at least one platform-side server (hereinafter, the server) 12, and at least one user terminal 13; FIG. 1 takes one image collection device 11, one server 12 and one user terminal 13 as an example. The image collection device 11 is a driver-side image collection device with a positioning function, which can record road condition collection sequences of the road segments travelled while the vehicle is moving and, with the user's authorization, send the recorded sequences and the positions at which they were recorded to the server 12. The image collection device 11 may specifically be an image collection device fixedly installed inside a vehicle (that is, the target device, not shown), such as a driving recorder, or an additionally provided image collection device whose position relative to the corresponding vehicle remains fixed, such as a mobile terminal with a camera function carried while driving or riding (including a mobile phone, a tablet computer, a laptop computer, etc.) or a camera. The image collection device 11 may communicate with the server 12 and the user terminal 13 over a network.
It is easy to understand that, in the embodiments of the present invention, the image collection device 11 may also be installed on other movable or immovable equipment, such as a mobile robot.
In the embodiments of the present invention, after acquiring the route navigation information uploaded in advance by the user terminal 13, the server 12 may determine the segment information of each road segment in the route navigation information according to the positions of the target objects, relative to the lane lines of the corresponding lanes, in the road condition collection images of the sequences uploaded by the image collection device 11, then determine the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition and/or the target image sequence including that image, and send the target image to the user terminal 13. After receiving the target image, the user terminal 13 may render and display, on the navigation page, a road condition display control for presenting it.
In an optional implementation, the user terminal 13 may also receive the target image sequence sent by the server 12 and, in response to the road condition display control being triggered, display a video playback page and play the target image sequence through it.
The interaction method of the embodiments of the present invention is described in detail below through method embodiments. FIG. 2 is a flowchart of the interaction method of the first embodiment of the present invention. As shown in FIG. 2, the method of this embodiment includes the following steps:
Step S201, acquiring route navigation information.
In this embodiment, a user may log in to a predetermined application with a navigation function through a user terminal (hereinafter, the terminal) and set a departure point and a destination. After obtaining them, the terminal may perform route planning according to the departure point and destination to obtain at least one route planning result, and determine the result selected by the user as the route navigation information. In this embodiment, the terminal may obtain route planning results in various existing ways, for example by sending the departure point and destination to a predetermined route planning interface and acquiring the result from it; this embodiment is not specifically limited in this respect. Meanwhile, the terminal may also send the route navigation information to the server so that the server can store it in a database.
Therefore, in this step, if the route navigation information selected by the user is a route planning result already stored in the database, the server may acquire the route navigation information uploaded in advance by the terminal from the database; if not, the server may receive the route navigation information sent by the terminal. After acquiring it, the server may extract the segment name of each road segment in the route navigation information.
Step S202, determining the road condition information of each road segment in the route navigation information.
In this embodiment, the segment information of each road segment is determined by the server according to the positions of target objects in the road condition collection sequences. A road condition collection sequence is a sequence of images of the travelled segments recorded by each vehicle while moving. When uploading at least one road condition collection sequence, the image collection device configured on each vehicle may also upload the position of the vehicle when each road condition collection image in the sequence was recorded, so the server can determine the segment corresponding to each image collection sequence from those positions. In this embodiment, the position of the vehicle may be determined by the positioning system configured on the corresponding image collection device (for example, the Global Positioning System or the BeiDou Navigation Satellite System), and may specifically be the coordinates of the vehicle in the world coordinate system.
FIG. 3 is a flowchart of determining the road condition information of each road segment in an optional implementation of the first embodiment of the present invention. As shown in FIG. 3, in this optional implementation the server may determine the road condition information of each segment through the following steps:
Step S301, determining the road segment to be determined.
In this step, the server may determine each road segment within a predetermined geographic range (for example, a predetermined city, district or county) as a segment to be determined, or determine each segment in the route navigation information as a segment to be determined; this embodiment is not specifically limited in this respect.
Step S302, performing image recognition on each road condition collection image in the image sequence to be identified, and determining the position of the target object in each road condition collection image.
In this embodiment, the road condition collection sequences are collected by image collection devices moving with the vehicles, so the target objects are vehicles. It is easy to understand, however, that a target object may also be another object, such as a pedestrian or an obstacle placed in the road.
When the target object is a vehicle, the server may perform image recognition on each road condition collection image in each sequence in various existing ways, for example by the method described in "Research on Vehicle Distance Detection Algorithms Based on Image Recognition, Yin Yijie, master's thesis, 2012" to determine the distance of each target object relative to the image collection device, and determine, from the position of the vehicle when each image was recorded, the coordinates of each target object in the world coordinate system as the position of the target object.
In this embodiment, the position of the target object is used to determine the road condition information of the segment, so it may be used to represent the position of the target object relative to the lane line of the corresponding lane (that is, the lane in which the target object is located) of the segment to be determined; the lane line here may be the left lane line or the right lane line of the lane, and this embodiment is not specifically limited in this respect. When determining that position, the server may also perform image recognition in various existing ways, for example by the method described in "Design and Implementation of an Auxiliary Positioning System Based on Image Recognition, Wu Jiashun, master's thesis, 2018", or by determining the position of each lane line based on a trained SSD (Single Shot MultiBox Detector) model, and then determining, from the world coordinates of each target object and the positions of the lane lines, the position of each target object in each road condition collection image relative to the lane line of its lane in the segment to be determined.
FIG. 4 is a schematic diagram of the position of a target object according to an embodiment of the present invention. As shown in FIG. 4, vehicle V1 is a target object in the road condition collection image P1, and lane lines L1 and L2 are the left and right lane lines of the lane corresponding to vehicle V1. After determining the positions of vehicle V1, lane line L1 and lane line L2 by performing image recognition on image P1, the server may determine, as the position of vehicle V1, its position relative to lane line L1, that is, the shortest distance d1 between vehicle V1 and lane line L1, and its position relative to lane line L2, that is, the shortest distance d2 between vehicle V1 and lane line L2.
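The shortest distances d1 and d2 above are point-to-line distances in world coordinates. A minimal sketch, assuming a straight-line lane model (the function name and the two-point lane-line representation are illustrative, not from the specification):

```python
import math

def shortest_distance_to_lane_line(px, py, ax, ay, bx, by):
    # Perpendicular distance from a target object at (px, py) to the
    # lane line through (ax, ay) and (bx, by), all in world coordinates.
    vx, vy = bx - ax, by - ay
    # |cross product of line direction and point offset| / |line direction|
    return abs(vx * (py - ay) - vy * (px - ax)) / math.hypot(vx, vy)
```

For vehicle V1, evaluating this once against lane line L1 and once against lane line L2 yields d1 and d2.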
Step S303, determining the passable state of the lane corresponding to the target object according to the position of the target object.
In an optional implementation of this embodiment, the segment congestion state of the segment to be determined is established by whether each of its lanes is passable, so in this step the passable state of the lane corresponding to each target object may be determined according to the positions of the target objects in the image to be identified.
Specifically, the server may determine the target distance corresponding to the target object according to its position. The target distance represents the maximum distance between the target object and a lane line of the corresponding lane. Taking the position shown in FIG. 4 as an example, the server may determine the larger of the shortest distance d1 between vehicle V1 and lane line L1 and the shortest distance d2 between vehicle V1 and lane line L2 (that is, d2) as the target distance corresponding to vehicle V1.
Meanwhile, the server may also acquire the passable distance corresponding to the target device. When the target device is a vehicle, its passable distance is equivalent to the width of the vehicle (that is, the distance between the two planes that are parallel to the vehicle's longitudinal plane of symmetry and abut the fixed protruding parts on its two sides). Vehicles of the same type usually have almost the same width, so the server may determine the passable distance according to the vehicle type. Taking an ordinary passenger car as the target device, its width is usually between 1.4 and 1.8 metres, so the server may take 1.8 metres as its passable distance.
After determining the target distance of the target object and the passable distance of the target device, the server may determine whether each lane is passable by comparing the two. For any lane, if the target distance of the target object is greater than (or not less than) the passable distance of the target device, the server may determine the passable state of the lane as passable; if the target distance is less than the passable distance, the server may determine it as impassable.
It is easy to understand that, for any lane, if there is no target object in the lane, the server may determine its passable state as passable.
Still taking the position shown in FIG. 4 as an example, after determining the target distance of vehicle V1 (that is, d2) and the passable distance of the target device (for example, 1.8 metres), the server may determine the passable state of vehicle V1's lane as passable if d2 ≥ 1.8 metres, and as impassable if d2 < 1.8 metres.
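The comparison described above can be sketched as follows; the 1.8-metre passable width for an ordinary passenger car is the example value from the description, while the function signature is an assumption for illustration:

```python
def lane_passable_state(d1, d2, passable_distance=1.8):
    # d1, d2: shortest distances (in metres) from the target object to
    # the left and right lane lines of its lane; the target distance is
    # the larger of the two.
    target_distance = max(d1, d2)
    # A lane is passable when the widest remaining gap is at least as
    # wide as the target device's passable distance.
    return "passable" if target_distance >= passable_distance else "impassable"
```

For example, `lane_passable_state(0.4, 2.1)` returns `"passable"` and `lane_passable_state(0.5, 1.2)` returns `"impassable"`; a lane containing no target object is treated as passable without this check.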
Step S304, determining the first road segment congestion state corresponding to the image sequence to be identified according to the passable states of the lanes.
In this embodiment, the first road segment congestion state represents the congestion state of the segment to be determined while the corresponding vehicle is travelling on it.
FIG. 5 is a flowchart of determining the first road segment congestion state in an optional implementation of the first embodiment of the present invention. As shown in FIG. 5, in this optional implementation, step S304 may include the following steps:
Step S501, determining the corresponding second road segment congestion state according to the passable states of the lanes corresponding to each road condition collection image.
After determining the passable state of each lane of the segment recorded in each image to be identified, the server may determine, from those states, the second road segment congestion state of the segment at the moment the image collection device of the target device recorded the image.
Specifically, when the passable states of all lanes are impassable, the server may determine the second road segment congestion state corresponding to the image as congested; when they are all passable, as unblocked; and when at least one lane is impassable and at least one lane is passable, as slow driving.
FIG. 6 is another schematic diagram of the position of a target object according to an embodiment of the present invention. As shown in FIG. 6, the segment to be determined includes lanes 61, 62 and 63. After determining, by image recognition on the image to be identified P2, that lane 61 is impassable while lanes 62 and 63 are passable, the server may determine the second road segment congestion state corresponding to image P2 as slow driving.
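The mapping from per-lane passable states to the second road segment congestion state can be sketched as follows (the handling of an image with no detected lanes is an assumption for illustration):

```python
def second_segment_state(lane_states):
    # Map the passable states of all lanes in one road condition
    # collection image to the second road segment congestion state.
    if not lane_states:
        return "unblocked"  # assumption: no detected lanes treated as passable
    if all(s == "impassable" for s in lane_states):
        return "congested"
    if all(s == "passable" for s in lane_states):
        return "unblocked"
    # Mixed case: at least one impassable and at least one passable lane.
    return "slow driving"
```

The FIG. 6 example, with lane 61 impassable and lanes 62 and 63 passable, yields `"slow driving"`.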
Step S502, determining the first road segment congestion state according to the second road segment congestion states.
After determining the second road segment congestion state corresponding to each road condition collection image, the server may determine the first road segment congestion state of a road condition collection sequence from the second road segment congestion states of the images in that sequence.
In daily life, image collection devices usually record road condition collection images at a fairly high frequency. If the number of consecutive images (ordered by recording time) with the same second road segment congestion state is below a certain number, for example a sequence contains 100 road condition collection images but fewer than 30 consecutive images are congested, the segment may not actually have reached that level of congestion while the target device was moving. Therefore, in this embodiment, for any road condition collection sequence, when a plurality of consecutive images all correspond to the same second road segment congestion state, the server may determine that state as the sequence's first road segment congestion state.
Specifically, the server may determine the first road segment congestion state of a sequence as congested in response to the second road segment congestion states of a plurality of consecutive images being congested; as slow driving in response to the second states of a plurality of consecutive images being slow driving; and as unblocked in response to the second states of a plurality of consecutive images being unblocked.
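A sketch of this run-length rule follows; the 30-image threshold comes from the 30-of-100 example above, while the precedence among qualifying states and the fallback when no run qualifies are assumptions for illustration:

```python
def first_segment_state(per_image_states, min_run=30):
    # Longest run of consecutive identical second-segment states,
    # tracked per state over the recording-time-ordered images.
    longest_run = {}
    prev, run = None, 0
    for state in per_image_states:
        run = run + 1 if state == prev else 1
        prev = state
        longest_run[state] = max(longest_run.get(state, 0), run)
    # A state becomes the first-segment state only if it held for at
    # least `min_run` consecutive images.
    for state in ("congested", "slow driving", "unblocked"):
        if longest_run.get(state, 0) >= min_run:
            return state
    return "unblocked"  # assumed fallback when no run is long enough
```

With a sequence of 100 images of which only 25 consecutive ones are congested, the congested run fails the threshold, matching the text's intent.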
Step S305, determining the road condition information of the segment to be determined according to the first road segment congestion state.
A single road condition collection sequence is recorded by the image collection device of one target device and is therefore rather one-sided. For example, the actual road condition of the segment may be congested, but the lane in which the target vehicle (that is, the target device) travels is an emergency lane, so the first road segment congestion state determined by image recognition may be slow driving, which does not match the actual condition and has low accuracy.
Therefore, in an optional implementation of this embodiment, the accuracy of the road condition information is improved by determining it from the first road segment congestion states of the road condition collection sequences recorded by multiple vehicles travelling on the segment within the same time period.
In this step, the server may acquire the first road segment congestion state corresponding to each image sequence to be identified in an image sequence set. The sequences in the set are the road condition collection sequences recorded on the segment by multiple vehicles within the same time period. Road conditions change frequently, so the length of the predetermined period may be determined from the pattern of change of road conditions in historical data; for example, if historical road conditions change roughly every hour (for example, from congested to slow driving), the period length may be one hour.
For example, suppose the name of the segment to be determined is "xx Street" and the predetermined period is 10:00-11:00 on March 5, 2021; the server may acquire the road condition collection sequences recorded by multiple vehicles travelling on xx Street during that period as the multiple sequences corresponding to the segment "xx Street".
After acquiring the first road segment congestion state of each image sequence to be identified, the server may respectively determine the number of sequences in the image sequence set whose first road segment congestion state is unblocked (that is, the first quantity), the number whose first state is slow driving (that is, the second quantity) and the number whose first state is congested (that is, the third quantity), and determine the first road segment congestion state with the largest count as the road condition information of the segment to be determined.
Specifically, when the first quantity is greater than the second quantity and greater than the third quantity, the road condition information is determined as unblocked; when the second quantity is greater than the first and greater than the third, as slow driving; and when the third quantity is greater than the first and greater than the second, as congested.
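The majority vote of step S305 can be sketched as follows; tie behaviour is not specified in the description, so returning `None` for a tie is an assumption:

```python
def road_condition_info(first_segment_states):
    # Count, over all sequences recorded on the segment in the same
    # period, how many voted for each first-segment congestion state.
    n_unblocked = first_segment_states.count("unblocked")  # first quantity
    n_slow = first_segment_states.count("slow driving")    # second quantity
    n_congested = first_segment_states.count("congested")  # third quantity
    if n_unblocked > n_slow and n_unblocked > n_congested:
        return "unblocked"
    if n_slow > n_unblocked and n_slow > n_congested:
        return "slow driving"
    if n_congested > n_unblocked and n_congested > n_slow:
        return "congested"
    return None  # tie: not covered by the description
```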
It is easy to understand that the above process of determining the road condition information of the segments (that is, steps S301-S305) occurs before step S203, and if the server determines road condition information at a predetermined period, the process may occur in the period preceding the one in which the route navigation information is received; that is, the server determines the road condition information of the segments in the current period's route navigation information from the information determined in the previous period. For example, if the period length is one hour and the server receives the route navigation information at 9:30, which falls within the 9:00-10:00 period, then the road condition information of the segments in that route navigation information was determined during 8:00-9:00.
In another optional implementation of this embodiment, the server may also perform image recognition on each road condition collection image to determine the number of target objects in it. It then determines, for the images of the same sequence, whether the numbers of target objects in a plurality of consecutive images satisfy a predetermined quantity condition; if so, the server may determine the segment congestion state corresponding to that sequence as the congestion state corresponding to that number. For the segment to be determined, the server may determine the segment congestion state with the largest count among the sequences as the segment's road condition information. The correspondence between the number of target objects and the congestion state may be preset; for example, 0 to 3 target objects may correspond to unblocked, 4 to 10 to slow driving, and 11 or more to congested.
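The count-based alternative can be sketched as follows, using the example bands from the text (0-3 unblocked, 4-10 slow driving, 11 or more congested):

```python
def state_from_object_count(n_objects):
    # Map the number of target objects recognized in a road condition
    # collection image to a congestion state, per the preset bands.
    if n_objects <= 3:
        return "unblocked"
    if n_objects <= 10:
        return "slow driving"
    return "congested"
```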
It is easy to understand that the two approaches above may also be combined to determine the road condition information of each segment to be determined, further improving accuracy. Moreover, in this embodiment, the process of determining the road condition information may also take place on the terminal side; that is, steps S301-S305 may also be executed by the terminal.
Step S203, determining and sending the target image corresponding to the target road segment.
After determining the road condition information of each segment in the route navigation information, the server may determine the segments whose road condition information satisfies the predetermined road condition condition as the target segments, and determine the target image from the road condition collection images of the multiple sequences.
In this embodiment, the predetermined road condition condition is used to judge whether a segment's road conditions are suitable for passing, so it may be set as the road condition information being congested, or optionally as the road condition information being slow driving. The target image may be determined according to at least one of the clarity of the road condition collection images and the number of target objects in them; for example, the server may determine as the target image the road condition collection image with the highest clarity and the largest number of target objects.
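Target image selection by clarity and object count can be sketched as follows; the `(clarity, object_count, image_id)` tuple layout is an assumption for illustration:

```python
def pick_target_image(images):
    # Prefer the highest clarity, then the largest number of target
    # objects; Python compares the key tuples element by element.
    clarity, object_count, image_id = max(images, key=lambda t: (t[0], t[1]))
    return image_id
```

For instance, among two equally clear images the one showing more target objects is chosen.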
In daily life, faces, license plate numbers and the like are sensitive information that people do not want leaked, so optionally the server may also remove sensitive information from the target image. In this step, the server may do so in various existing ways, for example by recognizing faces, license plate numbers and other sensitive information in the target image through image recognition and applying mosaic processing to them, thereby obtaining the target image to be subsequently sent to the terminal.
After determining the target image, the server may send it to the terminal according to the terminal identifier.
Step S204, in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
After receiving the target image sent by the server, the terminal may render and display, on the navigation page, a road condition display control for presenting it. Specifically, the terminal may render and display the control at a predetermined position of the navigation page. The predetermined position may be any position on the page, for example the display position of the target segment on the page, and/or below the page, and/or on its left side, and/or on its right side; this embodiment is not specifically limited in this respect.
Optionally, for the target segments in the route navigation information, the terminal may display them in different ways according to the received segment information, for example distinguishing them by color, which makes it easier for the user to see the road condition information of different segments.
It is easy to understand that if, in an embodiment of the present invention, the road condition information is determined on the terminal side, the target image may likewise be determined on the terminal side.
FIG. 7 is a flowchart of the interaction method of the first embodiment of the present invention on the server side. As shown in FIG. 7, on the server side the method of this embodiment may include the following steps:
Step S201, acquiring route navigation information.
Step S202, determining the road condition information of each road segment in the route navigation information.
Step S203, determining and sending the target image corresponding to the target road segment.
FIG. 8 is a flowchart of the interaction method of the first embodiment of the present invention on the terminal side. As shown in FIG. 8, the method of this embodiment may include the following step:
Step S204, in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
After acquiring the route navigation information of the terminal, the server of this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding segment, in the road condition collection sequence of each segment in the previously acquired route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for presenting it. This embodiment can accurately determine the position of each target object by means of image recognition, determine the road condition information of each segment from those positions, and display the road conditions of a specific segment through real-scene images, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested segments in time.
FIG. 9 is a flowchart of the interaction method of the second embodiment of the present invention. As shown in FIG. 9, the method of this embodiment includes the following steps:
Step S901, acquiring route navigation information.
In this embodiment, the implementation of step S901 is similar to that of step S201 and is not repeated here.
Step S902, determining the road condition information of each road segment in the route navigation information.
In this embodiment, the implementation of step S902 is similar to that of step S202 and is not repeated here.
Step S903, determining and sending the target image corresponding to the target road segment.
In this embodiment, the implementation of step S903 is similar to that of step S203 and is not repeated here.
Step S904, in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
In this embodiment, the implementation of step S904 is similar to that of step S204 and is not repeated here.
It is easy to understand that, in this embodiment, the process of determining the road condition information may likewise take place on the terminal side; that is, steps S301-S305 may also be executed by the terminal.
Step S905, determining and sending the segment information of the target road segment.
In this embodiment, after determining the road condition information of each segment, the server may also determine and send the segment information of the target segment. The segment information may include the target segment's road condition information, that is, congested, slow driving, unblocked, etc., and may also include the target segment's average travel speed, congestion length, and so on. The average travel speed and congestion length may be determined from the positions of the target devices whose image collection devices uploaded the road condition collection images of the target segment.
For example, suppose the target segment is xx Street and vehicles V1-V100 are the target devices whose image collection devices uploaded road condition collection sequences of xx Street. The server may determine the average moving speed of each of vehicles V1-V100 from the recording duration of its road condition collection sequence and the position of the vehicle when each image was recorded, and then determine the average travel speed of the target segment from those speeds. Meanwhile, the server may determine the congestion length of the target segment from the positions of vehicles V1-V100 at the same moment; for example, if vehicles V1-V100 are distributed between 800 metres and 100 metres east of xx Street, the server may determine the congestion length of xx Street as 700 metres.
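The per-vehicle statistics in the xx Street example can be sketched as follows; the `(timestamp_s, offset_m)` sample format, with offsets measured along the segment, is an assumption for illustration:

```python
def segment_stats(tracks):
    # tracks: one list of (timestamp_s, offset_m) samples per vehicle,
    # ordered by recording time.
    speeds = []
    for samples in tracks:
        (t0, x0), (t1, x1) = samples[0], samples[-1]
        if t1 > t0:
            speeds.append(abs(x1 - x0) / (t1 - t0))  # average speed, m/s
    avg_speed = sum(speeds) / len(speeds) if speeds else 0.0
    # Congestion length: the span covered by the vehicles at the same
    # moment, e.g. vehicles between 100 m and 800 m give 700 m.
    offsets = [samples[-1][1] for samples in tracks]
    congestion_length = max(offsets) - min(offsets)
    return avg_speed, congestion_length
```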
It is easy to understand that, in this embodiment, steps S903 and S905 may be executed simultaneously or one after the other; this embodiment is not limited in this respect.
Step S906, in response to receiving the segment information of the target road segment, displaying the segment information through the road condition display control.
After receiving the segment information of the target segment, the terminal may also display it through the road condition display control. Optionally, the terminal may display only part of the segment information or all of it; this embodiment is not specifically limited in this respect.
FIG. 10 is a schematic diagram of an interface according to an embodiment of the present invention; the interface shown in FIG. 10 is a terminal interface. As shown in FIG. 10, page P1 is the navigation page; the terminal may display the route navigation information 101 on page P1 and display the target segment in the route navigation information 101, that is, segment 102, in a different color. Meanwhile, the terminal may render and display a road condition display control at the display position of segment 102, that is, control 103, and another road condition display control below the navigation page, that is, control 104, where control 104 also displays the segment information of segment 102, including its segment name (that is, xx Street), road condition information (that is, congested) and congestion length (that is, congested for xxx metres). Optionally, the terminal may display only control 103, only control 104, or both controls 103 and 104; this embodiment is not specifically limited in this respect.
It is easy to understand that if, in an embodiment of the present invention, the road condition information is determined on the terminal side, the segment information of the target segment may likewise be determined on the terminal side.
Step S907, determining and sending the target image sequence corresponding to the target road segment.
After determining the target image, the server may determine the road condition collection sequence that includes the target image as the target sequence, or cut a sequence fragment of predetermined length (for example, 10 seconds) from that sequence as the target sequence; this embodiment is not specifically limited in this respect.
Optionally, the server may also, in various existing ways, remove the sensitive information from each road condition collection image in the order of the images in the target image sequence, and then obtain from the processed images the target image sequence to be subsequently sent to the terminal.
After determining the target image sequence, the server may send it to the terminal according to the terminal identifier.
It is easy to understand that, in this embodiment, steps S903 and S907 may be executed simultaneously or one after the other; this embodiment is not specifically limited in this respect.
Step S908, receiving the target image sequence corresponding to the target road segment.
In this embodiment, the terminal may also receive the target image sequence, including the target image, sent by the server.
It is easy to understand that if, in an embodiment of the present invention, the road condition information is determined on the terminal side, the target image sequence may likewise be determined on the terminal side.
Step S909, in response to the road condition display control being triggered, displaying the video playback page.
Representing the road condition information of the target segment through a single image may be limiting for the user, so in this embodiment the target image sequence reflects that information more clearly. When the road condition display control is triggered, the terminal may display a video playback page for playing the target image sequence.
Step S910, playing the target image sequence through the video playback page.
In this step, the terminal may automatically play the target image sequence through the video playback page, to avoid the possibility of unnecessarily distracting the user's attention by requiring multiple operations while driving.
FIG. 11 is another schematic diagram of an interface according to an embodiment of the present invention, illustrated with the interface shown in FIG. 10. In response to control 103 or control 104 being triggered, the terminal may display the video playback page shown in FIG. 11, that is, page P2, and play the target video sequence 111, including the target image, through the video playback page.
FIG. 12 is a flowchart of the interaction method of the second embodiment of the present invention on the server side. As shown in FIG. 12, on the server side the method of this embodiment may include the following steps:
Step S901, acquiring route navigation information.
Step S902, determining the road condition information of each road segment in the route navigation information.
Step S903, determining and sending the target image corresponding to the target road segment.
Step S905, sending the segment information of the target road segment.
Step S907, determining and sending the target image sequence corresponding to the target road segment.
FIG. 13 is a flowchart of the interaction method of the second embodiment of the present invention on the terminal side. As shown in FIG. 13, the method of this embodiment may include the following steps:
Step S904, in response to receiving the target image corresponding to the target road segment, rendering and displaying the road condition display control on the navigation page.
Step S906, in response to receiving the segment information of the target road segment, displaying the segment information through the road condition display control.
Step S908, receiving the target image sequence corresponding to the target road segment.
Step S909, in response to the road condition display control being triggered, displaying the video playback page.
Step S910, playing the target image sequence through the video playback page.
After acquiring the route navigation information of the terminal, the server of this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding segment, in the road condition collection sequence of each segment in the previously acquired route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. Optionally, the server may also send the target image sequence including the target image and the segment information of the target segment to the terminal. After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for presenting it. Optionally, after receiving the segment information of the target segment, the terminal may display it through the road condition display control, and after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered and play the target image sequence through the video playback page. This embodiment can accurately determine the position of each target object by means of image recognition, determine the road condition information of each segment from those positions, and display the road conditions of a specific segment through real-scene images and videos, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested segments in time.
FIG. 14 is a schematic diagram of the interaction system of the third embodiment of the present invention. As shown in FIG. 14, the system of this embodiment includes an interaction apparatus 14A and an interaction apparatus 14B.
The interaction apparatus 14A is applicable to the server side and includes a navigation information acquisition unit 1401, a road condition information determination unit 1402 and an image sending unit 1403.
The navigation information acquisition unit 1401 is configured to acquire route navigation information. The road condition information determination unit 1402 is configured to determine the road condition information of each road segment in the route navigation information, the road condition information being determined according to the position of the target object in the road condition collection sequence corresponding to each segment. The image sending unit 1403 is configured to determine and send the target image corresponding to the target road segment, the target road segment being a segment in the route navigation information whose road condition information satisfies the predetermined road condition condition.
Further, the road condition information is determined through a segment determination unit 1404, a position determination unit 1405, a passable state determination unit 1406, a congestion state determination unit 1407 and a road condition information determination unit 1408.
The segment determination unit 1404 is configured to determine the road segment to be determined. The position determination unit 1405 is configured to perform image recognition on each road condition collection image in the image sequence to be identified and determine the position of the target object in each image, the image sequence to be identified being the road condition collection sequence corresponding to the segment to be determined. The passable state determination unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the target object's position. The congestion state determination unit 1407 is configured to determine the first road segment congestion state corresponding to the image sequence to be identified according to the passable states of the lanes. The road condition information determination unit 1408 is configured to determine the road condition information of the segment to be determined according to the first road segment congestion state.
Further, the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
The second state determination subunit is configured to determine the corresponding second road segment congestion state according to the passable states of the lanes corresponding to each road condition collection image. The first state determination subunit is configured to determine the first road segment congestion state according to the second road segment congestion states.
Further, the position of the target object is used to represent its position relative to the lane line of the corresponding lane. The passable state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit and a passable state determination subunit.
The first distance determination subunit is configured to determine the target distance corresponding to the target object according to its position, the target distance representing the maximum distance between the target object and a lane line of the corresponding lane. The second distance determination subunit is configured to determine the passable distance corresponding to the target device, the target device being the device corresponding to the image sequence to be identified. The passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
Further, the passable state determination subunit includes a first state determination module and a second state determination module.
The first state determination module is configured to determine the passable state of the lane as passable in response to the target distance being not less than the passable distance. The second state determination module is configured to determine it as impassable in response to the target distance being less than the passable distance.
Further, the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
The third state determination module is configured to determine the second road segment congestion state as congested in response to the corresponding passable states of the lanes all being impassable. The fourth state determination module is configured to determine it as unblocked in response to the corresponding passable states of the lanes all being passable. The fifth state determination module is configured to determine it as slow driving in response to at least one corresponding lane being impassable and at least one being passable.
Further, the first state determination subunit includes a sixth state determination module, a seventh state determination module and an eighth state determination module.
The sixth state determination module is configured to determine the first road segment congestion state as congested in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images all being congested. The seventh state determination module is configured to determine it as slow driving in response to the second states corresponding to a plurality of consecutive images being slow driving. The eighth state determination module is configured to determine it as unblocked in response to the number of unblocked second states corresponding to the consecutive plurality of images satisfying a third quantity condition.
Further, the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit and a third road condition determination subunit.
The state acquisition subunit is configured to acquire the first road segment congestion state corresponding to each image sequence to be identified in the image sequence set, the set including a plurality of such sequences corresponding to the segment to be determined within the same time period. The quantity determination subunit is configured to determine a first quantity, a second quantity and a third quantity, the first quantity representing the number of sequences in the set whose first state is unblocked, the second quantity the number whose first state is slow driving, and the third quantity the number whose first state is congested. The first road condition determination subunit is configured to determine the road condition information as unblocked in response to the first quantity being greater than the second and the third. The second road condition determination subunit is configured to determine it as slow driving in response to the second quantity being greater than the first and the third. The third road condition determination subunit is configured to determine it as congested in response to the third quantity being greater than the first and the second.
Further, the image sending unit 1403 includes a quantity-and-clarity determination subunit and an image determination subunit.
The quantity-and-clarity determination subunit is configured to determine the number of target objects in each road condition collection image and/or the clarity of each image. The image determination subunit is configured to determine the target image according to the number of target objects corresponding to each image and/or its clarity.
Further, the apparatus 14A also includes a sequence sending unit 1409.
The sequence sending unit 1409 is configured to determine and send the target image sequence corresponding to the target road segment, the target image sequence including the target image.
Further, the apparatus 14A also includes a segment information sending unit 1410.
The segment information sending unit 1410 is configured to determine and send the segment information of the target road segment, the segment information including the road condition information of the target road segment.
The interaction apparatus 14B is applicable to the terminal and includes a control display unit 1411.
The control display unit 1411 is configured to render and display a road condition display control on the navigation page in response to receiving a target image corresponding to a target road segment, where the target image is determined based on route navigation information uploaded in advance, the road condition display control is used to present the target image, the target road segment is a segment in the route navigation information whose road condition information satisfies a predetermined road condition condition, and the road condition information is determined according to the position of the target object in the road condition collection sequence corresponding to each segment in the route navigation information.
Further, the road condition information is determined through the segment determination unit 1404, the position determination unit 1405, the passable state determination unit 1406, the congestion state determination unit 1407 and the road condition information determination unit 1408.
The segment determination unit 1404 is configured to determine the road segment to be determined. The position determination unit 1405 is configured to perform image recognition on each road condition collection image in the image sequence to be identified and determine the position of the target object in each image, the image sequence to be identified being the road condition collection sequence corresponding to the segment to be determined. The passable state determination unit 1406 is configured to determine the passable state of the lane corresponding to the target object according to the target object's position. The congestion state determination unit 1407 is configured to determine the first road segment congestion state corresponding to the image sequence to be identified according to the passable states of the lanes. The road condition information determination unit 1408 is configured to determine the road condition information of the segment to be determined according to the first road segment congestion state.
Further, the congestion state determination unit 1407 includes a second state determination subunit and a first state determination subunit.
The second state determination subunit is configured to determine the corresponding second road segment congestion state according to the passable states of the lanes corresponding to each road condition collection image. The first state determination subunit is configured to determine the first road segment congestion state according to the second road segment congestion states.
Further, the position of the target object is used to represent its position relative to the lane line of the corresponding lane. The passable state determination unit 1406 includes a first distance determination subunit, a second distance determination subunit and a passable state determination subunit.
The first distance determination subunit is configured to determine the target distance corresponding to the target object according to its position, the target distance representing the maximum distance between the target object and a lane line of the corresponding lane. The second distance determination subunit is configured to determine the passable distance corresponding to the target device, the target device being the device corresponding to the image sequence to be identified. The passable state determination subunit is configured to determine the passable state of the lane according to the target distance and the passable distance.
Further, the passable state determination subunit includes a first state determination module and a second state determination module.
The first state determination module is configured to determine the passable state of the lane as passable in response to the target distance being not less than the passable distance. The second state determination module is configured to determine it as impassable in response to the target distance being less than the passable distance.
Further, the second state determination subunit includes a third state determination module, a fourth state determination module and a fifth state determination module.
The third state determination module is configured to determine the second road segment congestion state as congested in response to the corresponding passable states of the lanes all being impassable. The fourth state determination module is configured to determine it as unblocked in response to the corresponding passable states of the lanes all being passable. The fifth state determination module is configured to determine it as slow driving in response to at least one corresponding lane being impassable and at least one being passable.
Further, the first state determination subunit includes a sixth state determination module, a seventh state determination module and an eighth state determination module.
The sixth state determination module is configured to determine the first road segment congestion state as congested in response to the second road segment congestion states corresponding to a plurality of consecutive road condition collection images all being congested. The seventh state determination module is configured to determine it as slow driving in response to the second states corresponding to a plurality of consecutive images being slow driving. The eighth state determination module is configured to determine it as unblocked in response to the number of unblocked second states corresponding to the consecutive plurality of images satisfying a third quantity condition.
Further, the road condition information determination unit 1408 includes a state acquisition subunit, a quantity determination subunit, a first road condition determination subunit, a second road condition determination subunit and a third road condition determination subunit.
The state acquisition subunit is configured to acquire the first road segment congestion state corresponding to each image sequence to be identified in the image sequence set, the set including a plurality of such sequences corresponding to the segment to be determined within the same time period. The quantity determination subunit is configured to determine a first quantity, a second quantity and a third quantity, the first quantity representing the number of sequences in the set whose first state is unblocked, the second quantity the number whose first state is slow driving, and the third quantity the number whose first state is congested. The first road condition determination subunit is configured to determine the road condition information as unblocked in response to the first quantity being greater than the second and the third. The second road condition determination subunit is configured to determine it as slow driving in response to the second quantity being greater than the first and the third. The third road condition determination subunit is configured to determine it as congested in response to the third quantity being greater than the first and the second.
Further, the target image is determined according to the number of target objects in each road condition collection image and/or the clarity of each image, the road condition collection images being the images in the road condition collection sequence corresponding to the target road segment.
Further, the apparatus 14B also includes a sequence receiving unit 1412, a page display unit 1413 and an image sequence playing unit 1414.
The sequence receiving unit 1412 is configured to receive the target image sequence corresponding to the target road segment, the target image sequence including the target image. The page display unit 1413 is configured to display the video playback page in response to the road condition display control being triggered. The image sequence playing unit 1414 is configured to play the target image sequence through the video playback page.
Further, the control display unit 1411 is configured to render and display the road condition display control at a predetermined position of the navigation page.
Further, the predetermined position is the display position of the target road segment in the navigation page, and/or the lower part of the navigation page, and/or the upper part of the navigation page, and/or the left side of the navigation page, and/or the right side of the navigation page.
Further, the apparatus 14B also includes a segment information display unit 1415.
The segment information display unit 1415 is configured to display the segment information through the road condition display control in response to receiving the segment information of the target road segment, the segment information including the road condition information of the target road segment.
After acquiring the route navigation information of the terminal, the server of this embodiment determines the road condition information of each road segment according to the position of the target object, relative to the corresponding lane of the corresponding segment, in the road condition collection sequence of each segment in the previously acquired route navigation information, and, after determining the target image corresponding to the segment whose road condition information satisfies the predetermined road condition condition, sends the target image to the terminal. Optionally, the server may also send the target image sequence including the target image and the segment information of the target segment to the terminal. After receiving the target image, the terminal may render and display, on the navigation page, a road condition display control for presenting it. Optionally, after receiving the segment information of the target segment, the terminal may display it through the road condition display control, and after receiving the target image sequence, display the video playback page in response to the road condition display control being triggered and play the target image sequence through the video playback page. This embodiment can accurately determine the position of each target object by means of image recognition, determine the road condition information of each segment from those positions, and display the road conditions of a specific segment through real-scene images and videos, which improves the accuracy and timeliness of road condition determination and enables users to avoid congested segments in time.
FIG. 15 is a schematic diagram of the electronic device of the fourth embodiment of the present invention. The electronic device shown in FIG. 15 is a general-purpose data processing apparatus that comprises a general-purpose computer hardware structure including at least a processor 1501 and a memory 1502, connected by a bus 1503. The memory 1502 is adapted to store instructions or programs executable by the processor 1501. The processor 1501 may be an independent microprocessor or a set of one or more microprocessors; it executes the commands stored in the memory 1502 to carry out the method flows of the embodiments of the present invention described above, processing data and controlling other devices. The bus 1503 connects the above components together and also connects them to a display controller 1504, a display device, and input/output (I/O) devices 1505. The I/O devices 1505 may be a mouse, keyboard, modem, network interface, touch input device, somatosensory input device, printer, or other devices known in the art; typically, they are connected to the system through an I/O controller 1506.
The memory 1502 may store software components, such as an operating system, communication modules, interaction modules and application programs. Each of the modules and applications described above corresponds to a set of executable program instructions that accomplish one or more functions and the methods described in the embodiments of the invention.
The flowchart and/or block diagram descriptions above of the methods, devices (systems) and computer program products according to embodiments of the present invention describe various aspects of the invention. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer or other programmable data processing device to produce a machine, such that the instructions (executed via the processor of the computer or other programmable data processing device) create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Meanwhile, as those skilled in the art will appreciate, various aspects of the embodiments of the present invention may be implemented as a system, a method or a computer program product. Accordingly, they may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, microcode, etc.), or an implementation combining software and hardware aspects that may generally be referred to herein as a "circuit", "module" or "system". Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
Any combination of one or more computer-readable media may be used. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example (but not limited to), an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, device or apparatus, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the embodiments of the present invention, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, device or apparatus.
A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, device or apparatus.
Computer program code for carrying out operations directed to aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, C++, PHP and Python, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer as a stand-alone software package, partly on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (29)

  1. 一种交互方法,其特征在于,所述方法包括:
    获取路径导航信息;
    确定所述路径导航信息中各路段的路况信息,所述路况信息根据各路段对应的路况采集序列中目标对象的位置确定;
    确定并发送目标路段对应的目标图像,所述目标路段为所述路径导航信息中路况信息满足预定路况条件的路段。
  2. The method according to claim 1, wherein the road condition information is determined by the following steps:
    determining a road segment to be determined;
    performing image recognition on each road condition capture image in an image sequence to be recognized, to determine the position of the target object in each road condition capture image, the image sequence to be recognized being the road condition capture sequence corresponding to the road segment to be determined;
    determining, according to the position of the target object, a lane passable state of the lane corresponding to the target object;
    determining, according to each lane passable state, a first road segment congestion state corresponding to the image sequence to be recognized; and
    determining the road condition information of the road segment to be determined according to the first road segment congestion state.
  3. The method according to claim 2, wherein determining, according to each lane passable state, the first road segment congestion state corresponding to the image sequence to be recognized comprises:
    determining a corresponding second road segment congestion state according to each lane passable state corresponding to each road condition capture image; and
    determining the first road segment congestion state according to each second road segment congestion state.
  4. The method according to claim 2, wherein the position of the target object characterizes the position of the target object relative to a lane line of the corresponding lane; and
    determining, according to the position of the target object, the lane passable state of the lane corresponding to the target object comprises:
    determining, according to the position of the target object, a target distance corresponding to the target object, the target distance characterizing the maximum distance between the target object and the lane line of the corresponding lane;
    determining a passable distance corresponding to a target device, the target device being the device corresponding to the image sequence to be recognized; and
    determining the lane passable state according to the target distance and the passable distance.
  5. The method according to claim 4, wherein determining the lane passable state according to the target distance and the passable distance comprises:
    in response to the target distance being not less than the passable distance, determining that the lane passable state is passable; and
    in response to the target distance being less than the passable distance, determining that the lane passable state is impassable.
  6. The method according to claim 3, wherein determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition capture image comprises:
    in response to each corresponding lane passable state being impassable, determining that the second road segment congestion state is congested;
    in response to each corresponding lane passable state being passable, determining that the second road segment congestion state is clear; and
    in response to at least one corresponding lane passable state being impassable and at least one lane passable state being passable, determining that the second road segment congestion state is slow-moving.
  7. The method according to claim 3, wherein determining the first road segment congestion state according to each second road segment congestion state comprises:
    in response to the second road segment congestion states corresponding to a plurality of consecutive road condition capture images all being congested, determining that the first road segment congestion state is congested;
    in response to the second road segment congestion states corresponding to a plurality of consecutive road condition capture images being slow-moving, determining that the first road segment congestion state is slow-moving; and
    in response to the number of second road segment congestion states that are clear among those corresponding to a plurality of consecutive road condition capture images satisfying a third quantity condition, determining that the first road segment congestion state is clear.
  8. The method according to claim 2, wherein determining the road condition information of the road segment to be determined according to the first road segment congestion state comprises:
    acquiring the first road segment congestion state corresponding to each image sequence to be recognized in an image sequence set, the image sequence set comprising a plurality of image sequences to be recognized that correspond to the road segment to be determined within the same time period;
    determining a first quantity, a second quantity, and a third quantity, the first quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is clear, the second quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is slow-moving, and the third quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is congested;
    in response to the first quantity being greater than the second quantity and greater than the third quantity, determining the road condition information as clear;
    in response to the second quantity being greater than the first quantity and greater than the third quantity, determining the road condition information as slow-moving; and
    in response to the third quantity being greater than the first quantity and greater than the second quantity, determining the road condition information as congested.
  9. The method according to claim 1, wherein determining and sending the target image corresponding to the target road segment comprises:
    determining the number of target objects in each road condition capture image and/or the sharpness of each road condition capture image; and
    determining the target image according to the number of target objects and/or the sharpness corresponding to each road condition capture image.
  10. The method according to claim 1, wherein the method further comprises:
    determining and sending a target image sequence corresponding to the target road segment, the target image sequence comprising the target image.
  11. The method according to claim 1 or 10, wherein the method further comprises:
    determining and sending road segment information of the target road segment, the road segment information comprising the road condition information of the target road segment.
  12. An interaction method, wherein the method comprises:
    in response to receiving a target image corresponding to a target road segment, rendering and displaying a road condition display control on a navigation page;
    wherein the target image is determined based on pre-uploaded route navigation information, the road condition display control is used to display the target image, the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition, and the road condition information is determined according to positions of a target object in a road condition capture sequence corresponding to each road segment in the route navigation information.
  13. The method according to claim 12, wherein the road condition information is determined by the following steps:
    determining a road segment to be determined;
    performing image recognition on each road condition capture image in an image sequence to be recognized, to determine the position of the target object in each road condition capture image, the image sequence to be recognized being the road condition capture sequence corresponding to the road segment to be determined;
    determining, according to the position of the target object, a lane passable state of the lane corresponding to the target object;
    determining, according to each lane passable state, a first road segment congestion state corresponding to the image sequence to be recognized; and
    determining the road condition information of the road segment to be determined according to the first road segment congestion state.
  14. The method according to claim 13, wherein determining, according to each lane passable state, the first road segment congestion state corresponding to the image sequence to be recognized comprises:
    determining a corresponding second road segment congestion state according to each lane passable state corresponding to each road condition capture image; and
    determining the first road segment congestion state according to each second road segment congestion state.
  15. The method according to claim 13, wherein determining, according to the position of the target object, the lane passable state of the lane corresponding to the target object comprises:
    determining, according to the position of the target object, a target distance corresponding to the target object, the target distance characterizing the maximum distance between the target object and the lane line of the corresponding lane;
    determining a passable distance corresponding to a target device, the target device being the device corresponding to the image sequence to be recognized; and
    determining the lane passable state according to the target distance and the passable distance.
  16. The method according to claim 15, wherein determining the lane passable state according to the target distance and the passable distance comprises:
    in response to the target distance being not less than the passable distance, determining that the lane passable state is passable; and
    in response to the target distance being less than the passable distance, determining that the lane passable state is impassable.
  17. The method according to claim 14, wherein determining the corresponding second road segment congestion state according to each lane passable state corresponding to each road condition capture image comprises:
    in response to each corresponding lane passable state being impassable, determining that the second road segment congestion state is congested;
    in response to each corresponding lane passable state being passable, determining that the second road segment congestion state is clear; and
    in response to at least one corresponding lane passable state being impassable and at least one lane passable state being passable, determining that the second road segment congestion state is slow-moving.
  18. The method according to claim 14, wherein determining the first road segment congestion state according to each second road segment congestion state comprises:
    in response to the second road segment congestion states corresponding to a plurality of consecutive road condition capture images all being congested, determining that the first road segment congestion state is congested;
    in response to the second road segment congestion states corresponding to a plurality of consecutive road condition capture images being slow-moving, determining that the first road segment congestion state is slow-moving; and
    in response to the number of second road segment congestion states that are clear among those corresponding to a plurality of consecutive road condition capture images satisfying a third quantity condition, determining that the first road segment congestion state is clear.
  19. The method according to claim 13, wherein determining the road condition information of the road segment to be determined according to the first road segment congestion state comprises:
    acquiring the first road segment congestion state corresponding to each image sequence to be recognized in an image sequence set, the image sequence set comprising a plurality of image sequences to be recognized that correspond to the road segment to be determined within the same time period;
    determining a first quantity, a second quantity, and a third quantity, the first quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is clear, the second quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is slow-moving, and the third quantity characterizing the number of image sequences to be recognized in the image sequence set whose first road segment congestion state is congested;
    in response to the first quantity being greater than the second quantity and greater than the third quantity, determining the road condition information as clear;
    in response to the second quantity being greater than the first quantity and greater than the third quantity, determining the road condition information as slow-moving; and
    in response to the third quantity being greater than the first quantity and greater than the second quantity, determining the road condition information as congested.
  20. The method according to claim 12, wherein the target image is determined according to the number of target objects in each road condition capture image and/or the sharpness of each road condition capture image, the road condition capture images being images in the road condition capture sequence corresponding to the target road segment.
  21. The method according to claim 12, wherein the method further comprises:
    receiving a target image sequence corresponding to the target road segment, the target image sequence comprising the target image;
    in response to the road condition display control being triggered, displaying a video playback page; and
    playing the target image sequence through the video playback page.
  22. The method according to claim 12, wherein rendering and displaying the road condition display control on the navigation page comprises:
    rendering and displaying the road condition display control at a predetermined position on the navigation page.
  23. The method according to claim 22, wherein the predetermined position is the display position of the target road segment on the navigation page, and/or below the navigation page, and/or above the navigation page, and/or to the left of the navigation page, and/or to the right of the navigation page.
  24. The method according to claim 12 or 21, wherein the method further comprises:
    in response to receiving road segment information of the target road segment, displaying the road segment information through the road condition display control, the road segment information comprising the road condition information of the target road segment.
  25. An interaction device, wherein the device comprises:
    a navigation information acquisition unit configured to acquire route navigation information;
    a road condition information determination unit configured to determine road condition information of each road segment in the route navigation information, the road condition information being determined according to positions of a target object in a road condition capture sequence corresponding to each road segment, the position of the target object characterizing the position of the target object relative to a lane line of the corresponding lane; and
    an image sending unit configured to determine and send a target image corresponding to a target road segment, the target road segment being a road segment in the route navigation information whose road condition information satisfies a predetermined road condition.
  26. An interaction device, wherein the device comprises:
    a control display unit configured to, in response to receiving a target image corresponding to a target road segment, render and display a road condition display control on a navigation page;
    wherein the target image is determined based on pre-uploaded route navigation information, the road condition display control is used to display the target image, the target road segment is a road segment in the route navigation information whose road condition information satisfies a predetermined road condition, the road condition information is determined according to positions of a target object in a road condition capture sequence corresponding to each road segment in the route navigation information, and the position of the target object characterizes the position of the target object relative to a lane line of the corresponding lane.
  27. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-24.
  28. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method according to any one of claims 1-24.
  29. A computer program product comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the method according to any one of claims 1-24.
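The decision logic in claims 4 through 8 composes into a short pipeline, sketched below in Python. Everything here is an illustrative assumption rather than terminology from the patent: the function names and state labels are invented for readability, and the claims deliberately leave quantities such as the consecutive-image run length and the "third quantity condition" unspecified, so `run_length` and `clear_threshold` are placeholders.

```python
from collections import Counter

def lane_passable_state(target_distance, passable_distance):
    """Claims 4-5: a lane is passable when the target object's maximum distance
    to its lane line is not less than the target device's passable distance."""
    return "passable" if target_distance >= passable_distance else "impassable"

def image_congestion_state(lane_states):
    """Claim 6: the per-image (second) congestion state from per-lane states.
    Assumes lane_states contains at least one lane."""
    if all(s == "impassable" for s in lane_states):
        return "congested"   # every lane blocked
    if all(s == "passable" for s in lane_states):
        return "clear"       # every lane open
    return "slow"            # mixed: some lanes blocked, some open

def sequence_congestion_state(states, run_length=5, clear_threshold=3):
    """Claim 7: the per-sequence (first) congestion state from per-image states.
    run_length and clear_threshold stand in for the unspecified conditions."""
    windows = [states[i:i + run_length] for i in range(len(states) - run_length + 1)]
    if any(all(s == "congested" for s in w) for w in windows):
        return "congested"
    if any(all(s == "slow" for s in w) for w in windows):
        return "slow"
    if states.count("clear") >= clear_threshold:
        return "clear"
    return None  # no rule fired; the claims leave this case open

def segment_road_condition(first_states):
    """Claim 8: strict majority vote over the sequence-level states collected
    for one segment within the same time period; ties are left unspecified."""
    counts = Counter(first_states)
    clear, slow, congested = counts["clear"], counts["slow"], counts["congested"]
    if clear > slow and clear > congested:
        return "clear"
    if slow > clear and slow > congested:
        return "slow"
    if congested > clear and congested > slow:
        return "congested"
    return None  # tie between the leading states
```

For example, a segment most of whose recognized image sequences yield a "clear" first state would be reported as clear under the majority rule of claim 8.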
PCT/CN2022/077520 2021-03-23 2022-02-23 Interaction method and interaction device WO2022199311A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
BR112023019025A BR112023019025A2 (pt) 2021-03-23 2022-02-23 Métodos e dispositivos de interação

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110309280.4 2021-03-23
CN202110309280.4A CN113048982B (zh) 2021-03-23 2021-03-23 Interaction method and interaction device

Publications (1)

Publication Number Publication Date
WO2022199311A1 true WO2022199311A1 (zh) 2022-09-29

Family

ID=76514635

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077520 WO2022199311A1 (zh) Interaction method and interaction device

Country Status (3)

Country Link
CN (1) CN113048982B (zh)
BR (1) BR112023019025A2 (zh)
WO (1) WO2022199311A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116972871A (zh) * 2023-09-25 2023-10-31 苏州元脑智能科技有限公司 Driving path pushing method and apparatus, readable storage medium, and system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN113048982B (zh) * 2021-03-23 2022-07-01 北京嘀嘀无限科技发展有限公司 Interaction method and interaction device
CN113470408A (zh) * 2021-07-16 2021-10-01 浙江数智交院科技股份有限公司 Traffic information prompting method and apparatus, electronic device, and storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
US20060167626A1 (en) * 2005-01-24 2006-07-27 Denso Corporation Navigation system and program for controlling the same
CN105225496A (zh) * 2015-09-02 2016-01-06 上海斐讯数据通信技术有限公司 Road traffic early-warning system
CN108010362A (zh) * 2017-12-29 2018-05-08 百度在线网络技术(北京)有限公司 Method and apparatus for pushing driving road condition information, storage medium, and terminal device
CN110364008A (zh) * 2019-08-16 2019-10-22 腾讯科技(深圳)有限公司 Road condition determination method and apparatus, computer device, and storage medium
US20200175855A1 (en) * 2018-12-03 2020-06-04 Hyundai Motor Company Traffic information service apparatus and method
CN111314651A (zh) * 2018-12-11 2020-06-19 上海博泰悦臻电子设备制造有限公司 V2X-based road condition display method and system, V2X terminal, and V2X server
CN111325999A (zh) * 2018-12-14 2020-06-23 奥迪股份公司 Vehicle driving assistance method and apparatus, computer device, and storage medium
CN111739294A (zh) * 2020-06-11 2020-10-02 腾讯科技(深圳)有限公司 Road condition information collection method and apparatus, device, and storage medium
CN113048982A (zh) * 2021-03-23 2021-06-29 北京嘀嘀无限科技发展有限公司 Interaction method and interaction device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2008031779A (ja) * 2006-07-31 2008-02-14 Atsunobu Sakamoto Prevention of congestion on motor roads
CN104851295B (zh) * 2015-05-22 2017-08-04 北京嘀嘀无限科技发展有限公司 Method and system for acquiring road condition information
CN109326123B (zh) * 2018-11-15 2021-01-26 中国联合网络通信集团有限公司 Road condition information processing method and apparatus


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116972871A (zh) * 2023-09-25 2023-10-31 苏州元脑智能科技有限公司 Driving path pushing method and apparatus, readable storage medium, and system
CN116972871B (zh) * 2023-09-25 2024-01-23 苏州元脑智能科技有限公司 Driving path pushing method and apparatus, readable storage medium, and system

Also Published As

Publication number Publication date
CN113048982A (zh) 2021-06-29
CN113048982B (zh) 2022-07-01
BR112023019025A2 (pt) 2023-10-17

Similar Documents

Publication Publication Date Title
WO2022199311A1 (zh) Interaction method and interaction device
US10077986B2 (en) Storing trajectory
CN113029177B (zh) Frequency-based traversal trip characterization
TWI684746B (zh) System, method, and non-transitory computer-readable medium for displaying movement and travel routes of a vehicle on a map
US10066954B1 (en) Parking suggestions
JP6488594B2 (ja) Automatic driving support system, automatic driving support method, and computer program
US11441918B2 (en) Machine learning model for predicting speed based on vehicle type
RU2677164C2 Method and server for creating traffic forecasts
KR20210137197A (ko) Route generation method and apparatus, electronic device, and storage medium
US9752886B2 (en) Mobile trip planner and live route update system
TW201250208A (en) Navigation system and route planning method thereof
JP7106794B2 (ja) Road condition prediction method, apparatus, device, program, and computer storage medium
US20220155091A1 (en) Landmark-assisted navigation
JP5949435B2 (ja) Navigation system, video server, video management method, video management program, and video presentation terminal
US20180017406A1 (en) Real-Time Mapping Using Geohashing
JP2015076078A (ja) Congestion prediction system, terminal device, congestion prediction method, and congestion prediction program
JP2015076077A (ja) Traffic volume estimation system, terminal device, traffic volume estimation method, and traffic volume estimation program
JP6786376B2 (ja) Evaluation device, evaluation method, and evaluation program
JP2019020172A (ja) Route suggestion device and route suggestion method
US20230097373A1 (en) Traffic monitoring, analysis, and prediction
RU2664034C1 Method and system for creating traffic information to be used in a map application executed on an electronic device
US11719554B2 (en) Determining dissimilarities between digital maps and a road network using predicted route data and real trace data
JP6606354B2 (ja) Route display method, route display device, and database creation method
US20210364312A1 (en) Routes on Digital Maps with Interactive Turn Graphics
Richly et al. Predicting location probabilities of drivers to improved dispatch decisions of transportation network companies based on trajectory data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 22773973; Country of ref document: EP; Kind code of ref document: A1.
WWE Wipo information: entry into national phase. Ref document number: MX/A/2023/011293; Country of ref document: MX.
REG Reference to national code. Ref country code: BR; Ref legal event code: B01A; Ref document number: 112023019025; Country of ref document: BR.
ENP Entry into the national phase. Ref document number: 112023019025; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20230919.
NENP Non-entry into the national phase. Ref country code: DE.
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/01/2024).