WO2021221210A1 - Method and Apparatus for Generating a Smart Route - Google Patents
- Publication number: WO2021221210A1 (application PCT/KR2020/005720)
- Authority: WIPO (PCT)
- Prior art keywords
- content
- user
- video
- smart route
- time
Classifications
- G01C21/3697 — Output of additional, non-guidance related information, e.g. low fuel level
- G01C21/36 — Input/output arrangements for on-board computers
- G01C21/3691 — Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
- G06F16/783 — Retrieval characterised by using metadata automatically derived from the content
- G06F16/787 — Retrieval characterised by using geographical or spatial information, e.g. location
- G06Q30/02 — Marketing; Price estimation or determination; Fundraising
- G06Q30/0261 — Targeted advertisements based on user location
Definitions
- The present invention relates to a method of providing content that reflects a user's request along the user's search route.
- A preferred embodiment of the present invention proposes a method in which the user is provided with the necessary content on the moving route, by providing content or advertisements that meet the user's request along the user's search route.
- The method for generating a smart route includes the steps of: receiving an input of a departure point and a destination; receiving a user request as a search term in the form of a sentence; displaying on a map, by the route display unit, a smart route that meets the user request among a plurality of candidate routes from the departure point to the destination; and selecting, in real time according to the user's location, at least one content item that meets the user request and displaying it in real time.
- Each of the at least one content items has time information and location information on a Geographic Information System (GIS), and the time information includes a creation time and an extinction time of the content.
- The information value of a content item decreases as its extinction time approaches.
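The patent states only that a content item's information value decreases as its extinction time approaches; it does not specify the decay curve. A minimal sketch, assuming a simple linear decay from the creation time to the extinction time (the linear form and the function name are illustrative assumptions):

```python
from datetime import datetime

def information_value(creation: datetime, extinction: datetime, now: datetime) -> float:
    """Illustrative linear decay: value is 1.0 at the creation time and
    falls to 0.0 at the extinction time. The linear curve is an
    assumption; the patent only says value decreases as the
    extinction time approaches."""
    total = (extinction - creation).total_seconds()
    remaining = (extinction - now).total_seconds()
    if total <= 0:
        return 0.0
    # Clamp so content is worthless after extinction and full-value before creation.
    return min(1.0, max(0.0, remaining / total))

# Content valid from 10:00 to 14:30, evaluated at 12:00:
print(information_value(datetime(2020, 4, 29, 10, 0),
                        datetime(2020, 4, 29, 14, 30),
                        datetime(2020, 4, 29, 12, 0)))
```

Any monotonically decreasing function of the remaining lifetime would satisfy the claim equally well.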
- The content display unit selects the corresponding content item when the input time at which the search term is entered does not conflict with the conditions on the item's creation time and extinction time, and the item does not conflict with the content of the search term.
- The content display unit selects the corresponding content item only when the item's location information lies on the smart route.
- The content items are played sequentially in order of proximity of their location information to the user's current location, so that content related to the user's request is provided in real time around the user's moving path. In the case of a video, only the partial section related to the search term is played.
- When the at least one content item is a video, metadata in the form of a sentence is provided for each of the at least one scenes constituting the video; based on this metadata, the scene with the highest degree of matching with the search term is searched for, and only the section from the start point to the end point of the searched scene is played.
- The matching degree is determined based on the similarity between the search term and the metadata, using a Levenshtein distance technique in which the value is 0 when the two sentences are identical and increases as the similarity between the two sentences decreases.
- the at least one content includes at least one of a moving picture, a photo, text, and audio.
- At least one content that meets the user's request is displayed overlaid on the smart route displayed on the map.
- The smart route generating apparatus includes: a first input unit for receiving a departure point and a destination; a second input unit for receiving a user request as a search term in the form of a sentence; a route display unit for displaying on a map a smart route that meets the user request among a plurality of routes from the departure point to the destination; and a content display unit that selects, in real time according to the user's location, at least one content item that meets the user request and displays it on the smart route in real time.
- the at least one content includes advertisement information for a product or service related to the smart route.
- The method and apparatus for generating a smart route display on the user's movement route only the specific sections that the user wants to find within the videos along the searched route, so that the user can acquire the necessary information along the movement route.
- FIG. 1 illustrates an example in which components constituting a moving picture are divided into a scene and a shot as a preferred embodiment of the present invention.
- FIG. 2 is a flowchart of a method for retrieving information inside a moving picture as a preferred embodiment of the present invention.
- FIG. 3 is a diagram showing an internal configuration of an apparatus for searching video internal information as a preferred embodiment of the present invention.
- FIG. 4 shows an example of dividing a shot in a moving picture as a preferred embodiment of the present invention.
- FIG. 5 shows an example of assigning a tag set to a shot as a preferred embodiment of the present invention.
- FIG. 6 shows an example of grouping a shot into a scene as a preferred embodiment of the present invention.
- FIG. 7 is a flowchart of a method for retrieving information in a moving picture as another preferred embodiment of the present invention.
- FIG. 8 shows an embodiment of searching for information inside a moving picture as a preferred embodiment of the present invention.
- FIG. 9 shows the internal configuration of a smart route generating device as another preferred embodiment of the present invention.
- FIG. 10 shows an implementation example of the content selection unit of FIG. 9 as a preferred embodiment of the present invention.
- FIG. 11 shows an example of generating a smart route as a preferred embodiment of the present invention.
- FIG. 12 is a flowchart for creating a smart route as a preferred embodiment of the present invention.
- FIG. 1 shows an example in which components constituting a moving picture are divided into a scene and a shot as a preferred embodiment of the present invention.
- the moving picture 100 is segmented into n shots (n is a natural number) 111, 113, 121, 123, 125, 131, 133.
- For a method of segmenting shots in a video, refer to FIG. 4.
- At least one shot is grouped into units having similar meanings or subjects to constitute a scene.
- The first shot 111 and the second shot 113 may be grouped into the first scene 110; the third shot 121, the fourth shot 123, and the fifth shot 125 may be grouped into the second scene 120; and the sixth shot 131 and the seventh shot 133 may be grouped into the third scene 130.
- a subject may include at least one meaning.
- FIG. 2 is a flowchart of a method for retrieving video internal information as a preferred embodiment of the present invention.
- the user selects the video and inputs a search word through a search word input interface provided when video selection is activated.
- the video is indexed in units of scenes by providing metadata in the form of sentences for each scene.
- the video internal information search apparatus searches for a specific section that matches the search word or has high relevance in the video, and reproduces only the searched specific section.
- The video internal information search apparatus searches the video for the scene with the highest degree of matching with the search term (S210, S220), and plays back only the section from the start point to the end point of the searched scene (S230).
- FIG. 3 shows an internal configuration diagram of an apparatus 300 for searching video internal information as a preferred embodiment of the present invention.
- FIGS. 4 to 6 show detailed functions of the video section search unit 320 constituting the apparatus 300 for searching video internal information.
- FIG. 7 is a flowchart of a method for searching video internal information.
- a method for searching video internal information in a device for searching video internal information will be described with reference to FIGS. 3 to 7 .
- the apparatus 300 for searching video internal information may be implemented in a terminal, a computer, a notebook computer, a handheld device, or a wearable device.
- the apparatus 300 for searching video internal information may be implemented in the form of a terminal having an input unit for receiving a user's search word, a display for displaying a video, and a processor.
- the method of searching the video internal information may be implemented by being installed in the form of an application in the terminal.
- the apparatus 300 for searching video internal information includes a search word input unit 310 , a video section search unit 320 , and a video section playback unit 330 .
- the video section search unit 320 includes a shot segmentation unit 340 , a scene generation unit 350 , a metadata generation unit 360 , and a video index unit 370 .
- the search word input unit 310 receives a search word from the user in the form of a sentence.
- the user can use all forms such as voice search, text search, and image search.
- An example of an image search is a case where the contents scanned from a book are converted into text and used as a search term.
- the search word input unit 310 may be implemented as a keyboard, a stylus pen, a microphone, or the like.
- the video section search unit 320 searches for a specific section in the video that matches the search word input from the search word input unit 310 or has content related to the search word. As an embodiment, the video section search unit 320 searches for a scene in which a sentence having the highest degree of matching with the input search word sentence is assigned as metadata.
- the video section search unit 320 indexes and manages videos so that information can be searched within a single video.
- The shot segmentation unit 340 segments the video into shot units (S710) and assigns a tag set to each segmented shot (S720). A keyword is then derived for each shot by applying a topic analysis algorithm to its tag set (S730); the keyword identifies and distinguishes the content of each of the at least one shots constituting the video.
- The scene generator 350 determines the similarity between shots that are adjacent on the timeline of the video. The similarity determination may be performed based on a keyword derived from each shot, an object detected in each shot, a voice feature detected in each shot, and the like. As a preferred embodiment of the present invention, the scene generator 350 may create a scene by grouping adjacent shots whose keywords have a high degree of similarity (S740).
- An algorithm for performing grouping may include a hierarchical clustering technique (S750). In this case, a plurality of shots included in one scene may be interpreted as delivering content having similar meaning or subject matter. For an example of grouping shots through hierarchical clustering in the scene generator 350 , refer to FIG. 8 .
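The grouping step (S740-S750) can be sketched as follows. This is a deliberate simplification: instead of full hierarchical clustering, it makes a single greedy pass over the timeline and merges each shot into the current scene while its keyword set stays similar to the previous shot's (Jaccard similarity and the 0.3 threshold are illustrative assumptions):

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two keyword sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_shots(keywords: list[set], threshold: float = 0.3) -> list[list[int]]:
    """Greedy timeline pass: extend the current scene while adjacent
    shots share enough keywords, otherwise start a new scene.
    A simplification of the hierarchical clustering step S740-S750."""
    scenes = [[0]]
    for i in range(1, len(keywords)):
        if jaccard(keywords[i - 1], keywords[i]) >= threshold:
            scenes[-1].append(i)
        else:
            scenes.append([i])
    return scenes

# Keyword sets from the FIG. 8 example (shots 801-807):
shots = [
    {"Japan", "Corona 19", "severe"},
    {"Japan", "Corona 19", "spread"},
    {"New York", "Corona 19", "Europe", "inflow"},
    {"US", "Corona 19", "death"},
    {"US", "Corona 19", "confirmed", "dead"},
    {"US", "Corona 19", "death"},
    {"US", "Corona 19", "death"},
]
print(group_shots(shots))  # → [[0, 1], [2], [3, 4, 5, 6]]
```

On the FIG. 8 keywords this reproduces the patent's grouping: shots 801-802, shot 803 alone, and shots 804-807.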
- the scene generator 350 assigns a scene tag to each created scene (351, 353, 355).
- the scene tag may be generated based on an image tag assigned to each of at least one shot included in each scene.
- a scene tag may be generated by a combination of a tag set assigned to each of at least one shot constituting a scene.
- the scene keyword may be generated by a combination of keywords derived from each of at least one shot constituting the scene.
- the scene tag may serve as a weight when generating metadata for each scene.
- the metadata generator 360 analyzes the scenes generated by the scene generator 350, and provides metadata for each scene, thereby supporting a search for internal video content (S760). Metadata assigned to each scene acts as an index.
- the metadata is in the form of a summary sentence indicating the contents of each scene.
- the metadata may be generated by further referring to a scene tag assigned to each of at least one shot constituting one scene.
- Scene tags can serve as weights when performing deep learning to generate metadata. For example, weight may be assigned to image tag information and voice tag information extracted from at least one tag set included in the scene tag.
- the metadata is generated based on STT (Speech to Text) data of voice data extracted from at least one shot constituting each scene, and a scene tag extracted from each of at least one shot constituting each scene.
- A summary sentence is generated by performing deep-learning-based machine learning on at least one STT data item and at least one scene tag obtained from the at least one shots constituting one scene. Metadata is assigned to each scene by using the summary sentence generated for that scene.
- The video indexing unit 370 uses the metadata assigned to each scene of the video S300 as an index. For example, if the video S300 is divided into three scenes, the video indexing unit 370 uses the first sentence 371 assigned as metadata to the first scene 351 (0:00 to t1) as an index, the second sentence 373 assigned as metadata to the second scene 353 (t1 to t2) as an index, and the third sentence 375 assigned as metadata to the third scene 355 (t2 to t3) as an index.
- When the user's search sentence is the first search sentence (S311), and among the plurality of metadata items (371, 373, 375) assigned to the scenes of the video the first sentence 371 has the highest degree of matching with it, the video section with the highest degree of matching with the search sentence input to the search word input unit 310 is the first scene 351.
- the video section reproducing unit 330 reproduces only the section 0:00 to t1 of the first scene 351 in the video S300.
- To determine the degree of matching, the video indexing unit 370 may use the Levenshtein distance technique, in which the value is 0 when two sentences are identical and increases as the similarity between the two sentences decreases; however, it is not limited thereto, and various algorithms for determining the similarity between two sentences can be used.
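The Levenshtein-based index lookup can be sketched as below. The index structure (metadata sentence mapped to a scene time range) and the sample sentences follow the FIG. 3 / FIG. 8 examples; note that character-level edit distance is a crude proxy for the sentence matching the patent describes, so a production system would likely use a semantic similarity measure:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: 0 when the sentences
    are identical, larger as they diverge."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_scene(query: str, index: dict) -> tuple:
    """Return the (start, end) section whose metadata sentence has the
    smallest Levenshtein distance to the query."""
    meta = min(index, key=lambda m: levenshtein(query, m))
    return index[meta]

# Hypothetical index: metadata sentence -> scene time range (from FIG. 8):
index = {
    "Japan's Corona 19 continues to spread": ("0:00", "0:29"),
    "New York's Corona 19 is said to be coming from Europe": ("0:30", "0:34"),
    "This is the news of the death of COVID-19 in the United States.": ("0:35", "0:50"),
}
print(best_scene("How is the current state of Corona in the United States?", index))
```

An identical query returns its own scene with distance 0, matching the claim that the value becomes 0 when the two sentences are the same.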
- When the user's search text is the second search text (S313), and among the plurality of metadata items (371, 373, 375) the second sentence 373 has the highest degree of matching with it, the video section with the highest degree of matching is determined to be the second scene 353.
- the video section reproducing unit 330 reproduces only the section t1 to t2 of the second scene 353 in the video S300.
- When the video indexing unit 370 determines that the user's search sentence is the third search sentence (S315) and the third search sentence has the highest degree of matching with the third sentence 375, the video section with the highest degree of matching with the search sentence input to the search word input unit 310 is determined to be the third scene 355.
- the video section reproducing unit 330 reproduces only the section t2 to t3 of the third scene 355 in the video S300.
- FIG. 4 shows an example of dividing a shot in a moving picture as a preferred embodiment of the present invention.
- the x-axis represents time (sec)
- the y-axis represents a representative HSV value.
- The shot segmentation unit 340 of the video internal information search apparatus extracts frames from the video S300 as images at regular intervals and converts each image into the HSV color space. Three time series are then generated, composed of the per-image median values of H (hue) (S401), S (saturation) (S403), and V (value/brightness) (S405). When the inflection points of all three time series coincide, or fall within a certain time window of one another, the corresponding point is set as a start point or an end point of a shot.
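The boundary rule above can be sketched on the median time series alone (frame extraction and HSV conversion are omitted; the direction-change test and the one-frame window are illustrative simplifications of the inflection-point detection in S401-S405):

```python
def inflection_points(series: list) -> set:
    """Indices where the series changes direction, used as a simple
    stand-in for the inflection points of the H/S/V median curves."""
    pts = set()
    for i in range(1, len(series) - 1):
        d1 = series[i] - series[i - 1]
        d2 = series[i + 1] - series[i]
        if d1 * d2 < 0:  # slope changes sign
            pts.add(i)
    return pts

def shot_boundaries(h: list, s: list, v: list, window: int = 1) -> list:
    """A frame index is a shot start/end point when all three median
    time series (H, S, V) have an inflection point within `window`
    frames of one another."""
    ph, ps, pv = inflection_points(h), inflection_points(s), inflection_points(v)
    near = lambda i, pts: any(abs(i - j) <= window for j in pts)
    return sorted(i for i in ph if near(i, ps) and near(i, pv))

# Synthetic per-frame medians with a clear direction change at frame 3:
h = [0.1, 0.2, 0.3, 0.9, 0.8, 0.7, 0.6]
s = [0.5, 0.6, 0.7, 0.2, 0.3, 0.4, 0.5]
v = [0.4, 0.5, 0.6, 0.1, 0.2, 0.3, 0.4]
print(shot_boundaries(h, s, v))  # → [3]
```

Requiring agreement across all three channels, as the patent describes, suppresses boundaries caused by a change in only one channel (e.g. a lighting shift that moves V but not H).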
- FIG. 5 shows an example of assigning a tag set to a shot as a preferred embodiment of the present invention.
- FIG. 5 illustrates an example in which the first tag set 550 is applied to the first shot 510 .
- the shot 510 is classified into image data 510a and audio data 510b.
- For the image data 510a, images are extracted at one-second intervals (520a), an object is detected in each image (530a), and an image tag is generated based on the detected object (540a).
- An image tag is information obtained by extracting objects from each image: object annotation or labeling is applied to the objects detected in the images to construct training data, and object recognition is then performed through deep learning for image recognition.
- A tag set 550 is then generated; the tag set refers to the combination of the image tag 540a and the voice tag 540b detected during the time span of the first shot 510, for example between 00:00 and 10:00 seconds.
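As a data structure, a tag set pairs the shot's time span with its image tags and voice tags. A minimal sketch (the class and field names are illustrative; the patent only specifies that a tag set combines the image tag and voice tag detected during the shot):

```python
from dataclasses import dataclass, field

@dataclass
class TagSet:
    """Tags detected during one shot's time span, per FIG. 5.
    Field names are assumptions for illustration."""
    start: float                                  # shot start, in seconds
    end: float                                    # shot end, in seconds
    image_tags: set = field(default_factory=set)  # from object detection (540a)
    voice_tags: set = field(default_factory=set)  # from audio analysis (540b)

    def all_tags(self) -> set:
        """The combination used later for keyword derivation."""
        return self.image_tags | self.voice_tags

first_shot = TagSet(0.0, 10.0,
                    image_tags={"person", "street"},
                    voice_tags={"Japan", "Corona 19"})
print(first_shot.all_tags())
```

The combined tag set is what the topic analysis step (e.g. LDA) would consume to derive a per-shot keyword.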
- FIG. 6 shows an example of grouping a shot into a scene as a preferred embodiment of the present invention.
- FIG. 6 illustrates an example of creating a scene through hierarchical clustering 640 after determining the degree of similarity based on the keyword 630 .
- FIG. 8 shows an embodiment of searching for information inside a moving picture as a preferred embodiment of the present invention.
- FIG. 8 shows an example in which the video 800 selected by the user in the shot segmentation unit is segmented into seven shots 801 to 807.
- The device for searching video internal information generates a tag set by extracting an image tag and an audio tag from each of the seven shots 801 to 807, and then derives a keyword for each shot (801 to 807) by performing topic analysis, such as LDA, on that shot's tag set.
- the first shot 801 is in the range of 0:00 to 0:17, and the first keyword derived from the first shot 801 is (Japan, Corona 19, severe) 801a ) am.
- the second shot 802 is a section from 0:18 to 0:29, and the second keyword derived from the second shot 802 is (Japan, Corona 19, Spread) 802a.
- the third shot 803 is a section from 0:30 to 0:34, and the third keyword derived from the third shot 803 is (New York, Corona 19, Europe, Inflow) 803a.
- the fourth shot 804 is a section from 0:34 to 0:38, and the fourth keyword derived from the fourth shot 804 is (US, Corona 19, death) 804a.
- the fifth shot 805 is a section from 0:39 to 0:41, and the fifth keyword derived from the fifth shot 805 is (US, Corona 19, confirmed, dead).
- the sixth shot 806 is a section from 0:42 to 0:45, and the sixth keyword derived from the sixth shot 806 is (US, Corona 19, death) 806a.
- the seventh shot 807 is a section from 0:46 to 0:50, and the seventh keyword derived from the seventh shot 807 is (US, Corona 19, death) 807a.
- the scene generator groups at least one shot based on the similarity.
- the degree of similarity can be determined based on keywords extracted from each shot, and video tags and voice tags can be further referred to.
- The first shot 801 and the second shot 802 are grouped into the first scene 810, the third shot 803 forms the second scene 820, and the fourth to seventh shots 804 to 807 are grouped into the third scene 830.
- The first scene 810 is the section from 0:00 to 0:29. With reference to the first keyword (Japan, Corona 19, severe) 801a derived from the first shot 801, the second keyword (Japan, Corona 19, spread) 802a derived from the second shot 802, and the voice data of the first shot 801 and the second shot 802, the metadata "Japan's Corona 19 continues to spread" 810b is assigned.
- The second scene 820 is the section from 0:30 to 0:34. With reference to the third keyword (New York, Corona 19, Europe, inflow) 803a derived from the third shot 803 and the voice data of the third shot 803, the metadata "New York's Corona 19 is said to be coming from Europe" 820b is assigned.
- The third scene 830 is the section from 0:35 to 0:50. With reference to the fourth keyword (US, Corona 19, death) 804a derived from the fourth shot 804, the fifth keyword (US, Corona 19, confirmed, dead) derived from the fifth shot 805, the sixth keyword (US, Corona 19, death) 806a derived from the sixth shot 806, and the voice data of the fourth shot 804 to the sixth shot 806, the metadata "This is the news of the death of COVID-19 in the United States." 830b is assigned.
- When the user selects the video 800 and the search word input interface is activated, the user inputs the content to be searched in the form of a sentence; for example, the search sentence "What is the current state of Corona in the United States?" 840 may be input.
- the video indexing unit searches for metadata with the highest degree of matching with the search word sentence 840 by using the metadata given to each scene as an index.
- The degree of matching is determined based on the similarity between the search term 840 and the metadata 810b, 820b, and 830b; the Levenshtein distance technique, in which the value is 0 when two sentences are identical, can be used.
- The video indexer searches for the metadata most similar to the user's search term 840, "How is the current state of Corona in the United States?", and the matching third scene 830 is played back to the user. Thus, of the video 800, only the section of the third scene 830 from 0:35 to 0:50 that is related to the search term 840 is searched for and viewed.
- the video indexing unit may provide the user with metadata assigned to each scene 810 to 830 constituting the video as an index. Users can preview the contents of the video in advance through the video index.
- FIG. 9 shows the internal configuration of a smart route generating device as another preferred embodiment of the present invention, and FIG. 11 shows an example in which a smart route is presented by the smart route generating device. The following description refers to FIGS. 9 and 11.
- the smart route generating device 900 includes an input unit 910 and a route display unit 920 , and can communicate with an external server 950 via wire or wireless.
- Examples of the external server 950 include a server 951 equipped with a map database, a server 953 equipped with a content database, a server 955 in which user-specific lifestyles, preferences, and the like are recorded and managed, a server 957 equipped with an advertisement database, a telecommunication company server, and the like.
- Examples of the smart route generating device 900 include terminals such as a mobile phone, a smartphone, a smart watch, a tablet, a laptop computer, and a PC.
- the smart route generating apparatus 900 may include all types of terminals having a processor that controls implementation so as to display the route input by the user.
- the input unit 910 includes a first input unit 912 and a second input unit 914 .
- the first input unit 912 is an interface for receiving a departure point and a destination from a user, and includes a text input, an audio input, a touch input, and the like.
- the second input unit 914 is an interface for receiving a user request as a search word from the user, and includes a text input and an audio input. Referring to FIG. 11 , the first input unit 912 may receive input of a departure point 1120a , a destination 1120b , a first waypoint 1120c , and a second waypoint 1120d .
- The second input unit 914 may be implemented to be activated after the departure point and destination are received through the first input unit 912; note, however, that this corresponds to one embodiment of the present invention and can be modified. Referring to FIG. 11, the second input unit 914 is an interface in the form of a search window 1110 and may receive an audio input 1111 or a text input 1113. As a preferred embodiment of the present invention, the second input unit 914 receives the user's request in the form of a sentence.
- the route display unit 920 includes a smart route display unit 930 and a content display unit 940 .
- the smart route refers to routes that meet the user's request received through the second input unit 914 among a plurality of candidate routes from the source to the destination input by the user.
- Referring to FIG. 11, the smart route display unit 930 displays, among the plurality of candidate routes from the departure point 1120a to the destination 1120b, the routes with search results for the user's request "find a place to eat breakfast" 1113.
- the smart route display unit 930 may display a first smart route S1100 , a second smart route S1110 , a third smart route S1120 , and the like.
- the content display unit 940 displays at least one content on the smart route in real time according to the user's location.
- the content display unit 940 may further include a content selection unit (not shown).
- Each content item shown on the smart route has time information and location information on a Geographic Information System (GIS), and the time information includes a creation time and an extinction time of the content.
- the content selection unit selects content that meets the user's request on the smart route using time information and location information of each content pre-stored in a preset database. For a description of this, refer to FIG. 10 .
- the content display unit 940 may display all contents matching the user's search term on the smart path.
- the content includes text, audio, and video, and may include advertisements, SNS, blog posts, and the like.
- the content display unit 940 may sequentially reproduce some of the contents displayed on the smart route according to the user's real-time location.
- the content display unit 940 sequentially plays the video in order of the content having the closest location information based on the user's current location.
- the user may be provided with content related to the user's request in real time around the user's moving path.
- For example, the content of the sandwich shop 1101 nearest the user's location detected at time t1 is played; then the content of the samgyetang restaurant 1102 nearest the user's location 1121a detected at time t2 is played; and thereafter the content of the noodle restaurant 1103 nearest the user's location 1121b detected at time t3 is played.
- the video provided to the user may be limited to the specific section related to the user's search term, rather than the entire video uploaded to SNS, posted to a blog, or supplied by an advertiser.
- for a method of extracting and providing only the specific section related to the user's search term from a video, refer to the description of FIGS. 1 to 8.
- FIG. 10 shows an implementation example of a content selection unit as a preferred embodiment of the present invention.
- the content selection unit extracts, from the server or an external server in which content is pre-stored, all content associated with the plurality of candidate routes from the origin to the destination received from the user through the first input unit (910 in FIG. 9), thereby obtaining a content list (S1010).
- the first content group is extracted by comparing the creation time and extinction time of each content item in the content list with the time at which the user's search term was input (S1020).
- for example, suppose the creation time of the content provided by the first store is 11:00 and its extinction time is 3:00; the creation time of the content provided by the second store is 11:30 and its extinction time is 3:00; and the creation time of the content provided by the third store is 10:00 and its extinction time is 2:30. Each of these windows is compared with the time at which the user's search term "find a place to have breakfast" (1113 in FIG. 11) was input.
- a second content group matching the meaning of the user's search term is then extracted from the first content group (S1030). For example, when the content provided by the first store relates to haircuts and the content provided by the third store relates to food, only the content provided by the third store is extracted into the second content group. The content items in the second content group are then played in order of proximity to the user's location (S1040).
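Steps S1020 and S1030 amount to a two-stage filter: first by validity window, then by meaning. A minimal sketch, assuming content items are dictionaries and approximating semantic matching by keyword overlap (the patent defers the actual matching technique to the description of FIGS. 1 to 8):

```python
def extract_content_groups(content_list, query_time, query_keywords):
    # S1020: first content group -- items whose creation-to-extinction window
    # covers the moment the user entered the search term.
    first_group = [c for c in content_list
                   if c["starting_time"] <= query_time <= c["ending_time"]]
    # S1030: second content group -- items whose text matches the meaning of
    # the search term (approximated here by simple keyword overlap).
    second_group = [c for c in first_group
                    if any(k in c["text"].lower() for k in query_keywords)]
    return second_group
```

The resulting second group is what the content display unit would then play in order of proximity to the user (S1040).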
- FIG. 12 is a flowchart for creating a smart route as a preferred embodiment of the present invention.
- a method of generating a smart route that proposes to the user a destination at which the user can perform a desired task is as follows.
- the origin and destination are input by the user (S1210), and the user's request is then input as a search term (S1220).
- the search term has the form of a sentence.
- the smart route display unit displays on the map a smart route that meets the user's request among the plurality of candidate routes from the origin to the destination (S1230).
- simultaneously or separately, according to the user's selection, at least one content item is further displayed in real time on the smart route through the content display unit according to the user's location (S1240).
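Taken together, steps S1210 to S1240 can be sketched as one pipeline. The representations below are simplified assumptions made for illustration: a route is reduced to a named set of point IDs, and sentence matching is reduced to keyword overlap.

```python
def generate_smart_route(candidate_routes, contents, search_sentence, query_time):
    """Return candidate routes annotated with matching, currently valid content."""
    # S1220: the user's request arrives as a sentence; split it into keywords.
    keywords = [w for w in search_sentence.lower().split() if len(w) > 3]

    def matches(c):
        # Content must be valid at query time and related to the request.
        return (c["starting_time"] <= query_time <= c["ending_time"]
                and any(k in c["text"].lower() for k in keywords))

    smart_routes = []
    for route in candidate_routes:
        on_route = [c for c in contents
                    if c["point"] in route["points"] and matches(c)]
        if on_route:  # S1230/S1240: only routes satisfying the request are shown
            smart_routes.append({"route": route["name"], "contents": on_route})
    return smart_routes
```

A route with no matching content on it is simply omitted from the displayed smart routes.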
- Methods according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
- the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software.
Claims (15)
- A smart route generation method comprising: receiving an origin and a destination; receiving a user request as a search term; and displaying on a map a smart route that meets the user request among a plurality of candidate routes from the origin to the destination, and further displaying at least one content item on the smart route in real time according to the user's location, wherein the search term is in the form of a sentence, and the at least one content item includes content associated with the search term.
- The smart route generation method of claim 1, wherein each of the at least one content item has time information and location information on a Geographic Information System (GIS), and the time information includes a creation time (starting time) and an extinction time (ending time) of the content.
- The smart route generation method of claim 2, wherein the information value of the content decreases as the extinction time approaches.
- The smart route generation method of claim 2, wherein the content display unit selects a content item when the creation-time and extinction-time conditions of the at least one content item do not conflict with the input time at which the search term is entered and do not conflict with the content of the search term.
- The smart route generation method of claim 2, wherein the content display unit selects a content item only when the location information of the at least one content item lies on the smart route.
- The smart route generation method of claim 1, wherein, when the at least one content item is a video, the content items are played sequentially in order of the closest location information based on the user's current location, so that content related to the user request is provided in real time around the user's movement path.
- The smart route generation method of claim 6, wherein only the section of the video related to the search term is played.
- The smart route generation method of claim 1, wherein, when the at least one content item is a video, metadata in sentence form is assigned to each of at least one scene constituting the video, a scene with the highest degree of matching with the search term is searched based on the metadata, and only the portion from the start point to the end point of the found scene is played.
- The smart route generation method of claim 8, wherein the degree of matching is determined based on the similarity between the search term and the metadata, using a Levenshtein distance technique in which the value is 0 when the two sentences are identical and grows as the similarity between the two sentences decreases.
- The smart route generation method of claim 1, wherein the at least one content item includes at least one of a video, a photo, text, and audio.
- The smart route generation method of claim 1, wherein at least one content item that meets the user request is displayed superimposed on the smart route displayed on the map.
- A smart route generation apparatus comprising: a first input unit that receives an origin and a destination; a second input unit that receives a user request as a search term; and a route display unit that displays on a map a smart route that meets the user request among a plurality of routes from the origin to the destination, wherein the route display unit includes a content display unit that selects at least one content item meeting the user request in real time according to the user's location and displays it on the smart route in real time, and the search term is in the form of a sentence.
- The smart route generation apparatus of claim 12, wherein the at least one content item includes advertising information about goods or services associated with the smart route.
- The smart route generation apparatus of claim 12, wherein each of the at least one content item has time information and location information on a Geographic Information System (GIS), and the time information includes a creation time (starting time) and an extinction time (ending time) of the content.
- A computer-readable recording medium on which a program for performing the method of any one of claims 1 to 11 is recorded.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207014778A KR102369324B1 (ko) | 2020-04-29 | 2020-04-29 | Smart route generation method and apparatus |
PCT/KR2020/005720 WO2021221210A1 (ko) | 2020-04-29 | 2020-04-29 | Smart route generation method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/KR2020/005720 WO2021221210A1 (ko) | 2020-04-29 | 2020-04-29 | Smart route generation method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021221210A1 true WO2021221210A1 (ko) | 2021-11-04 |
Family
ID=78374129
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/005720 WO2021221210A1 (ko) | 2020-04-29 | 2020-04-29 | 스마트경로 생성방법 및 장치 |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102369324B1 (ko) |
WO (1) | WO2021221210A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002039774A (ja) * | 2000-07-28 | 2002-02-06 | Denso Corp | Navigation device |
KR20020068863A (ko) * | 2001-02-23 | 2002-08-28 | 위콘인터넷 주식회사 | Multimedia driving system and control method therefor |
JP2012242296A (ja) * | 2011-05-20 | 2012-12-10 | Navitime Japan Co Ltd | Route search device, route search system, server device, terminal device, route search method, and program |
KR20150022088A (ko) * | 2013-08-22 | 2015-03-04 | 주식회사 엘지유플러스 | Context-based VOD search system and VOD search method using the same |
KR20190031935A (ko) * | 2017-09-19 | 2019-03-27 | 현대자동차주식회사 | Dialogue system, vehicle and mobile device including the same, and dialogue processing method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090105300A (ко) * | 2008-04-02 | 2009-10-07 | (주)아이피인프라 | Advertisement fee settlement processing system and method for provision of a navigation advertising service |
KR101101111B1 (ко) * | 2011-02-28 | 2011-12-30 | 팅크웨어(주) | Electronic device and method of operating an electronic device |
- 2020-04-29 KR KR1020207014778A patent/KR102369324B1/ko active IP Right Grant
- 2020-04-29 WO PCT/KR2020/005720 patent/WO2021221210A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR20210134867A (ko) | 2021-11-11 |
KR102369324B1 (ko) | 2022-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7480317B2 (ja) | Search method, apparatus, electronic device, and storage medium | |
WO2020080606A1 (ко) | Method and system for automatically generating integrated metadata of video content using video metadata and script data | |
US9244923B2 (en) | Hypervideo browsing using links generated based on user-specified content features | |
WO2010117213A2 (en) | Apparatus and method for providing information related to broadcasting programs | |
US9875222B2 (en) | Capturing and storing elements from a video presentation for later retrieval in response to queries | |
WO2015119335A1 (ко) | Content recommendation method and apparatus | |
JP2002092032A (ja) | Method and apparatus for presenting next-search candidate words, and recording medium storing a next-search candidate word presentation program | |
WO2020251233A1 (ко) | Method, apparatus, and program for obtaining abstract characteristics of image data | |
KR101550886B1 (ко) | Apparatus and method for generating additional information for video content | |
WO2021221209A1 (ко) | Method and apparatus for searching for information within a video | |
CN112749328B (zh) | Search method, apparatus, and computer device | |
WO2015080371A1 (en) | Image search system and method | |
JPH1040260A (ja) | Video retrieval method | |
KR20060100646A (ко) | Method for searching for a specific location in a video, and video search system | |
WO2021167238A1 (ко) | Method and system for automatically generating a content-based table of contents for a video | |
WO2021221210A1 (ко) | Smart route generation method and apparatus | |
WO2018143490A1 (ко) | System and method for predicting user emotion using web content | |
Luo et al. | Exploring large-scale video news via interactive visualization | |
JP2007328713A (ja) | Related-word display device, search device, method therefor, and program | |
WO2012046904A1 (ко) | Apparatus and method for providing search information based on multiple resources | |
CN110008314B (zh) | Intent parsing method and apparatus | |
WO2017179778A1 (ко) | Search method and apparatus using big data | |
KR20160060803A (ко) | Apparatus and method for storing and searching video including audio and video data | |
WO2016072772A1 (ко) | Data visualization method and system using a reference semantic map | |
CN108628911A (zh) | Expression prediction for user input | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20932893; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20932893; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/04/2023) |