KB Relations
When we query a structured knowledge base, whether based on a raw question representation or a logical form, we need to map question terminology to the actual graph relations in the KB.
This involves two specific problems: first, mapping natural language vocabulary to relations; second, finding template subgraphs that capture constraints (like co-occurrence with another entity, or yielding only the "first" entity by a particular ordering, or a count of entities).
This is a big TODO.
Currently, we use two approaches at once.
First, we produce answers from all the immediate relations of a concept. Some vocabulary mapping is done by assigning each answer an LAT based on the relation name. This is an "emergency" solution.
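As a rough illustration of what deriving an LAT from a relation name could look like (the actual implementation lives in the YodaQA Java code; the function and relation below are hypothetical examples, not the project's own):

```python
# Minimal sketch (not the actual YodaQA code): derive an answer LAT
# from a Freebase relation name by taking its last path component.
def lat_from_relation(relation):
    # e.g. "/film/film/directed_by" -> "directed by"
    return relation.rstrip("/").split("/")[-1].replace("_", " ")

print(lat_from_relation("/film/film/directed_by"))  # "directed by"
```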
Second, we produce answers from specific (fixed-label) relation paths. Here the vocabulary is fixed and the template subgraph is just a one- or two-entity path; the paths are chosen by a logistic-regression-based multi-label classifier over a (small) set of lexical question features. This is based on Yao: Lean Question Answering over Freebase from Scratch (2015).
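A minimal sketch of what such a multi-label path classifier could look like (purely illustrative; the questions, paths and feature set below are hypothetical, and the real classifier is trained via the repository's data/ml/fbpath setup):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training questions and their gold fbpaths (a two-hop path is
# joined with "|" purely for labelling purposes here).
questions = ["who directed avatar", "who played neo in the matrix"]
paths = [["/film/film/directed_by"],
         ["/film/film/starring|/film/performance/actor"]]

vec = CountVectorizer()              # simple bag-of-words lexical features
X = vec.fit_transform(questions)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(paths)

clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

# Rank candidate relation paths for a new question by predicted probability.
probs = clf.predict_proba(vec.transform(["who directed titanic"]))[0]
print(sorted(zip(mlb.classes_, probs), key=lambda t: -t[1]))
```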
Work in progress by Silvestr (in the f/property-selection branch, based on his sentence selection work) compares the embedding of the question with the embedding of the property label (via a transformation matrix) to determine how likely the property is to be answer-producing.
TODO: properly report the property selection MRR.
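A minimal sketch of the scoring idea, under the assumption that both the question and the property label are represented by averaged word embeddings and compared through a learned matrix (all names and numbers below are illustrative, not the actual f/property-selection code):

```python
import numpy as np

def avg_embedding(tokens, emb, dim=50):
    # Average the word vectors of the tokens we have embeddings for.
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def property_score(q_tokens, p_tokens, emb, M):
    # Bilinear score q^T M p, squashed to a probability that the property
    # is answer-producing.
    q = avg_embedding(q_tokens, emb)
    p = avg_embedding(p_tokens, emb)
    return 1.0 / (1.0 + np.exp(-(q @ M @ p)))

# M would be trained with a logistic loss on labelled
# (question, property, answer-producing?) pairs; here it is just random.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in "who directed avatar directed by".split()}
M = rng.normal(scale=0.1, size=(50, 50))
print(property_score("who directed avatar".split(), "directed by".split(), emb, M))
```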
Baseline:
moviesC-test ab04e7d 2015-10-02 CluesToConcepts Labe... 101/165/233 43.3%/70.8% mrr 0.511 avgtime 435.469
moviesC-test uab04e7d 2015-10-02 CluesToConcepts Labe... 100/175/233 42.9%/75.1% mrr 0.510 avgtime 306.597
moviesC-test vab04e7d 2015-10-02 CluesToConcepts Labe... 101/165/233 43.3%/70.8% mrr 0.512 avgtime 380.044
moviesC-trai ab04e7d 2015-10-02 CluesToConcepts Labe... 328/400/542 60.5%/73.8% mrr 0.657 avgtime 1133.952
moviesC-trai uab04e7d 2015-10-02 CluesToConcepts Labe... 250/406/542 46.1%/74.9% mrr 0.547 avgtime 812.428
moviesC-trai vab04e7d 2015-10-02 CluesToConcepts Labe... 292/400/542 53.9%/73.8% mrr 0.613 avgtime 1003.900
curated-test 1f0b793 2015-10-08 Merge branch 'f/labe... 153/282/430 35.6%/65.6% mrr 0.433 avgtime 2456.937
curated-test u1f0b793 2015-10-08 Merge branch 'f/labe... 147/333/430 34.2%/77.4% mrr 0.433 avgtime 2226.956
curated-test v1f0b793 2015-10-08 Merge branch 'f/labe... 153/282/430 35.6%/65.6% mrr 0.434 avgtime 2387.664
curated-trai 1f0b793 2015-10-08 Merge branch 'f/labe... 294/305/430 68.4%/70.9% mrr 0.694 avgtime 3316.498
curated-trai u1f0b793 2015-10-08 Merge branch 'f/labe... 173/334/430 40.2%/77.7% mrr 0.493 avgtime 2951.546
curated-trai v1f0b793 2015-10-08 Merge branch 'f/labe... 253/305/430 58.8%/70.9% mrr 0.641 avgtime 3206.849
Using embeddings and a transformation matrix trained on moviesC...
moviesC-test 22f3433 2015-10-08 Mbprop.txt: Retrain ... 102/169/233 43.8%/72.5% mrr 0.510 avgtime 490.464
moviesC-test u22f3433 2015-10-08 Mbprop.txt: Retrain ... 101/175/233 43.3%/75.1% mrr 0.509 avgtime 356.568
moviesC-test v22f3433 2015-10-08 Mbprop.txt: Retrain ... 103/169/233 44.2%/72.5% mrr 0.516 avgtime 434.219
moviesC-trai 22f3433 2015-10-08 Mbprop.txt: Retrain ... 332/400/542 61.3%/73.8% mrr 0.662 avgtime 1223.704
moviesC-trai u22f3433 2015-10-08 Mbprop.txt: Retrain ... 275/406/542 50.7%/74.9% mrr 0.578 avgtime 891.278
moviesC-trai v22f3433 2015-10-08 Mbprop.txt: Retrain ... 298/400/542 55.0%/73.8% mrr 0.618 avgtime 1090.162
...or on curated...
curated-test 4445206 2015-10-08 StructuredPrimarySea... 148/284/430 34.4%/66.0% mrr 0.426 avgtime 2524.334
curated-test u4445206 2015-10-08 StructuredPrimarySea... 146/333/430 34.0%/77.4% mrr 0.427 avgtime 2280.070
curated-test v4445206 2015-10-08 StructuredPrimarySea... 157/284/430 36.5%/66.0% mrr 0.444 avgtime 2451.670
curated-trai 4445206 2015-10-08 StructuredPrimarySea... 300/307/430 69.8%/71.4% mrr 0.705 avgtime 3197.488
curated-trai u4445206 2015-10-08 StructuredPrimarySea... 175/334/430 40.7%/77.7% mrr 0.494 avgtime 2818.837
curated-trai v4445206 2015-10-08 StructuredPrimarySea... 261/307/430 60.7%/71.4% mrr 0.653 avgtime 3084.787
Pending further investigation - it seems this overfits a bit...
Actually, what happens if we train on curated but apply to moviesC, or vice versa?
moviesC-test 2f685d4 2015-10-08 Mbprop on curated... 101/166/233 43.3%/71.2% mrr 0.514 avgtime 461.919
moviesC-test u2f685d4 2015-10-08 Mbprop on curated... 101/175/233 43.3%/75.1% mrr 0.516 avgtime 327.857
moviesC-test v2f685d4 2015-10-08 Mbprop on curated... 106/166/233 45.5%/71.2% mrr 0.526 avgtime 406.005
moviesC-trai 2f685d4 2015-10-08 Mbprop on curated... 328/400/542 60.5%/73.8% mrr 0.657 avgtime 1201.842
moviesC-trai u2f685d4 2015-10-08 Mbprop on curated... 262/406/542 48.3%/74.9% mrr 0.564 avgtime 875.238
moviesC-trai v2f685d4 2015-10-08 Mbprop on curated... 305/400/542 56.3%/73.8% mrr 0.627 avgtime 1070.478
curated-test a44d2e2 2015-10-08 Mbprop on moviesC... 145/286/430 33.7%/66.5% mrr 0.417 avgtime 4122.357
curated-test ua44d2e2 2015-10-08 Mbprop on moviesC... 146/333/430 34.0%/77.4% mrr 0.426 avgtime 3933.659
curated-test va44d2e2 2015-10-08 Mbprop on moviesC... 143/286/430 33.3%/66.5% mrr 0.423 avgtime 4071.076
curated-trai a44d2e2 2015-10-08 Mbprop on moviesC... 295/302/430 68.6%/70.2% mrr 0.694 avgtime 4563.561
curated-trai ua44d2e2 2015-10-08 Mbprop on moviesC... 180/334/430 41.9%/77.7% mrr 0.504 avgtime 4277.613
curated-trai va44d2e2 2015-10-08 Mbprop on moviesC... 252/302/430 58.6%/70.2% mrr 0.637 avgtime 4487.528
Cross-dataset training definitely helps with the overfitting on moviesC!
Ideas:
- LAT or some "wide-LAT" instead of full question (or in ensemble, a second matrix)
- CNN
- "informed convolution" with max-pool integrating wide-LAT and rest of sentence (or possibly some other identified chunks)
Instead of a fixed-label relation path, consider a more complex subgraph template with other entity references. Our first iteration will keep using the fixed vocabulary and just add a "T-shaped" subgraph of three entities in addition to the path. This is being investigated by Honza P.
When done, this will yield (with regard to the subgraph problem) a baseline that is popular across systems, with three subgraph templates: direct relation, one-hop relation, and T-shaped relation with an extra fixed entity. This is apparently enough to get huge WebQuestions coverage.
We now call this extension "branched fbpaths". Branched fbpaths try to cover questions that involve an additional relation between two concepts, on top of the relation between the question entity and the answer. The two paths have to share one common relation.
For example, one path is "/tv/tv_character/appeared_in_tv_program", "/tv/regular_tv_appearance/actor" and the second path is "/tv/tv_character/appeared_in_tv_program", "/tv/regular_tv_appearance/series".
This is typical for questions that look like: Who played character X in film Y?
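As a hedged sketch of how such a branched path might translate into a SPARQL query over the Freebase RDF dump (the dump uses dotted ns: property names; the helper function and MIDs below are hypothetical, not the query the system actually generates):

```python
# Hypothetical sketch: a branched (T-shaped) fbpath as a SPARQL query over
# the Freebase RDF dump, for the tv-appearance example above.
def branched_fbpath_sparql(character_mid, program_mid):
    return f"""
PREFIX ns: <http://rdf.freebase.com/ns/>
SELECT ?answer WHERE {{
  ns:{character_mid} ns:tv.tv_character.appeared_in_tv_program ?app .
  ?app ns:tv.regular_tv_appearance.actor ?answer .            # path to the answer
  ?app ns:tv.regular_tv_appearance.series ns:{program_mid} .  # branch constraint
}}"""

# Placeholder MIDs standing in for character X and program Y:
print(branched_fbpath_sparql("m.0characterX", "m.0programY"))
```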
The WebQuestions dataset (can be obtained from here) was used to train the classifier for branched (T-shaped) fbpath relations. You can get the file from the "d-freebase-rp" directory and then create a TSV version of the file using "scripts/json2tsv.py". Finally, follow the README file in the yodaqa repository under "data/ml/fbpath" to create the classifier.
Because this type of fbpath corresponds mostly to one type of question about movies, it improves MRR on the movies dataset. Unfortunately, it makes MRR on the curated dataset worse.
moviesC-test u58e6f15 2015-09-18 Added sparql query f... 100/177/233 42.9%/76.0% mrr 0.506 avgtime 745.413
curated-test u88085fb 2015-09-18 Added sparql query f... 135/329/430 31.4%/76.5% mrr 0.408 avgtime 3921.207
For further information, see the Benchmarks wiki page.
Many systems first use semantic parsing to produce a logical form, then learn rules that convert this logical form to a SPARQL query. Often, this SPARQL query is constrained to be essentially just a subgraph template like ours, e.g. in the QALD5 winner Xser (the FBGraph subgraphs).
Another subgraph template matching approach is Bast, Haussmann: More Accurate Question Answering on Freebase (2015). It matches the FBGraph subgraphs. Answers are produced aggressively, and for vocabulary it measures how well Freebase relation(s) align with the question: the number of overlapping words, derived words, word vector embedding cosine similarities, and indicator words in the question trained by distant supervision.
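A purely illustrative sketch of what the first three alignment signals could look like (the function, token handling and the crude 4-character "derived word" heuristic are assumptions, not the paper's actual feature extraction):

```python
# Illustrative sketch of relation-question alignment features of the kind
# listed above (word overlap, derived words, embedding cosine similarity);
# not the actual Bast & Haussmann implementation.
import numpy as np

def relation_tokens(relation):
    # "/film/film/directed_by" -> ["directed", "by"]
    return relation.rstrip("/").split("/")[-1].split("_")

def alignment_features(question_tokens, relation, emb, dim=50):
    rel_toks = relation_tokens(relation)
    overlap = len(set(question_tokens) & set(rel_toks))
    # Crude stand-in for "derived words": shared 4-character prefixes.
    derived = len({q[:4] for q in question_tokens} & {r[:4] for r in rel_toks})
    def avg(tokens):
        vecs = [emb[t] for t in tokens if t in emb]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    q, r = avg(question_tokens), avg(rel_toks)
    denom = np.linalg.norm(q) * np.linalg.norm(r)
    cosine = float(q @ r / denom) if denom else 0.0
    return {"overlap": overlap, "derived": derived, "cosine": cosine}
```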
Distant supervision is common in other systems too (TODO): Wikipedia sentences that contain two entities connected by such a relation in Freebase will often have the indicator word on the path between the entities in the dependency parse. Some tools may be reused for this (TODO).
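A hedged sketch of that indicator-word idea, using spaCy purely for illustration (single-token entity mentions and the example sentence/relation are assumptions, and no specific tool is implied):

```python
# Hypothetical sketch: collect lemmas on the dependency path between two
# entity mentions as candidate indicator words for the Freebase relation
# that links the entities (the distant supervision signal described above).
import spacy

nlp = spacy.load("en_core_web_sm")

def dep_path_lemmas(sentence, ent1, ent2):
    doc = nlp(sentence)
    t1 = next(t for t in doc if t.text == ent1)
    t2 = next(t for t in doc if t.text == ent2)
    anc1 = [t1] + list(t1.ancestors)
    anc2 = [t2] + list(t2.ancestors)
    lca = next(t for t in anc1 if t in anc2)  # lowest common ancestor
    path = anc1[:anc1.index(lca)] + anc2[:anc2.index(lca)] + [lca]
    return [t.lemma_ for t in path if t not in (t1, t2)]

# E.g. an indicator word for /film/film/directed_by (hypothetical example):
print(dep_path_lemmas("Cameron directed Avatar.", "Cameron", "Avatar"))  # ['direct']
```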
Another way to map vocabulary to relations is to use the PATTY resource (as in Xser). TODO link