
Data Science Colloquium FY21 - Shared screen with speaker view
Suzie Xi
43:30
https://sites.google.com/view/frame-blends-entry/home?authuser=1
Timothy Beal
43:42
Thank you Suzie!
Suzie Xi
43:42
sxi@smith.edu
Justin Barber
49:55
Thank you, Suzie! A few questions: (1) What sort of dataset did you use to test this? (2) Since word embeddings often average different uses of any given word into a single embedding, do you think including other sorts of data in the nomination models would help (e.g., part of speech)? (3) How is the frame embedding different from the word embedding? Are they the same embeddings (using word2vec or GloVe)? Or are they created differently?
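[For context on question (2): static word2vec/GloVe models assign one vector per word type, so all senses of a word are collapsed into a single embedding. A minimal sketch of that point, assuming gensim's pretrained GloVe download; the model name and example words are illustrative, not from the talk.]

```python
# Minimal illustration: a static GloVe model has exactly one vector for "bank",
# shared by the river sense and the money sense.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe word vectors

print(vectors.similarity("bank", "money"))  # one vector serves the financial sense...
print(vectors.similarity("bank", "river"))  # ...and the riverbank sense alike
```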
Justin Barber
52:04
Sorry, feel free to ignore any of these. A fourth question: do you take word order into account when you sum the embeddings?
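[On question (4): a plain sum of word vectors is permutation-invariant, so word order cannot affect the resulting embedding. A minimal sketch under the same assumptions as above (pretrained GloVe via gensim; the example sentences are hypothetical).]

```python
# Summing word vectors discards word order: "man bites dog" and
# "dog bites man" receive identical summed embeddings.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

def sum_embedding(tokens):
    # Bag-of-vectors sentence embedding: element-wise sum of the token vectors.
    return np.sum([vectors[t] for t in tokens], axis=0)

a = sum_embedding(["man", "bites", "dog"])
b = sum_embedding(["dog", "bites", "man"])
print(np.allclose(a, b))  # True
```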
Collin Baker
56:43
Really exciting work! Have you used some of the information in the FrameNet annotations in this process? I'm thinking of some of the syntactic patterns in the annotated sentences.
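[For readers following up on this question: FrameNet's frames, frame elements, lexical units, and annotated example sentences are accessible through NLTK's FrameNet corpus reader. A minimal sketch of the basic frame lookup only, assuming the framenet_v17 corpus has been downloaded; it does not reproduce anything from the talk.]

```python
# Basic FrameNet access via NLTK; run nltk.download("framenet_v17") first.
from nltk.corpus import framenet as fn

frame = fn.frame("Motion")           # look up a frame by name
print(frame.definition)              # prose definition of the frame
print(sorted(frame.FE.keys()))       # frame elements (e.g., Theme, Path, Goal)
print(sorted(frame.lexUnit.keys()))  # lexical units that evoke the frame
```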
Justin Barber
58:05
Thank you, Suzie!
Justin Barber
01:03:11
I have one other question.
Timothy Beal
01:07:43
I wonder if these great questions might also apply to thinking about the possibility of metaphor embeddings …
Justin Barber
01:08:46
Great question!
Amy Cook
01:11:09
Once you have found a data set of frame blends, could you then cross check that with gestures to see if there are correlations there?
Timothy Beal
01:12:30
Thank you!
Tiago Torrent
01:12:58
Here’s an interesting paper on automatic metaphor detection: https://doi.org/10.1075/cf.8.2.06hon
Timothy Beal
01:13:57
Thanks Tiago!
Amy Cook
01:18:23
Thank you!
Justin Barber
01:18:28
Thank you!
Timothy Beal
01:18:39
Thank you!
Tiago Torrent
01:18:46
Thank you!
Keaton Markey
01:18:57
Thank you!