...

(Some others that I don't expect to have time to discuss)

  1. (in a long line of LIWC-like lexicons) Chenhao Tan's list of hedging phrases, such as "I suspect", "raising the possibility": [README] [list itself]

    References and applications

    Chenhao Tan and Lillian Lee, "Talk it up or play it down? (Un)expected correlations between (de-)emphasis and recurrence of discussion points in consequential U.S. economic policy meetings", Text As Data 2016

    Abstract: In meetings where important decisions get made, what items receive more attention may influence the outcome. We examine how different types of rhetorical (de-)emphasis — including hedges, superlatives, and contrastive conjunctions — correlate with what gets revisited later, controlling for item frequency and speaker. Our data consists of transcripts of recurring meetings of the Federal Reserve’s Open Market Committee (FOMC), where important aspects of U.S. monetary policy are decided on. Surprisingly, we find that words appearing in the context of hedging, which is usually considered a way to express uncertainty, are more likely to be repeated in subsequent meetings, while strong emphasis indicated by superlatives has a slightly negative effect on word recurrence in subsequent meetings. We also observe interesting patterns in how these effects vary depending on social factors such as status and gender of the speaker. For instance, the positive effects of hedging are more pronounced for female speakers than for male speakers.

     

    Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, Lillian Lee. "Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions." Proc. of WWW 2016.
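
    As a concrete (if simplistic) illustration of how such a lexicon gets used as a feature, here is a minimal sketch that counts hedge-phrase occurrences in a text. The file name is a hypothetical local copy of the list linked above, and real uses would want tokenization and per-phrase features rather than a single count.

      # Minimal sketch: a hedging lexicon as a LIWC-style count feature.
      # "hedge_phrases.txt" is a hypothetical local copy of the list above,
      # one phrase per line.
      def count_hedges(text, lexicon_path="hedge_phrases.txt"):
          with open(lexicon_path) as f:
              hedges = [line.strip().lower() for line in f if line.strip()]
          text = text.lower()
          return sum(text.count(phrase) for phrase in hedges)

      print(count_hedges("I suspect inflation will slow, raising the possibility of a cut."))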

  2. language models, which assign probabilities P(x) to words, sentences or text units.
    These are great for similarity, distinctiveness, visualization.
    1. Monroe et al.'s "Fightin' words": what makes two "languages" different?

      Slides and handout from Cristian Danescu-Niculescu-Mizil's and my class "NLP and social interaction": [pptx] [pdf] [handout]

      (Test-audience-familiarity image omitted)

      Applications of the method

      Jurafsky, Dan, Victor Chahuneau, Bryan R. Routledge, Noah A. Smith. 2014. Narrative framing of consumer sentiment in online restaurant reviews. First Monday 19(4).

      Mark Liberman on Language Log: The most Kasichoid, Cruzian, Trumpish, and Rubiositous words, 2016. The most Trumpish (and Bushish) words, 2015. Obama's favored (and disfavored) SOTU words, 2014. Draft words (descriptions of white vs. black NFL prospects), 2014. Male and female word usage, 2014.

      References and code

      Monroe, Burt L., Michael P. Colaresi, and Kevin M. Quinn. 2008. Fightin' words: Lexical feature selection and evaluation for identifying the content of political conflict. Political Analysis 16(4): 372-403. [alternate link]

      Abstract: Entries in the burgeoning “text-as-data” movement are often accompanied by lists or visualizations of how word (or other lexical feature) usage differs across some pair or set of documents. These are intended either to establish some target semantic concept (like the content of partisan frames), to estimate word-specific measures that feed forward into another analysis (like locating parties in ideological space), or both. We discuss a variety of techniques for selecting words that capture partisan, or other, differences in political speech and for evaluating the relative importance of those words. We introduce and emphasize several new approaches based on Bayesian shrinkage and regularization. We illustrate the relative utility of these approaches with analyses of partisan, gender, and distributive speech in the U.S. Senate.

      The method is also described in Section 19.5.1, "Log odds ratio informative Dirichlet prior" of the 3rd edition of Jurafsky and Martin, Speech and Language Processing.

      Slides adapted from slides 85-94 of Cristian Danescu-Niculescu-Mizil and Lillian Lee, Natural language processing for computational social science, invited tutorial at NIPS 2016 [alternate link: tutorial announcement, slides], for lecture 16 of the class NLP and Social Interaction, Fall 2017.

       

      Code

      • Hessel, Jack: FightingWords.
      • Lim, Kenneth: fightin-words 1.0.4. Compliant with scikit-learn and distributed via PyPI; borrows (with acknowledgment) from Jack's version.
      • Marzagão, Thiago: mcq.py
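
      For orientation, here is a minimal sketch of the core computation those packages implement (the weighted log-odds with an informative Dirichlet prior, converted to a z-score). Variable names are illustrative; for real use, prefer the packages above.

        # Minimal sketch of Monroe et al.'s log-odds ratio with an informative
        # Dirichlet prior.  Positive z-scores favor corpus A, negative favor
        # corpus B; the prior counts come from a background corpus.
        import math
        from collections import Counter

        def fightin_words(tokens_a, tokens_b, tokens_prior, prior_scale=10.0):
            y_a, y_b, y_p = Counter(tokens_a), Counter(tokens_b), Counter(tokens_prior)
            vocab = set(y_a) | set(y_b) | set(y_p)
            alpha0 = prior_scale * len(vocab)          # total prior mass (a tuning choice)
            total_p = sum(y_p.values())
            # Smoothed background frequencies, rescaled to sum to alpha0.
            alpha = {w: alpha0 * (y_p[w] + 1) / (total_p + len(vocab)) for w in vocab}
            n_a, n_b = sum(y_a.values()), sum(y_b.values())
            z = {}
            for w in vocab:
                delta = (math.log((y_a[w] + alpha[w]) / (n_a + alpha0 - y_a[w] - alpha[w]))
                         - math.log((y_b[w] + alpha[w]) / (n_b + alpha0 - y_b[w] - alpha[w])))
                var = 1.0 / (y_a[w] + alpha[w]) + 1.0 / (y_b[w] + alpha[w])
                z[w] = delta / math.sqrt(var)
            return z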

      Visualizers

       

    2. Measures of distributional similarity: how similar are two language models? The standard measure is the cross-entropy, −Σ_x p(x) log q(x); a variant is the KL divergence, Σ_x p(x) log(p(x)/q(x)), which equals the cross-entropy of p and q minus the entropy of p. (A sketch of both appears at the end of this item.)
    3. Similarity of each of A or B to a baseline of "regular" or "null hypothesis" language.
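
    Here is a minimal sketch of cross-entropy and KL divergence for unigram language models, assuming each model is simply a dict mapping words to probabilities; the toy models below are made up for illustration.

      import math

      def cross_entropy(p, q, eps=1e-12):
          # H(p, q) = -sum_x p(x) log q(x); eps guards against zero probabilities in q.
          return -sum(px * math.log(q.get(x, 0.0) + eps) for x, px in p.items())

      def kl_divergence(p, q, eps=1e-12):
          # D(p || q) = sum_x p(x) log(p(x) / q(x)) = H(p, q) - H(p).
          return sum(px * math.log(px / (q.get(x, 0.0) + eps)) for x, px in p.items() if px > 0)

      # Two toy unigram models over a three-word vocabulary.
      p = {"rates": 0.5, "inflation": 0.3, "growth": 0.2}
      q = {"rates": 0.4, "inflation": 0.4, "growth": 0.2}
      print(cross_entropy(p, q), kl_divergence(p, q))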

  3. Distributional similarity (word embeddings are the modern version)

    Here's a figure from 1997 about ideas from the early 90's:
    (figure omitted)

    For references, see the word embeddings section later in this document

    Should we measure distance using regular ol' vector differences? Pictures from an intuition

    (images omitted)

    Reference

    Lee, Lillian. 1999. Measures of distributional similarity. Proc. of the ACL, 25--32. 
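
    To make the question above concrete, here is a tiny sketch contrasting the Euclidean norm of a vector difference with cosine similarity; the three-dimensional vectors are toy values made up for illustration.

      import numpy as np

      v_rates = np.array([2.0, 0.5, 1.0])
      v_inflation = np.array([4.0, 1.0, 2.0])   # same direction, twice the magnitude

      euclidean = np.linalg.norm(v_rates - v_inflation)
      cosine = v_rates @ v_inflation / (np.linalg.norm(v_rates) * np.linalg.norm(v_inflation))
      # The Euclidean distance is large because the magnitudes differ,
      # while the cosine similarity is 1.0 because the directions coincide.
      print(euclidean, cosine)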

  4. Similarity measured on the most frequent words ("stop words") only vs. on the content words

Show us those mystery features!

Type/token ratio
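
A minimal sketch of this feature for a tokenized text (note that the raw ratio is sensitive to text length, so comparisons should hold length roughly constant):

  def type_token_ratio(tokens):
      # Number of distinct word types divided by the number of tokens.
      return len(set(tokens)) / len(tokens)

  print(type_token_ratio("the cat sat on the mat".split()))  # 5 types / 6 tokens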

 

 

 

 


... and one feature that I both like and am driven crazy by: length

...

  1. BERT vs hand features, controversy paper
  2. Word embeddings - now contextual/polysemy-aware! BERT and ELMo.

     

    Question/proposal: where is the word-embedding version of LIWC? ("Can we BERT LIWC?")

     

    Fine-tuning reference (thanks to Yoav Artzi)
     


    Some work in this direction

    Fast, Ethan, Binbin Chen, Michael S. Bernstein. Lexicons on demand: Neural word embeddings for large-scale text analysis. IJCAI 2017.

    Abstract: Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like “bleed” and “punch” to generate the category violence). Empath draws connotations between words and phrases by learning a neural embedding across billions of words on the web. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated such as neglect, government, and social media. We show that Empath’s data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
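
    The seed-expansion step at the heart of Empath-style tools is easy to sketch with off-the-shelf word vectors. The gensim model name below is illustrative, any pretrained embeddings would do, and the crowd-validation step is omitted.

      # Minimal sketch: grow a LIWC-style category from seed words using
      # nearest neighbors in a pretrained embedding space.
      import gensim.downloader as api

      kv = api.load("glove-wiki-gigaword-100")     # pretrained GloVe vectors
      seeds = ["bleed", "punch", "fight"]
      candidates = kv.most_similar(positive=seeds, topn=25)
      violence_lexicon = set(seeds) | {word for word, sim in candidates}
      print(sorted(violence_lexicon))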

    Overview references

    Smith, Noah A. 2019. Contextual word representations: A contextual introduction. arXiv:1902.06006, version 2, dated Feb 19, 2019.
    Twitter commentary regarding the history as recounted in the above (Naftali Tishby and yours truly are among the "& co." referred to by Robert Munro): [1] [2] [3]

    Goldberg, Yoav. 2017. Neural network methods for natural language processing. Morgan & Claypool. Earlier, shorter, open-access journal version: A primer on neural network models for natural language processing, JAIR 57:345-420, 2016.

  3. Language modeling = the bridge?

    Note that the basic units might be characters or Unicode code points (named characters) instead of words.

    Recommendations from Jack Hessel and Yoav Artzi, Cornell

    Thanks to Jack Hessel and Yoav Artzi for the below. Paraphrasing errors are my own.

    The best off-the-shelf language model right now (caveat: this is a very fast-moving field) is the 12-or-so layer GPT-2, where GPT stands for Generative Pre-Training. [code] [(infamous) announcement] [hugging face's reimplementation of pre-trained GPT-2]

    But a single-layer LSTM trained from scratch, with carefully chosen hyperparameters, is still often a very strong baseline, especially with small data (around 10K samples).

    Both BERT and GPT seem to transfer well via fine-tuning to small new datasets, at least in expert hands. [code] [Colab] [hugging face's reimplementation of pre-trained BERT] [announcement]

    The Giant Language model Test Room (GLTR) can be used for analyzing what a neural LM is doing, although its stated purpose is to help detect automatically generated text.
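
    For reference, here is a minimal sketch of scoring a text with a pretrained GPT-2, assuming the Hugging Face transformers package (the current name of the reimplementation linked above) and PyTorch are installed; the example sentence is made up.

      import torch
      from transformers import GPT2LMHeadModel, GPT2TokenizerFast

      tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")
      model.eval()

      text = "The committee decided to hold the target range for the federal funds rate steady."
      input_ids = tokenizer(text, return_tensors="pt").input_ids
      with torch.no_grad():
          # With labels=input_ids, the model returns the average per-token
          # cross-entropy of the text under the language model.
          loss = model(input_ids, labels=input_ids).loss
      print("perplexity:", torch.exp(loss).item())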

    References

    Devlin, Jacob, Ming-wei Chang, Kenton Lee, Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proc. of NAACL. [arXiv version]

    Rush, Sasha, with Vincent Nguyen and Guillaume Klein. April 3, 2018. The annotated transformer, a line-by-line code annotation of Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, 2017. Attention is all you need. Proc. of NIPS. [arxiv version]

    Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Manuscript. (The GPT-2 paper)

    Paper using BERT for similarity (thanks to Yoav Artzi)

    Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi. April 21, 2019. BERTScore: Evaluating Text Generation with BERT. arxiv version 1. [code]
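
    A minimal usage sketch, assuming the authors' bert-score package is installed from PyPI (pip install bert-score); the example sentence pair is made up.

      from bert_score import score

      candidates = ["The Fed left interest rates unchanged."]
      references = ["The Federal Reserve kept rates steady."]
      # P, R, F1 are tensors with one entry per candidate/reference pair.
      P, R, F1 = score(candidates, references, lang="en", verbose=False)
      print(f"BERTScore F1: {F1[0].item():.3f}")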

     

     

     

 

...