INPUTrrr0/LLM-in-context

Summary: Motivated by the paper “Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs”, I want to explore an LLM's ability to tell apart sentences written in a more formal register from sentences drawn from casual conversation. The formal sentences are excerpts from the Stanford Encyclopedia of Philosophy entry on “belief”, while the casual sentences come from Twitter conversations.

Setup: Language models used

Via the API:

  • OpenAI's text-davinci-003

Manually, without the API:

  • Cohere's Command

Planned for the future:

  • Cohere's embed-english-v3.0 (see the sketch at the end of this README)

Example prompt: Help me classify some text inputs with classification labels. Here are some examples: 'It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind.'], 'formal'), ([' I understand. I would like to assist you. We would need to get you into a private secured link to further assist.'], 'casual'), (['When someone learns a particular fact, for example, when Kai reads that garden snails are hermaphrodites, they acquire a new belief (in this case, the belief that garden snails are hermaphrodites).'], 'formal'). Complete the sentence: ' and how do you propose we do that''s has the label of _
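A minimal sketch of how such a prompt could be sent to text-davinci-003 (a model OpenAI has since deprecated) is shown below. It assumes the legacy `openai` Python package (pre-1.0 interface) and an OPENAI_API_KEY environment variable; the few-shot prompt and the label parsing are illustrative, not the exact code used in this repo.

```python
# Illustrative sketch, not the repo's exact code: few-shot classification with
# text-davinci-003 using the legacy `openai` Python package (pre-1.0 interface).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

FEW_SHOT_PROMPT = (
    "Help me classify some text inputs with classification labels. "
    "Here are some examples: "
    "'It is common to think of believing as involving entities—beliefs—that are "
    "in some sense contained in the mind.' has the label 'formal'. "
    "'I understand. I would like to assist you. We would need to get you into a "
    "private secured link to further assist.' has the label 'casual'. "
    "Complete the sentence: '{text}' has the label of _"
)

def classify(text: str) -> str:
    """Ask the model to complete the prompt, then map its answer onto a label."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=FEW_SHOT_PROMPT.format(text=text),
        max_tokens=5,
        temperature=0,  # deterministic completion for classification
    )
    completion = response["choices"][0]["text"].strip().lower()
    return "formal" if "formal" in completion else "casual"

print(classify("and how do you propose we do that"))
```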

Classification data:

  • Data 1 (casual language): https://www.kaggle.com/datasets/manovirat/aspect/data

  • Data 2 (formal language): excerpts from https://plato.stanford.edu/entries/belief/
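For the planned embed-english-v3.0 step, one possible (not yet implemented) approach is to embed a few labeled examples and classify new sentences by cosine similarity to per-label centroids. The sketch below assumes the `cohere` Python SDK, a COHERE_API_KEY environment variable, and the two example sentences from the prompt above; none of this is existing repo code.

```python
# Hypothetical sketch for the planned embed-english-v3.0 step: embed labeled
# examples with Cohere, then classify new text by cosine similarity to label centroids.
import os
import numpy as np
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])  # assumes the key is set in the environment

labeled = [
    ("It is common to think of believing as involving entities—beliefs—that are "
     "in some sense contained in the mind.", "formal"),
    ("I understand. I would like to assist you. We would need to get you into a "
     "private secured link to further assist.", "casual"),
]

def embed(texts):
    # Cohere's v3 embedding models require an input_type; "classification" fits this task.
    resp = co.embed(texts=texts, model="embed-english-v3.0", input_type="classification")
    return np.array(resp.embeddings)

# One centroid per label, averaged over that label's example embeddings.
vectors = embed([text for text, _ in labeled])
centroids = {
    label: vectors[[i for i, (_, l) in enumerate(labeled) if l == label]].mean(axis=0)
    for label in {"formal", "casual"}
}

def classify(text: str) -> str:
    v = embed([text])[0]
    sims = {
        label: float(v @ c / (np.linalg.norm(v) * np.linalg.norm(c)))
        for label, c in centroids.items()
    }
    return max(sims, key=sims.get)

print(classify("and how do you propose we do that"))
```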
