Dataset columns: prompt, answer
"You are a teacher preparing questions for a quiz. Given the following document, please generate 1 multiple-choice questions (MCQs) with 4 options and a corresponding answer letter based on the document.\n\nExample question:\n\nQuestion: question here\nCHOICE_A: choice here\nCHOICE_B: choice here\nCHOICE_C: choice here\nCHOICE_D: choice here\nAnswer: A or B or C or D\n\nThese questions should be detailed and solely based on the information provided in the document.\n\n<Begin Document>\nAbstract\nLLM-powered chatbots are becoming widely\nadopted in applications such as healthcare, personal assistants, industry hiring decisions, etc.\nIn many of these cases, chatbots are fed sensitive, personal information in their prompts,\nas samples for in-context learning, retrieved\nrecords from a database or as part of the conversation. The information provided in the\nprompt could directly appear in the output,\nwhich might have privacy ramifications if there\nis sensitive information there. As such, in this\npaper, we aim to understand the input copying\nand regurgitation capabilities of these models during inference and how they can be directly instructed to limit this copying by complying with regulations such as HIPAA and\nGDPR, based on their internal knowledge of\nthem. More specifically, we find that when\nChatGPT is prompted to summarize cover letters of a 100 candidates, it would retain personally identifiable information (PII) verbatim in\n57.4% of cases, and we find this retention to\nbe non-uniform between different subgroups\nof people, based on attributes such as gender\nidentity. We then probe ChatGPT’s perception of privacy-related policies and privatization mechanisms by directly instructing it to\nprovide compliant outputs and observe a significant omission of PII from output.\n<End Document>",Question:\nWhat is one of the concerns mentioned in the document regarding the information provided in the prompts to chatbots?\nA) The use of sensitive information in healthcare applications\nB) The potential retention of personally identifiable information (PII) in the output\nC) The impact of gender identity on chatbot performance\nD) The need for chatbots to comply with regulations such as HIPAA and GDPR\n\nAnswer: B | |